#juju 2012-01-23
<_mup_> Bug #920312 was filed: open-tunnel leaves juju unable to connect to an environment if local IP changes <juju:New> < https://launchpad.net/bugs/920312 >
<SpamapS> heh.. juju status | ccze -A
<SpamapS> looks quite nice actually
 * SpamapS opens a feature request to have ccze colorize ec2 instance ids.
<SpamapS> oh hah.. 2007 was the last release
<shaon> hey SpamapS
<shaon> SpamapS: did you guys manage to try juju on euca?
<SpamapS> shaon: the trouble is the leading path recommended/required on walrus
<SpamapS> shaon: should be possible to bypass with a reverse proxy that hosts walrus on /
<shaon> SpamapS: hmm
<SpamapS> shaon: I'd be surprised if we were the only ones running into this.. seeing as Euca themselves had to patch s3cmd/boto to work this way
<shaon> SpamapS: so what do you suggest in this case?
<SpamapS> shaon: try setting up a mod_proxy server with ProxyPass / http://your.walrus.server/with/its/path:9999
<SpamapS> shaon: you could even run it on the same box, but with a different port
<shaon> SpamapS: okay, will keep you posted with updates
<shaon> SpamapS: thanks :)
<SpamapS> shaon: no, thank you for testing. :)
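The reverse-proxy workaround SpamapS suggests could be sketched like this; the vhost port, Walrus hostname, and path prefix below are all invented for illustration, not taken from the log:

```shell
# Write a hypothetical Apache mod_proxy vhost that serves Walrus at "/",
# so juju's S3 client never sees Walrus's leading path. Host, port, and
# path are made-up placeholders.
cat > walrus-proxy.conf <<'EOF'
<VirtualHost *:9999>
    ProxyPass        / http://your.walrus.server:8773/services/Walrus/
    ProxyPassReverse / http://your.walrus.server:8773/services/Walrus/
</VirtualHost>
EOF
grep -c ProxyPass walrus-proxy.conf
```

With mod_proxy enabled, pointing the environment's S3 endpoint at this proxy's port hides the path prefix, which is the idea behind running it on the same box on a different port.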
<niemeyer> Good morning!
<mpl> hello
<_mup_> Bug #920454 was filed: juju bootstrap hangs for local environment under precise on vmware <juju:New> < https://launchpad.net/bugs/920454 >
<niemeyer> Why is it so silent in here today? Everybody in their hacking hats? :)
<jimbaker> niemeyer, i suppose so
<m_3> it's a monday after a big conference
<nijaba> m_3: Hello. can you explain the client and server interface of mediawiki to an ignorant like me?
<m_3> nijaba: hey man...  lemme take a look
<nijaba> m_3: hey, I thought it was your charm.
<nijaba> m_3: guess not
<m_3> nijaba: I've worked on it since it was written... but most of my changes are in a separate branch waiting for colocation to land
<nijaba> k
<m_3> (things like monitoring and a shared image-store to use when spreading to multiple wiki instances)
<nijaba> m_3: was it SpamapS originally?
<m_3> heck dunno... lemme look at the history
 * nijaba looks too
<m_3> I think kapil orig... then clint... then me
<m_3> what're you wondering about?  I've worked with it a bunch
<nijaba> m_3: why the db has 2 interfaces, client and server
<m_3> oh... db and slave?
<m_3> lemme refresh to make sure I'm looking at the most recent version of the charm
<nijaba> m_3: thanks :)
<m_3> the wiki is set up to use a replicated mysql setup... that's prob what's going on
<m_3> nijaba: look in hooks/combine-dbservers
<nijaba> m_3: k
<nijaba> m_3: thanks, got it.  Cheers
<m_3> I've got a working stack script for mediawiki using a replicated mysql setup... it's around here somewhere
 * m_3 digging
<m_3> it might be named ensemble tho :)
<m_3> so s/working/was-working/ :)
<m_3> nijaba: http://paste.ubuntu.com/814448/
<nijaba> m_3: neat
<m_3> looks _great_ in gource too btw
<SpamapS> adam_g: btw, don't know if you noticed, but the precise PPA is finally up to date
<SpamapS> I think its about time we updated the precise distro version actually
<SpamapS> reminder to devs: feature freeze is Feb 16!
<adam_g> SpamapS: ah, cool
<SpamapS> hazmat: thanks, btw, for the twisted fix!
<SpamapS> http://wtf.labix.org/447/ec2-wordpress.out.FAILED
<SpamapS> how do we retry these?
<SpamapS> just looks like maybe connectivity was compromised
<hazmat> SpamapS, looks like the instance just took a while to start up, and the test fell out of its retry loop (120 times)
<hazmat> http://bazaar.launchpad.net/~juju/juju/ftests/view/head:/tests/ec2-wordpress/run.sh
<SpamapS> Yeah I know it works.. I've been using 447 for a while now its fine
<SpamapS> Just wondering if we can tell wtf to try again or we have to wait for the next commit
<hazmat> not sure.. afaik it's a wait for a new change; it's set up as a cron trigger polling for new revs. alternatively perhaps niemeyer could run it again from that machine
 * SpamapS longs for jenkins in all its monstrous glory
<koolhead17> :(
<niemeyer> SpamapS: Be my guest :)
 * SpamapS presses enter on his juju deploy of it into canonistack
<niemeyer> Meanwhile, wtf.labix.org is running 447 again.
<SpamapS> cool ty!
<niemeyer> SpamapS: np
 * SpamapS really just wants jenkins for the IRC bot and pretty graphs. ;)
<niemeyer> SpamapS: It's marking 447 as FAILED, but if you click on the FAILED, you'll see it redirects back because it doesn't exist
<niemeyer> SpamapS: By the time it ends, it should have ok/FAILED, and the output will be there again
<SpamapS> sweet.. just wanted to make sure it passes before uploading that to Ubuntu
<SpamapS> whoa.. bug listings just got like.. rainbowized
<SpamapS> https://bugs.launchpad.net/ubuntu/+source/juju
<niemeyer> :-)
<niemeyer> SpamapS: You got a green
<SpamapS> niemeyer: w00t
<SpamapS> local build test passed too
 * SpamapS uploads
<SpamapS> Build needed 00:05:27, 16640k disc space
<SpamapS> I love eatmydata ... :) takes 16 minutes on the buildds
<niemeyer> eatmydata?
<niemeyer> SpamapS: Is that something that wraps a process to monitor it or something?
<SpamapS> niemeyer: http://www.flamingspork.com/projects/libeatmydata/
<SpamapS> libeatmydata is a small LD_PRELOAD library designed to (transparently) disable fsync (and friends, like open(O_SYNC)). This has two side-effects: making software that writes data safely to disk a lot quicker and making this software no longer crash safe.
<niemeyer> SpamapS: Ah, neat
<SpamapS> Good for iterating over tests
<niemeyer> SpamapS: That's awesome indeed, and the name is awesome too :)
<SpamapS> niemeyer: most of what Stewart does is quite awesome actually
<niemeyer> SpamapS: I guess that could be done with a ramfs in this case, tho
<niemeyer> SpamapS: I can see that being unfeasible for larger values of "data", though :)
<SpamapS> niemeyer: right, but eatmydata requires no privileges.
<niemeyer> SpamapS: Good point
<SpamapS> niemeyer: tmpfs/ramfs will always outperform it.. but eatmydata is a nice way to opportunistically take full advantage of memory
 * niemeyer mumbles something about plan9
<SpamapS> when you don't care for durability
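As a sketch of how the eatmydata wrapper is used in practice: the fallback logic below is mine, not from the log; the wrapper itself just LD_PRELOADs libeatmydata.so around the command so fsync() and friends become no-ops.

```shell
# Run a command under eatmydata when it is installed (disabling fsync for
# speed, at the cost of crash safety); fall back to running it directly.
run_fast() {
    if command -v eatmydata >/dev/null 2>&1; then
        eatmydata "$@"
    else
        "$@"
    fi
}
run_fast echo "fast build step"
```

Because it needs no privileges, this can wrap a whole build or test run, which is how SpamapS gets the 16-minute buildd time down to about five and a half minutes locally.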
<andrewsmedina> Has anyone had problems with openstack returning hostnames that juju cannot resolve?
<niemeyer> jcastro: ping
<niemeyer> andrewsmedina: Hmm.. sounds like a networking issue
<niemeyer> andrewsmedina: Do you recognize the hostnames it can't resolve?
<andrewsmedina> niemeyer: I think that it's a nova-network problem
<niemeyer> andrewsmedina: Gotcha
<SpamapS> man.. ccze even makes bzr logs more readable. :)
<SpamapS> Though I'm not sure why it highlights Clint, but not Byrum.. hm
<m_3> andrewsmedina: yes, this is a common problem... I'm not sure of the best long-term solution
<m_3> andrewsmedina: start on a box that can hit the openstack endpoint using euca-tools or equiv
<andrewsmedina> m_3: euca-tools work fine here, but the VMs created with juju don't
<m_3> andrewsmedina: then 'juju bootstrap' should come up
<m_3> andrewsmedina: then you have to euca-allocate-address and associate that with the bootstrap instance
<andrewsmedina> m_3: hmm
<m_3> thereafter juju works fine
<m_3> when you need to "expose" a service, you'll have to attach another public address
<andrewsmedina> those things are missing from the juju docs
<m_3> (I'm working on an openstack installation that only gives me two public IPs... so I have to swap them around sometimes)
<m_3> andrewsmedina: yeah, funny that :)
<m_3> andrewsmedina: not sure if they're in there atm
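The floating-IP steps m_3 walks through might look like the following; this is a dry-run sketch that only echoes the euca2ools commands rather than running them (the instance id is invented, and exact flags can vary by euca2ools version):

```shell
# Echo the two-step public-address workflow instead of executing it:
# first allocate a floating IP from the pool, then associate it with an
# instance (e.g. the juju bootstrap node).
associate_public_ip() {
    instance_id="$1"
    echo "euca-allocate-address"                       # step 1: grab an IP
    echo "euca-associate-address -i $instance_id IP"   # step 2: attach it
}
associate_public_ip i-00000001
```

The same association step is repeated later for any instance backing an exposed service, which is why m_3's two-public-IP cloud requires swapping addresses around.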
<andrewsmedina> m_3: I'm working on a PaaS project that uses juju as its engine
<m_3> andrewsmedina: cool!
<m_3> andrewsmedina: I'd love to discuss details of your setup sometime
<andrewsmedina> m_3: cool! :D
<SpamapS> andrewsmedina: The docs for working with OpenStack are a bit obtuse because the docs we have are mostly focused on using the EC2 API for.. EC2. :)
<andrewsmedina> SpamapS: I know
<SpamapS> we should definitely add notes for where openstack might differ
<SpamapS> same for eucalyptus too really
<andrewsmedina> SpamapS: If I can help
<andrewsmedina> tell me
<m_3> andrewsmedina: please, by all means keep track of what you're doing... even if it's rough notes it's valuable
<m_3> andrewsmedina: there're _lots_ of different ways to run an openstack or eucalyptus setup...
<andrewsmedina> m_3: I know
<andrewsmedina> OpenStack is very flexible
<m_3> it's a good thing fundamentally.. but it makes it pretty tough to target
<andrewsmedina> we wrote some charms too
<m_3> whoohoo!
<SpamapS> One of the openstack core devs was at SCALE and wanted to work on an OSAPI provider
<niemeyer> jcastro: ping
<SpamapS> niemeyer: he may be swapping today, since we all worked the weekend at SCALE
<niemeyer> SpamapS: Thanks, I imagined that could be the case.. was just hoping to catch him passing by to ask about the docs stuff
<SpamapS> still waiting on the IS ticket to build from lp:juju/docs ?
<niemeyer> SpamapS: It's more about how the splitting is going
 * m_3 lunch
<niemeyer> SpamapS: I want to push the charm store spec to the server
<niemeyer> SpamapS: and it's not clear to me where things currently live
<SpamapS> niemeyer: seems like you still have to commit it to lp:juju and then merge to lp:juju/docs until the specs are generated from the right place
<niemeyer> This is a bit weird.. it sounds like juju/docs not only offers no advantage, but is an additional burden only
<SpamapS> I think the eventual goal is to have the possibility of another team owning it
<SpamapS> so ~juju-docs can work on the documentation
<SpamapS> (a hypothetical team I made up to support my hypothesis)
<SpamapS> niemeyer: once the docs are generated from lp:juju/docs directly, the double-maintenance wouldn't be necessary
<niemeyer> SpamapS: Yeah, I understand the end goal, it just sounds like we've made a detour that isn't necessary for it
<niemeyer> SpamapS: We can just take the actual doc files, put in a new branch without the code, and maintain it there from now on
<SpamapS> thats exactly what we did..
<SpamapS> but the step of generating the online displayed version has not completed
<niemeyer> SpamapS: Ok, maybe I misunderstand then.. you seemed to indicate that docs were still being merged in lp:juju, and that this was merged onto lp:juju/docs afterwards?
<SpamapS> Well they will need to be if you want them to be displayed on juju.ubuntu.com
<SpamapS> (which is why they haven't been rm'd from lp:juju just yet)
 * SpamapS checks on status of well known RT ticket
<niemeyer> SpamapS: Understood.. we can publish our own docs elsewhere meanwhile
<SpamapS> hmm.. not so well known.. I think hazmat opened a ticket but I can't seem to find reference to it
<SpamapS> hazmat: whats the ticket to re-direct juju.ubuntu.com/docs to build from lp:juju/docs ?
<hazmat> SpamapS, https://portal.admin.canonical.com/48456
<hazmat> SpamapS, i amended the original request for a cron job from october
<hazmat> which was still open
<hazmat> i'm a little confused as to the delay
<SpamapS> OH
<SpamapS> so it is that one
<SpamapS> I thought I was mistaken
<SpamapS> hazmat: since its your ticket, could you bump the vanguard to take a look?
<SpamapS> its lamont.. he'll nail it
 * SpamapS goes to find food
<hazmat> SpamapS, ok
<niemeyer> hazmat: https://codereview.appspot.com/5570047
<niemeyer> hazmat: It's the same old formula store specification, but revised and cleaned up to reflect what we implemented and the whole renaming
<hazmat> niemeyer, cool
<hazmat> niemeyer, one thing that would be of interest is having a notion of multiple ppas per user
<hazmat> right now we just have the single namespace/ppa per user
<niemeyer> hazmat: Yeah, that's coming
<niemeyer> hazmat: But that's a follow up
<niemeyer> hazmat: There are several ways in which this spec/feature will have to be expanded
<niemeyer> hazmat: But these only matter right now if they'd change what's there
<niemeyer> (as in, being incompatible)
<niemeyer> hazmat: And this adds .lbox to the repository, for better supporting lbox: https://codereview.appspot.com/5575046
<hazmat> nice
<niemeyer> hazmat: So LGTM it there, and I'll lbox submit it :)
<hazmat> niemeyer, i'll definitely have a look, but probably not till later today or early tomorrow, i've got a few other reviews i'm trying to catch up on atm
<niemeyer> hazmat: I meant the latter one
<niemeyer> hazmat: It's a one liner
<hazmat> niemeyer, LGTM ;-)
<hazmat> niemeyer, the first one mostly looks the same on a casual review , so should be straightforward
<niemeyer> hazmat: There, please
<niemeyer> hazmat: Doesn't matter much for this case, but that's the workflow.. lbox submit picks up your presence in the discussion
<hazmat> ah
<niemeyer> hazmat: See the message now: https://codereview.appspot.com/5575046
<niemeyer> hazmat: Same message goes onto the commit
<hazmat> niemeyer, nice, lbox rocks
<hazmat> rust and go have some similar high-level tooling
<pindonga> hi, I'm trying to use juju with openstack... I get it to start bootstrapping, but it never finishes. All I get is : Environment still initializing. Will wait.
<pindonga> any ideas on how to debug this?
<andrewsmedina> pindonga: Have you tried to run in verbose mode?
<andrewsmedina> pindonga: juju -v bootstrap
<_mup_> juju/relation-already-exists-error r444 committed by jim.baker@canonical.com
<_mup_> Better error message
<_mup_> juju/relation-already-exists-error r445 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/relation-already-exists-error r446 committed by jim.baker@canonical.com
<_mup_> Fix add-relation test
<SpamapS> pindonga: do you see your instances starting up?
#juju 2012-01-24
<pindonga> SpamapS, so, regarding the instances never finishing, it seems to be up, as I can ssh into it directly, but juju status will never complete
<niemeyer> Goood mornings!
<andrewsmedina> niemeyer: morning
<niemeyer> andrewsmedina: Hey hey
<wrtp> niemeyer: yo!
<SpamapS> pindonga: ok, perhaps there is some helpful information in /var/log/cloud-init.log or /var/log/cloud-init-output.log
<pindonga> SpamapS, those files don't exist
<SpamapS> pindonga: interesting
<SpamapS> pindonga: anything in /var/log with cloud-init in the name?
<pindonga> no
<pindonga> SpamapS, this is on the machine running juju, yes?
<pindonga> SpamapS, I suspect something with zookeeper, as I'm getting timeouts related to it
<pindonga> 2012-01-24 07:52:34,302:14454(0x7f7c94925700):ZOO_WARN@zookeeper_interest@1461: Exceeded deadline by 12ms
<pindonga> 2012-01-24 07:52:35,757:14454(0x7f7c94925700):ZOO_ERROR@handle_socket_error_msg@1528: Socket [127.0.0.1:37230] zk retcode=-7, errno=110(Connection timed out): connection timed out (exceeded timeout by 0ms)
<pindonga> that's just one example
<_mup_> juju/trunk r448 committed by jim.baker@canonical.com
<_mup_> merge relation-already-exists-error [r=clint-fewbar,hazmat][f=837724]
<_mup_> Provide a better error message if the relation already exists.
<bac> hi i'm having problems with juju 0.5+bzr447-0ubuntu1 in that config-get returns nothing.  previous version worked as does the one in the PPA, also for revno 447.
<m_3_> bac: I have charms deployed using 447 that use default config values correctly
<m_3_> lemme spin up something else with passed-in config
<bac> m_3 from ppa or regular repo?
<SpamapS> bac: can you share your config.yaml ?
<m_3_> bac: ah, right from the ppa
<SpamapS> bac: the precise PPA and precise have the same code
<hspencer> mornin
<robbiew> m_3: we're all set for devopsday in Austin
<m_3_> robbiew: woohoo!  thanks for taking care of that
<robbiew> m_3_: np -> http://devopsdays.org/events/2012-austin/
<robbiew> sponsorship, done.
<bac> hi SpamapS, here is config.yaml with transcripts of the failure.  this exact charm was previously deployed with the 447 ppa.  http://paste.ubuntu.com/815592/
<_mup_> juju/bool-and-validate-defaults r448 committed by kapil.thangavelu@canonical.com
<_mup_> add boolean servcie config type, validate schema defaults when parsing
<_mup_> juju/bool-and-validate-defaults r449 committed by kapil.thangavelu@canonical.com
<_mup_> sample configuration is reused by other tests :-(, use a separate config for additional type tests
<gary_poster> hi.  I'm trying to run juju with lxc on precise.  I have an existing non-juju-related lxc instance on the machine that is working fine.  When I try to run juju bootstrap, I get the error here.  http://pastebin.ubuntu.com/815678/  hazmat has helped people with this before: http://askubuntu.com/questions/67156/juju-bootstrap-on-local-machine-gives-an-error http://irclogs.ubuntu.com/2011/10/05/%23juju.txt
<gary_poster> I don't have the libvirtd group, even after a restart
<gary_poster> otoh, sudo virsh net-start default fails identically, so that doesn't seem the same to me.
<gary_poster> I suppose I'll go randomly add myself to libvirtd to see if that helps, but I wonder why I'm not a member already
<hazmat> gary_poster, the libvirt-bin package install normally takes care of that
<gary_poster> If anybody has any ideas, I'd receive them gratefully :-)
<gary_poster> yeah I figured
<gary_poster> I could dpkg-reconfigure and see what happens maybe?
<SpamapS> gary_poster: are you an admin on the machine?
<SpamapS>     for u in $(grep "^admin:" /etc/group | sed -e "s/^.*://" -e "s/,/ /g"); do
<SpamapS>         adduser "$u" libvirtd >/dev/null || true
<SpamapS>     done
<hazmat> gary_poster, perhaps.. or just work around by hand, there's another pre-requisite related issue i just started working on, namely that juju lxc always creates oneiric containers.. regarding the output you're seeing, that's a little odd, juju should be detecting the status of the libvirt network.. i'll take a look and see if there's anything obvious
<gary_poster> SpamapS, no.  My os had to be created manually--via debootstrap and adduser.  I'll add myself to admin
<gary_poster> uh
<gary_poster> adduser: The group `admin' does not exist.
<hazmat> nothing obvious, it does check the network status before attempting to start
<SpamapS> gary_poster: oh you're running your own custom mini-buntu?
<gary_poster> my edge appears to be bleeding
<hazmat> sounds painful
<gary_poster> :-) SpamapS, I guess so
<SpamapS> hazmat: perhaps a wishlist bug to verify that the user is in 'admin'
<hazmat> SpamapS, sounds good
<gary_poster> ok, so...should I bother rebooting now that I am in libvirtd, or do my other symptoms suggest that it wouldn't help anyway??
 * gary_poster thought he was running precise as installed a la https://help.ubuntu.com/11.04/installation-guide/hppa/linux-upgrade.html
<gary_poster> Well, I am running precise
<gary_poster> I wasn't aware that doing it this way would make things so customized.
 * gary_poster will try rebooting...
<gary_poster> coulda just relogged in, but anyway...
<gary_poster> that didn't help
<gary_poster> though I am now a member of libvirtd
<gary_poster> virsh net-start default gives me the same error
<gary_poster> hazmat ^^ any other ideas?
<SpamapS> gary_poster: hppa ?
<SpamapS> gary_poster: you may be a member, but you don't get those groups fully until you log out and back in
<gary_poster> SpamapS, not sure what you mean by hppa.  I don't have a group by that name.  I rebooted after manually adding myself to libvirtd, and groups now reports that I am a member of it
<SpamapS> gary_poster: groups isn't sufficient though. you won't actually have the permissions of that group until you've logged in again... but rebooting would suffice
<gary_poster> yeah
<SpamapS> gary_poster: and I said hppa because you linked to the 'hppa' install guide.. which I didn't know existed.
<gary_poster> SpamapS, oh, hah.  yeah, I was trying to install Precise on a Macbook Pro with a missing optical drive.  I tried every USB stick I could find, and then some optical drive replacement tricks, and then...oh some other approach I found.  Then I saw the debootstrap approach and figured that ought to work with what I already had (a separate oneiric partition)
<gary_poster> This is the first hint that things might be a little odd for me
<SpamapS> gary_poster: most people just use VMs. :)
<gary_poster> SpamapS, heh.  :-) I did that for a long time, but was getting pressure to join the Ubuntu-on-metal club.
<SpamapS> gary_poster: oh.. and you didn't want to just backup/install/restore ?
<SpamapS> gary_poster: oh sorry now I'm gathering.. your usb sticks all failed
<SpamapS> :-/
<gary_poster> SpamapS, right :-/
<gary_poster> Well...for some reason net-list started showing default, without me doing anything.  I did a net-destroy default before juju bootstrap (perhaps unnecessarily) and it worked.  So...concerning, but it moves me along
<bac> SpamapS: did you see the config.yaml i posted earlier?
<SpamapS> bac: sorry I didn't, backscrolling now
<bac> SpamapS: i know it sounds dubious, but i've verified repeatedly the PPA for r447 works where the one in the repo does not
<SpamapS> bac: weird! so yeah I'd expect it to show you a yaml with at least the default value for the file. that looks like a serious bug
<SpamapS> bac: yeah that is definitely weird
<SpamapS> bac: can you make sure you are deploying with juju-origin: distro  in your environment config?
<bac> SpamapS: i do not have that in my environments.yaml
<bac> SpamapS: my redacted environments.yaml looks like http://paste.ubuntu.com/815750/
<SpamapS> bac: do you still have the broken instance running?
<SpamapS> bac: if you do, apt-cache policy juju on it
<bac> SpamapS: i do not.  i needed to make some progress so i'm running with the ppa version
<bac> i can easily revert if it'll give you helpful info
<SpamapS> bac: no worries, I can repeat
<bac> SpamapS: http://paste.ubuntu.com/815765/
<SpamapS> bac: but thats after you installed from the PPA..
<bac> SpamapS: correct, but it shows the origin of the other.  i assumed that might be helpful
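For reference, a hypothetical minimal environments.yaml carrying the `juju-origin: distro` setting SpamapS asks about; every other value here is invented for illustration:

```shell
# Generate a sample environments.yaml; juju-origin: distro tells juju to
# install itself from the Ubuntu archive on new machines rather than a PPA
# or branch, which is what makes bac's PPA-vs-distro comparison meaningful.
cat > environments.yaml <<'EOF'
environments:
  sample:
    type: ec2
    control-bucket: juju-some-unique-bucket
    admin-secret: some-secret
    juju-origin: distro
EOF
grep -c 'juju-origin: distro' environments.yaml
```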
<gary_poster> OK, here's another issue.  Per Nick Barcet's demo instructions, I checked out the charms.  I now have ~/charms/charm-repo with lots of charms
<gary_poster> Then I try the command he gave:
<gary_poster> juju deploy --repository /path/to/charm-repo local:mysql
<gary_poster> I get "ERROR Charm 'local:oneiric/mysql' not found in repository /home/gary/charms/charm-repo"
<gary_poster> bac tells me I can just make an oneiric directory and it will all work out
<gary_poster> (if I link the charms in that directory)
<gary_poster> but it seems like there's a bug in there somewhere
<koolhead17> gary_poster: whats your current working directory
<koolhead17> i had same issue and i remember eating lots of SpamapS time :)
<gary_poster> koolhead17, heh.  I did it in the repo dir and in my home dir with same result.  on call now tho
<koolhead17> gary_poster: i would try this
<koolhead17> juju deploy --repository /usr/share/doc/juju/examples local:mysql
<koolhead17> to start with :)
<koolhead17> and see if am getting error with it
<SpamapS> gary_poster: should be ~/charms/oneiric as the root of all charms
<hazmat> gary_poster, juju expects to find the series name under the repo dir.. so it should be repo_directory/<release-series>/mysql
<koolhead17> so gary_poster you have to have directory/oneiric/mysql
<koolhead17> hazmat: in my case i had a bug too, the revision file has to have a non-zero value :)
<gary_poster> :-) ok thanks guys I will adjust after call
<hazmat> koolhead17, that's been improved, it will report charms with error in the repo
<koolhead17> hazmat: will try tomorrow from office, i will have to upgrade the juju pkg which i am using from the PPA
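The local repository layout hazmat and SpamapS describe can be sketched as below; the charm name comes from the log, and the non-zero revision file follows koolhead17's note (newer juju reports such problems itself, per hazmat):

```shell
# Build the layout juju expects: <repo>/<release-series>/<charm-name>
mkdir -p charm-repo/oneiric/mysql
echo 1 > charm-repo/oneiric/mysql/revision   # non-zero revision
# then deploy with:
#   juju deploy --repository ./charm-repo local:mysql
ls charm-repo/oneiric
```

This is why `local:mysql` resolves to `local:oneiric/mysql` and fails when the charms sit directly under the repository root.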
<koolhead17> jcastro: around
<jcastro> yeah
<koolhead17> did i do something wrong? i don't see the modified documentation merged :(
<jcastro> koolhead17: ? I don't know, I don't run the docs
<jcastro> koolhead17: did you submit a merge proposal?
<koolhead17> yes
<koolhead17> https://code.launchpad.net/~koolhead17/juju/jujudoc
<jcastro> hazmat: you are watching for docs merge proposals right?
<koolhead17> hspencer: hey there
<hazmat> jcastro, i am now
<hazmat> https://code.launchpad.net/~juju/juju/docs/+activereviews
<jcastro> hmmm, how come his branch doesn't show up there?
<koolhead17> jcastro: hazmat was my path supposed to be https://code.launchpad.net/~koolhead17/juju/docs/jujudoc instead what it is currently ?
<gary_poster> (I don't think you can create that alternate path)
<gary_poster> If I'm wrong that will be mildly embarrassing since I've worked on the LP team for a few years :-)
<koolhead17> gary_poster: :P am total n00b jelmer helped me when i started using it :P
<gary_poster> :-)
<hazmat> koolhead17, your push path is lp:~koolhead17/juju/<your-branch-name>
<koolhead17> yeah, so its correct :)
<hazmat> koolhead17, jcastro, that branch/merge is against juju trunk not the docs branch
<koolhead17> gosh
<koolhead17> i will need some help
#juju 2012-01-25
<hspencer> hey koolhead17|zzZZ
<hspencer> sorry
<hspencer> was in meetings all day
<hspencer> whats goin on?
<niemeyer> Good morning!
<lamont> ./1
<andrewsmedina> hi
<andrewsmedina> In my attempt to use juju + openstack, when I run "juju status" it returns this error
<andrewsmedina> 2012-01-25 14:07:39,567:1419(0x7fdf04f58700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:34932] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
<_mup_> juju/local-respect-series r448 committed by kapil.thangavelu@canonical.com
<_mup_> allow creating precise lxc containers
<Tim__> Hello
<_mup_> juju/local-respect-series r449 committed by kapil.thangavelu@canonical.com
<_mup_> lxc unit deployment parameterizes series
<hazmat> andrewsmedina, that's with juju status -v ? does it eventually connect? what version of juju and which provider are you using?
<andrewsmedina> hazmat: I'm using -v, and using the ec2 provider with openstack
<andrewsmedina> my juju version 0.5+bzr443-1juju2~oneiric1
<hazmat> andrewsmedina, does it eventually complete? right after bootstrap it can take some time for the remote machine to finish installation and get zk setup.. the client will retry automatically, so some of the zk messages are noise in that regard.. but that should stop after the bootstrap instance is fully functional
<andrewsmedina> hazmat: in vm log "cloud-init boot finished at Wed, 25 Jan 2012 16:15:56 +0000. Up 1290.47 seconds"
<andrewsmedina> is juju status slow with openstack?
<hazmat> andrewsmedina, should be the same, it sounds like a connectivity issue perhaps based on zk not running
<andrewsmedina> this looks like a timeout problem
<hazmat> andrewsmedina, what version of openstack? are you able to login into the machine?
<andrewsmedina> I can ping and access the vm via ssh
<andrewsmedina> diablo
<hazmat> andrewsmedina, its able to setup the ssh tunnel, and cloud-init is finished.. so i would imagine zk should be running unless there's an error
<hazmat> andrewsmedina, if you can ssh into the machine can you check if zookeeper is running (should be the only java process)
<andrewsmedina> hazmat: it's in the log too "juju-admin: error: unrecognized arguments: 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04"
<hazmat> andrewsmedina, could you pastebin the log
<SpamapS> andrewsmedina: that looks like maybe the metadata service returned bad data
<SpamapS> or userdata service more likely
<andrewsmedina> http://dpaste.org/FpsiB/
<Tim__> Hello, as it's "Office Hours", is there somebody aware of ical server charms being written for 12.04?
<hazmat> Tim__, none that i know of.. the apple caldav server would be nice .. http://trac.calendarserver.org/
<hazmat> lots of others out there though
<Tim__> Yes, or Davical.
<hazmat> andrewsmedina, thanks, could you pastebin the contents of /var/lib/cloud/instance/user-data.txt
<hazmat> andrewsmedina, fwiw there's a handy cli tool for this pastebinit
<SpamapS> Tim__: best place to look is https://bugs.launchpad.net/charms
<SpamapS> Tim__: if its not there, file it, as 'Charm Needed: ical server' with a description of the software you want to work on. Then assign the bug to yourself.
<Tim__> Thanks SpamapS,  I will see if something is there
<Tim__> I do not expect myself to start writing charms a.t.m. still evaluating the usefulness for our organisation.
<Tim__> No bugs with "ical", "calendar" or "agenda"..
<SpamapS> Tim__: could be as easy as 'charm create calendarserver' :)
<SpamapS> or davical
<SpamapS> Interesting...
<SpamapS> calendarserver depends on memcached.. thats a silly thing to do since memcached is a network daemon
<SpamapS> looks like davical uses postgres
<jcastro> I just saw davical. :)
<Tim__> Yes, davical can only work with postgres as far as I know.
<jcastro> I hadn't filed the bug on it yet, it does sound pretty awesome
<jcastro> SpamapS: robbiew: When we go to the openstack summit, you guys can roll on my bus: http://www.openstack.org/conference/san-francisco-2012/
<SpamapS> OH SNAP
<robbiew> heh
<Tim__> Apple's calendarserver won't always work on Linux. It's not supported by Apple. :-)
<SpamapS> It would be nice to have something even 1/10th as good as google calendar that could be deployed for personal use
<jcastro> Tim__: are you interested in working on davical? That would be great
<Tim__> We created a bash script for an older version a few years ago, never actually put it into production..
<Tim__> Let me ask.
<SpamapS> It should be dead simple to do since it is in the archive already
<Tim__> My colleague here says that bash script is not finished. It was based on some tutorial on the internet.
<jcastro> if you get that working it shouldn't be too much work charming it
<Tim__> Is postgres "charmed"?
<SpamapS> Tim__: it is, though it needs some improvement to be tuned properly.
<jcastro> yup
<jcastro> Here's a good intro: https://juju.ubuntu.com/docs/write-charm.html#have-a-plan
<jcastro> of the things to think about before charming
<SpamapS> Oo, good t-shirt idea
<SpamapS> Write Charm. Have a plan.
<SpamapS> http://bit.ly/xey5cj
<SpamapS> with that image on it
<Tim__> Let me ask my colleague, maybe he'd like to do it?
<SpamapS> jcastro: ^^
<SpamapS> robbiew: ^^
<jcastro> Tim__: that would be great!
<jcastro> We'd be more than happy to answer questions, etc.
<robbiew> SpamapS: heh...well you're free to make any shirt you want to pay for ;)
<koolhead17> hi all
<SpamapS> robbiew: yes, but its so much easier to incite you to do my evil deeds for me. :) <hypnoticvoice>you know you want that shirt</hypnoticvoice>
<robbiew> lol..good luck with that ;)
<jcastro> SpamapS: usshurper
<SpamapS> jcastro: been thinking about it all week. :)
<koolhead17> T-Shirt
<_mup_> juju/local-respect-series r450 committed by kapil.thangavelu@canonical.com
<_mup_> propogate release series to machine agent and unit deployments from local provider
<Tim__> Me and my colleague could look into it at some point but we do not expect to finish before the 12.04 release. Nor would we support anything that's not directly related to our own organisation.
<negronjl> m_3: ping
<jcastro> Tim__: that's fine, mostly working would be great, we can find someone to finish it
<jcastro> Tim__: and we keep the charm store open past release, we don't hard freeze charms until way after
<jcastro> SpamapS: actually, for LTSes we should keep that branch open for longer than the typical 6 months we talked about right?
<SpamapS> jcastro: its open forever, all of them are...
<SpamapS> jcastro: just that the dev focus changes to the new release of Ubuntu
<jcastro> oh, so when you guys said open for development, we don't close the old ones, right.
 * jcastro nods
<SpamapS> jcastro: so people will still be able to push new stuff into lp:charms/precise/foo for a long time
<SpamapS> The only rule we're going to make there is that interfaces *must not* change.
<SpamapS> We may also get heavy handed and say that upstream versions shouldn't change either.
<SpamapS> But.. I kind of want to stay out of that water. ;)
<jcastro> that's why I want that --upstream switch in each one.
<SpamapS> Yeah, maybe the default upstream version can't change.
<Jorge> Hi, I would like to know if there is a way to send parameters to an instance when using juju? I have an openstack private cloud and I would like to test juju. But I am behind a proxy, and when I run 'juju bootstrap' the instance tries to install some packages from an apt mirror. So I need to tell the instance to use a proxy, or use my mirror. Is that possible?
<Jorge> In a common instance I can use -f with euca-run-instances and send cloud-config files or bash scripts.
<SpamapS> Jorge: your best bet is to override archive.ubuntu.com with your local mirror's IP in your local DNS
<Jorge> I'm running Diablo, installed from repo (ubuntu oneiric).
<Jorge> SpamapS: OK.
<SpamapS> Jorge: there are two bugs open that discuss this.. bug 809634 and bug 897645
<_mup_> Bug #809634: allow additional user-defined userdata/cloud-config <juju:New> < https://launchpad.net/bugs/809634 >
<_mup_> Bug #897645: juju should support an apt proxy for private clouds <cloud-init:Fix Committed> <juju:Confirmed> < https://launchpad.net/bugs/897645 >
<SpamapS> Jorge: If either of those would be helpful to you, it would help us if you noted that you are affected by them, and/or commented on them.
<Jorge> SpamapS: I'm going to note those bugs then! They really affect me!
<Jorge> I'm testing with a ubuntu cloud image. If I configure juju to use a customized image and I configure that image with the proxy environment variables, will it work?
<Jorge> Or juju just works with ubuntu cloud images?
<SpamapS> Jorge: yes that will work fine
<SpamapS> Jorge: you can specify the image id to use in environments.yaml
<Jorge> Great, i'm going to change my image! Thanks! :)
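For reference, the image override mentioned above goes in ~/.juju/environments.yaml. A sketch for an EC2-compatible OpenStack setup, assuming the juju 0.5-era ec2 provider keys (ec2-uri, s3-uri, default-image-id are recalled from this era and worth double-checking); every value here is a placeholder:

```yaml
environments:
  openstack-test:
    type: ec2                       # the ec2 provider also talks to EC2-compatible OpenStack endpoints
    ec2-uri: http://172.16.0.1:8773/services/Cloud    # placeholder nova endpoint
    s3-uri: http://172.16.0.1:3333                    # placeholder storage endpoint
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-some-globally-unique-name
    admin-secret: some-long-secret
    default-image-id: ami-00000001  # your customized image with the proxy variables baked in
    default-series: oneiric
```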
<Tim_Blokdijk> About the iCal server, whatever happens, the first thing needed is to report a bug at https://bugs.launchpad.net/charms correct?
<jcastro> yep
<jcastro> something like "Charm Needed: davical"
<jcastro> and then have a link to the project
<jcastro> also having a direct link to the project's install instructions can be helpful.
<Tim_Blokdijk> Ok, I will report a bug about it then.
 * SpamapS needs to finish his dbconfig-common charm helper so we can just automatically charm all the packages that depend on it.
<jcastro> koolhead17: your conference pack is on the way!
<koolhead17> jcastro: thank you sir!! i have 2 conf in feb now!! :)
<Tim_Blokdijk> Alright, I wrote a bug report for an iCal server charm. https://bugs.launchpad.net/charms/+bug/921772
<_mup_> Bug #921772: Charm Needed: iCal server for calendar/agenda functionality <agenda> <calendar> <calendarserver> <davical> <ical> <server> <Juju Charms Collection:New> < https://launchpad.net/bugs/921772 >
<koolhead17> hazmat: let me know when you have sometime
<Tim_Blokdijk> jcastro still here?
<jcastro> yep
<Tim_Blokdijk> The CalendarServer Debian maintainer would like to have version 3.1 uploaded in 12.04.. that would be really nice. Current version is 2.4 https://answers.launchpad.net/ubuntu/+source/calendarserver/+question/185654
<Tim_Blokdijk> He posted that yesterday.
<jcastro> ah so we need to turn that into a sync request.
<Tim_Blokdijk> I guess so?
<jcastro> I posted
<jcastro> basically, when he's ready, he can file a bug
<jcastro> and then someone will sync it
<jcastro> m_3: have you done your travel for strata yet?
<Tim_Blokdijk> Thanks, 3.1 would be a good basis to do that charming thingy on top of.
<SpamapS> me too :)
<SpamapS> jcastro: err rather, I posted too
<gary_poster> Hi.  Given this install hook http://pastebin.ubuntu.com/817000/ and this config.yaml http://pastebin.ubuntu.com/816999/ (notice installdir's default of /tmp/buildbot at the top) I get a failure to install, apparently because `config-get installdir` does not return a value.  bac said he came here with a similar problem yesterday (using the same files/charm) and using the juju from the ppa fixed it for him, but I'm running from
<gary_poster> the ppa (0.5+bzr447-1juju2~precise1) and no luck.  Could someone help me diagnose this?
<gary_poster> A demo of why I think config-get installdir is not working is that the install has "BUILDBOT_DIR=`config-get installdir`" and "juju-log "Creating master in $BUILDBOT_DIR"", and running ./install in juju debug-hooks results in "juju-log "+ juju-log 'Creating master in '" (notice the empty string)
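For context, the failing pattern boils down to something like the sketch below. config-get and juju-log only exist inside a juju unit, so they are stubbed here; the stub value mirrors the /tmp/buildbot default from the pasted config.yaml.

```bash
#!/bin/bash
# Stubs standing in for juju's hook tools, so the hook logic can be
# exercised outside a unit. In a real unit these come from the agent.
config-get() { echo "/tmp/buildbot"; }   # stub: real config-get reads the service config
juju-log()   { echo "juju-log: $*"; }    # stub: real juju-log writes to the unit log

BUILDBOT_DIR=$(config-get installdir)
# The bug being chased: on the broken setup this expansion came back empty.
if [ -z "$BUILDBOT_DIR" ]; then
    juju-log "config-get installdir returned nothing"
    exit 1
fi
juju-log "Creating master in $BUILDBOT_DIR"
```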
<gary_poster> SpamapS, any chance you have a sec for this?
<gary_poster> "a sec" -> "some indeterminate amount of time, constrained by the fact that I will be at EoD soon :-)"
<SpamapS> gary_poster: sure
<SpamapS> gary_poster: I'm going to run out to lunch in about 10 minutes tho
<gary_poster> SpamapS, cool.  Can try again tomorrow.  What can I do now?
<SpamapS> gary_poster: did you try 'juju set your-service-name installdir=/something/that/makes/sense' ?
<hazmat> gary_poster, can you push the charm to lp.. per https://juju.ubuntu.com/Charms Contributing Charms
<hazmat> gary_poster, i can take a look at it
<SpamapS> gary_poster: just to be careful, try wrapping /tmp/buildbot in quotes.. though I think yaml will interpret it as a string..
<gary_poster> SpamapS, no.  I'll try.  hazmat, ok thanks will do.  We have it on lp now but in a random crazy place (lp:~yellow/launchpad/buildbot-master)
<hazmat> gary_poster, anywhere i can get to it should work, if its already published
<gary_poster> hazmat, ok cool.  Yeah, bac has been working on this.  It's public
<hazmat> gary_poster, are you deploying the charm with the config?
<hazmat> gary_poster, if you use juju set  .. to set the config values, it won't be picked up till after the install hook has already run
<gary_poster> hazmat in this case the question is why the default value is not being picked up
<SpamapS> perhaps there is a bug
<SpamapS> we did change some code in that area recently IIRC
<hazmat> yeah.. that config looks fine
<hazmat> yeah.. i changed the config values to pick up defaults dynamically, instead of serializing defaults as initial values.
<hazmat> revno 446
<_mup_> juju/ssh-known_hosts r485 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<gary_poster> fwiw, SpamapS, putting quotes around it in the default did not work.  I didn't try the "set" because of what hazmat said (install will already have been run).  I futzed around and the only other thing I discovered is that juju get buildbot-master gives me back the value I expect for installdir.
<hazmat> gary_poster, so deploying that charm, and using debug-hooks i do see installdir set correctly
<hazmat> http://paste.ubuntu.com/817054/
<hazmat> from the install hook
<hazmat> that's the output of config-get
<gary_poster> hazmat, so you see Creating master in /tmp/buildbot in the log also?
<gary_poster> Though I was using config-get too
<gary_poster> in the lxc
<gary_poster> (I need to leave in 5)
<hazmat> gary_poster, it looks like its working correctly to me, i see a buildbot running
 * hazmat looks around for a unit log to pastebin
<gary_poster> hazmat, ok.  So...something is wrong in my environment...
<gary_poster> hazmat, are you on precise or oneiric?
<gary_poster> though it is working fine for bac today, and was not yesterday, and he is on precise
<hazmat> gary_poster, but to clarify what are you using for a provider? and what value for default-series
<gary_poster> hazmat, default-series oneiric; provider I'm guessing you mean lxc (rather than ec2)
<hazmat> gary_poster, here's the log.. http://paste.ubuntu.com/817057/
<hazmat> gary_poster, yup thanks
<hazmat> gary_poster, i ran it on oneiric
<hazmat> bzr revno 5 of the charm
<gary_poster> yeah, that's what I'm using too (though I had the same problem with 4)
<gary_poster> ok...hazmat, so maybe I'll reinstall juju...?  Not sure what could explain this so I'm at the rattle-the-windows-and-see-if-the-engine-starts stage
<gary_poster> That log looks happy enough
<hazmat> gary_poster, could you pastebin the environ of the machine agent (from /proc/pid/environ)
<hazmat> its possible that, if it was pointing to 'distro' instead of 'ppa', you were getting an old version of juju deployed
<hazmat> gary_poster, an easy way to verify is to set juju-origin: ppa in local environment section of environments.yaml
<hazmat> and if it works, that's it.. the environment of the machine agent would be more definitive/quicker to verify though
<hazmat> the environments.yaml change needs a new bootstrap
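hazmat's suggestion as a concrete fragment: the only meaningful line is juju-origin, the rest are illustrative local-provider settings (and, as he notes, it only takes effect after a fresh bootstrap).

```yaml
environments:
  local:
    type: local
    data-dir: /home/you/.juju/data
    default-series: oneiric
    juju-origin: ppa   # install juju from the PPA inside containers, not the (older) distro package
```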
<gary_poster> hazmat: the proc of what? the lxc instance?
<hazmat> gary_poster, the juju machine agent ... $ ps aux | grep machine  should show it
<hazmat> its on the host
<hazmat> not the container, but it configures the containers with a base juju environment
<gary_poster> got it.
<hazmat> hmm.. actually .. even better would be a pastebinit of the master-customize.log in  $data-dir
<gary_poster> hazmat, http://pastebin.ubuntu.com/817062/
<hazmat> $data-dir/units/master-customize.log
<hazmat> thanks
<hazmat> gary_poster, the environ thing has null chars, can't be piped readily
<gary_poster> ah sorry
<hazmat> gary_poster, no worries that master-customize.log has even more info
<hazmat> bcsaller, is that lxc debug ready to land?
<bcsaller> hazmat: afaik, yes
<hazmat> bcsaller, would be helpful to have a single script we could just ask people to run
<bcsaller> thats why I wrote it
<bcsaller> I might not capture everything thats useful in there now, but we can add to it, its a good start
<gary_poster> hazmat, http://pastebin.ubuntu.com/817066/ is proc
<gary_poster> I have to run go get child now or big trouble :-P
<gary_poster> will leave this around and check back later or tomorrow
<gary_poster> thank you hazmat
<gary_poster> Also will get that file and send
<gary_poster> bye
<hazmat> gary_poster, cheers
<hazmat> aha that was the problem
<hazmat> JUJU_ORIGIN=distro
<hazmat> ah.. i wonder.. if its a precise machine deploying an oneiric container, then distro represents two entirely different versions
<SpamapS> hazmat: good example of why we shouldn't infer what the user intends just because they're deploying *from* a particular release of Ubuntu. :)
#juju 2012-01-26
<_mup_> Bug #921895 was filed: local provider should offer ability to turn on '--force-unsafe-io' for dpkg to speed up package installs. <juju:New> < https://launchpad.net/bugs/921895 >
<mpl> rog: and tomorrow is?
<mpl> hi all
<rog> mpl: almost the weekend!
<mpl> hehe, that's right.
<mpl> hopefully I'll finally have some time for some juju hacking then
<hazmat> gary_poster, ping
<gary_poster> hey hazmat
<hazmat> gary_poster, i think i might have a line on the problem you were experiencing, but one question first.. what ubuntu version are you running on the host?
<gary_poster> hazmat, did you see the email I sent you?  I think it was explainable simply by my environment.yaml (but maybe I'm wrong :-) ).  My host is precise.
<hazmat> gary_poster, cool, so i think the problem is the 'distro' version of juju differs between precise and oneiric significantly. the oneiric containers are getting a much older version of juju.. if you add a line to your environment with 'juju-origin': 'ppa' that should resolve it.
<gary_poster> hazmat, cool, yeah, it did, thanks.  Like I said in email, I also added an edit to http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage to hopefully help others.  I actually explicitly had "juju-origin: distro" because of cargo-cult (or at least following-the-fine-manual-cult).
<hazmat> doh
<gary_poster> :-)
<smoser> hazmat, ping
<hazmat> smoser, pong
<smoser> local provider.... if i just juju deploy say N things at once
 * hazmat nods
<smoser> is there anything that will protect the initial create from happening N times ?
<hazmat> smoser, serial instantiation
<smoser> i think in juju terms, that is the creation of "master template" that i'm talking about.
<smoser> are you saying I should do that serially? or you're saying juju covers that for me..
<hazmat> smoser, indeed, juju should cover that for you
<hazmat> it will create the unit containers serially not in parallel
<hazmat> smoser, question for you re getting a cloud image suitable for lxc
<smoser> so i'm just curious, where/how is that accomplished?
<smoser> cloud images should boot in lxc.
<hazmat> afaics, i need to use qemu-nbd or guestfs to mount the qcow2 and then copy it over to an fs dir or lvm mount
<smoser> ah. don't use the disk image
<smoser> and you might be duplicating effort
<hazmat> smoser, what should i use?
<smoser> utlemming might also be looking at creating 'ubuntu-cloud-image' (ie, in 'lxc-create -t ubuntu-cloud-image')
<smoser> the easiest consumable thing for you is the .tar.gz file
<hazmat> it should be an additional option to the ubuntu template
<hazmat> maybe not
<hazmat> smoser, what format is that in? tar.gz?
<hazmat> i mean the files inside
<smoser> you'll unfortunately have to extract it (tar -Sxvzf image.tar.gz "*.img" && mount -o loop *.img /mnt && rsync -a ....)
<smoser> .img file, kernel, ramdisk.
<smoser> we used to create a .img.tar.gz, which was just a sparse partition image compressed with sparse tar
<smoser> but stopped doing that.
<smoser> so you'll waste the download of the compressed kernel and ramdisk
<smoser> more annoying (to me) is that the .tar.gz isn't immediately useful like the qemu image is.
<smoser> you can use the qemu image, if you'd like. but using the partition image will mean you don't have to deal with qemu-nbd
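smoser's extraction recipe, sketched end-to-end. Downloading a real cloud image isn't practical here, so the block fabricates a stand-in tarball with the same shape (a sparse partition image plus kernel and ramdisk); the tar flags are the point.

```bash
#!/bin/bash
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the downloaded cloud-image tarball: a sparse .img plus
# kernel and ramdisk placeholders, matching the layout described above.
truncate -s 64M disk.img
touch vmlinuz initrd
tar -Sczf image.tar.gz disk.img vmlinuz initrd
rm disk.img vmlinuz initrd

# Pull out only the partition image; -S keeps it sparse on disk
# (GNU tar; --wildcards enables the '*.img' member pattern).
tar -Sxzf image.tar.gz --wildcards '*.img'

# Mounting it and copying into an LXC rootfs would then need root:
#   mount -o loop disk.img /mnt && rsync -a /mnt/ /var/lib/lxc/<name>/rootfs/
```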
<smoser> hazmat, if it would make your life massively simpler, i would consider adding a filesystem contents tarball
<smoser> although i worry about somewhat arbitrarily adding things to the downloads.
<smoser> hazmat, SpamapS pointed me at http://paste.ubuntu.com/816820/ which he started on
<hazmat> smoser, thanks, thats useful
<smoser> hazmat, so were you looking to create an lxc template script?
<hazmat> smoser, i was just looking at the problem, we could either bypass and do the download in juju or use a new template script
<smoser> i think the template script in lxc is generally more useful
<smoser> it serves a wider audience.
<hazmat> probably won't fly with others, but i was leaning towards trying to do the download in juju to be able to give the user some feedback
<hazmat> true
<smoser> a middle of the road solution..
<smoser> would be to let juju do the download (duplicating download code)
<smoser> and then let it pass '--image' to lxc
<smoser> although.. i don't know.
<dpb_> Was looking for information on how to properly configure .juju/environments.yaml for an openstack deployment.  I understand the basic stuff (url, key, secret key), but what about control-bucket and admin-secret?
<dpb_> Also, do I need a special branch?  or would the packages on oneiric suffice?
<m_3> dpb_: control-bucket and admin-secret can be anything afaik... juju generates it when you try to run 'juju' without an environments.yaml file
<m_3> dpb_: I typically let vi replace it with `echo "some junk" | md5sum` when creating a new env
<jimbaker> m_3, the control bucket simply needs to be unique globally for s3
<m_3> right.. thanks
<dpb_> m_3: OK, great, thanks for the info
<m_3> dpb_: here's a cleaned-up example I use atm http://paste.ubuntu.com/817806/
<dpb_> m_3: awesome, thanks!
<m_3> dpb_: obviously, make the default match.. that's a typo
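m_3's `echo "some junk" | md5sum` trick, as a small script: the only hard requirement (per jimbaker) is that the control-bucket be globally unique across S3, so hashing random bytes works fine. Names are illustrative.

```bash
#!/bin/bash
set -e
# Derive a globally-unique bucket name and a throwaway admin secret
# from /dev/urandom; both values go into environments.yaml.
control_bucket="juju-$(head -c 16 /dev/urandom | md5sum | cut -d' ' -f1)"
admin_secret="$(head -c 16 /dev/urandom | md5sum | cut -d' ' -f1)"
echo "control-bucket: $control_bucket"
echo "admin-secret: $admin_secret"
```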
<jorge> hi folks! I'm testing juju with a diablo openstack installation. when i run 'juju status', i get an error but the command does what is expected.
<jorge> root@sold016:~# juju status
<jorge> 2012-01-26 14:07:54,476 INFO Connecting to environment.
<jorge> 2012-01-26 14:07:59,576 ERROR SSH forwarding error: bind: Cannot assign requested address
<jorge> machines:
<jorge>   0: {dns-name: 172.16.0.2, instance-id: i-0000001b}
<jorge> services: {}
<jorge> 2012-01-26 14:08:05,313 INFO 'status' command finished successfully
<jorge> Is that normal?
<m_3> jorge: it's normal for some networking setups
<jorge> ok
<m_3> jorge: you have to associate a public address with the bootstrap instance
<jorge> now I lost my connection.
<m_3> then it should respond to subsequent juju commands
<jorge> Cannot connect to machine i-0000001b (perhaps still initializing): could not connect before timeout after 2 retries
<koolhead17> m_3: is there a proper doc on this part available somewhere? i tried my best and failed to get it working.
 * m_3 looking for using openstack with juju docs
<m_3> koolhead17: ha!
<koolhead17> and i thought it's happening because the whole openstack infra is behind a proxy
<koolhead17> even nova
<m_3> koolhead17: the setup can vary... greatly
<koolhead17> m_3: yeah i have not been successful with my deployment :(
<m_3> koolhead17: lemme see if I can find my notes for the openstack cloud I'm using now
<koolhead17> although juju works with my LXC
<jorge> i'm using proxy too
<jorge> when running the euca* commands you cannot have the http_proxy variables set up.
<m_3> so I'm using euca2ools... that's probably the first place to start
<m_3> from a machine where you can hit the endpoint urls directly
<m_3> (so either your endpoints need to be public or your client machine needs to be in a visible network segment)
<SpamapS> jorge: if there is a box in between you and the instances that you can SSH bounce off of, you don't need a public address, btw
<SpamapS> jorge: you just have to setup ~/.ssh/config to bounce properly
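The bounce setup SpamapS refers to is plain OpenSSH configuration; a sketch with placeholder host names (`-W` needs OpenSSH 5.4+; older clients used `ProxyCommand ssh bouncehost nc %h %p`):

```
# ~/.ssh/config -- host names and addresses are placeholders
Host 172.16.0.*
    User ubuntu
    ProxyCommand ssh -W %h:%p youruser@bouncehost.example.com
```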
<jorge> some information more:
<jorge> when I ran juju bootstrap and then juju status (the first time), all worked fine.
<jorge> after some minutes, it stopped working.
<jorge> now I can ssh to the instance using just the ssh command, with the ip of the instance. i'm on the controller and the instance is running on it. all on the same host.
<koolhead17> jorge: you're executing all this from the same internal network as the instances?
<koolhead17> hey SpamapS
<jorge> yes. my instances are in 172.16.0.0/24 and i'm executing these commands from the network controller (from the host with ip 172.16.0.1)
<jorge> the connectivity to the instance are ok using ssh from command line. I can log in that.
<jorge> see that
<jorge> juju -v status
<jorge> DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="172.16.0.2" remote_port="2181" local_port="41754".
<SpamapS> koolhead17: o/
<SpamapS> jorge: right, thats so juju can talk to zookeeper over a secure connection
<koolhead17> jorge: k. does it try to connect with user ubuntu
<koolhead17> also hope you're using smoser's cloud-image for the instances
<jorge> 2181 is zookeeper?
<jorge> let me think... shouldn't i create a rule to permit this port? maybe iptables is dropping this connection.
<jorge> euca-authorize -P ....
<jorge> i'm going to start tcpdump in the instance to see the connections attempts.
<SpamapS> jorge: no, you don't need to euca-authorize 2181
<SpamapS> jorge: its connecting to it on 127.0.0.1 *through* an ssh tunnel
<jorge> humm, ok
<SpamapS> jorge: can you pastebin the whole failure? like 'juju -v status 2>&1 | pastebinit'
<negronjl> m_3: ping
<m_3> negronjl: hey
<jorge> SpamapS: http://pastebin.com/SHmAszL7
<koolhead17> SpamapS: i was getting similar error
<SpamapS> jorge: is there something listening on port 54122 ?
<jorge> SpamapS: no. I've tried many times and juju tries to use other ports and the problem is the same.
<jorge>  DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="172.16.0.2" remote_port="2181" local_port="50093".
<jorge> ERROR SSH forwarding error: bind: Cannot assign requested address
<SpamapS> jorge: try ssh -v -L 50093:127.0.0.1:2181 ubuntu@172.16.0.2
<jorge> Ok, just a minute because i destroyed the environment. It is starting again.
<jorge> SpamapS: ssh -v -L 45039:127.0.0.1:2181 ubuntu@172.16.0.2 WORKED!!! I've changed the port because I've started the env again.
<jorge> but juju status return error.
<SpamapS> jorge: ok, thats good, and on that box, do you see zookeeper running?
<jorge> SpamapS: yes, a java process and port 2181 opened.
<SpamapS> jorge: ok, same juju status error though?
<jorge> SpamapS: yes
<SpamapS> jorge: really puzzling
<jcastro> hazmat: I need to drop off for another call, thanks for inviting me!
<hazmat> jcastro, thanks for joining
<jorge> SpamapS: intermittent problems. I run juju status and on the first attempt i see an error, on the second it worked. and, trying one more time, error. see http://pastebin.com/T4xE3KAG
<SpamapS> jorge: I wonder if this is somehow related to the fact that you're running this on the network controller
 * hazmat lunches
<jcastro> http://pad.ubuntu.com/charmschool
<_mup_> juju/deploy-invalid-conf r448 committed by kapil.thangavelu@canonical.com
<_mup_> validate config before deploying service.
<hazmat> bcsaller, jimbaker could i get a +1 on this trivial?  its a fix for bug 903149
<_mup_> Bug #903149: juju fails silently with empty revision file. <juju> <juju:New> < https://launchpad.net/bugs/903149 >
<hazmat> http://paste.ubuntu.com/818065/
<bcsaller> hazmat: so you don't trap ServiceConfigError anymore? Thats more than just revision and can stop the processing of the repository, right?
<hazmat> bcsaller, CharmError is a base class for serviceconfigerror
<hazmat> its a more generic handling case
<bcsaller> ahh, ok
<bcsaller> +1
<jcastro> niemeyer: I have this work item on the community charm docs for you "[niemeyer] drive dicussion about interface documentation on juju mailing list"
<jcastro> have we done this yet?
<niemeyer> jcastro: yo
<niemeyer> jcastro: We haven't, but I'm on a roll on some development here ATM.. would you mind to ping me about this tomorrow?
<jcastro> sure
<niemeyer> jcastro: Thanks!
<jcastro> SpamapS: you've got one WI too
<jcastro> [clint-fewbar] add README to 'charm create' template
<_mup_> juju/repo-find-report-charm-error r448 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] Repositories report/log charm structural errors. [r=bcsaller][f=901495]
<SpamapS> jcastro: ACK, that one should be done b4 FF
<jcastro> SpamapS: ok, just keeping an eye on our burndown, ta.
<statik> hey niemeyer, newbie question about lbox
<statik> I tried goinstalling lbox on precise, and got some dependency package? errors
<statik> I think I'm running the golang packages from precise. http://pastebin.ubuntu.com/818207/
<statik> any tips on how to get lbox installed? I'm probably missing something simple
<sidnei> bcsaller, hey, around?
<bcsaller> sidnei: hey, whats up?
<niemeyer> statik: Hey
<niemeyer> statik: Ah, yes, I see
<niemeyer> statik: You'll have to install golang-tip from the ppa
<niemeyer> statik: Optionally, you can install lbox from the PPA as well
<niemeyer> statik: pre-built
<SpamapS> Watching debug-log .. its kind of a bummer that the output of the hooks is double-newlined
<SpamapS> hrm.. upgrade-charm needs to re-join all peer relationships
<SpamapS> Otherwise if you change joined/changed hooks.. they won't be re-run
<adam_g> any way to specificy an environments.yaml located somewhere other than ~/.juju/environments.yaml?
<adam_g> ya, specificy
<m_3> adam_g: you might get away with resetting HOME for a subshell of juju commands... it'd move .ssh too I guess though
#juju 2012-01-27
<adam_g> m_3: oh jeez, ya..
<m_3> there's wiring inside for the EnvironmentConfig to take another path... but the cli itself doesn't pass it thru
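m_3's workaround sketched out: give juju a different ~/.juju by overriding HOME for one invocation (with his caveat that ssh will then also look for its keys under the new HOME). The juju call itself is shown as a comment since it needs a real environment.

```bash
#!/bin/bash
set -e
# Build an alternate HOME containing only the juju config we want.
althome=$(mktemp -d)
mkdir -p "$althome/.juju"
cat > "$althome/.juju/environments.yaml" <<'EOF'
environments: {}    # stand-in config; put the real environment here
EOF

# Any juju command run like this reads $althome/.juju/environments.yaml:
#   HOME="$althome" juju status
HOME="$althome" sh -c 'ls "$HOME/.juju/environments.yaml"'
```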
<_mup_> Bug #922398 was filed: cli should accept alternate config file path <juju:New> < https://launchpad.net/bugs/922398 >
<niemeyer> Good Friday, Magicians
<jorge> Hi folks! I would like some help with the same problem from yesterday: juju can't connect to instances when i call juju status. the complete description of the problem is here http://pastebin.com/KkTZkEjv ... thanks.
<niemeyer> jorge: WOw, that's an awesome debugging session, thanks
<niemeyer> jorge: So if you actually connect to the host, does it close on you like your verification suggests?
<jorge> you're welcome. i'm going to try with the ubuntu user and other... just a moment.
<niemeyer> jorge: There's another trick worth trying to investigate this as well
<niemeyer> jorge: We have a little-known "open-tunnel" commnad
<niemeyer> jorge: You can use it by opening a separate terminal, and running the command "juju open-tunnel"
<niemeyer> jorge: This will open a connection to the host, and block
<niemeyer> jorge: Then, on your first terminal, try running "juju status"
<niemeyer> jorge: It should tunnel the command over the ssh session from the blocked open-tunnel command
<jorge> niemeyer: this way worked! but, something strange still occurs. The first attempt gets an error and the connection is only established on the second attempt.
<niemeyer> jorge: There's something really bizarre going on there
<niemeyer> jorge: Do you have anything in the logs of the server saying why sshd is closing the connection abruptly?
<jorge> the first attempt is always closed. the second works (using open-tunnel). really strange!
<niemeyer> jorge: Sounds like a networking issue
<jorge> no, just these lines indicating opened session and the next second, closed
<niemeyer> jorge: Hmm
<niemeyer> jorge: Try to ssh onto the machine
<niemeyer> jorge: juju ssh 0
<niemeyer> (that's a zero at the end)
<jorge> oops... looks like it was just luck ... now i'm getting the same error with open-tunnel
<niemeyer> With the open-tunnel command live
<niemeyer> Ok
<niemeyer> So, close the open-tunnel
<niemeyer> and try to ssh onto it using "juju ssh 0"
<jorge> i'm going to try that when the open-tunnel comes back hehe.
<jorge> curious, if I leave the machine idle for a while and test the command, it works. after the first attempt, it stops working.
<niemeyer> jorge: It really looks like there's something weird going on with the machine/networking there, but I'm not sure what
<niemeyer> jorge: Have you tried to juju ssh 0?
<jorge> hum, juju ssh 0 not working...
<niemeyer> jorge: What happens?
<jorge> the same error. i think because open-tunnel is not alive. i'm trying to re-establish it
<niemeyer> jorge: Nope
<niemeyer> jorge: Please forget open-tunnel for the moment
<niemeyer> jorge: It's unrelated to the problem
<niemeyer> jorge: What is the error message you get?
<jorge> ERROR SSH forwarding error: bind: Cannot assign requested address
<niemeyer> jorge: Is that in the first call, or in the second one?
<niemeyer> I suspect the TCP port is simply lingering
<niemeyer> jorge: Do you have lsof installed?
<jorge> second call
<jorge> yes, i do
<niemeyer> jorge: I think I know what's up
<niemeyer> jorge: Your kernel is likely configured to linger TCP ports in a different way
<niemeyer> jorge: Are you using juju from source or from the package?
<jorge> i don't know about 'lingering' ports... juju was installed from the ubuntu repo, 11.10 oneiric
<niemeyer> jorge: Ok
<jorge> juju 0.5+bzr398-0ubuntu1
<jorge> Linux  3.0.0-12-server #20-Ubuntu SMP Fri Oct 7 16:36:30 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
<jorge> default from ubuntu server 11.10
<niemeyer> jorge: Please run "dpkg -L juju | grep 'state\/utils.py'"
<niemeyer> jorge: and open this file for editing as root
<jorge> hey, i'll be back in 1 hour... sorry...
<jorge> and i'll try this
<jorge> thanks
<niemeyer> jorge: Have a good lunch
<robbiew> can anyone tell me why I have a virbr0 *and* lxcbr0 interface now? (running precise)
<robbiew> is that related to juju local provider somehow?
<m_3> robbiew: juju's still using virbr0 (installed from libvirt)... I assume the other's newly installed along with precise lxc
<m_3> they've both got their own little dnsmasqs tho
<robbiew> interesting
<robbiew> seems if lxc now provides a device, we could drop virbr0, no?
<robbiew> just seems to be a waste of resources to have both virbr0 and lxcbr0
<robbiew> bcsaller:  hazmat: ^ ?
<hazmat> robbiew, precise lxc includes its own bridge?
<robbiew> I suppose users can easily disable lxcbr0 in /etc/default/lxc , so maybe we have the juju package do that when installed
<robbiew> hazmat: yeah
<robbiew> hazmat: hallyn said it just recently landed in precise
<hazmat> robbiew, cool, the problem is backwards compatibility, i'll try it out.. but for now i'd rather just leave it.. unless its a problem/conflict.
 * hazmat fires up a precise instance
<robbiew> just thinking folks will get a bit pissy when they see a virbr0 and lxcbr0 created....waste of resources (though small)
<robbiew> eh...I guess it's no huge deal...just surprised me
<hazmat> robbiew, we shouldn't really be installing libvirt-d and lxc as recommends
<hazmat> they only apply to local development
<hazmat> installing those packages on orchestra or ec2 machines doesn't serve any purpose
<hazmat> local provider usage already checks and warns appropriately if run without required packages
<hazmat> SpamapS, ^
<m_3> the 'juju-origin: distro'
<m_3> seems to be the default for oneiric... and it's _old_
<m_3> is there any way to bump that up?  or do we always need to use 'ppa' for oneiric?
<m_3> my units are going straight to 'start_error' with version 398
<SpamapS> hazmat: we should be turning off recommends when juju installs itself actually
<SpamapS> hazmat: the idea behind recommends is that if you let your machine install recommends, as most Ubuntu users do, you get the full functionality of the package.
<SpamapS> hazmat: so for automated things, usually --no-install-recommends is advisable
<SpamapS> in fact we should probably make most charms do that.
<m_3> it'd be strange to have different default behavior for apt in the charm -vs- laptop... it should still be an explicit --no-install-recommends
<SpamapS> m_3: agreed, I'm suggesting we should consider making that the recommended way to call apt
<bac> this page needs to be updated wrt 'charm' vs 'charms' in lp.  the bzr examples fail if you use 'charm'.  should i file a bug, and if so against what?
<bac> https://juju.ubuntu.com/Charms
<SpamapS> bac: done, thanks!
<bac> great, thanks SpamapS
<jorge> Hi all, i'm back.
<jorge> i made a loop to detect what command juju is calling.
<jorge> while [ 1 ]; do ps aux | grep ssh | grep juju ; ls /root/.juju/ssh/ ; sleep 1; done
<jorge> so, I call juju status or juju open-tunnel
<jorge> the result is the following process
<jorge> root     29772  0.2  0.1  41176  2712 pts/0    S+   14:22   0:00 ssh -T -o ControlPath /root/.juju/ssh/master-%r@%h:%p -o ControlMaster auto -o PasswordAuthentication no -Llocalhost:33511:localhost:2181 ubuntu@172.16.0.2
<jorge> if i try to run this command on terminal i get command-line line 0: Missing argument.
<jorge> So, when juju tries to connect on localhost on a local port that should be forwarded to 2181 on the instance, the problem occurs.
<jorge> if this command is not successful the local port doesn't open ... right? could this be the problem?
<SpamapS> jorge: right
<SpamapS> jorge: for future reference, btw, you can use 'strace -e trace=exec -f juju status'
<jorge> great
<jorge> trace=exec works on linux? here i get an error. looking at the man page, there is no exec parameter for trace=
<jorge> -e trace=process :)
<jorge> execve("/usr/bin/ssh", ["ssh", "-T", "-o", "ControlPath /root/.juju/ssh/mast"..., "-o", "ControlMaster no", "-o", "PasswordAuthentication no", "-Llocalhost:55700:localhost:2181", "ubuntu@172.16.0.2"], [/* 36 vars */]) = 0
<niemeyer> jorge: Ok, should we continue from where we stopped?
<jorge> yes
<jorge> root@sold016:~/.juju/ssh# dpkg -L juju | grep 'state\/utils.py' /usr/share/pyshared/juju/state/utils.py /usr/lib/python2.7/dist-packages/juju/state/utils.py /usr/lib/python2.6/dist-packages/juju/state/utils.py
<niemeyer> jorge: So please do the suggested command, and open the file in your editor as root
<niemeyer> jorge: Then, find the get_open_port function
<niemeyer> jorge: Right above the line temp_sock = socket.socket(...)
<niemeyer> jorge: Enter this, on the same indentation:
<niemeyer>     import random; return random.randint(50000, 60000)
<niemeyer> jorge: Then, try to run juju again, and let me know please
<niemeyer> I'll step out to get a cup of coffee meanwhile.. brb
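What niemeyer's hack changes, sketched: juju's get_open_port (in juju/state/utils.py) normally asks the kernel for a free port by binding to port 0; replacing its body with a random high port sidesteps the bind and any port-lingering behaviour. The original body below is a from-memory approximation, not juju's exact code.

```python
import random
import socket

def get_open_port(host=""):
    """Approximation of juju.state.utils.get_open_port: bind to port 0
    so the kernel picks a free TCP port, then release it."""
    temp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    temp_sock.bind((host, 0))
    port = temp_sock.getsockname()[1]
    temp_sock.close()
    return port

def get_open_port_patched(host=""):
    # niemeyer's debugging patch: skip the bind entirely and pick a
    # random high port instead.
    return random.randint(50000, 60000)
```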
<jorge> right
<jorge> the first time I ran juju status it worked, now I'm getting the same error.
<niemeyer> jorge: Please run "juju ssh 0" for testing
<niemeyer> jorge: You're having some fundamental issues there that don't even involve juju itself.. it's failing well before getting into more interesting logic
<jorge> well, the first time I ran juju ssh 0 it worked. So i ran it again and got the error.
<jorge> :/
<jorge> yes, there is some network problem ...
<SpamapS> jorge: is there any way you can run juju *not* on your openstack network controller?
<jorge> I tried from other hosts but it didn't work because of the difference between versions.
<jorge> But, I've won a new host just now! I need to test it ... So, i'm going to install the same version and try juju on it
<niemeyer> jorge: Can you please run it 5 times in a row and paste the output for all of them?
<jorge> when I do that I'll come back and report here.
<jorge> yes
<jorge> any of the commands? juju ssh 0 or juju status?
<niemeyer> jorge: juju ssh 0
<niemeyer> jorge: Is Embrapa deploying OpenStack internally?
<jorge> yes, this is a test of openstack ... i want to run some other scenarios, simulations etc
<niemeyer> jorge: Very nice
<niemeyer> jorge: Glad to see you guys are putting juju to good use there
<niemeyer> jorge: Please stop by if you have any other issues
<jorge> do you know embrapa?
<niemeyer> jorge: Yeah, my Dad worked there for most of his life until retiring
<jorge> I'm trying to use juju to create (or use charms) aiming to set up clusters quickly.
<jorge> niemeyer: cool!! so, you are from Brazil?
<niemeyer> jorge: Cool, that's something it does well :)
<niemeyer> jorge: Yep
<jcastro> This charm could use a review: https://bugs.launchpad.net/charms/+bug/919907
<_mup_> Bug #919907: new-charm salt-master <new-charm> <Juju Charms Collection:New> < https://launchpad.net/bugs/919907 >
<jcastro> https://bugs.launchpad.net/charms/+bug/918803
<_mup_> Bug #918803: charm for python-oops-tools <new-charm> <Juju Charms Collection:New> < https://launchpad.net/bugs/918803 >
<jcastro> this one too!
<niemeyer> Hmm, curious
<niemeyer> Aha, ok..
<niemeyer> https://code.launchpad.net/~yellow/charms/oneiric/buildbot-master/trunk
<niemeyer> There are branches with no revisions.. that's ok I suppose
<SpamapS> Weird.
<niemeyer> SpamapS: Hmm
<niemeyer> No, it's working.. I'm clearly doing something silly here
<niemeyer> Ah, and I found what it is
<adam_g> hazmat: hey, i had a chance to point juju at openstack essex, seemed to have bootstrapped just fine.
<hazmat> adam_g, good to know, thanks
<adam_g> if we start testing a cloud image on this openstack CI stuff instead of ttylinux, ill probably add at least a 'juju bootstrap' test of some kind
<andrewsmedina> can someone explain to me what this juju code does? http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/juju/providers/ec2/launch.py#L91
<SpamapS> andrewsmedina: the comment above it is accurate as far as I can read it
<andrewsmedina> SpamapS: it doesn't work with my openstack setup =/
<SpamapS> andrewsmedina: what version of openstack?
<andrewsmedina> SpamapS: diablo
<SpamapS> andrewsmedina: we test against diablo daily.. should work
<andrewsmedina> SpamapS: here it returns an unexpected error in the ec2 client
<andrewsmedina> it should be a problem in my openstack setup.
<andrewsmedina> I don't know where
<SpamapS> andrewsmedina: what version of Ubuntu are you running juju on?
<andrewsmedina> 11.10
<SpamapS> andrewsmedina: ok, that should be fine. Hrm.
<andrewsmedina> SpamapS: how you install your openstack setup?
<SpamapS> andrewsmedina: do you have the juju from oneiric, or the one from our PPA?
<SpamapS> andrewsmedina: I'm not sure, our IS team set it up for us.
<andrewsmedina> SpamapS: from your PPA
<SpamapS> andrewsmedina: maybe try updating openstack from oneiric-proposed
<SpamapS> andrewsmedina: *lots* of fixes
<SpamapS> python-nova | 2011.3+git20111117-0ubuntu1 | oneiric-proposed | all
<andrewsmedina> ok
<andrewsmedina> I installed openstack from source
<SpamapS> uh
<SpamapS> why?
<andrewsmedina> SpamapS: I created a script like devstack
<SpamapS> andrewsmedina: ah ok cool
<hazmat> andrewsmedina, the fix for that might not have made it into the diablo release tarballs; there was an error with the internal group authorization in openstack previously, but I thought it was fixed for diablo
<jcastro> SpamapS: chords, strings, we brings .... charm review pls?
<gary_poster> jcastro, hey.  I'm on a squad from the LP team using juju for a project.  We are writing a buildbot master recipe and a buildbot slave recipe ATM.  We have an internal review process, but we don't know juju well enough to be competent reviewers.  We want to get better at writing and reviewing.  Could we have a juju review mentor--someone who looks over our internal reviews of our charm branches to improve both the reviews and the branches?  If so...how would we set that up? :-)
<jcastro> absolutely
<jcastro> m_3: SpamapS: marcoceppi:  should we just have them tag their branches/bugs with new-charm and ping pong back and forth?
<m_3> jcastro: yeah
<m_3> jcastro gary_poster: is there any reason for those to be separate?
<m_3> those charms should eventually be pushed to lp:charms too I assume?
<gary_poster> m_3 definitely
<gary_poster> m_3, jcastro, we'll use the new-charm tag and look forward to your help.  Thank you!
<m_3> ok, then yeah... charm review queue = { charm bugs, with 'new-charm' tag, attached branch, and New/FixCommitted status }
<m_3> you might have to kick some of us to get through reviews a little faster :)
<gary_poster> m_3, heh, cool, understood.  Those three people jcastro mentioned are the ones to kick?  and charm bugs, I assume you mean https://bugs.launchpad.net/charms ?
<m_3> gary_poster: anyone in ~charmers can review, but most are done by marcoceppi, nijaba, SpamapS, and myself.  yes, bugs.lp.net/charms
<gary_poster> awesome, thanks again m_3
<m_3> sure thank
<m_3> s/thank/thang/
<gary_poster> heh
<SpamapS> jcastro: reviewing charms now, had to triage bugs this morning. :)
<SpamapS> jcastro: cuase charming is life, and life is charming
<jcastro> <3
<SpamapS> heh.. salt calls servers minions
<SpamapS> +1 for mirth
<SpamapS> hrm.. people keep making the mistake of starting their charm as a subdir of the bzr branch, rather than just one branch for the whole charm
 * SpamapS wonders if there was a recent change that is leading people to this
<m_3> all of our charm dev docs/presentations should really assume they've never heard of bzr
<m_3> start from scratch
<SpamapS> hah, are you suggesting that bzr is obscure? ;)
<m_3> I'd never suggest such a thing
<m_3> rather that our user-base is so broad
<m_3> :)
<SpamapS> juju, user base, SOOOO BIG
<SpamapS> HUGE
<SpamapS> bzr user base, smarr
<SpamapS> </southpark>
<m_3> really though... the concept of putting even configuration under revision control isn't all that new
<m_3> sorry... logic barf... isn't all that old
<SpamapS> true
<SpamapS> I wanted to make /etc a davfs mount of an SVN server on our servers back when I was wild and crazy and didn't know better....
<m_3> ha!
<SpamapS> I would have gotten away with it if it hadn't been for you meddling (git) kids
<m_3> I still start long-lived hw and libvirt instances with (cd /etc; git init; git commit -m'initial revision')
<m_3> itching to move them to juju
<m_3> actually excited about the testing stack
<m_3> learned a bunch of lessons about longer-term -vs- development stacks already!
<m_3> really love to choose instance size per service... that bootstrap node can get expensive otherwise!
<SpamapS> m_3: yeah, I'd like to have the option to consider the bootstrap node as available for deploys too
<SpamapS> m_3: there's always my old branch which adds --machine to deploy and add-unit.. ;)
<SpamapS> Probably out of date now
<m_3> right
<m_3> 398's not working with 447 clients... I'd imagine your branch pre-dates that
<SpamapS> m_3: did you report that as a bug yet? ubuntu-bug juju is all you need to do on the box with r398 on it
<m_3> haven't taken time to really isolate it... definitely happens with O running O, and P running P in local and O running O in ec2
<m_3> there's a problem with config-get coming up empty (not even defaults)
<m_3> but don't know what else is going on
<m_3> juju-origin: ppa works around it
<SpamapS> We really need to fix the bug where you can't upgrade-charm if config-changed or install or upgrade-charm has had an error.
<SpamapS> My dev method for the mediagoblin charm has been juju scp'ing the new, fixed script up, --retry, then upgrade-charm again.
<SpamapS> which is t3h lame
<m_3> yup
<SpamapS> Wow the error message on ambiguous relations is *SO much better* now
<m_3> so I'm about to do something... and I want you to talk me out of it
<SpamapS> 2012-01-27 14:58:12,782 ERROR Ambiguous relation 'oops-tools postgresql'; could refer to: 'oops-tools:db postgresql:db' (pgsql client / pgsql server) 'oops-tools:db postgresql:db-admin' (pgsql client / pgsql server)
<SpamapS> m_3: DO IT YOU KNOW YOU WANT TO
<SpamapS> err.. I mean.. what is it?
<m_3> config-changed for jenkins just calls install again... I wanna catch certain config changes, and then pass it on to install again
<m_3> the only way to do that is to save out the config set every time the hook is called and compare changes
<m_3> wow!  yeah, that message is just nice... makes me want to go relate things just to see the message
<SpamapS> m_3: config-get is always available, so why even have special logic in config-changed if you're just going to run the installer again?
<m_3> only run it conditionally based on which config params are passed
<m_3> that's what I want to know... which config params were passed to this particular call of config-changed
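(juju doesn't tell a config-changed hook which options triggered it, so m_3's approach -- save out the config set on every run and compare -- can be sketched as below. The cache path and function name are invented here, and `current` would come from parsing `config-get --format=json`.)

```python
import json
import os

# Hypothetical on-unit cache location for the last-seen config snapshot.
STATE = "/var/lib/juju-hook-state/last-config.json"

def changed_keys(current, state_file=STATE):
    """Diff the current config dict against the snapshot saved by the
    previous hook run, persist the new snapshot, and return the set of
    option names whose values changed."""
    previous = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            previous = json.load(f)
    os.makedirs(os.path.dirname(state_file), exist_ok=True)
    with open(state_file, "w") as f:
        json.dump(current, f)
    return {k for k in set(previous) | set(current)
            if previous.get(k) != current.get(k)}
```

On the first run every key reads as changed (there is no previous snapshot), which conveniently matches "config-changed just calls install again" for the initial pass.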
<SpamapS> ah
<SpamapS> you want to use it to kick things off. :)
<m_3> well... and change the config without a reinstall
<m_3> but yes, in general, I like the ability to use 'juju set' to control the service
<m_3> add new test jobs
<m_3> kick off tests
<m_3> etc
<SpamapS> juju set charmtester runstuff=$(x=$(juju get charmtester runstuff); echo $(($x+1)))  ? ;)
<m_3> juju set charmtester add_branch=lp:my/new/charm
<m_3> ha!
<SpamapS> If only it could feed data back..
<m_3> you laugh... but there might be legitimate use for that in some relation vars
<m_3> (varnish)
<m_3> I totally want a 'juju get' too
<m_3> right, like your 'runstuff' above
<SpamapS> juju get exists!!
<m_3> is it coupled to config-set from within the hooks?
<m_3> what's it getting?
 * m_3 digs through the code
<SpamapS> the values
<SpamapS> so you can retrieve what the current value of a config option is
<m_3> cool... never used that one before
<SpamapS> fairly recent feature
<m_3> we're missing config-set tho
<m_3> _that_'d be a really useful combo
<SpamapS> Yeah actually it would
<SpamapS> Hey I had a crazy idea while walking to/from lunch
<m_3> I could get my api_token _back_ instead of forcing one forward
<SpamapS> what if zookeeper just ran on client machines..and juju open-tunnel opened a tunnel to the machines it was managing, for the agent to connect *back* through
<SpamapS> kind of like the local provider works.. but not local. :)
<_mup_> Bug #922900 was filed: config-set callable from hooks <juju:New> < https://launchpad.net/bugs/922900 >
<m_3> bug filed
<SpamapS> http://ec2-184-73-125-5.compute-1.amazonaws.com/
<SpamapS> oops-tools
<SpamapS> *ALMOST* ready for promulgation
<m_3> hmmm... my charmtester environment of xlarge's would be cheaper
<m_3> actually when you think about it, that's a nice thing about the architecture as it stands... the zk node really could be moved around
<m_3> might be some security-group nonsense
<m_3> and you'd accept the fact that there might be a perf hit with really large infras
<SpamapS> I think a service with 1000 units and 2 or 3 peer relationships is going to get ugly to scale up/down because of all the joined and departed churn that will go on.
<m_3> that's gonna happen no matter where things live
<SpamapS> 1000 nodes running relation-list with 1000 responses all at once :)
<SpamapS> m_3: unless it becomes decentralized
<m_3> every peer's connected to every other, so it'd be combinatorially explosive
<m_3> yup
<m_3> what would that look like
<m_3> we'd need every node to have a 'torrent map' of the whole
<m_3> be a big change
<SpamapS> well unless we just used bittorent to keep the map updated ;)
<m_3> hey, I'm a huge fan of peer-based infra
<SpamapS> anyway, I want to get to the long living 10 unit service before I start thinking about 1000 unit services. :)
<m_3> I just honestly think we need to make sure that services with peer relations _also_ have master-slave options so they can scale
<m_3> there's 'this is cool, the way it _should_ work' -vs- 'make it work and be stable'
<SpamapS> hm, should we enforce that readme always be called README or README.* ?
<m_3> sure
<m_3> it's really not hard to be flexible with it tho
<m_3> I _do_ like having the option of README.rst, README.md, or just README
<SpamapS> what I mean is, README(\..+)?
<SpamapS> so, not readme.txt
<SpamapS> or ReadMe.md
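(The naming rule SpamapS describes is easy to pin down as a regex -- a quick sketch, with the helper name invented here:)

```python
import re

# "README" exactly, or "README." plus any extension; case-sensitive,
# so readme.txt and ReadMe.md are rejected.
README_RE = re.compile(r"^README(\..+)?$")

def is_valid_readme(filename):
    return README_RE.match(filename) is not None
```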
<m_3> the old GNU standard used to bug me... but when digging through noise it was sometimes useful that it screamed INSTALL at you
<SpamapS> heh, our INSTALL is hooks/install
<m_3> but yeah, it's worth asserting emphatically that they should read it :)
<m_3> so on this note, it bugs me to see non-hooks in hooks/
<SpamapS> I agree on that
<m_3> I'd rather bin/ or lib/ or scripts/
<SpamapS> but I'm ok with "common.py" or a single "all-hooks.sh" that is symlinked to
<m_3> scripts, templates, files for things you're installing elsewhere
<m_3> bin/ lib/ for things your hooks are using
<m_3> IMO that should be symlinked into bin/all-hooks.sh or lib/common.py... with only the hook links in hooks/
<m_3> no biggie of course
<m_3> just style
<SpamapS> it gets REALLY hard with things like python though
<m_3> huh?
<SpamapS> now you're adding things to PYTHONPATH .. and hating the guy who kicked stuff out
<SpamapS> import common will import ./common.py
<SpamapS> as relative to the current python script...
<m_3> CWD is pretty set tho
<SpamapS> I'm not sure it's not going to do a readlink if $0 is a symlink..
<SpamapS> not CWD
<SpamapS> relative to the python script
<m_3> right, but some people write wrt __file__
<m_3> understand
<SpamapS> just so complicated for a tiny win
<m_3> yeah
<m_3> agree... very tiny win
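(The readlink question above is checkable: CPython computes sys.path[0] from the script path as invoked and, to my knowledge, does not resolve symlinks in it -- which is why a hook symlinked out of hooks/ still imports relative to hooks/, not the symlink target's directory. A small throwaway-directory demonstration of the path difference:)

```python
import os
import tempfile

# Lay out charm-style dirs: lib/common.py, plus hooks/install as a
# symlink pointing at it.
top = tempfile.mkdtemp()
os.mkdir(os.path.join(top, "lib"))
os.mkdir(os.path.join(top, "hooks"))
real = os.path.join(top, "lib", "common.py")
open(real, "w").close()
link = os.path.join(top, "hooks", "install")
os.symlink(os.path.join("..", "lib", "common.py"), link)

# The directory of the script as given (what import resolution would
# use) differs from the symlink-resolved directory:
script_dir = os.path.dirname(os.path.abspath(link))
resolved_dir = os.path.dirname(os.path.realpath(link))
print(os.path.basename(script_dir))    # hooks
print(os.path.basename(resolved_dir))  # lib
```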
<SpamapS> m_3: nice job on the charm tester thing.. hopefully next week we can work on running charm-embedded tests with it. :)
<m_3> SpamapS: thanks!  I'll roll it out to the list once we get a real url... then we can extend it
#juju 2012-01-28
<m_3> I think I'm just gonna cat the unit logs on to the end of the build log upon failure
<m_3> that way it's readable from the web
<SpamapS> m_3: there's a workspace dir that you can stick them in too
<SpamapS> m_3: would be good to dump the whole charm in there too
<m_3> yeah, I was just dumping the log, but I guess there's no harm in more
<m_3> SpamapS: what do you think jenkins would do if jobs/ was a remote mount of some sort?
<m_3> it really just looks like logging, so it shouldn't choke it up too much
<SpamapS> m_3: I think it copies the work dir into a build-id specific place after the run
<m_3> right, but the work dir isn't really doing anyting atm
<SpamapS> m_3: I'd RTFM .. but I'm too busy reviewing nijaba's drupal charm :)
<m_3> just thinking of multiple units... charmtester-P charmtester-Q charmtester-billyjoe-jimbob
<m_3> and like mounts rather than rsyncs for persistence
<m_3> but I'll let you get back to charm reviews :)
 * m_3 thinks thin... maybe nobody will notice how far behind the review queue is
<SpamapS> m_3: we need to poke the Incompletes for status
<m_3> roger
<SpamapS> man, I'm sorry, but the new look for bug listings *sucks*
<SpamapS> pastels? really?
 * SpamapS sends apologies to those responsible for the design
<m_3> I'm confused... it's legal yaml to leave quotes off of strings
<m_3> why are we barfing on them if they're not quoted?
<SpamapS> m_3: where are we barfing on them?
<SpamapS> btw.. I wish I could say I haven't asked that on a drunken night of merriment
<m_3> the charm-store... gustavo sent out an "Updated list of charm problems"
<m_3> when you declare a config param of type: string
<m_3> the default value can't be just 0
<m_3> it's gotta be "0"
<m_3> but that's not strictly required of yaml
<m_3> I think we're trying to add stronger typing on top of yaml
<m_3> but that's not what users will expect at all... it's called '.yaml' and they'll expect yaml
<m_3> maybe I misunderstand and it's not just quoting
<m_3> crap, yeah... it's forcing int and bool... that sucks
<m_3> it's much easier in shell to use strings for everything
<m_3> ok, nevermind... I'm being dumb
<SpamapS> m_3: I agree with Gustavo on this one. If its "string", put a yaml *string* there. 0 is an int.. and the way things are converted is hard to predict.
<SpamapS> m_3: I doubt yaml defines the algorithm that one must use to convert between types. :)
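(A made-up config.yaml fragment illustrating the rule being discussed: for a `type: string` option, the default has to be a YAML string, so a bare 0 -- which YAML parses as an int -- trips the check m_3 describes, while a quoted "0" is fine. Option names here are hypothetical.)

```yaml
options:
  workers:
    type: int
    default: 0        # bare 0 is fine here: the option is an int
  label:
    type: string
    default: "0"      # quoted: YAML would read a bare 0 as an int,
                      # which mismatches type: string
```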
<SpamapS> m_3: btw, you can also do reviews on charm-tools. :)
#juju 2012-01-29
<grapz> Hi. I'm giving it a shot trying to fix a juju bug, but I was wondering how I can run it from the bzr checkout I have? If i do a bin/juju in my checked out folder, it seems to be loading the juju files that comes with the official package, not the ones I've changed.
<hspencer> hey guys
<hspencer> quick question
<hspencer> why was the decision made to go with txAWS instead of boto?
<hspencer> just curioius?
<hspencer> curious i mean
<SpamapS> hspencer: because txaws makes it easier to take advantage of twisted for doing other things while waiting for AWS
<SpamapS> like responding to zookeeper watches
<fwereade_> grapz, if you're around, "export PYTHONPATH=$(pwd)" and then you can bin/juju ... to your heart's content
<avl_> hello
<^LoGiN^> does anybody familiar with juju/orchestra integration ?
<^LoGiN^> i am getting the following issue during juju bootstrapping within an orchestra environment
<^LoGiN^> juju bootstrap
<^LoGiN^> 2012-01-29 14:23:56,159 INFO Bootstrapping environment 'orchestra' (type: orchestra)...
<^LoGiN^> Could not find any Cobbler systems marked as available and configured for network boot.
<^LoGiN^> 2012-01-29 14:23:56,184 ERROR Could not find any Cobbler systems marked as available and configured for network boot.
<fwereade_> ^LoGiN^, can you tell me a bit more about your environment?
<^LoGiN^> i have 2 servers
<^LoGiN^> 1 orchestra server+juju
<^LoGiN^> second one is orchestra installed node
<^LoGiN^> all of them are running in Virtualbox VMS
<fwereade_> ^LoGiN^, ok, what systems do you have configured in cobbler on the orchestra server?
<^LoGiN^> orchestra server works like a charm but when i try to run juju on it it doesn't bootstrap
<fwereade_> if you go to /cobbler_web on the orchestra server and log in there, there should be a sidebar link to "Systems"
<^LoGiN^> there is one system configured
<^LoGiN^>     Profiles
<^LoGiN^>     Systems
<^LoGiN^>     Repos
<^LoGiN^>     Images
<^LoGiN^>     Kickstart Templates
<^LoGiN^>     Snippets
<^LoGiN^>     Management Classes
<fwereade_> ^LoGiN^, cool: would you check the management classes on that system?
<^LoGiN^> Resources
<^LoGiN^>     Packages
<^LoGiN^>     Files
<^LoGiN^> Actions
<^LoGiN^>     Import DVD
<^LoGiN^>     Sync ▼
<^LoGiN^>     Reposync ▼
<^LoGiN^>     Hardlink ▼
<^LoGiN^>     Build ISO ▼
<^LoGiN^> Cobbler
<^LoGiN^>     Settings
<^LoGiN^>     Check
<^LoGiN^>     Events
<^LoGiN^>     Online Documentation
<^LoGiN^>     Online Help Chat
<^LoGiN^> Systems
<^LoGiN^>     Create new system
<^LoGiN^>     Items/page:
<^LoGiN^> 	Name 	Profile 	Netboot_Enabled 	Actions
<^LoGiN^> 	oneiric01.openstack.lan 	oneiric-x86_64-juju
<^LoGiN^> how can i do it
<^LoGiN^> in config ?
<fwereade_> ok, I think there's an "edit" link next to the system in the systems view
<^LoGiN^> orchestra-juju-available is selected for this system
<fwereade_> ^LoGiN^, cool, there should also be a netboot enabled checkbox somewhere
<fwereade_> ^LoGiN^, is that ticked?
<^LoGiN^> yes
<^LoGiN^> it is checked and i cannot change it
<fwereade_> ^LoGiN^, hmm, ok
 * fwereade_ thinks
<fwereade_> ^LoGiN^, would you do a "juju -v bootstrap" and pastebin me the output?
<^LoGiN^> holdon
<^LoGiN^> i think i found it
<fwereade_> ^LoGiN^, cool
<^LoGiN^> when i have started the system it was enabled
<^LoGiN^> after deployment it was disabled somehow not by me
<^LoGiN^> works now
<^LoGiN^> thanks a lot
<fwereade_> ^LoGiN^, yeah, that's intentional -- once you've deployed a bunch of stuff to that system we expect you'd be pissed off if it reinstalled itself from scratch just because you rebooted it
<^LoGiN^> i see
<fwereade_> ^LoGiN^, if you terminate-machine that system, or if you destroy-environment, then it should be set back to the state it was in before you started
<^LoGiN^> how can i access to the newly installed system ?
<^LoGiN^> i have generated key but it asks me a password which i don't know
<fwereade_> ^LoGiN^, once it's gone through the whole process (which does take a while, it installs it from scratch) -- ie when juju status actually shows it as running -- you should be able to "juju ssh 0" (to machine 0, substitute other numbers as required)
<^LoGiN^> i see
<fwereade_> ^LoGiN^, btw, I'd suggest having 4 VMs to play around with really: one orchestra server, one for juju to bootstrap on, and 2 more so you can deploy 2 services and relate them
<fwereade_> ^LoGiN^, if you want to ssh manually it should be fine: juju will have installed your public key, and you can "ssh ubuntu@..."
<^LoGiN^> do i need to run commandprompt for massive deployments or there will be some web interface integration?
<^LoGiN^> i want to deploy openstack cluster on it
<fwereade_> ^LoGiN^, we'd definitely like a web interface but for now it's just the command line really
<fwereade_> ^LoGiN^, I've been focused on the framework much more than the charms but as I understand it the openstack charms are in a good state
<fwereade_> ^LoGiN^, I should point out that it's readily scriptable
<fwereade_> ^LoGiN^, and I'm 99% sure that you can "juju add-unit foo -n 20" for 20 new units of foo
<fwereade_> ^LoGiN^, let me double-check that
<fwereade_> ^LoGiN^, yep, -n or --num-units
<^LoGiN^> yes i am trying to play with that to deploy in production
<^LoGiN^> juju bootstrap
<^LoGiN^> 2012-01-29 14:35:40,680 INFO Bootstrapping environment 'orchestra' (type: orchestra)...
<^LoGiN^> 2012-01-29 14:35:40,920 INFO 'bootstrap' command finished successfully
<fwereade_> ^LoGiN^, sweet :D
<fwereade_> ^LoGiN^, btw, are you familiar with the local provider? I understand not using it if you're trying to do the closest-possible thing to a real orchestra deployment, but when you're casually experimenting I think it's the most convenient way to go
<fwereade_> ^LoGiN^, it brings up new services in LXC containers on your local machine, so you shouldn't have to configure VMs etc
<^LoGiN^> actually i am trying to deploy something realistic
<^LoGiN^> to make a copy on real systems
<fwereade_> ^LoGiN^, sounds sensible :)
<^LoGiN^> actually i have demand from customers to deploy mixed environment(openvz,kvm,lxc)
<^LoGiN^> can you tell me something about orchestra dns ?
<fwereade_> ^LoGiN^, interesting, I know cobbler can work with kvm machines, that's what I developed against
<^LoGiN^> how does it work
<^LoGiN^> where machine names are added
<fwereade_> ^LoGiN^, ...heh, probably not, I'm afraid I'm really not a networking guy :(
<^LoGiN^> ok
<fwereade_> ^LoGiN^, sorry about that
<^LoGiN^> np,
<fwereade_> ^LoGiN^, #ubuntu-server may be your best bet there
<^LoGiN^> anyway thanks for steering me out of the problem
<fwereade_> ^LoGiN^, a pleasure, glad to help :)
<^LoGiN^> by the way do you know what can be the issue ?
<^LoGiN^> juju status
<^LoGiN^> 2012-01-29 14:54:28,813 INFO Connecting to environment.
<^LoGiN^> 2012-01-29 14:54:34,274 ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname oneiric01.openstack.lan: Name or service not known
<^LoGiN^> Cannot connect to machine MTMyNzgzMzQwNS45Nzg3NDg1MS42MjQ1NTY (perhaps still initializing): Invalid host for SSH forwarding: ssh: Could not resolve hostname oneiric01.openstack.lan: Name or service not known
<^LoGiN^> 2012-01-29 14:54:34,283 ERROR Cannot connect to machine MTMyNzgzMzQwNS45Nzg3NDg1MS42MjQ1NTY (perhaps still initializing): Invalid host for SSH forwarding: ssh: Could not resolve hostname oneiric01.openstack.lan: Name or service not known
<fwereade_> ^LoGiN^, hm, assuming you know that machine is installed and running... sounds like network configuration :(
<fwereade_> ^LoGiN^, re: mixed environment, I'm not sure how well cobbler will play with containers as opposed to VMs -- I do know that some people are using the local provider on single servers to deploy in LXC containers, but I'm not sure we really *recommend* that
<^LoGiN^> i mean we would have 2 or 3 types of nodes
<^LoGiN^> but management will be centralised by cobbler+juju
<fwereade_> ^LoGiN^, understood -- and we've talked about similar use cases, but I'm afraid that it's not planned for anytime very soon
<fwereade_> ^LoGiN^, can I ask you to enter a bug on launchpad describing your use case? it's the best way to make sure it gets proper visibility
<^LoGiN^> anyway it would be very interesting for optimal resource distribution
<fwereade_> ^LoGiN^, definitely agreed
<fwereade_> ^LoGiN^, one thing that's definitely on our minds (although certainly not in time for 12.04 sadly) is machine sharing by using containers
<fwereade_> ^LoGiN^, I'm not au fait with the details but I understand that the networking is the really tricky bit there
<^LoGiN^> very useful thing for extremely fast service rollouts :)
<fwereade_> ^LoGiN^, we will have "subordinate charms" by 12.04, which are "helper" services (sharing machines *without* segregation), but I don't think that really addresses your use case
<fwereade_> ^LoGiN^, definitely
<^LoGiN^> first of all i need to understand how it could be used in the real application world
<^LoGiN^> for now it is still blurry
<fwereade_> ^LoGiN^, I'm not sure how best to help there -- I rather like the "it's like apt, but for whole servers" formulation, but I'm not sure that answers your question
<grapz> fwereade_, thanks :)
<SpamapS> ssh -A clint-MacBookPro.local
<SpamapS> DOHHHH
<fwereade_> grapz, btw, just shout if you need pointers about the code
<hspencer> SpamapS, thanks for the response
<hspencer> appreciate it
#juju 2014-01-20
<jose> lazypower: hey, thanks for reviewing! I was checking and don't see a MP on the branch atm
<jose> oh, there we go
<lazypower> Yeah, I was still composing :)
<lazypower> There's a few things I had to do to bring it up to spec for continued review, most of which was mentioned previously by Ben
<lazypower> If you like i can include those in the MP for you to use as a baseline for the fixes
<jose> lazypower: what are those? I can try and fix them now
<jose> (already merged the branch)
<lazypower> in config-changed you compare using =! instead of -n
<lazypower> it should read if [ -n "$SSL" ]; then
<jose> fixed that
<jose> (fixed everything that's on the comment already)
<lazypower> and your PWDIR variable is unbound
<lazypower> ah then you should be up to snuff for the charm-run completing, just validate the intent is being performed :)
<jose> lemme check then :)
<lazypower> I'm going to head out and get some sleep. I'll be around closer to 9AM EST for a review update if its ready
<jose> sure, I should be pushing in a minute :)
<noodles775> rogpeppe: Hi there. RE you question last Friday, yes requests is a python library you'd need to install. Sorry, I didn't mention that in my paste. I saw you were able to repro the issue in the end though...(with the other persons steps). Great!
<rogpeppe> noodles775: the issue should be fixed in trunk now
<noodles775> woot :)
<eagles0513875_> hey all :)
<eagles0513875_> hey fwereade
<eagles0513875_> fwereade: are you around?
<fwereade> eagles0513875_, yeah, but meeting in a mo
<eagles0513875_> fwereade: ok hit me up when you return then it can wait.
<fwereade> eagles0513875_, better to ask
<eagles0513875_> fwereade: just a quick question: to use lxc containers, or to use juju on my VPS, i need to use manual provisioning, right?
<fwereade> eagles0513875_, I might see it and answer, and so might someone else
<fwereade> eagles0513875_, those are different things really -- you can experiment locally with lxc/kvm in the local provider, you can experiment on arbitrary machines with manual provisioning
<eagles0513875_> ok fwereade but is it going to be a bit tricky though to ensure that even though I'm using manual provisioning that it won't spawn another instance?
<eagles0513875_> or want to spawn another instance
<fwereade> eagles0513875_, if you're using a manual environment juju never provisions any other instances itself
<fwereade> eagles0513875_, you have to add-machine them manually
<eagles0513875_> fwereade: ahh ok
<eagles0513875_> so basically if i need to deploy another instance of lets say wordpress on a different vps i would need to add the machine but then would it ask me which machine I would like to put it on?
<fwereade> eagles0513875_, if you care in detail about placement, you should probably specify it by hand with --to
<fwereade> eagles0513875_, otherwise we'll put new units onto clean empty machines that match the service/environment constraints
<eagles0513875_> got it
<eagles0513875_> fwereade: it should i think allow you to specify which machine you want to deploy it on even if you already have an instance on it
<eagles0513875_> hey all is it possible to install juju on debian?
<jamespage> marcoceppi, not having 'default' config is no longer acceptable?
<fwereade> eagles0513875_, there's a --to param for deploy/add-unit
<eagles0513875_> fwereade: ?
<eagles0513875_> oh ok
<fwereade> eagles0513875_, that's how you specify what machine to deploy a unit to
<eagles0513875_> ahh ok
<bloodearnest> heya all, finally figured out my local lxc issues, in case anyone's interested: https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1205086
<eagles0513875_> fwereade: you can't really do that from the web interface can you
<_mup_> Bug #1205086: lxc-net dnsmasq --strict-order breaks dns for lxc non-recursive nameserver <libvirt (Ubuntu):Expired> <lxc (Ubuntu):Expired> <https://launchpad.net/bugs/1205086>
<eagles0513875_> fwereade: also is it possible to work with juju on debian
<fwereade> eagles0513875_, yeah, I don't think that's exposed there
<eagles0513875_> fwereade: which is a shame tbh
<fwereade> eagles0513875_, work on machines in the gui is ongoing
<eagles0513875_> everything you do on the command line imho should be exposed and usable on the web interface
<fwereade> eagles0513875_, they're separate projects, they don't completely lockstep
<eagles0513875_> btw fwereade  sorry if I'm interrupting your meeting
<fwereade> eagles0513875_, but yeah I understand that position and generally support it ;)
<eagles0513875_> fwereade: what would be interesting to see is if juju would become platform agnostic meaning will work on debian or ubuntu
<fwereade> eagles0513875_, depends which bits tbh
<fwereade> eagles0513875_, you can currently only deploy to ubuntu, but can deploy *from* win/mac as well
<fwereade> eagles0513875_, you never know, the `juju` binary might just work unmodified on debian
<eagles0513875_> fwereade: what the person I'm talking to is interested in knowing is if we can deploy stuff to debian based machines.
<eagles0513875_> it would be great if that could be done
<eagles0513875_> tbh
<fwereade> eagles0513875_, it can surely be done, but not without some work
<eagles0513875_> fwereade: would be more than willing to help out with that, but i guess i should get familiar with the code first
<eagles0513875> fwereade: what would be the best and first steps to get things going with contributing, plus working towards joining your team
<fwereade> eagles0513875_, I would start by deploying an environment and fiddling with the charm
<eagles0513875> fwereade: regarding the above as i just saw this would you recommend I do it with the local provider?
<eagles0513875> fwereade: out of curiosity to get juju to work on an ubuntu private cloud
<eagles0513875> ubuntu server would need to be setup as MAAS right
<eagles0513875> fwereade: i think there is an issue with the documentation for the local provider
<eagles0513875> getting an error when i try to bootstrap to the local provider about an ssh key
<eagles0513875> and in the documentation there is no mention of generating an ssh key
<eagles0513875> fwereade: https://juju.ubuntu.com/docs/config-LXC.html
<fwereade> eagles0513875, https://juju.ubuntu.com/docs/ has it right at the top
<fwereade> eagles0513875, but fwiw the trunk version generates one for you if it can't find one
<eagles0513875> in terms of the juju site where would i file a suggestion
<eagles0513875> getting started the way it is i wouldnt think to click it as it seems to be a section name for the menu
<eagles0513875> i would put getting started as a first menu
<eagles0513875> fwereade: brb
<marcoceppi> jamespage: yes, if you want a "null" default just use an empty string `default: ""`
<marcoceppi> jamespage: It's a W so it's not a critical Juju error, but rather it affects juju tools (ie: GUI, deployer, etc)
 * marcoceppi should document what each level of error is for in charm proof
<jamespage> marcoceppi, that does not work
<jamespage> "" != None with config-get --format=json
<jamespage> marcoceppi, btw do you have a mysql charm for trusty yet? we're still using a butchered one in the openstack test lab
<eagles0513875> fwereade: i find it a bit strange if one is using a local provider an SSH key is required
<eagles0513875> fwereade: it make sense if you are testing on a remote server but in my case i am not
<eagles0513875> i guess it should be presented as an option to the user when setting up the local provider
<eagles0513875> hey guys is it possible if im running the local provider on my laptop to bypass the need for ssh keys
<fwereade> eagles0513875, it's not really -- every difference between the local provider and the others is a source of potential nastiness, and adding a special case to allow people not to use ssh isn't worth it
<fwereade> eagles0513875, is there a problem with just running ssh-keygen, or setting authorized-keys[-path] directly?
<eagles0513875> fwereade: non no i just find it a bit useless if im running the local provider on the same laptop im on now
<eagles0513875> here is the use case im thinking of
<eagles0513875> fwereade: what im trying to get at is why not provide users the ability, if they are using the local provider, to use localhost which doesnt require ssh
<eagles0513875> and if they want to use the local provider obviously they would need ssh there
<fwereade> eagles0513875, because you're not sshing to localhost? you're sshing to the containers
<eagles0513875> ahhh ok
<eagles0513875> that isnt that clear in the documentation
<fwereade> eagles0513875, "you need a keypair" is pretty clear, I think
<eagles0513875> sorry for the confusion
<fwereade> eagles0513875, no worries :)
<eagles0513875> fwereade: what isnt clear is what its used for. it just says generate a key pair
<fwereade> eagles0513875, the reasons could be better explained, but IMO that's not really appropriate for step-0 introduction
<eagles0513875> just have to figure out a way to keep my production private key separate from the one im going to generate
<fwereade> eagles0513875, what's the concern there?
<eagles0513875> will the id_rsa file be appended to?
<fwereade> eagles0513875, I don't follow -- you have a private key set up already but haven't got the public key there?
<eagles0513875> i think my mind is just out to confuse me today
<fwereade> eagles0513875, is it that you have id_rsa in ~/.ssh, but not id_rsa.pub
<fwereade> ?
<eagles0513875> fwereade: correct
<fwereade> eagles0513875, hmm, why would you do that?
<eagles0513875> fwereade: i have my private key in there so when i ssh into the vps's i have in production i just have to give the key's password
<fwereade> eagles0513875, sure, but where do you get the public key from when you're setting up one of those machines?
<eagles0513875> i have the public key saved in a secure location so i then upload it to the server
<fwereade> eagles0513875, it's the public key
<fwereade> eagles0513875, you can hand it out willy-nilly
<eagles0513875> fwereade: ok i have my mind in knots today lol
<fwereade> eagles0513875, it's the private key you need to take care of ;p
<eagles0513875> id_rsa the private key
<fwereade> eagles0513875, yeah
<eagles0513875> ok
<eagles0513875> i have one of those already in there the public key is on the servers
<eagles0513875> if i generate another one will it append the new private key to the already existing id_rsa
<fwereade> eagles0513875, no, I would recommend sshing into one of your servers and extracting your pubkey from authorized_keys
<fwereade> eagles0513875, I don't think it's very meaningful to have a private key appended to another?
<eagles0513875> fwereade: ok i have both my public and private key pair for my production servers in a secure location
<eagles0513875> let me see what happens
<fwereade> eagles0513875, so you've got an id_rsa.pub right next to the matching id_rsa now? cool
<eagles0513875> ya
<fwereade> eagles0513875, note: first time you spin up a local provider will take a long time
<eagles0513875> fwereade: whats not making much sense for me is that without it, with just the private key on my laptop, ssh works fine. im just trying to understand why i should have the .pub key there
<eagles0513875> especially if i have two private keys in .ssh
<fwereade> eagles0513875, I can't see a reason offhand *not* to keep your public key alongside your private one
<eagles0513875> fwereade: i dunno why im worrying it gives you a default but also allows you to specify a different file name for the private key when generating
<eagles0513875> good point
<eagles0513875> also fwereade
<eagles0513875> im an idiot lol
<eagles0513875> i didnt notice, when generating the key from within .juju, that it creates the pair in .juju
<eagles0513875> and not in .ssh
<fwereade> eagles0513875, oh, did we already release a version that creates a keypair?
<eagles0513875> fwereade: seems like it
<fwereade> eagles0513875, hmm, in that case we ought to be using it
<eagles0513875> fwereade: its 1.16
 * fwereade grumbles a bit
<eagles0513875> fwereade: Installed: 1.16.5-0ubuntu1~ubuntu13.10.1~juju1
<eagles0513875> thats whats showing up in my apt-cache policy
<eagles0513875> fwereade: probably because im using the ppa
<fwereade> eagles0513875, weird, I'm sure that was before we added that, let me take a look
<eagles0513875> and installed from the ppa instead of directly from the repo
<eagles0513875> anyway i gave it a name to differentiate between the two, in case it did land in .ssh
<fwereade> eagles0513875, just a mo, what did you do? didn't you just get the pubkey from your servers?
<eagles0513875> fwereade: no no i was just worried of the key being overwritten when i generated the key for the juju local provider
<fwereade> eagles0513875, sure, I'm just wondering why you'd need to generate a new one when you have one already
<eagles0513875> good point just like keeping my testing and production stuff seperate
<eagles0513875> fwereade: there is something missing in the documentation
<eagles0513875> fwereade: do i need to add the pub key to the environments.yaml file for local cuz im getting this error ERROR error parsing environment "local": no public ssh keys found
<fwereade> eagles0513875, please tell me exactly what you did
<eagles0513875> sudo juju bootstrap
<eagles0513875> it doesnt seem to pick up the keys
<fwereade> no, what you did to your ssh setup
<eagles0513875> fwereade: for juju
<eagles0513875> fwereade: when generating the keypair should it end up in .juju if you dont use the default path to .ssh
<fwereade> eagles0513875, AFAICT you should not be generating a keypair at all
<eagles0513875> ok
<fwereade> eagles0513875, you should have got your pubkey and saved it in ~/.ssh/id_rsa.pub
<eagles0513875> ok
<eagles0513875> fwereade: when one learns to untangle their complex mind, setting up juju with the local provider is very very simple
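An aside on the private-key-only situation above: rather than copying the public key back from a server's authorized_keys, OpenSSH can also regenerate it from the private key with `ssh-keygen -y`. A minimal sketch (assuming a passphrase-less key and a standard `ssh-keygen` on PATH; `derive_public_key` is an illustrative name, not a juju API):

```python
import os
import subprocess

def derive_public_key(private_key_path):
    """Return the public half of an OpenSSH private key.

    `ssh-keygen -y -f <key>` reads the private key and prints the
    matching public key; saving that next to the private key as
    <key>.pub gives juju the keypair it expects.
    """
    result = subprocess.run(
        ["ssh-keygen", "-y", "-f", private_key_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# e.g. restore ~/.ssh/id_rsa.pub from ~/.ssh/id_rsa:
#   key = os.path.expanduser("~/.ssh/id_rsa")
#   with open(key + ".pub", "w") as f:
#       f.write(derive_public_key(key))
```

Note that a passphrase-protected key will make `ssh-keygen -y` prompt for the passphrase.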
<lazypower> I managed to remove my lbox authentication from launchpad in haste, and I'm not positive where it caches its copy of the oauth token. Can someone point me where I need to look to clear its cached authorization?
<fwereade> lazypower, I think it's ~/.lpad_oauth?
<fwereade> eagles0513875, glad it worked out -- I hadn't considered the private-key-only situation
<lazypower> that was it, thank you
<fwereade> lazypower, yw
<eagles0513875> fwereade: should i file something on launchpad
<fwereade> eagles0513875, yes please, that would be very helpful
<eagles0513875> fwereade: under juju core?
<fwereade> eagles0513875, please
<eagles0513875> ok fwereade also any bugs i can help or work on in terms of fixing?
<eagles0513875> fwereade: should i run a test on what we just discussed
<eagles0513875> and maybe its something i could be mentored in fixing
<eagles0513875> fwereade: if you are interested in following this bug https://bugs.launchpad.net/juju-core/+bug/1270858
<_mup_> Bug #1270858: possibility of .ssh not having a public key with the private key <juju-core:New> <https://launchpad.net/bugs/1270858>
<fwereade> eagles0513875, so, I think the fix is to scan for files with/without .pub suffixes, ignore the ones that don't, and complain about the lack of a keypair (rather than just of a public key)
<eagles0513875> fwereade: wouldnt it be good if i test without having the public key first
<eagles0513875> to see if it works or does not work cuz right now the issue i filed seems rather vague and not useful
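fwereade's proposed fix above (complain about a missing *keypair* rather than just a missing public key) could be sketched roughly like this; juju-core is Go, so this Python version, the `id_*` filter, and the error wording are all illustrative guesses:

```python
import os

def find_keypairs(ssh_dir):
    """Split identity files into complete keypairs and orphaned
    private keys (a private key with no matching .pub file)."""
    names = set(os.listdir(ssh_dir))
    # Only consider identity-style files, not known_hosts/config etc.
    candidates = {n for n in names
                  if n.startswith("id_") and not n.endswith(".pub")}
    complete = {n for n in candidates if n + ".pub" in names}
    orphaned = candidates - complete
    return complete, orphaned

def public_keys_or_error(ssh_dir):
    complete, orphaned = find_keypairs(ssh_dir)
    if not complete:
        # Complain about the missing *keypair*, not just the public key.
        raise RuntimeError("no ssh keypair found (a private key without "
                           "its .pub file does not count)")
    return [n + ".pub" for n in sorted(complete)]
```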
<lazypower> fwereade, if lbox fails to propose due to a non existant [branch|repository] is this a sign of a larger problem?
<fwereade> eagles0513875, well, the ideal bug report is generally along the lines of "what I did; what I expected; what actually happened"
<eagles0513875> fwereade: ok will get more information when i relocate for my ccna class
<fwereade> lazypower, hmm, nothing really springs to mind there
<eagles0513875> anyway im out for now just deploying wordpress to try things out :)
<fwereade> lazypower, is it saying the source or the target doesn't exist?
<eagles0513875> fwereade: i have an idea for a really good charm setup. how can I see, lets say, what the apache charm installs for the web server, as apache has about 4 different versions
<lazypower> fwereade, http://paste.ubuntu.com/6786390/
<lazypower> eagles0513875, that would be in the install hook of the charm you're looking at.
<fwereade> lazypower, hmm, I'm not completely sure there -- marcoceppi, have you ever seen that?
<fwereade> eagles0513875, lazypower: or more generally inferrable from a variety of hooks, and/or from the config file
<eagles0513875> ok
<lazypower> Fair enough :)
<eagles0513875> fwereade: lazypower not to butt in here but could it be the directory /charms/ was moved and the path you are looking for doesnt exist anymore
<fwereade> eagles0513875, lazypower: several charms let you choose  different versions to run
<lazypower> Its complaining about the remote. My charm is still here locally, but i believe its trying to push to the wrong remote.
<eagles0513875> lazypower: check which branch you are on
<fwereade> lazypower, hmm, should you be pushing to ~lazypower/charms/mediawiki/trunk perhaps?
<lazypower> yeah, i've got an open MP that I was trying to get on codereview
<fwereade> marcoceppi, do you have an immediate answer for lazypower by any chance?
<lazypower> thats why i include the parent branch in the config, and i think marco is afk for now.
<lazypower> s/config/output listing on pastebin
<eagles0513875> fwereade: i have some enhancement suggestions but will need to discuss those with you when i return
<fwereade> lazypower, ah, hold on, surely you just need to push to that branch before you first lbox propose -- right?
<lazypower> fwereade, https://code.launchpad.net/~lazypower/charms/precise/mediawiki/tests
<lazypower> ah ok, now i've got additional legwork. My MP branch and local copy have diverged
<fwereade> lazypower, ok, that's pretty unambiguous :)
<lazypower> sarcasm? :)
<lazypower> and i apologize for my ineptness, i'm still getting comfortable with the toolchain
<fwereade> lazypower, no, not at all, the existence of your branch is 100% clear
<fwereade> eagles0513875, if I was digging at anyone it was myself ;)
<lazypower> haha
<fwereade> lazypower, er^
<lazypower> You get my seal of approval gustavo
<fwereade> lazypower, I'm not gustavo, but I do observe that your error message doesn't include the "precise" I'd expect
<niemeyer> lazypower: Heya
<fwereade> lazypower, and I was just about to direct you towards the man himself :)
<lazypower> O_O
<niemeyer> lazypower: Thanks, I suppose :)
<lazypower> mind=blown
<lazypower> i ran across a post somewhere from you that referenced gustavo and made the mental note it was you... whooops!
<lazypower> clearly i need to go back downstairs and not return until i've finished my first cup of coffee
<lazypower> niemeyer, gustavo i presume?
 * marcoceppi reads scroll back fwereade lazypower
<marcoceppi> lazypower fwereade use the -for flag
<marcoceppi> lazypower:  lbox propose -cr -for lp:charms/mediawiki
<lazypower> marcoceppi, lbox propose -for lp:~blah blah
<niemeyer> lazypower: Yeah :)
<lazypower> ooohh
<lazypower> and good morning
 * marcoceppi still has reservations about using lbox for charm reviews, but those are on hold
<fwereade> huh, I hadn't read that paste as a problem with the target
<marcoceppi> lazypower: can you run bzr info as well?
<jose> lazypower: hey, got to fix the postfix charm
<marcoceppi> fwereade: I re-read it looks like it's trying to push to lp:~lazypower/charms/mediawiki instead, so maybe the -for flag isn't the issue
<marcoceppi> maybe he doesn't have a push branch in bzr yet?
<lazypower> jose, great news. Did you put it back in the rev queue? I'll fish it out after i've solved my lbox issue.
<jose> lazypower: yeppers! marked it as new
<lazypower> marcoceppi, http://paste.ubuntu.com/6786495/
<marcoceppi> lazypower: okay, looks good to me. did the -for flag fix this? If not just do `bzr lp-propose lp:charms/mediawiki` and we'll look at lbox for charm reviews tomorrow
<lazypower> marcoceppi, i'm working on getting my branch undiverged from whats open as the MP - one moment. Reading up on bzr
<eagles0513875> fwereade: will ping you very soon
<eagles0513875> i have some nutty but useful ideas :)
<eagles0513875> hey fwereade im back
<teknico> hi all, got a problem with 1.17 and MAAS, br0 isn't being brought up by Juju when adding a new MAAS machine
<marcoceppi> teknico: is the juju agent running on these new machines though?
<teknico> marcoceppi, at the end I got this: https://pastebin.canonical.com/103229/
<marcoceppi> teknico: is that from cloud-init log?
<teknico> marcoceppi, nope, it's the end of the output from trying to start the container manually with "sudo lxc-start --name juju-machine-1-lxc-0"
<teknico> in the log there was nothing more than the original error itself
<teknico> which was: agent-state-info: '(error: error executing "lxc-start": command get_init_pid failed to receive response)'
<teknico> which in turn was caused by br0 missing, at least according to bbcmicrocomputer :-)
<marcoceppi> teknico: makes sense, can you paste the cloud-init log file?
<marcoceppi> it should be done during lxc install tbh
<teknico> marcoceppi, the whole of cloud-init.log? it's 45KB, it'll take a while pasting it :-)
<dpb1> marcoceppi: I had a question on that swap charm before I proceed with fixing the review feedback.  You want me to document config options in the readme *and* in the config.yaml?  Seems like I would just be cut-and-pasting descriptions?  Or am I missing what I should be doing there?
<marcoceppi> dpb1: config.yaml should have a brief description, readme should have a more detailed explanation of what the config option does and provide example values
<dpb1> marcoceppi: OK
<dpb1> thx
<dpb1> marcoceppi: in amulet, how do I make use of an existing deployer file?
<mgz> dpb1: you can load a deployer config in your script
<mgz> it's something like... sec
<dpb1> mgz... thx.
 * dpb1 waits
<mgz> d = amulet.Deployment()
<mgz> with open(path) as f:
<mgz>   script = yaml.safe_load(f.read())
<mgz> d.load(script)
<mgz> d.setup()
<mgz> dpb1: hope that helps
<dpb1> mgz: thx, trying now
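Made self-contained, mgz's approach above looks roughly like this (assuming python-amulet and PyYAML are installed; `load_bundle` and the path are illustrative names):

```python
import yaml  # PyYAML

def load_bundle(deployment, path):
    """Load a deployer-format YAML file into an amulet Deployment and
    set it up, instead of hand-building it with d.add()/d.relate()."""
    with open(path) as f:
        script = yaml.safe_load(f.read())
    deployment.load(script)   # schedule everything the bundle describes
    deployment.setup()        # then deploy as usual

# Usage (assuming python-amulet is installed):
#   import amulet
#   d = amulet.Deployment()
#   load_bundle(d, "tests/mybundle.yaml")
```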
<jose> lazypower: hey, did you get to review the charm?
<lazypower> jose: I did not, i was sidetracked upvoting on askubuntu and their rowdy crowd. Let me promote this in my queue and have a look
<lazypower> apologies for not getting to it sooner
<jose> sure :)
<jose> no worries
<lazypower> Great work on the revisions jose, I'm going to add my +1 to this charm and place it in another charmers rev queue
<jose> awesome, thanks!
<lazypower> hey thats good info to have handy mgz, that loads a deployment yaml, ergo a bundle, into amulet?
<marcoceppi> lazypower: yeah, if you have a deployer file already in your tests dir, you can bypass all the d.add() stuff and just do d.load()
<lazypower> amazingness
<jose> hey marcoceppi, if you could also take a look at the postfix charm it'd be great
<jose> there you pop up :P
<lazypower> I'm writing up my +1 review right now, give me another 5 and i'll have it complete
<jose> cool!
<marcoceppi> jose: technically a day off today, but it'll still be in the queue since lazypower is +1 reviewing it as a "jr charmer"
<marcoceppi> so someone from ~charmers will cycle on to it sometime this week
<jose> cool thanks!
<lazypower> Does juju/maas use a special configuration for DHCP IP Assignment? I know that the local provider allows me to assign a network interface in the environments.yaml if i've opted into configuring my own bridge device.
<marcoceppi> lazypower: maas has its own DHCP server
#juju 2014-01-21
<cargill_> hi, trying to set up juju on debian, and I'm getting "INFO juju.environs.sync sync.go:235 built 1.16.5.1-unknown-amd64 (4540kB); ERROR supercommand.go:282 invalid series "unknown""
<cargill_> using the saucy ppa, since the env is mostly sid
<cargill_> where does it get the "unknown" tag?
<eagles0513875> hey all
<eagles0513875> hey fwereade :)
<bloodearnest> heya all - having issues with my openstack setup (on canonistack)
<bloodearnest> juju status tells me in cannot connect, I need to bootstrap
<bloodearnest> juju bootstrap tells me already bootstrapped
<bloodearnest> nova boot works fine, and I'm well within my quota
<melmoth> bloodearnest, when i use juju on canonistack, i use a ... canonistack vm to run juju
<bloodearnest> melmoth: me too
<melmoth> if it tells me it's already bootstrapped, just destroy the environment
<bloodearnest> melmoth: already tried that
<melmoth> may be it bootstrapped but the bootstrap node vm was not able to run
<bloodearnest> ...and it's working
<melmoth> so the bootstrap node is up and running ?
<melmoth> ssh into it, and try to find some juju related logs
<bloodearnest> melmoth: looks like
<bloodearnest> melmoth: I did multiple destroy-environments already, I'm sure
<bloodearnest> and juju status looks like it's timing out
<melmoth> ssh into the node
<melmoth> when you bootstrap, it creates a vm.
<bloodearnest> yeah
<melmoth> ssh into it
<melmoth> if you cant, that explains why juju is not working
<melmoth> if you can, you need to find out what s wrong in it.
<melmoth> but i have no clue how the internal works
<melmoth> so apart from looking at /var/log randomly, not sure
<melmoth> (well, randomly... cloud-init.log cloud-init-output.log and anything that looks like juju)
<bloodearnest> couple of request failures from swift in the logs
<bloodearnest> melmoth: /var/log/juju/all-machines.log
<bloodearnest> but status is working again
<bloodearnest> trying a deploy
<bloodearnest> so it seems to be working
<noodles775> rogpeppe: jfyi, I updated juju-core and retried an amulet test, I still see bug 1269519 : http://paste.ubuntu.com/6791133/
<_mup_> Bug #1269519: Error on allwatcher api <juju-core:In Progress by rogpeppe> <juju-deployer:Confirmed> <https://launchpad.net/bugs/1269519>
<rogpeppe> noodles775: bother.
<rogpeppe> noodles775: i shall recheck.
<noodles775> Yeah, sorry (It's not blocking me or anything, so don't prioritise it unless it's affecting others)
<teknico_> fyi, filed bug #1271144
<_mup_> Bug #1271144: br0 not brought up by cloud-init script <juju-core:New> <https://launchpad.net/bugs/1271144>
<cargill_> hi, trying to set up juju on debian, and I'm getting "INFO juju.environs.sync sync.go:235 built 1.16.5.1-unknown-amd64 (4540kB); ERROR supercommand.go:282 invalid series "unknown""
<cargill_> that's juju-local
<mgz> cargill_: you probably need to teach juju about debian series names
<eagles0513875> cargill_: i was trying to make a push for the use of juju for the Document Foundation but was asked if it works with debian lol
<eagles0513875> cargill_: would be nice if juju was a bit platform neutral be it for debian or any debian derivatives
<mgz> cargill_: see updateDistroInfo in environs/simplestreams/simplestreams.go - you'll want to make that read debian csv as well, and go from there
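The change mgz points at lives in Go, but the parsing involved is small; a Python sketch of reading a distro-info CSV, assuming the same columns as Ubuntu's /usr/share/distro-info/ubuntu.csv (version,codename,series,created,release,eol):

```python
import csv
import io

def series_by_version(csv_text):
    """Map release version -> series codename from a distro-info CSV
    (columns assumed: version,codename,series,created,release,eol)."""
    return {row["version"]: row["series"]
            for row in csv.DictReader(io.StringIO(csv_text))}

# Illustrative row in the debian.csv format:
sample = ("version,codename,series,created,release,eol\n"
          "7.0,Wheezy,wheezy,2011-03-01,2013-05-04,2016-04-25\n")
```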
<rogpeppe1> noodles775: i can't reproduce the problem any more with latest juju-core trunk
<rogpeppe1> noodles775: i see "The rabbitmq-server passed this test."
<noodles775> rogpeppe1: let me check the repro instructions and see how it differs from my amulet test.
<rogpeppe1> noodles775: previously, i got 100% failure rate, so i think that *something* has been fixed
<noodles775> rogpeppe1: yeah - you were always checking with the rabbitmq-server instructions right? (as my instructions didn't work because you didn't have python-requests installed).
 * noodles775 can try with the other instructions too.
<rogpeppe1> noodles775: i was trying with the basic_deploy_test.py in bzr branch lp:~mbruzek/charms/precise/rabbitmq-server/tests
 * noodles775 does the same.
<rogpeppe1> noodles775: you *did* "go install" the latest juju, did you?
<rogpeppe1> noodles775: (and do a fresh bootstrap with it)
<noodles775> rogpeppe1: I did this: http://paste.ubuntu.com/6791620/
<noodles775> (and yes, I'm rebootstrapping for every run)
<rogpeppe1> noodles775: and i presume that "which juju" prints "/home/michael/golang/bin/juju" ?
<rogpeppe1> noodles775: could you paste me the contents of ~/.juju/local/logs/machine-0.log?
<noodles775> rogpeppe1: first, here's the run with the rabbitmq log: http://paste.ubuntu.com/6791637/
<noodles775> er, rabbit-mq steps to repeat.
<noodles775> rogpeppe1: and here's the machine-0.log - http://paste.ubuntu.com/6791655/
<rogpeppe1> noodles775: that doesn't look like output from the latest version
<rogpeppe1> noodles775: the latest version prints "connection from" and "connection terminated" lines when API connections are made and dropped
<noodles775> rogpeppe1: excellent - so, let me find out how I could be possibly running the old version given the steps taken.
<rogpeppe1> noodles775: i didn't see your entire terminal log - for example, i didn't see the bootstrap step, or the contents of your $PATH, so i can't be sure
<noodles775> They're what you'd expect, I'll paste with a re-run (while trying to find out what else juju may still have running)
<rogpeppe1> noodles775: try destroying the environment and re-bootstrapping with "--debug"
<noodles775> hrm, old tools?
<noodles775> 2014-01-21 13:51:17 INFO juju.provider.local environ.go:473 tools location: /home/michael/.juju/local/storage/tools/releases/juju-1.17.0.1-saucy-amd64.tgz
<noodles775> Right, even the binary version is 1.17.0, let me paste.
<rogpeppe1> noodles775: the --debug flag should induce bootstrap to say where it's getting the jujud binary from
<noodles775> rogpeppe1: Right - lots of 1.17.0 in there... http://paste.ubuntu.com/6791688/
<cargill_> mgz: that means rebuilding juju, right? where are the sources located?
<rogpeppe1> noodles775: ah, i know the problem
<rogpeppe1> noodles775: it's frickin' sudo
<rogpeppe1> noodles775: which ignores your $PATH
<noodles775> Urg... right :/
<mgz> cargill_: I assumed you had, which would be how you got the -unknown- there
<mgz> cargill_: but, lp:juju-core and see README
 * noodles775 sudo bootstraps with an explicit path.
<rogpeppe1> noodles775: i've aliased sudo to http://paste.ubuntu.com/6791714/
<noodles775> Thanks rogpeppe1
<rogpeppe1> noodles775: i consider it a real problem that sudo doesn't use the same PATH, although i'm aware the writers of sudo consider it a feature
<cargill_> it's a debian machine with a saucy ppa set up, I'm working on getting an ubuntu lxc container, but don't have that ready yet
<noodles775> Yeah, I'd agree that it should be different, ideally we shouldn't need sudo to bootstrap, but there's a plan for that I guess.
<Ming> Does charm still use 'revision' file to maintain the version?
<Ming> Docs said deprecated but don't know what is alternative
<lazypower> Ming, from what I understand it still uses the revision file to maintain the currently deployed version, otherwise I do believe it uses the bzr revision information. (needs citation)
<Ming> Thx
<marcoceppi> Ming: not quite
<marcoceppi> Ming: revision is only used for local deployments
<marcoceppi> Ming: otherwise, the charmstore maintains a seperate revision of the charm
<marcoceppi> dpb1: is this ready for review? https://code.launchpad.net/~davidpbritton/charms/precise/haproxy/fix-service-entries/+merge/202387
<dpb1> marcoceppi: yup.  I just addressed all of sideni's comments
<lazypower> is there something specific i need to add to my juju environment to get the `charm test` command to work proper? I've to tests in $CHARM_PATH/tests, yet when i run charm test, it complains about 'None does not exist in ~/.juju/environments.yaml'
<lazypower> is this related to the null provider?
<lazypower> s/to/got
<eagles0513875> hey guys question if i use the --to flag do i need to specify the ip address or can a domain name which has a dns entry for that server work as well
<lazypower> eagles0513875, let me try and find out 1 moment while i bootstrap an AWS environment
<eagles0513875> thanks lazypower
<eagles0513875> lazypower: reason im asking is im planning on potentially using juju to help ease deployments for me to my vps provider.
<eagles0513875> so far my tests with getting used to the command line are quite nice and successful :)
<lazypower> Thats great news :) I've been doing the same in my time out of work.
<eagles0513875> lazypower: right now just using the local provider
<eagles0513875> on my laptop which is very nice for testing and development of charms etc
<lazypower> local has really streamlined the process. The "null provider" or manual provisioning stuff is still fairly experimental.
<jamespage> marcoceppi, we never really talked about default: "" again
<jamespage> marcoceppi, what is the reasoning behind doing that?
<lazypower> jamespage, to offer a "sane default", that passes the lint test. charm linting via charm proof throws a W: if there isn't a default provided.
<marcoceppi> lazypower: right, but that was a recent addition to charm proof
 * marcoceppi looks through the commit history
<jamespage> lazypower,  yes - but it changes the way config-get --format=json behaves
<jamespage> and empty string is not the same as unset
<lazypower> ah, i had not considered that.
<Ming> Thx Marco, just back from a meeting
<marcoceppi> jamespage: I think this came from a discussion on the list about empty strings and unset options
<lazypower> eagles0513875, so the workflow as I'm seeing it, you have to juju add-machine(dnsname) then you can specify --to with the machine ID that the orchestration node assigns the machine.
<marcoceppi> ultimately, there was no juju unset and no way to have an unset item, so empty strings should be preferred
<lazypower> eagles0513875, trying to specify --to with a DNS Name results in an error. error: invalid --to parameter "ec2-54-205-81-94.compute-1.amazonaws.com"
<eagles0513875> ok
<eagles0513875> i think the --to should work with either a dns name or ip address
<marcoceppi> eagles0513875 lazypower --to should be a machine number
<marcoceppi> from juju status output
<lazypower> marcoceppi, thats what I just validated :)
<marcoceppi> ah, missed that line
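The behaviour validated above (in juju 1.x, `--to` takes a machine number from `juju status`, optionally with a container prefix, never a DNS name) can be approximated with a simple check; the grammar here is a guess for illustration, not juju's actual parser:

```python
import re

# Machine number, optionally with a container prefix such as "lxc:1".
_PLACEMENT = re.compile(r"^(?:(?:lxc|kvm):)?\d+$")

def valid_to(target):
    """Rough check that a --to argument names a machine, not a host."""
    return bool(_PLACEMENT.match(target))
```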
<Ming> Can juju help to map Amazon instance private ip and public ip?
<jamespage> marcoceppi, how do I unset a non-string item then?
<lazypower> Ming, yes. it aggregates all of the above.
<marcoceppi> jamespage: there are only int and booleans
<marcoceppi> jamespage: outside of strings, iirc
<marcoceppi> jamespage: since NULL is technically not a string, it's a mismatched type/value
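jamespage's `"" != None` point shows up directly in what `config-get --format=json` hands a hook: assuming juju emits JSON `null` for an option with no default (as jamespage's `None` suggests), the two cases decode to distinct Python values.

```python
import json

# What a hook sees for an option declared with `default: ""` ...
empty = json.loads('""')
# ... versus an option with no default at all (assumed to come back as
# JSON null).
unset = json.loads('null')

assert empty == "" and unset is None
assert empty != unset  # "" != None: the two cases are distinguishable
```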
<vila> I'm encountering issues trying to 'juju bootstrap' on canonistack lcy02  with 1.17.0-saucy-amd64
<vila> In one case it times out after 10 mins failing to connect to node 0 in the other I got: https://pastebin.canonical.com/103325/
<mgz> vila: if you `nova --debug list` does it succeed in talking to the same api endpoint?
<vila> mgz: that was then, nova list is empty right now
<vila> mgz: there seems to be something fishy with lcy02 as the bootstrap succeeded in lcy01
<mgz> right, it's probably an ask-IS moment
<vila> mgz: also, is there a way to put --constraints "mem=1G" somewhere below ~/.juju ?
<mgz> nope.
<mgz> it should be the default though, something is a little borked
<vila> mgz: damn 'nova list' is still empty, yet juju says I'm already bootstrapped >-/
<mgz> just do `juju destroy-environment` and try again
<vila> ok, going further after destroy --force/bootstrap 1G
<vila> mgz: well, further... back at 'Attempting to connect to ...:22'
<vila> mgz: that connection attempt is quite early in the bootstrap process right ?
<mgz> yeah, did you run with -v?
<vila> mgz: --show-log ? (Neither -v not --show-log appear in juju help bootstrap AFAICS)
<mgz> no, they're top level juju things
<mgz> like bzr flag things
 * vila coughs
<vila> at least bzr display them under --help no ?
<vila> hmm, may be not, irrelevant anyway ;)
<vila> mgz: https://pastebin.canonical.com/103330/ times out after 10 mins
<vila> mgz: and 'juju status' won't work until sshuttle is started right ?
<mgz> right
<vila> mgz: thanks, I was confused about which command was requiring the tunnel, sounds obvious that it's status in retrospect...
<vila> mgz: so, any idea on what is going on ?
<mgz> looks like the nova endpoint for lcy02 is down, that's why I wondered what `nova --debug list` showed
<vila> mgz: urgh, you know who to ping about that ?
<mgz> #is vanguard and ask, if that shows the dns as unreachable like your juju log did
<vila> mgz: but that occurred only once and doesn't seem to be the case right now, will check #is but they bounced us to another channel ;)
<vila> mgz: rats, bad timing, Vanguard just switched to : None :-/
<arosales> bloodearnest, ping
<Ming_> in depart hook which variable can tell this is the leaving node?
<lazypower> Ming_, example?
<Ming_> I have a cluster, let's say 5 nodes. one node is destroyed by "juju destroy-unit"; all nodes will run *depart hooks
<lazypower> ahh, i dont know. let me find out for you.
<Ming_> but the leaving node will handle this differently than others
<Ming_> k. thx
<_sEBAs_> hey!
<marcoceppi> o/ _sEBAs_
<lazypower> Ming_, i'm not getting a quick response, but i'll keep running legwork until I get an answer and will ping when I have an update.
<Ming_> no problem
<Ming_> another question, is there a way to hide config variables so they're not exposed to the user in the charm GUI? We have some internal variables in config.yaml
<lazypower> Ming_, is this charm going to be for internal use only? or are you writing a charm that will eventually be submit for charm store review?
<Ming_> yes. very soon
<lazypower> Ok. Since this will be submitted for charm store review, I can't think of a good way to hide configuration options per se. You can alternatively provide a sane default for the configuration options that suits 90% of the use cases, and that would satisfy the requirement.
<lazypower> Otherwise, if you are going to be using this extensively internally, read from an external file - or fork the config and maintain one for internal use and another for public releases.
<lazypower> are a few of your options.
<Ming_> k
<_sEBAs_> thank you all!
#juju 2014-01-22
<bloodearnest> arosales: pong
<jona_> hi, I'm running juju on a virtual machine that I connect to through a vpn
<jona_> if I run an apache-server on this virtual machine I am able to connect to it through the vm's ip
<jona_> but when I run a service through juju, I don't know how to connect to it from my computer (that is, not from the virtual machine)
<jona_> is it possible to make some sort of redirection so that when I go to the vm's ip in the browser the vm redirects me to the correct service (for example juju-gui)
<jamespage> hazmat, do I need to bump juju-deployer in trusty to matchup with 1.17.0?  seeing some odd issues
<stub> Should charms be running apt-get upgrade, or is that left to other mechanisms like Landscape?
<lazypower> jona_, your question needs a bit of clarification. Are you looking for a way to route to a juju lxc container inside of the virtual machine? or are you looking to reach a LXC on the HOST from the vm?
<lazypower> Or am I inferring too much, and there are no containers, just a service on a manually provisioned VM?
<jona_> hi lazypower
<jona_> I want to reach the lxc container inside of the virtual machine
<jona_> the VM is on an openstack server, and I connect to the VM with ssh and a vpn
<jona_> I am using the VM as a development environment, and juju seemed easy to use :)
<jona_> when I hosted a plain apache-service on the VM I was able to connect to it by just putting the VM's ip in the browser
<lazypower> Without knowing much of your network topology, i'm going to make recommendations based on what i feel is the path of least resistance
<jona_> okay
<lazypower> Juju by default does not use a routable IP scheme; it creates a bridged network interface to handle the IP routing from the local machine to the LXC containers. It doesn't sound like you have a setup that will lend itself easily to altering this bridged device for public/lan ip assignment, so you will need to create a tunnel through your tunnel to reach the containers.
<lazypower> This is largely untested through a VPN tunnel, but in theory it should work without a problem. There's an app called sshuttle that will allow you to create a virtual VPN within your VPN (man this seems heavy handed)
<marcoceppi> jona_: did juju create the openstack vm? or did you provision it in horizon?
<jona_> actually I'm not the openstack admin, the admin created the VM for me, and it's running ubuntu, and on the VM I installed juju
<marcoceppi> jona_: gotchya
<jona_> I see, a vpn in the vpn was actually what I had in mind, but I'm rather new to these kinds of things
<jona_> I'm more of a developer than a networking guy
<lazypower> jona_, well after conferring with Marco we both agree you can do this with just sshuttle. Let me fish up some docs for this, as it's been a while since I had to do this.
<jona_> oh all right
<lazypower> http://blog.utlemming.org/2013/12/beta-cross-platform-juju-development.html
<lazypower> utlemming did a great write up, its listed under (optional) SSHuttle
<jona_> oh okay
<jona_> I'll take a look
<marcoceppi> jona_: vpn is the way to go, its not as painful as you might think, just needs the right configuration options
<lazypower> now replace the user@localhost bits accordingly with proper end points, ports, and users and you should be able to reach those LXC containers.
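The substitution being described can be sketched as a small script; the user, VM address, and subnet below are assumptions taken from this conversation, and the `-r` form is sshuttle's documented way of naming the remote ssh endpoint. Run the resulting command from your desktop, not from the VM:

```shell
# Hypothetical values from this conversation -- substitute your own.
VM_USER=ubuntu            # account on the juju VM
VM_HOST=192.168.10.222    # the VM's reachable address over the VPN
SUBNET=10.0.3.0/24        # default LXC bridge range for juju's local provider

# Build the sshuttle invocation; -r names the remote ssh endpoint,
# and the trailing argument is the subnet to route through the tunnel.
CMD="sshuttle -r ${VM_USER}@${VM_HOST} ${SUBNET}"
echo "$CMD"
```

While that command runs, traffic to any 10.0.3.x address is forwarded over ssh to the VM and routed into the containers.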
<jona_> oh all right
<jona_> seems doable :D
<jona_> I'll try it
<jona_> juju seems reaaally neat so I'd like to get to know it better :D
<jona_> I tried the demo on the page and read a bit about it
<jona_> and at least deploying the charms(?) is super practical, hehe
<marcoceppi> jona_: once you get it installed, replace vagrant@localhost:2222 with the ssh address for your ubuntu machine
<jona_> I'm supposed to do this on the VM right?
<marcoceppi> jona_: if you can install virtualbox, you might want to use the vagrant image. it'll install an Ubuntu vm on your desktop with the local provider already configured
<marcoceppi> jona_: well. for testing, sure! but juju runs best against a cloud environment like open stack for production stuff
<jona_> oh okay, so if I actually want to use VMs I should just get my admin to install it directly on the openstack server?
<lazypower> jona_, in regards to the VM talk, marco was speaking about pulling down the vagrant image on your workstation for testing purposes
<lazypower> you already have juju installed and functioning in that VM, so you're good to go once you get your VPN or SSHuttle configured.
<marcoceppi> jona_: he would give you an account on the openstack install. you put your account details in juju, then juju will spin up openstack vms when you deploy charms
<lazypower> ^ that is the most desirable outcome from this talk, as you're getting the integration juju provides out of the box.
<jona_> ah I see
<marcoceppi> juju supports multiple types of clouds. aws, openstack, azure, lxc, bare metal, etc
<jona_> yeah I noticed
<jona_> hm, I'm not sure if I misunderstood something, when does sshuttle do the shuttling? when I ssh to the VM's ip?
<lazypower> Correct
<marcoceppi> jona_: from your desktop to the vm
<jona_> I ran this command
<jona_> sshuttle -e 'ssh -o UserKnownHostsFile=/dev/null ubuntu@192.168.10.222:22' 10.0.3.0/24
<lazypower> once you run that sshuttle command, your desktop will forward the request over that SSH tunnel to the VM, and route to the addresses of the LXC containers provisioned by juju/lxc.
<jona_> or wait, I'm not supposed to touch the ports ?
<jona_> I figured I'd set them to the port I use when SSHing
<lazypower> that's correct
<lazypower> :22
<lazypower> the :2222 is vagrant specific
<jona_> hm okay, but when I ssh'd to the ip it connected me to the VM as usual
<lazypower> run juju status, and find the IP of the deployed apache charm
<jona_> sure
<jona_> I should put that in instead?
<lazypower> Yep, try to ping it, or plug it into your browser
<jona_> it doesn't seem to find a connection
<jona_> no ping packages received
<jona_> but just so I understand the steps
<jona_> I ran
<jona_> sshuttle -e 'ssh -o UserKnownHostsFile=/dev/null ubuntu@192.168.10.222:22' 10.0.3.0/24
<jona_> then I ssh'd again to the VM, where I started sshuttle
<lazypower> 192.168.10.222 is the IP of the VM?
<jona_> then I tried pinging the ip for juju-gui (10.0.3.23) as provided by juju status
<jona_> yes
<lazypower> thinking, 1 moment
<marcoceppi> jona_: if you put 10.0.3.23 in your browser after running sshuttle, do you get the GUI?
<jona_> nope, it just gets stuck on trying to connect
<marcoceppi> and you're running sshuttle from your desktop?
<jona_> no on my VM
<marcoceppi> jona_: ah, there we go
<marcoceppi> run sshuttle from your desktop
<jona_> I actually asked about that :D
<jona_> okay
<marcoceppi> jona_: sorry we didn't make that more clear
<jona_> no biggie ^^ thanks for helping me at all, haha
<jona_> btw I run arch locally, and it just claimed "Errno 2 no such file or directory", maybe it's empty in /dev/null?
<lazypower> good call marco
<jona_> it wasn't
<jona_> hm, it says no such file or directory when I run sshuttle -e 'ssh -o UserKnownHostsFile=/dev/null ubuntu@192.168.10.222:22' 10.0.3.0/24 locally
<marcoceppi> jona_: is sshuttle installed?
<jona_> yes
<jona_> http://pastebin.com/dCHD5JSJ
<jona_> this is an error sshuttle generates, I think
<marcoceppi> jona_: feel free to remove the '-o User.../dev/null' stuff from the -e parameter
<marcoceppi> and give that a whirl
<jona_> weird, it still gives the same error
<marcoceppi> try giving the entire path to ssh
<marcoceppi> ie, /usr/bin/ssh
<jona_> still same error :S
<jona_> really weird
<jona_> I am able to run ssh -V and sshuttle -V so they're both runnable
<marcoceppi> can you run sshuttle -v for more verbosity? and paste the entire output
<jona_> oh I googled the error
<jona_> apparently I needed net-tools
<jona_> though now I get some other errors
<jona_> iptables v1.4.21: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
<marcoceppi> I guess you need insmod? Sorry haven't run arch in quite a while ;)
<jona_> I tried running it but it said "missing filename"
<jona_> I actually don't know what insmod is
<lazypower> it loads modules into the kernel space
<jona_> hm
<jona_> I'm just gonna try rebooting brb
<jona_> hello!
<jona_> it works now
<marcoceppi> huzzah
<lazypower> boom
<jona_> at least sshuttle does
<jona_> now I'll try connecting with firefox
<jona_> hmm
<jona_> it doesn't seem to work
<jona_> oh wait
<jona_> I forgot
<jona_> hmm
<jona_> nope still doesn't work
<jona_> now I've started sshuttle locally, connected to the VM, and tried pinging both 10.0.3.23 and 10.0.3.0
<jona_> might it have to do with the ports? and does the /24 after 10.0.3.0 in the sshuttle command matter?
<lazypower> the /24 defines the CIDR range you want to route through that tunnel
<lazypower> 10.0.3.0/24 will route everything from .0 to .255 on the 10.0.3 block, if that makes sense.
<lazypower> but no, the ports shouldn't be a problem. So long as sshuttle is actively running when you try to ping those ip's, or pull them up in a browser.
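The /24 arithmetic above can be checked in the shell: the prefix fixes the first 24 bits of the address, leaving 8 host bits, so the range covers 256 addresses (.0 through .255):

```shell
# A /24 mask fixes the first 24 bits, leaving 32 - 24 = 8 host bits:
# 2^8 = 256 addresses, i.e. 10.0.3.0 through 10.0.3.255.
CIDR=10.0.3.0/24
PREFIX=${CIDR#*/}            # the part after the slash: 24
HOSTBITS=$((32 - PREFIX))    # 8
ADDRESSES=$((1 << HOSTBITS)) # 256
echo "$CIDR routes $ADDRESSES addresses through the tunnel"
```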
<jona_> hm okay
<jona_> http://pastebin.com/bWxzhKsC
<jona_> got these warnings
<jona_> don't know what they mean
<lazypower> well those are good signs, it means sshuttle is establishing a connection with the server. It may not be active, but it's trying
<lazypower> kill it and restart it, do you still get the warnings?
<jona_> the server or shuttle?
<lazypower> sshuttle
<lazypower> brb
<jona_> ok
<jona_> hm, it seems to give the warning after like 5 minutes or so
<jona_> afk for a while
<marcoceppi> jona_: can you try running sshuttle with the -v flag, might illuminate what's going on
<jona_> http://pastebin.com/PBm45Wbq
<jona_> the last row appeared when I entered 10.0.3.23 in my browser
<jona_> I have to eat now, but I'll write again when I'm back
<marcoceppi> jona_: cool, it seems to be working :\
<hazmat> jamespage, i've had a bunch of similar issue reports in the last week relating to the same watcher error, rogpeppe1 added some additional logging, and i was going to have a look this week, would be good to confirm we're referencing the same issue. the ppa needs updating though
<rogpeppe1> hazmat: i've fixed that in trunk
<hazmat> rogpeppe1, logging or watcher error?
<rogpeppe1> hazmat: both
<hazmat> rogpeppe1, what was the issue for the watcher?
<rogpeppe1> hazmat: FWIW the additional logging *was* very useful for diagnosing the problem
<hazmat> rogpeppe1, awesome
<rogpeppe1> hazmat: the client watcher wasn't pinging
<rogpeppe1> hazmat: (i updated the bug report)
<hazmat> rogpeppe1, cool.. i thought that might be the case and commented on the bug to that effect..
<rogpeppe1> hazmat: https://codereview.appspot.com/53810045/
<rogpeppe1> hazmat: yeah - i'd actually merged the fix before you commented there :-)
<hazmat> :-)
<hazmat> strange normally i get a bug email from lp
<rogpeppe1> hazmat: it was a bit of a giveaway that the client connection was closed after exactly 3 minutes
<rogpeppe1> hazmat: a quick grep of the source for "3 \* time.Minute" found the source of the problem immediately...
<hazmat> rogpeppe1, given a synchronous client not sure how to make that nicer.. outside of background threads..... ideally the ping timeout could be client configured
<rogpeppe1> hazmat: i don't think we care about the client pinging
<hazmat> rogpeppe1, umm.. i think you do.. guard against bad clients.
<hazmat> rogpeppe1, your saying just wait for socket close?
<rogpeppe1> hazmat: we should probably set SO_KEEPALIVE if it's not set already
<rogpeppe1> hazmat: yeah
<rogpeppe1> hazmat: but anyway, you should definitely make your client library concurrent-safe, because at the moment it wouldn't be possible to send a ping request in general, even if you wanted to
<hazmat> hmmm.. so alternatively for a sync client, i can hold stored pings, and send opportunistic pongs
<rogpeppe1> hazmat: that's not a good idea
<hazmat> rogpeppe1, without concurrency how else to do that?
<rogpeppe1> hazmat: use concurrency
<rogpeppe1> :)
<rogpeppe1> hazmat: or callbacks, or something
<hazmat> rogpeppe1, concurrent safe / concurrency.. mean event loop/scheduler.. i'm happy to do that, but the majority of users want idiomatic not framework
<jamespage> hazmat, that sounds like what i see
<rogpeppe1> hazmat: the main point is: clients need to be able to make concurrent calls on the same connection, however you allow that
<hazmat> seems like something for the plane ride, a subclass that plays nice with gevent/eventlet
<rogpeppe1> hazmat: yeah, you could have subclasses for various kinds of concurrency framework (twisted, gevent, callback)
<rogpeppe1> hazmat: you could layer them all on top of a callback-based base layer
<hazmat> not doing a callback-oriented api; twisted and gevent can both do a non-callback base, and i want the normal python usage to be non-callback-based.
<rogpeppe1> hazmat: fair enough
<jona_> I am back marcoceppi
<jona_> this is how the log looks now:
<jona_> http://pastebin.com/GtnnbTuy
<josepht> is this the place to ask juju-deployer questions?
<jamespage> josepht, its as good as any - hazmat wrote it :-)
<NemesisMate> Hi, I'm having trouble with juju, I'm just trying to do my first `# juju bootstrap` (on ubuntu) but I'm getting the error: "ERROR no reachable servers". If I look through `# juju status -v` I can see it gets stuck on "INFO juju.state open.go:68 opening state; mongo addresses: ["10.0.3.1:37017"]; entity """.
<josepht> I'm using canonistack.  I've sourced my novarc.  I get this: http://paste.ubuntu.com/6797562/ when I run: juju-deployer -bdv -c juju-deployer/webui.yaml, where webui.yaml's content is: http://paste.ubuntu.com/6797578/
<hazmat> josepht, so you're probably using juju switch.. there's like 4 different ways that juju-core can look up its env to operate on.. deployer supports 3.. there's a bug outstanding for juju-core that it should just pass the env unambiguously to the plugin via env variables.. in the meantime.. you can just set the JUJU_ENV env var to the env you're using.
<hazmat> core bug is https://bugs.launchpad.net/juju-core/+bug/1246156..  arguably deployer should just support the four different ways till it's done.
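The workaround amounts to exporting the variable before invoking the deployer; the environment name here is an assumption, so substitute the name from your own environments.yaml:

```shell
# Pin the environment explicitly so juju-deployer doesn't depend on
# whichever lookup mechanism 'juju switch' happens to have set.
export JUJU_ENV=canonistack   # hypothetical environment name

# Then re-run the same deployer command as before, e.g.:
#   juju-deployer -bdv -c juju-deployer/webui.yaml
echo "deployer will target: $JUJU_ENV"
```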
<NemesisMate> hazmat, I don't know if you see my message, did you?, I can't solve that :(
<hazmat> NemesisMate, that sounds like bootstrap never succeeded.. which provider are you using.. juju-core trunk has already changed provisioning to be synchronous so that these sorts of things are more obvious/easier to diagnose.
<hazmat> er.. changed s/provisioning/bootstrapping
<NemesisMate> what do you mean with provider?
<josepht> hazmat: thanks, that was it.
<hazmat> NemesisMate, like openstack, ec2, etc.. your environment is configured to use which provider?
<hazmat> in environments.yaml .. its the 'type' key on the environment config
<NemesisMate> `# juju version` gives me: 1.16.5-quantal-amd64. I just installed it with an apt-get install from "ppa:juju/stable", I did a `juju init`, after that I changed to "default: local" in "~/.juju/environments.yaml", did an apt-get install mongodb-server and a juju bootstrap
<NemesisMate> It's a local one
<hazmat> NemesisMate, ic.. do you have a firewall setup on your local machine?
<hazmat> NemesisMate, normally a local env bootstrap requires sudo ..
<NemesisMate> No, I even get "[initandlisten] connection accepted from 127.0.0.1:35831 #1" in /var/log/mongodb if I change the mongodb port to 37017, but juju bootstrap hangs on that line
<NemesisMate> yes, I'm using sudo
<hazmat> NemesisMate,  juju will setup and install its own mongodb on a different port
<hazmat> NemesisMate, typically you can just disable/shut down the packaged one unless you're using it for other things
<NemesisMate> if I leave it without initializing mongodb, I get the error I mentioned at the start
<hazmat> NemesisMate, the issue is what's happening to the separate mongodb process that juju is trying to setup
<hazmat> NemesisMate,  ie ls /etc/init/juju* should show juju-db in addition to the machine agent
<hazmat> NemesisMate, so starting back at square one.. i'd sudo service mongodb-server stop
<hazmat> oh.. the juju jobs are prefixed with user name in the local provider case
<NemesisMate> Ok, I stopped mongodb, and I "juju destroy-environment -y" and try the "sudo juju bootstrap --verbose" and it ended with: http://pastebin.com/rT1zDDnY
<NemesisMate> I just have this file: "/etc/init/juju-db-<user_name>-local.conf" from juju there
<NemesisMate> If I try to execute the command on that file (on the "exec" parameter) I get "error command line: unknown option sslOnNormalPorts". Maybe I need another mongodb version?
<Ming_> When deploying on Amazon AWS, is it OK to use our own AMI or must we use precise, saucy, etc?
<NemesisMate> ok, thanks, that was it. I did an apt-get install mongodb-server again and this time it updated.... (why didn't it install this version before?). This time it's working great
<hazmat> NemesisMate, cool.. yeah.. mongodb needs ssl support
<hazmat> Ming_, juju is setup to use pristine cloud-images
<hazmat> NemesisMate, what distro version are you on? .. if you're on precise.. the typical way to get that is via the cloud-archive tools ppa
<hazmat> NemesisMate, there's also a mongodb with ssl in the juju stable ppa, but if you installed mongodb prior to adding and updating from the ppa you would have gotten the distro version (sans ssl)
<marcoceppi> rogpeppe: still getting watcher was stopped using 1.17.1
<rogpeppe> marcoceppi: can you paste me the machine-0.log ?
<rogpeppe> marcoceppi: i suspect you're not actually using 1.17.1
<marcoceppi> rogpeppe: mbruzek ran in to it ^^
<marcoceppi> rogpeppe: http://pastebin.ubuntu.com/6797420/ (not machine-0.log)
<mbruzek> rogpeppe, how to get machine-0.log
<mbruzek> ?
<marcoceppi> mbruzek: ~/.juju/local/log/machine-0.log
<rogpeppe> mbruzek: how do you know you're running 1.17.1 ?
<mbruzek> That is on my local filesystem somewhere right?
<rogpeppe> mbruzek: what does "sudo which juju" print?
<mbruzek> /usr/bin/juju
<mbruzek> sudo juju version
<mbruzek> 1.17.1-trusty-amd64
<marcoceppi> rogpeppe: he installed the trusty binary that james put out
<rogpeppe> marcoceppi: from which revision?
<rogpeppe> mbruzek: please paste machine-0.log - that will tell me which version is actually running, and whether the issue is the same one
<marcoceppi> rogpeppe: I have no idea which one james cut that release from
<mbruzek> http://pastebin.ubuntu.com/6797873/
<rogpeppe> mbruzek: right, that's not the version with the fix in
<marcoceppi> rogpeppe: ah, sorry, we'll go back and compile/test again
<rogpeppe> mbruzek: erm
<mbruzek> Which line tells you that?
<rogpeppe> mbruzek: maybe it is!
<rogpeppe> mbruzek: i'm not sure.
<rogpeppe> mbruzek: i see it has the log fixes, but i still see the connection termination
<rogpeppe> mbruzek: it would be good to know which revno james cut that version from
<mbruzek> I still have the .deb
<mbruzek> How can I get you that information?
<rogpeppe> mbruzek: i'm not sure i can easily find that info from the deb
<rogpeppe> mbruzek: that just has the binary in, right?
<rogpeppe> mbruzek: actually, perhaps you could send me the .deb?
<mbruzek> doing that now
<rogpeppe> mbruzek: thanks
 * mbruzek is slowly uploading the file...
 * mbruzek curses his slow upload speed!
<marcoceppi> rogpeppe: it's on the juju list
<marcoceppi> maybe...it's not?
<rogpeppe> marcoceppi: was it jamespage that put the binary out?
<marcoceppi> idk now, mbruzek where did you get that trusty.deb file?
<mbruzek> rogpeppe, the note is sent
<rogpeppe> mbruzek: thanks
<mbruzek> let me check my notes
<rogpeppe> mbruzek: i'm still suspecting it's built from a revision earlier than 2219
<rogpeppe> mbruzek: got your email and downloaded, thanks
<mbruzek> I don't think it was james
<mbruzek> I was pointed to the jenkins site,
<marcoceppi> oh, shoot, I was confused, it's from the ci
<mbruzek> sinzui
<sinzui> hi mbarnett
<sinzui> sorry mbarnett . hi mbruzek
<mbarnett> :)
<mbarnett> hello anyways!
<mbruzek> We were trying to locate the origin of the new-trusty.deb that was built
<mbruzek> sinzui, I am still seeing the wait problem with the 1.17.1 version of juju local that I have
<rogpeppe> mbruzek: have you tried building from tip?
<rogpeppe> mbruzek: perhaps i could send you a binary and you could replace /usr/bin/jujud with it and see if you still get the problem?
<mbruzek> That would be fine, otherwise Marco showed me how to compile the go code
<mbruzek> But others had problems getting the right jujud tool installed
<sinzui> mbruzek, rogpeppe: CI build binaries using the release tarball and the source package branch. They are stored at http://162.213.35.54:8080/job/prepare-new-version/lastSuccessfulBuild/artifact/
<sinzui> The ones there are from r2239 according to http://162.213.35.54:8080/job/prepare-new-version/
<rogpeppe> sinzui: great.
<rogpeppe> mbruzek: could you try using one of those and see if you can reproduce the issue?
<mbruzek> Downloading it now, but that link appears to already be visited.
<mbruzek> I will diff them
<rogpeppe> mbruzek: FWIW i was able to reproduce the issue from your instructions, but with the fix applied it all seemed to work
<mbruzek> Ok
<mbruzek> The files are different
<mbruzek> c45da6c0387f2452a9f28a65a6577a5dbb9f585c  new-trusty.deb
<mbruzek> mbruzek@skull:~/Downloads$ sha1sum new-trusty2.deb
<mbruzek> e4907880eefa931ed63bdb85bb3cf6eb49b315f2  new-trusty2.deb
<rogpeppe> mbruzek: let me know how you get along with the new version. i will keep my fingers crossed :-)
<mbruzek> doing that now rogpeppe
<rogpeppe> sinzui: i wonder if it might be good to bake the current bzr revision (long version) into the binary when building it
<rogpeppe> sinzui: particularly for dev versions
<marcoceppi> sinzui: since the debs are used, would it be wise to update the versions
<marcoceppi> right, what rogpeppe said
<rogpeppe> well, this isn't even a dev version, it's an arbitrary snapshot of tip
<sinzui> marcoceppi, rogpeppe. We are testing what will be released. we create actual deb versions as a part of the test. The names you see are symlinks to proper names. I don't know how to include the revision in the version without corrupting the test
 * sinzui has pondered using the real package names instead of the simplified names
<rogpeppe> sinzui: i wouldn't include the revision in the version
<sinzui> That breaks the test
<rogpeppe> sinzui: but i would try to embed it in the binary
<sinzui> rogpeppe, we are really testing the packaging.
<marcoceppi> sinzui: so, in other words, don't install these debs?
<marcoceppi> rather, install at your own risk
<sinzui> I did say it was bleeding edge. I didn't put them in a ppa
<marcoceppi> true
 * marcoceppi didn't even know they existed until this morning
<sinzui> marcoceppi, I have had to tell a lot of people about them recently and I also need to talk about tools-url/tools-metadata-url. I think I need a post going out to all the juju emos who want to cut themselves
<marcoceppi> sinzui: CI can be very sharp ;)
<marcoceppi> rogpeppe: looks like the bug is fixed, sorry about that
<marcoceppi> thanks again!
<rogpeppe> mbruzek: did your tests succeed with the new binary?
<mbruzek> yes it did
<rogpeppe> marcoceppi: cool
<mbruzek> I am in a hangout with marco
<rogpeppe> ah
<mhall119> marcoceppi: ping
<teknico> marcoceppi (or anyone else), how do I determine which openstack version the charms are installing?
<teknico> specifically, the openstack-dashboard one
<teknico> the installed Horizon code is using a Django 1.4-ism alongside Django 1.5, and code goes boom :-)
<marcoceppi> teknico: juju get openstack-dashboard
<marcoceppi> teknico: that'll show you the options, including the openstack-release (I think that's the config option name)
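A quick way to pull a single option out of the YAML that `juju get` prints. The sample output below is hypothetical, shaped like the exchange here; against a real deployment you would pipe the real `juju get openstack-dashboard` through the same filter:

```shell
# Simulated 'juju get openstack-dashboard' output (hypothetical sample),
# standing in for the real command so the filter can be demonstrated.
get_output() {
cat <<'EOF'
settings:
  openstack-origin:
    description: Repository from which to install OpenStack
    value: cloud:precise-grizzly
EOF
}

# Extract just the configured value from the YAML.
ORIGIN=$(get_output | awk '$1 == "value:" {print $2}')
echo "openstack-origin is $ORIGIN"
```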
<marcoceppi> pong mhall119
<teknico> marcoceppi, openstack-origin is cloud:precise-grizzly, thanks
<marcoceppi> teknico: there ya go :)
<teknico> and how does the charm know which django version to install? can't find it
<marcoceppi> teknico: isn't that a dependency of the package? The cloud pocket has a django version that matches each release of openstack
<marcoceppi> ie, doesn't horizon in grizzly require django X and havana django Y; in that case django will be in the cloud archive for each openstack release
<teknico> marcoceppi, so in this case will it be http://cloud-tools.archive.ubuntu.com/ubuntu/dists/precise-backports/ ?
<marcoceppi> teknico: depends, precise-grizzly maps to another cloud archive, not necessarily the cloud-tools archive
<teknico> where is that mapping defined?
<marcoceppi> teknico: jamespage would probably be better at illuminating it
 * marcoceppi goes and looks
<jamespage> teknico, you are hitting a bug in the fact that juju adds the cloud-tools repository to all precise instances
<jamespage> cloud-tools has a newer version of django than precise-grizzly - #boom
<jamespage> teknico, bug 1240667
<jamespage> https://bugs.launchpad.net/juju-core/+bug/1240667
<teknico> jamespage, so it's known, ok, thanks
<teknico> and there's even a workaround on the bug :-)
<jamespage> sinzui, mramm: any chance https://bugs.launchpad.net/juju-core/+bug/1240667 could be bumped in priority
<mhall119> marcoceppi: since jcastro isn't around, can you do a juju update on the ubuntu engineering live call in 30 minutes?
<marcoceppi> mhall119: uh, sure
<nboumaza> Hi Antonio...Can I interrupt for a few minutes..?
<ev> juju-deployer respects JUJU_REPOSITORY, right?
<marcoceppi> ev: it /should/
 * marcoceppi checks code
<marcoceppi> I don't see any explicit mention of it, ev
<ev> marcoceppi: as in it doesn't override it or affect juju underneath juju-deployer's ability to use it, right?
<marcoceppi> ev: I assume it doesn't affect juju underneath, but hazmat would know more about that
<hazmat> ev, it doesn't.. there's a bug and branch outstanding
<ev> d'oh
<ev> bug number?
<hazmat> ev  https://bugs.launchpad.net/juju-deployer/+bug/1229390
<ev> thanks
<hazmat> i've got some deployer items to merge later today, i'll add that to the list, just need to get out from under some sisyphean work
<dpb1> marcoceppi: I got an approve from sid-nei here: https://code.launchpad.net/~davidpbritton/charms/precise/haproxy/fix-service-entries/+merge/202387, then I think he added another review.  Is that normal?
<marcoceppi> dpb1: looks like it's always been assigned to charmers? I'll do a cursory review of it in a few mins and merge if there's no major issues
<dpb1> marcoceppi: thx
<dpb1> using juju test, is there a way to use an environment that has an existing bootstrap, and leave it up?
<marcoceppi> dpb1: yes
<marcoceppi> dpb1: err, there was a way
<marcoceppi> dpb1: could you file a bug?
<dpb1> marcoceppi: ok
<marcoceppi> dpb1: I'll have that rolled out very soon
<dpb1> marcoceppi: where do I file it?
<marcoceppi> charm-tools
<dpb1> k
<dpb1> marcoceppi: https://bugs.launchpad.net/charm-tools/+bug/1271745
<jcastro> marcoceppi, do you know if we can export a bundle from the CLI yet?
<marcoceppi> dpb1: thanks
<marcoceppi> jcastro: not off the top of my head
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1267518
<jcastro> aha!
<marcoceppi> jcastro: we could pluginize one
<marcoceppi> wouldn't be too hard
<jcastro> WARNING no tools available, attempting to retrieve from https://streams.canonical.com/juju
<jcastro> ERROR bootstrap failed: cannot find bootstrap tools: XML syntax error on line 9: element <hr> closed by </body>
<jcastro> anyone see this today?
<marcoceppi> jcastro: yeah, this has been happening for a while
<marcoceppi> jcastro: what version?
<jcastro> 1.17, doing an --upload-tools works around it
<jcastro> I was just wondering if we were tracking it somewhere
<marcoceppi> jcastro: use 1.17.1, I think it automatically uses upload tools now
<marcoceppi> jcastro: it's def known
<jcastro> ack
<marcoceppi> (when stream isn't found)
<hazmat> jcastro, i've seen that on ec2 for a while when i forget --upload-tools
<hazmat> running with trunk
#juju 2014-01-23
<adam_g> does juju now automatically bump the revision number of a charm when deploying it to an env for the first time?
<davecheney> adam_g: a local charm ?
<adam_g> davecheney, yeah
<davecheney> was there a 'revision' file on disk already?
<adam_g> davecheney, yes, at precies/$foo/revision
<davecheney> i'm not sure about first time, but if you do upgrade charm, and the current spec for the charm is local:precise/foo then the revision will be bumped and the new charm uploaded
<stub> 2014-01-23 09:14:34 INFO juju.worker.uniter modes.go:421 ModeContinue starting
<stub> 2014-01-23 09:14:34 INFO juju.worker.uniter modes.go:101 found queued "install" hook
<stub> 2014-01-23 09:14:34 INFO juju fslock.go:144 attempted lock failed "uniter-hook-execution", pg/0: running hook "install", currently held: postgresql/0: running hook "install"
<stub> There is no longer a postgresql service. It was destroyed a few moments after the deploy was run
<stub> Any one want some debugging information from my env before I blow it away?
<stub> At the moment it looks stuck, totally empty except for the pg/0 unit stuck in pending.
<davidgtonge> hi, does anyone know the best place to find a freelancer with Juju experience? thanks
<lazypower> davidgtonge, there isn't a public listing presently. Someone here may know. Email the mailing list and you will probably get a response or two.
<_mup_> Bug #1272013 was filed: juju destroy-environment also destroys nodes that are not in a ready state <pyjuju:New> <https://launchpad.net/bugs/1272013>
<jcastro> hey evilnickveitch
<jcastro> for the quickstart docs, do some screenshots, I used it for real yesterday and the UI is pretty sexy!
<sarnold> screenshots++  :) first thing I look for when looking at a new project :)
<rick_h_> jcastro: woot
<rick_h_> marcoceppi: sent the pull request with updates
<marcoceppi> rick_h_: thanks!
<rick_h_> marcoceppi: I'm suggested to join a call about the makefile?
<rick_h_> marcoceppi: or nvm, you've got a pull request that builds and tests pass carry on?
<marcoceppi> ?
<rick_h_> marcoceppi: nvm then. Someone thought you were chatting with bcsaller about makefile stuff and I might be interested
<marcoceppi> rick_h_: naw, other fun stuff ;)
<rick_h_> marcoceppi: ignore me
<rick_h_> gotcha cool
<jcastro> I am getting ssh connection refused errors with the local provider lately
<jcastro> anyone see that?
<marcoceppi> jcastro: how are you trying to ssh?
<jcastro> juju ssh
<jcastro> let me use lazypower's juju cleaning thing for the local provider
<jcastro> aha
<jcastro> 2014-01-23 21:43:08 ERROR juju.cmd supercommand.go:294 cannot use 37017 as state port, already in use
<jcastro> this is a mongo error right?
<lazypower> jcastro, you've got a leftover in there somewhere
<jcastro> yeah
<lazypower> i had to do a full wipe, and that script has some updates
<lazypower> want the plugin version?
<jcastro> oh there's a plugin?
<jcastro> yes please!
<lazypower> it's not finished, it doesn't offer help or tab completion, but it's good enough to recycle your local and fix all those pesky local issues
<lazypower> http://paste.ubuntu.com/6805261/
<lazypower> drop that in your $PATH as juju-recycle-local
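juju discovers plugins by name: any executable called `juju-<command>` on `$PATH` becomes `juju <command>`. A toy sketch of that mechanism, using a placeholder plugin body (not lazypower's actual script) and a throwaway directory:

```shell
# Create a throwaway plugin directory and a stub plugin in it.
mkdir -p /tmp/juju-plugins
cat > /tmp/juju-plugins/juju-recycle-local <<'EOF'
#!/bin/sh
echo "recycling local environment (stub)"
EOF
chmod +x /tmp/juju-plugins/juju-recycle-local

# Once the directory is on $PATH, 'juju recycle-local' would resolve
# to this executable; here we invoke it directly to show the dispatch.
PATH=/tmp/juju-plugins:$PATH
OUT=$(juju-recycle-local)
echo "$OUT"
```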
<achiang> who here knows about juju.u.c/docs ? specifically i'd like to know how you integrate the nifty sphinx-light-theme into your tree
<jcastro> achiang, you want evilnickveitch but he's on UK time
<lazypower> achiang, thats some evilnickveitch magic
<lazypower> ninja'd
<jcastro> lazypower, is this newer than the gist?
<achiang> i wonder if it's as simple as copying the bzr branch into my existing docs/ dir and running make html
<achiang> jcastro: lazypower: thanks anyhow
<lazypower> jcastro, yeah the gist has some juju- leftover in there when it should be $USER
<lazypower> when the plugin's complete i'll go back and update the AU post
<lazypower> s/post/comment
<jcastro> I tried stopping mongo to see if that would work
<jcastro> but no go
<lazypower> oh right
<lazypower> ps aux | grep mongo
<lazypower> and kill -s that stray mongodb process
<lazypower> s/s/9
<jcastro> it just respawns
<lazypower> thats... odd
<achiang> jcastro: hm, sorry one followup question -- do you know what branch the juju docs live in?
<lazypower> when I ran into that, i needed to kill the mongodb process that ran away, and recycle my local. Things normalized after that.
<jcastro> achiang, lp:juju-core/docs
<achiang> jcastro: perfect, ta
<jcastro> achiang, it's all html5, should be straightforward for ya
<achiang> i only know html1,2,3,4
<achiang> :P
<lazypower> jcastro, do you have a link to that question on AU?
<lazypower> http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider/
<lazypower> found it
<arosales> jcastro, after trying to kill the local environment, if you get a "can't bootstrap, 37017 still in use" error and you see a mongod process still running with a /var/tmp/juju/db path, try
<arosales> sudo mongod --shutdown --dbpath=/var/tmp/juju/db
#juju 2014-01-24
<hazmat> jcastro, sudo service juju-db stop
<hazmat> shuts down the mongo juju is managing
<hazmat> for the local provider there's also a machine agent upstart job, but it's got the username/env name in it
 * hazmat reads the askubuntu answer
<ahasenack> marcoceppi: hi, do you know if we need two reviews for charms? Or can this one be merged? https://code.launchpad.net/~davidpbritton/charms/precise/haproxy/fix-service-entries/+merge/202387
<ahasenack> anyone else too, for that matter
<ahasenack> without that fix, haproxy is broken for relation-driven proxying, it just erases the previous unit and keeps only the last one
<marcoceppi> ahasenack: sidnei isn't a charmer, so while he can approve stuff, a charmer needs to review it, it's still sitting in the queue, http://manage.jujucharms.com/tools/review-queue but I'll take a look at it now given the charm is broken without it
<marcoceppi> needs a charmer to merge*
<ahasenack> marcoceppi: and sidnei did that rewrite that removed this feature, and he "approved" the fix
<marcoceppi> right, just a cursory review is needed at this point, then merge.
<marcoceppi> won't take me long
<ahasenack> ok
<ahasenack> thanks
<marcoceppi> ahasenack: merged and pushed, should be in the store in the next 15 mins
<ahasenack> marcoceppi: thanks!
<marcoceppi> ahasenack: if there are other items in the queue, in the future, that repair broken charms, feel free to ping a charmer to have it reviewed. We don't want those sitting too long. Hopefully in the near future we'll have the review queue down to just 1-2 days old
<marcoceppi> waves of backlogs keep cropping up
<ahasenack> marcoceppi: is there a word that highlights all charmers on irc?
<ahasenack> I don't know everybody
<marcoceppi> hum, not that I know of, the group of reviewing charmers is pretty small atm though
<marcoceppi> ahasenack: actually, looking at the list, it's pretty much me atm
<marcoceppi> so just ping me :)
<ahasenack> haha :)
<ahasenack> poor you :)
<marcoceppi> ahasenack: but bcsaller also has the ability to review and merge
<ahasenack> that can't be right
<ahasenack> just you, I mean
<marcoceppi> stuart and james are also charmers, but they are typically just as busy
<marcoceppi> ahasenack: we're working on bringing a few more charmers online
<ahasenack> ok
<rick_h_> marcoceppi: how goes the amulet tweaks?
<marcoceppi> rick_h_: good, I had to patch some of the make file, but the tests are running and I was able to get my changes rebased on top of your work
<rick_h_> marcoceppi: coolio. did you get a release out that I can use to write tests on then?
<marcoceppi> rick_h_: not yet, I plan to have one out around 10a EST
<rick_h_> rgr
<marcoceppi> rick_h_: for some reason, when I created my venv, it put pip and everything else in venv/local/bin, not sure if you were seeing that as well
<marcoceppi> https://github.com/marcoceppi/amulet/commit/714b1308f2a1d71fcc4c19af30758845ea826d6d
<rick_h_> marcoceppi: yes, but if you source it it adds it to your path
<marcoceppi> rick_h_: maybe I was doing something wrong, because it wasn't quite working for me
<rick_h_> marcoceppi: but hmm, yea. it did that to me originally when I was doing a venv --upgrade and didn't do it locally once I changed that
<marcoceppi> either way that worked on my virgin box, so I'm happy it's been updated and we're rolling again
<rick_h_> marcoceppi: yea I don't have a local dir in my venv at all
<rick_h_> I started out by trying to do a python3 venv --upgrade .
<marcoceppi> ah, I just used what was in the make file, maybe that's why
<rick_h_> which did the local stuff and caused issues. Going to the python3 -m venv venv worked all that out
<rick_h_> which should be in the make file
<marcoceppi> right
<rick_h_> yea, maybe you saw the old revision then
<rick_h_> it was missing the activate script when I did it that way and found out all --upgrade does is update an existing venv to point to the current python used.
<marcoceppi> rick_h_: do you think you could do a code review for something small for me in about 20 mins?
<marcoceppi> oh crap, jujuclient isn't py3 compat
<rick_h_> marcoceppi: sure thing on the code review
<rick_h_> marcoceppi: and ugh on the jujuclient. It's not huge, but still
<marcoceppi> rick_h_: I'm going to have to postpone this feature until I can get a py3 release of jujuclient out, so I'll start cutting the release. 2to3 doesn't show much that's needed
<rick_h_> marcoceppi: feature branches ftw
<marcoceppi> exactly
<rick_h_> marcoceppi: I did want to see if you had any thoughts on putting amulet in the juju github repo and we could chat about setting it up on our test/landing setup. Not today, but to think on
<marcoceppi> rick_h_: my goal is to move it there, for sure
<rick_h_> k
<marcoceppi> get it out of my namespace, as well as charm-tools and the few other stragglers floating around
<rick_h_> cool
<marcoceppi> amulet's easy, I don't use lp for anything but bugs, however for charm-tools i'm using pretty much all of lp - releases, milestones, etc
<marcoceppi> rick_h_: did you guys migrate your bug tracker for gui to gh?
<rick_h_> marcoceppi: left releases and bugs on LP
<rick_h_> only the source moved
<marcoceppi> rick_h_: ah, cool
<marcoceppi> I'll probably do the same for charm-tools
<sinzui> marcoceppi, If you (or I, it appears) were to cp charm-tools 0.3+bzr179-3~precise1 in the ~charmers PPA to saucy and trusty, a lot more charms would just work now, as you prepare a new package.
<marcoceppi> sinzui: yeah, that's the plan. I really want to get charm-helpers (latest) in to trusty, but I need to create another package that installs the legacy charm-helpers that were removed
<sinzui> marcoceppi, I thought as much
<marcoceppi> sinzui: I just kicked off a build in ppa:~charmers/charm-helpers hopefully it won't run in to any issues
<marcoceppi> rather, I lazily copied the binaries
<sinzui> marcoceppi, I would have done the same :)
<mhall119> hatch: will you have any time to help me today?
<hatch> hey mhall119 I'm actually not the best guy to do that because I haven't run into those issues
<hatch> I've been waiting until someone else has some free time to take a look
<mhall119> hatch: have you worked with webops?
<hatch> not much
<mhall119> know somebody who has?
<hatch> you could try asking in #juju-gui if anyone has a few minutes to take a look - we are in meetings right now though
<mhall119> thanks
<sinzui> smoser, When do you think hp cloud and azure will have trusty amd64 OS images?
<smoser> sinzui, azure does have them.
<smoser> they're just labeled "daily"
<sinzui> oh, thank you smoser
<smoser> i dont know about sstreams representation there. though.
<smoser> oh. its there
<smoser> http://cloud-images.ubuntu.com/releases/streams/v1/com.ubuntu.cloud:released:azure.json
<sinzui> smoser. I am incompetent. My browser cached the page, I see them now after a refresh
<smoser> funny
<smoser> :)
<smoser> http://paste.ubuntu.com/6810004/
<smoser> as for hp i dont know.
<smoser> we're not publishing images there yet.
<sinzui> smoser, ack. I will add azure+trusty+juju testing today. I won't panic over HP since I have canonistack on openstack testing working
<marcoceppi> rick_h_: couldn't you just add build to the test Make target?
<marcoceppi> err, install
<marcoceppi> though, I guess I see that it's .PHONY so it'll run every time and that'll be annoying
<rick_h_> marcoceppi: no, because the variables are parsed through on the first pass. So it'll see there's no bin/nosetests and default to using local/bin/nosetests. Then we install it, so now it does exist at bin/nosetests
<rick_h_> the issue with install in the test target is the same case, but for pip
<rick_h_> "bin/pip doesn't exist, default to /local/bin/pip"
<rick_h_> but that doesn't exist either
<rick_h_> so then the pip install commands will fail
<marcoceppi> rick_h_: ack, thanks
<dpb1> marcoceppi: thx for the review, I put up a response: https://code.launchpad.net/~davidpbritton/charm-tools/fix-test-env-1271745
<dpb1> .... and you already replied.  nice.
#juju 2014-01-26
<meebey> how can I start juju with full debug logging? juju -vvv seems to be deprecated and doesn't show all log calls either
<meebey> found it: --show-log --debug
<meebey> "verbose is deprecated with the current meaning, use show-log" - that was a bit confusing, I thought show-log was a subcommand of juju instead of a parameter
<hazmat> meebey, https://lists.ubuntu.com/archives/juju/2013-September/002998.html
<hazmat> meebey, debug-log is the subcommand
<hazmat> meebey, the link has the config to enable verbose agent logging
<hazmat> meebey, for the cli.. --debug does detailed logging, else -v for verbose
<meebey> thanks
<thumper> hazmat: please stop using -v for verbose
<thumper> hazmat: the meaning of verbose is changing
<thumper> hazmat: verbose will (RSN) mean 'show more output' and not be related to logging
<thumper> user output and developer logging are two different things
<hazmat> thumper, -v for what then?
<hazmat> thumper, yes dev log and user log distinction understood, how is that not embodied by --debug vs -v ?
<hazmat> and if its not.. then that's a ux bug imo
<thumper> hazmat: --debug says "show me the debug log and set logging to DEBUG"
<thumper> hazmat: -v should mean "show me more user output"
<thumper> -q should be "show me no user output"
<thumper> should, but isn't yet
<thumper> it has been one of those annoying things I keep meaning to get around to fixing properly
<hazmat> thumper, and that's exactly what i meant to convey
<hazmat> re -v vs. --debug
<thumper> hazmat: perhaps we should just file it as a bug :-)
<thumper> at least it would then be tracked
<thumper> hazmat: also, Hai!
<hazmat> thumper, chairs on the deck imo.
 * thumper looks confused
<hazmat> thedac, also, Mauri ora! Kia ora!  :-)
<hazmat> argh.. thumper ^
<thumper> :)
<rick_h_> is there any hack/way to trick juju into letting me deploy a local charm without creating the directory structure for series? I git clone the charm and just want to deploy it, but it really wants that series directory.
<lazypower> rick_h_, i've tried that myself and have been unsuccessful
<lazypower> rick_h_, the workflow i've adopted is to keep an "in dev" and a "production" series of directories to isolate what i'm checking vs what i'm deploying - and alias those
<rick_h_> lazypower: cool, guess as long as I'm not the only one that thinks it should work a bit differently
<lazypower> I've heard talks of the series going in metadata.yaml instead of being a forced portion of the dir structure
<lazypower> but its not there yet
<rick_h_> yea, that'll be a bit to settle out
<marcoceppi> rick_h_: symlinks
<marcoceppi> otherwise, no
<rick_h_> marcoceppi: yea, but it rejects symlinks because you're trying to reach something outside the dir
<rick_h_> marcoceppi: is there a symlink pattern that does work?
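Since (as rick_h_ found) juju 1.x can reject symlinks that reach outside the repository, the dependable fallback is to copy or move the flat checkout into the <repository>/<series>/<charm> layout it expects. A sketch with illustrative paths (the temp dirs stand in for ~/charms and the git clone):

```shell
# Sketch: give a flat git checkout the <repo>/<series>/<charm> layout.
repo=$(mktemp -d)             # stands in for ~/charms
src=$(mktemp -d)              # stands in for the flat git clone
touch "$src/metadata.yaml"    # placeholder charm content
mkdir -p "$repo/trusty"
cp -a "$src" "$repo/trusty/mycharm"
ls "$repo/trusty/mycharm"
# then: juju deploy --repository="$repo" local:trusty/mycharm
```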
#juju 2015-01-19
<Muntaner> good morning to everyone
<Muntaner> dimitern, hello
<dimitern> Muntaner, oh hey!
<Muntaner> dimitern, I almost did it!
<Muntaner> I think I am at the "final" errors :-)
<dimitern> Muntaner, great - please paste the logs?
<Muntaner> yep! first, I tell you a little background, dimitern
<Muntaner> I followed the guide you posted
<Muntaner> and got rid of the image-metadata problem
<Muntaner> now, I'm able to run bootstrap
<Muntaner> and via Horizon, I see a  juju-ost-machine-0 Active
<Muntaner> but, on the terminal where I started the juju bootstrap, I am stuck and can do nothing
<Muntaner> logs coming
<Muntaner> dimitern, http://paste.ubuntu.com/9783781/
<Muntaner> it's stuck on "Running apt-get update"
<dimitern> Muntaner, let me just tell you I might be a bit distracted as I'm in a meeting now, but I'll check on your progress later
<Muntaner> dimitern, no problem! ;-)
<dimitern> Muntaner, right - does your bootstrap machine have access to the internet?
<Muntaner> dimitern, yes, it is the laptop that I'm using right now
<dimitern> Muntaner, if you can ssh into it while bootstrap is going on - check /var/log/cloud-init*.log
<Muntaner> dimitern, tried it - can't ssh QQ
<Syed> Hello folks
<Syed> I am trying out juju for deploying Highly Available OpenStack
<Syed> Is it possible to do it with 4 nodes?
<Syed> 2 controller nodes and 2 network nodes ?
#juju 2015-01-20
<blahdeblah> jose: ping - you around?
<blahdeblah> jose: Re: https://code.launchpad.net/~jose/charms/precise/quassel-core/add-tests/+merge/246001 - I'm not familiar with amulet, so I have no way of gauging what the MP does.
<blahdeblah> jose: It all seems reasonable, but perhaps someone more skilled would be better to review it?  I made the charm as a way of learning juju.
<marcoceppi_> blahdeblah: this is a basic Amulet test, used to perform functional/integration tests
<marcoceppi_> going forward we're going to require charms to have tests in order to continue being in the promoted/promulgated store (this doesn't affect personal branches)
<marcoceppi_> blahdeblah: to help get authors jump started with testing we've been adding these basic/simple amulet tests to give us a good idea of the charms that work and just deploy
<blahdeblah> marcoceppi_: Cool - if you're happy to +1 it, I can take care of merging
<marcoceppi_> blahdeblah: it's fine, but comes with some implications
<blahdeblah> marcoceppi_: namely?
<marcoceppi_> blahdeblah: well, your charm is in precise, so the implications don't actually apply to you - at least not now
<blahdeblah> All of my recent testing was on trusty; should be listed in there as well
<marcoceppi_> blahdeblah: but for charms in trusty, if authors don't respond with updated/better-implemented tests, charms may be unpromulgated/removed from the namespace
<marcoceppi_> blahdeblah: ah, well, then this will apply to the trusty charm
<marcoceppi_> tests allow us to validate and verify that charms work, as the author intends, on different versions of deployments (trusty, vivid, other linux) as well as different architectures (x86_64, ARM, Power8, etc)
<marcoceppi_> blahdeblah: with tests, we can automatically test all charms in trusty and precise to validate that they work, and help provide users with feedback on charms that will work in their deployments
<blahdeblah> I expect all of that will be pretty straightforward, since it's such a simple charm (which is why I chose it)
<marcoceppi_> to do this we need authors to help flesh out the tests a bit. I use quassel all the time, so for you it'd be adding things in the test that check for "after quassel is setup, does it actually listen on the proper port" or "is the quassel-core process running"
<marcoceppi_> blahdeblah: totally
<marcoceppi_> blahdeblah: we'll be publishing a few videos and blog posts to help bridge the gap between "I'm an author" and "what is amulet"
<marcoceppi_> we have a lot available already, but they're pretty long and unwieldy to consume
<blahdeblah> Yeah - video is definitely not my preferred learning format
<marcoceppi_> blahdeblah: if you're not already, you may want to subscribe to our mailing list, lists@juju.ubuntu.com to keep up to date
<blahdeblah> Is there a lower-volume announce list?
<marcoceppi_> blahdeblah: no, it's just that list at the moment, though traffic isn't/shouldn't be too TOO high
<blahdeblah> marcoceppi_: Any other way to keep up with announcements? Let's just say I'm allergic to new mailing list subscriptions. ;-)
<marcoceppi_> blahdeblah: not really :\ I'll propose a new juju-announce list tonight though, see if that takes hold
<blahdeblah> marcoceppi_: w.r.t. this MP, any suggested next steps?
<marcoceppi_> blahdeblah: you can +1 it and it'll make its way into the store shortly
<alexis7> what does this mean:
<alexis7> ERROR there was an issue examining the environment
<jose> blahdeblah: heyo. sorry for not responding, was afk. If you're still around, I can go through and explain what they do step by step, for sure.
<blahdeblah> jose: No problems - did you see the backscroll of my chat with marcoceppi_ ?
<jose> blahdeblah: yeah, I did :)
<blahdeblah> It all seems pretty sensible to me, but because of my lack of expertise, I've requested marcoceppi_ to review it.
<jose> we do have some docs on Amulet, let me grab the link
<jose> https://juju.ubuntu.com/docs/tools-amulet.html
<blahdeblah> I'm in the middle of a stack deploy right now, so regardless, I won't have time to dig into it until the beginning of next week.
<jose> np
<jose> I'll be taking another look at the charm soon, so stay tuned!
<mthaddon> my jujud is only listening on IPv6 for some reason. Anyone seen anything like that?
<jrwren> mthaddon: ::1 or some other address?
<mthaddon> jrwren: tcp6       0      0 :::17070                :::*                    LISTEN      10231/jujud
<rogpeppe> mthaddon: i'm pretty sure that jujud just does a default listen on tcp with :17070
<mthaddon> rogpeppe: yeah, so that's why I'm wondering why it isn't doing that for me :/
<rogpeppe> mthaddon: if you run some other local daemon, listening on any address, do you get similar behaviour?
<mthaddon> sorry, distracted by a ping in another channel, will try to test soon
<rogpeppe> mthaddon: for example, if you run netcat -l 12345, then netstat -n -l | grep 1234, what do you see?
<mthaddon> tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN
<rogpeppe> mthaddon: hmm, interesting
<Syed_> Hello Folks, This document on HA openstack deployments seems old. https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<Syed_> Does anyone have any pointer where can i find the latest docs on OpenStack HA
<Syed_> Precisely, On Deploying HA OpenStack with Juju.
<rogpeppe> mthaddon: can you compile a small Go program and run it on one of your machines?
<mthaddon> rogpeppe: er, I can if you can point me at instructions as to how :)
<rogpeppe> mthaddon: sudo apt-get install golang-dev
<mthaddon> rogpeppe: where would I get that from on trusty?
<Spads> so squid does a combined listen on its ports, and it shows up as only tcp6 in netstat but you can reach it on ipv4
<Spads> I'd have thought this was the same
<rogpeppe> mthaddon: oops, sorry, wrong package.
<rogpeppe> mthaddon: golang-go
<rogpeppe> mthaddon: or actually
<rogpeppe> mthaddon: just apt-get install golang
<rogpeppe> mthaddon: with sudo
<mthaddon> Spads: I assumed I couldn't contact it because "juju status --debug" was timing out - let me try from the machine itself to check it's not a connectivity issue
<Spads> yeah just telnet and see what you get
<mthaddon> ugh, sorry for the noise
<Spads> heh
<Spads> I only know this because I hit it only yesterday with squid :)
<mthaddon> so it's working on both 127.0.0.1 and the external IP from the instance itself, so it must be a connectivity issue. I'll dig a bit further
<mthaddon> and of course, now my juju status is working fine. Double sorry for the noise
<alexis7> hello everyone i have an error when i run juju bootstrap in a linux terminal: ERROR there was an issue examining the environment
<alexis7> i have an issue with juju bootstrap
<alexis7> anyone can help me
<marcoceppi_> o/ alexis7
<marcoceppi_> what is your issue?
<alexis7> when i run juju bootstrap in a linux terminal
<alexis7> for example : ERROR there was an issue examining the environment: open /home/john/.juju/environments.yaml: no such file or directory
<marcoceppi_> alexis7: have you run `juju init` yet in the terminal?
<alexis7> it prints, for example: A boilerplate environment configuration file has been written to /home/john/.juju/environments.yaml.
<alexis7> Edit the file to configure your juju environment and run bootstrap.
<whit> https://code.launchpad.net/~evarlast/charms/trusty/elasticsearch/add-version-config/+merge/237916
<whit> lazyPower, ^
<lazyPower> whit: like the comment re: backporting changes in ~charmers - good point. I'll mark that as a talk point for the charmers meeting this week.
<lazyPower> hazmat: got a new comment on the Juju DO plugin over on the DO community page, *and* the user said they answered themselves with the docs. hi5
<alexis7> which is the error for example:
<alexis7> ERROR there was an issue examining the environment: open /home/john/.juju/environments.yaml: no such file or directory
<lazyPower> alexis7: Seems like you need to run `juju generate-config` to create the environments.yaml - I suggest starting with the documentation here: https://jujucharms.com/docs/
<lazyPower> yo tvansteenburgh
<tvansteenburgh> yo
<lazyPower> hey, i was taking a look at this MP - https://code.launchpad.net/~jacekn/charms/trusty/squid-reverseproxy/squid-reverseproxy-nrpe-fix/+merge/245312  - and it seems like an unrelated failure is causing the MP to not be merged thanks to some CI shenanigans.
<tvansteenburgh> yeah
<lazyPower> looks like we just need to sneak in an update to the make test target according to the feedback?
<tvansteenburgh> yes
<lazyPower> I don't necessarily want to sneak things in, however - would you be all right with me filing a bug and testing the MP based on the merit of the changes for now?
<tvansteenburgh> i was planning to do that but have not gotten to it
<lazyPower> i hear ya :) its in "the stack"
<tvansteenburgh> lazyPower: yes, please do, that would be great
<tvansteenburgh> cheers dude
<lazyPower> ta!
<lazyPower> solid mbruzek beat me to it : https://bugs.launchpad.net/charms/+source/squid-reverseproxy/+bug/1401323
<mup> Bug #1401323: The squid-reverseproxy charm fails automated testing <audit> <auto-test> <squid-reverseproxy (Juju Charms Collection):Triaged> <squid-reverseproxy (Charms Precise):Triaged> <squid-reverseproxy (Charms Trusty):New> <https://launchpad.net/bugs/1401323>
<jcastro> hey cory_fu
<jcastro> on a scale of 1 to 10, how well would you consider DHX documented?
<cory_fu> Hey
<cory_fu> Well, `juju dhx --help` is pretty complete, but doesn't give a good description of why you would want to use it.  But then there's the blog post.  *shrug*  Why?
<jcastro> we have a card to better document debugging
<cory_fu> Ah.  It could definitely use some better narrative documentation (what to use it for, why, and how in a more example oriented way).  The blog post is a decent start, but if it's going to be recommended, then it should be in the docs.
<marcoceppi_> jcastro: dhx is one part of debugging, it's not a complete solution for debugging, but makes it really really easy
<cory_fu> Really, dhx just augments the existing debug-hooks
<jcastro> yeah so have we decided that DHX is going to be the one true way forward?
<jcastro> basically, should it be in the docs is my question.
<marcoceppi_> yes
<marcoceppi_> but
<marcoceppi_> so should vanilla debug-hooks
<cory_fu> Honestly, I'd prefer to see some of the features make their way into core, eventually
<marcoceppi_> cory_fu: that would be the best outcome
<marcoceppi_> cory_fu: we should start making bugs for features that core could implement citing dhx as how it should be implemented
<cory_fu> Good point
<jcastro> marcoceppi_, normal debug hooks is documented already
<marcoceppi_> jcastro: then add dhx
<jcastro> ok, I am just wondering, like; we consider dhx stable/useful enough to be in the docs?
<jcastro> aka. I can go ahead and port his blog post to the docs?
<cory_fu> jcastro: I think there have been several people, including myself, using it pretty regularly.  I think it's pretty stable and useful.  +1 to adding it to the docs
<jcastro> cory_fu, ok I'll take that as a task then, <3
<lazyPower> +1 for it being useful and stable
 * skay likes suggesting makefile target that will handle installing/building dependencies
 * skay has seen a makefile that will fail if dependencies are not installed and will ask the user to run the target to install them
 * skay goes back to work
<roadmr> skay: my only request would be ensuring that those dependency targets are either "idempotent" or do proper checking that they have/have not been run
<roadmr> one of my projects has this horrid makefile with no way of ensuring a dep is more recent so it ends up trying to clobber already-downloaded files, it's quite messy
<roadmr> the first time it runs fine, but subsequent runs tend to bork things
<skay> I dread horrid makefiles
<marcoceppi_> skay: you could just make a bunch of bash scripts that are invoked from make targets
<marcoceppi_> skay: nothing wrong with that
<skay> marcoceppi_: that sounds reasonable. though could get unwieldy
<marcoceppi_> skay: how so?
<lazyPower> lets configuration manage our configuration management
<lazyPower> and i'm not even kidding about that - bash patterns for these things exist as marco is suggesting.
<skay> won't I run the risk of having some very long messy bash scripts?
<marcoceppi_> skay: if you do a bash script per task, it should be simple and easily digestible. Also, python scripts could work as well
<marcoceppi_> yes, potential for craziness exists, but these tasks - test, dep-install, lint, etc - should be pretty straightforward
<skay> I suppose having multiple reviewers will help keep things from being unwieldy
<skay> I'm happy that the discussion came up in the mailing list
<skay> I want to make MRs to the python-django charm and wasn't sure how best to handle the dependency question
<skay> i've been working with a branch of it for now and applying a sledgehammer approach of standing up and tearing down everything when I wanted to check if it did what I expected
<lazyPower> skay: we're approaching the same problems using containers to solve the isolation testing as well. not sure if you're groovy with a Dockerfile to run the testing in isolation (in tandem with a cloud provider - we haven't resolved local provider in docker yet), but i can certainly send over the work + context when we've hit a stable milestone.
<skay> e.g. make a change? run http://paste.ubuntu.com/9797121/
<skay> then clean with: for p in $(mojo project-list | cut -d ' ' -f 1 | sed 's/\://'); do sudo mojo project-destroy -y ${p}; done
<skay> lazyPower: I have more experience with lxc than docker but wouldn't mind seeing how you do things with docker
<lazyPower> Its not that different than using lxc with bind mount volumes :)
<lazyPower> ...and buzzwords
<skay> lazyPower: I see
<skay> haha
<skay> anyway, marcoceppi_ and lazyPower, once I am outside of work and get a moment, I'll clean up my charm MR to add a dep target to the makefile, and then be sure it passes the tests properly. (I know it's bad hygiene but I've been using the script above to see that things work. it's fairly quick)
<lazyPower> skay: there's literally many ways to cook a soup - and whatever method works for you shouldn't be shamed - that was part of the purpose of the mail to the list, was getting methods people are using so we can introspect and adjust tooling/documentation
<lazyPower> and if one suggested pattern rose above the rest - that we would champion it as ~charmers
<jcastro> hey guys
<jcastro> so the debugging page is kind of long
<jcastro> would it make sense to have 2 top level debugging pages?
<jcastro> "Debugging with debug-hooks" and "Debugging with DHX" ?
<lazyPower> jcastro: i think the DHX page would be more about the express lane that dhx exposes with sync'ing stuff and what not - and sure it does make sense to have it as a topic page all to itself , gives it room to grow. Core debugging concepts are going to apply across the board regardless of juju debug-hooks or juju dhx
<jcastro> ack
<jcastro> cory_fu, hey quick question, for the dhx import id's
<jcastro> what library are you using for that?
 * jcastro is wondering if github id's are also doable
<cory_fu> They are.  It's just using `juju authorized-keys import`
<cory_fu> jcastro: Which in turn uses ssh-import-id which supports launchpad and github
<cory_fu> (On Ubuntu, at least)
<jcastro> ah, neat!
<jcastro> juju dhx -i gh:castrojo should work then
<jcastro> https://github.com/juju/docs/pull/230
<jcastro> first cut, reviews welcome!
<jhobbs> arosales: howdy; once a charm has passed charm store review, how are future updates to it handled? do they need to be reviewed before being accepted too?
<arosales> jhobbs: hello
<arosales> jhobbs: once a charm has passed policy review, all subsequent reviews also have to be reviewed before they are applied to the "recommended" charm
<jhobbs> arosales: cool, thanks
<arosales> sorry all subsequent updates, that is
<bdx> jiasir: are you on here?
<bdx> nofdev: anyone from nofdev online?
<bdx> jiasir: ?????
<bdx> nofdev: anyone from nofdev online?????
#juju 2015-01-21
<jcastro> does anyone remember how long it takes the docs to build?
<jcastro> my dhx docs were merged and it all works locally, not live on the site though
<tvansteenburgh> jcastro: supposed to be 15 min i think
<marcoceppi_> jcastro: there's a delay on jujucharms.com and the ones on juju.ubuntu.com build every 12 hours
<jcastro> ack
<aisrael> Is there a primer on block storage for juju anywhere? I'm reviewing a charm that's added support for it and I'm not quite sure how to test it.
<lazyPower> aisrael: is it via relation with our block storage broker charm?
<aisrael> lazyPower: I think so (it's using the block-storage interface). I see the 'storage' charm. Is there another one I should look at?
<jcastro> marcoceppi_, what was the tldr on choose your own adventure with docs?
<lazyPower> aisrael: the block-storage-broker - and in the event its not apparent - it only works on AWS and Openstack deployments. There's a readme for the charm that outlines limitations/expected usage
<jcastro> blocking on uiteam/design iirc?
<aisrael> lazyPower: ah, ok. The 'storage' charm says it works with nfs; if so, I can use it on local.
<lazyPower> aisrael: https://jujucharms.com/block-storage-broker/precise/8
<marcoceppi_> jcastro: we need time
<jcastro> marcoceppi_, whose time, ours? or ricks?
<marcoceppi_> ours
<blr> can anyone suggest a good way to configure multiple ports for juju expose? There doesn't appear to be an array type in config.yaml, and would rather not hardcode ports in the hooks if I can avoid it.
<marcoceppi_> blr: there is no list/array type, though I've asked for it before
<blr> marcoceppi_: would be handy!
<marcoceppi_> blr: you'll have to use string and just say "space or comma delimited" and do some post processing in the hook
<blr> ok, that's not entirely evil. thanks macro
<blr> marco even :)
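The post-processing marcoceppi_ suggests can be sketched as below, assuming a string config option (name illustrative) holding space- or comma-delimited ports:

```shell
# Sketch: split a "space or comma delimited" ports string inside a hook.
ports_raw="80, 443 8080"      # in a real hook: ports_raw=$(config-get ports)
# commas -> spaces, one port per line, numeric de-dupe, rejoin with spaces
ports=$(echo "$ports_raw" | tr ',' ' ' | xargs -n1 | sort -un | xargs)
echo "$ports"
# then open each one with the juju hook tool:
# for p in $ports; do open-port "$p/tcp"; done
```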
#juju 2015-01-22
<bdx> Are there any plans for an official trove charm?
<bdx> :jamespage - I would like to know if there are future plans for a trove charm? If not, I would like to request an official trove charm....I am willing to help out in anyway possible.
<lazyPower> marcoceppi_: i just had a thought - and it might be completely ludicrous.. but what if we had a pattern as follows
<marcoceppi_> lazyPower: go on
<lazyPower> if we deploy a charm that requires a build environment to assemble itself from code (in the instance of flannel for docker) - what if it added a container on the host, fetched the build env + source code, ran the build routine, and then the artifacts were pulled from the container, and the container destroyed, leaving only a job log of the build process - to prevent polluting the host with the build tools
<lazyPower> i could get arm64/ppc64el/x86 artifacts on the host (assuming it has net access) and not leave build tools on the machine.. it does add some layers for failure in there.. but i think it has interesting possibilities.
<lazyPower> (and those arch targets are respective to what environment i'm deploying to - as this is a golang bin artifact thats produced)
<marcoceppi_> lazyPower: how quickly could you create a container?
<marcoceppi_> seems like a good idea, but only if you could prevent having to spin up a build container for each unit
<lazyPower> Depends, am i fetching a bin image from dockerhub or am i fetching a cloud image and using LXC
<lazyPower> I was considering shipping a dockerfile with a base ubuntu 14.04 image to run the trial on this
<lazyPower> as flannel-docker won't be deployed to a non-docker host. that much is specified in the relationship
<lazyPower> i mean, you can co-lo it on one, but it noops
<lazyPower> I'll run this by whit and mbruzek tomorrow to see if they think it's interesting enough to devote a couple hours to prototyping the process.  Thanks for being a sounding board
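A rough sketch of the throwaway-build-container idea lazyPower describes, using Docker - the image name, Dockerfile location, and artifact path are all hypothetical:

```shell
#!/bin/sh
# Build the artifact inside a disposable container, copy it out,
# then destroy the container so no build tools remain on the host.
set -e

docker build -t flannel-build ./build-env            # Dockerfile with go toolchain + sources
cid=$(docker create flannel-build)                   # create (don't run) a container from it
docker cp "$cid":/go/bin/flanneld ./files/flanneld   # pull the built binary out
docker rm "$cid"                                     # destroy the build container
docker rmi flannel-build                             # optionally drop the image too
```

The host is left with only the binary in `files/` plus the `docker build` log, which matches the "job log of the build process" goal above.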
<marcoceppi_> lazyPower: you still around?
<jamespage> bdx, no plans for a trove charm from my team
<lazyPower> marcoceppi_: back now
<lazyPower> tvansteenburgh: there's a thread going on the list right now about makefile targets that i think you had cautioned me about before wrt bundletester behavior and running integration tests multiple times
<lazyPower> tvansteenburgh: might want to poke your head in and take a look and give a quick reply on behavior here. Thread name is: Makefile target names
<tvansteenburgh> lazyPower: yeah that was started at my suggestion :) i'll comment, just wanted to see others' ideas
<jamespage> gnuoy`, pushed rabbitmq charm up for next branch and merged wolsen's changes
<gnuoy`> jamespage, tip top, thanks
<wolsen> thanks jamespage
<jamespage> marcoceppi_, hey around?
<marcoceppi_> jamespage: o/
<jamespage> marcoceppi_, sooooo
<jamespage> mysql charm
<jamespage> marcoceppi_, how would you feel about putting that under the three month release cycle alongside the openstack charms?
<marcoceppi_> jamespage: Sounds like a great idea
<jamespage> I want to do the same with rabbitmq and percona-cluster as well - but I feel os-charmers already has quite a bit of ownership there
<jamespage> marcoceppi_, OK so we are about to finish up a cycle - 15.01 release is out next week - I suggest that we create the next branch then and switch all merges to the next branch for the 15.04 release
<marcoceppi_> jamespage: sounds good, if it makes iterating quicker, next branch should live under os-charmers
<marcoceppi_> jamespage: then just perform merges to ~charmers branch per typical?
<jamespage> marcoceppi_, that sounds good to me
<jamespage> marcoceppi_, we need a good way to signal to contributors where they should propose merges against - I'd like that to be the next branch
<jamespage> marcoceppi_, I thought maybe  a README.dev or HACKING.md
<marcoceppi_> jamespage: either sounds fine, README probably has the most exposure
<jamespage> marcoceppi_, OK - sounds good to me
<marcoceppi_> jamespage: how does this problem handle patches for bugs to released version?
<marcoceppi_> should they still land in /next ?
<marcoceppi_> s/problem/process/
<jamespage> marcoceppi_, next first and then cherry pick to the stable branch
<jamespage> so that we don't regress on the next release of next
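In Launchpad/bzr terms, the land-in-next-then-cherry-pick flow jamespage describes might look like the following - branch URLs and the revision number are hypothetical:

```shell
# Land the fix on the /next branch first...
bzr branch lp:~openstack-charmers/charms/trusty/mysql/next mysql-next
cd mysql-next
# ...merge the contributor's MP here, commit, push...

# ...then cherry-pick just that revision into the stable branch:
cd ..
bzr branch lp:charms/trusty/mysql mysql-stable
cd mysql-stable
bzr merge -c 123 ../mysql-next    # -c merges a single revision (123 is a placeholder)
bzr commit -m "Backport fix from next (r123)"
```

Landing in next first is what prevents the regression jamespage mentions: the fix can never be present in stable but missing from the following next release.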
<marcoceppi_> jamespage: okay, lmk when you've got that setup, we should mail the list on this change so it's documented and reviewers know what's up
<jamespage> marcoceppi_, ack will do
<lazyPower> tvansteenburgh: on the subject of bundletester, do you have a second to help me debug some weirdness? it's failing a charm for having no lint target, yet the makefile has one
<lazyPower> http://paste.ubuntu.com/9822358/ <-- traceback
<tvansteenburgh> lazyPower: looking
<lazyPower> http://paste.ubuntu.com/9822371/
<lazyPower> makefile for flannel-docker
<tvansteenburgh> lazyPower: does `make lint` succeed on its own?
<lazyPower> sure does
<lazyPower> http://paste.ubuntu.com/9822406/
<tvansteenburgh> what about `make -s lint`
<lazyPower> same output
<lazyPower> tvansteenburgh: i think i see the problem, i have a sudo apt-get install python-virtualenv in the venv target, and i bet it's hanging on that sudo and failing the test
<tvansteenburgh> lazyPower: i see it's bundle, i wonder if the cwd is wrong at the time lint is called
<lazyPower> possible, is there a debug trace i can spit out that would be helpful?
<tvansteenburgh> -l DEBUG -v
<lazyPower> already doing that :(
<tvansteenburgh> yeah figured
<lazyPower> I'll circle back to this - right now i'm interested in making sure the tests i have in the bundle are passing. It's going to fail in CI though with this. it's a consistent result when running bundletester
<tvansteenburgh> lazyPower: i'm occupied at the moment but will take a look as soon as i can
<lazyPower> np, just a heads up really :)
<lazyPower> cheers
<tvansteenburgh> cheers man
<lazyPower> whit: https://bugs.launchpad.net/charms/+bug/1413775
<mup> Bug #1413775: New Charm - Docker <Juju Charms Collection:New> <https://launchpad.net/bugs/1413775>
<whit> lazyPower, w00t
<lazyPower> whit: ready for number 2?   https://bugs.launchpad.net/charms/+bug/1413776
<mup> Bug #1413776: New Charm: flannel-docker <Juju Charms Collection:New> <https://launchpad.net/bugs/1413776>
<whit> :D
<lazyPower> The final keystone to those charms: https://bugs.launchpad.net/charms/+bug/1413779
<mup> Bug #1413779: New Bundle: docker-core <Juju Charms Collection:New> <https://launchpad.net/bugs/1413779>
#juju 2015-01-23
<jamespage> gnuoy`, could you give me a review of https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/network-splits/+merge/247384
<jamespage> I've manually run lint, amulet and unit tests
<gnuoy`> jamespage, I don't see the link between network splits and the new ceph.create_pool line
<jamespage> gnuoy`, there is none other than I noticed it was broken
<jamespage> gnuoy`, I can do that as a separate MP if you like
<gnuoy`> jamespage, no, that's fine.
<gnuoy`> jamespage, approved
<jamespage> gnuoy`, I've also switched osci to use next for rabbitmq-server
<gnuoy`> thanks
<jamespage> marcoceppi_, gnuoy`, btw I backported the latest juju-deployer release into the stable PPA
<mwak_> hi
<jamespage> gnuoy`, https://launchpad.net/charms/+milestone/15.01
<occc> hi all
<occc> i would like to run my django tests over several (say 100) servers
<occc> i was considering deploying ec2 instances and make each of them run a part of the tests
<occc> a friend of mine told me about juju
<occc> so my need is basically to create 100 clones of the same instance, run a slightly different command-line on each of them and aggregate results
<gnuoy`> jamespage, could I get a review of the mps associated with Bug #1403132 when you have a moment please?
<mup> Bug #1403132: hacluster default ports conflict between openstack charms <landscape> <openstack> <smoosh> <cinder (Juju Charms Collection):New> <glance (Juju Charms Collection):New> <keystone (Juju Charms Collection):New> <neutron-api (Juju Charms Collection):Invalid> <nova-cloud-controller (Juju
<mup> Charms Collection):New> <percona-cluster (Juju Charms Collection):Invalid> <swift-proxy (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1403132>
<jamespage> gnuoy`, +1 that looks fine
<jamespage> gnuoy`, all of the nrpe stuff has landed right?
<gnuoy`> jamespage, almost, I think there was a mongodb mp that someone else was reviewing and ceilometer-agent slipped through the net but I'm hoping to have that done rsn
<bloodearnest> arg, juju test doesn't respect $JUJU_HOME
<bloodearnest> must have a hardcoded ~/.juju ?
<lazyPower> bloodearnest: we've been superseding juju test with bundletester
<bloodearnest> lazyPower, I assumed that was for testing bundles?
<lazyPower> That was the original target, it executes everything that is in /tests that's chmod +x however
<lazyPower> and sniffs makefile targets to run linting + an implicit charm proof
<lazyPower> it's a parity tool for use with CI, as that's our exclusive tool for CI test running
<bloodearnest> lazyPower, where can I get it?
<lazyPower> bloodearnest: pip install bundletester - it's not a package in the repos yet
<lazyPower> we have that as a target for this/next cycle.
<bloodearnest> ack
<marcoceppi_> bloodearnest fwiw in the next few weeks when charm-tools is released, bundletester will replace juju-test, so running juju test will execute bundletester under the hood
<bloodearnest> lazyPower, marcoceppi_ : and does bundletester respect $JUJU_HOME
<bloodearnest> ?
<marcoceppi_> bloodearnest: it should
<bloodearnest> pipsi install bundletester works nicely, much better output
<bloodearnest> does it leave the env intact on failure?
<lazyPower> it can be configured to do either/or
<skay> is block-storage-broker okay to use in trusty?
<skay> I've been using trusty and now want to add storage for my postgresql relation
<skay> but I see block-storage-broker is in precise. but maybe my searchfu is weeak?
<skay> s/relation/service
<skay> are there docs I can look at on how to upgrade a charm? how much work would it be for a naive user?
<skay> can I do it in a day?
 * skay talks to someone about it
<skay> for a workaround I'll collect the precise repo and tell mojo to think it is trusty
<skay> so cowboy. much irresponsible
<lazyPower> skay: the upgrade work is kind of dependent on how it's installing the package. ergo: if the package exists in both trusty/precise repos it should be a fairly short-winded porting process, ensuring there are tests in the charm.
<lazyPower> if the repo doesn't exist that metric can go up by an order of magnitude.
<lazyPower> IIRC we only have storage-broker in precise due to the libs it's consuming - it's not using the AWS SDK last time i checked, it was using euca2ools
<skay> lazyPower: I don't have time to look in to it today or over the weekend. maybe I can look in to it next week. it might be a good exercise in going through the process so that I can participate more in reviews and so on
<cory_fu> whit: cf-weekly?
<lazyPower> certainly - we're here to help :)
<adalbas> mbruzek1, arosales , in one charm hook, is there a way to get specific information from the deployer (such as hostname, ip)?  something like what charmhelpers.core.host does, but for the deployer
<mbruzek1> adalbas: Hello.  Let me see if I understand your question.  Is there a way to get the hostname/ip from a hook?
<arosales> adalbas: hello
<adalbas> mbruzek1, for instance, i want to add the hostname or ip to /etc/hosts of the machine where i'm deploying a charm in the service
<mbruzek1> adalbas: that can be done.  What is the use case here?  The host computer (that is deploying the services) can already access the VMs in the cloud?
<adalbas> mbruzek1, yes, but i want the vm in the cloud to have the host hostname, for i'm not using an external dns resolution
<mbruzek1> adalbas: I am not sure I follow why you would want to do this but here is how I would get that information.
<mbruzek1> juju status etcd/0 | grep public-address | cut -d: -f 2
<mbruzek1> Where "etcd/0" is the deployed charm that you want to find the address for.
<adalbas> mbruzek1, it is actually the other way around. I'm deploying etcd/0 from my server (juju) and in the install hook, i want to be able to get the information from juju, so etcd/0 would have the ip information of its deployer.
<adalbas> so in the hooks/install, i could add something like "echo $deployer_ip $deployer_hostname >> /etc/hosts"
<mbruzek1> adalbas: I am not sure how to get that information from the install hook.
<adalbas> mbruzek1, i see. i think i might need to change my approach here or have something in config.yaml.
<adalbas> tks
<mbruzek1> adalbas:  From the deployer you could embed the IP address into the VM like this.
<mbruzek1> juju ssh etcd/0 "echo $deployer_ip $deployer_hostname >> /etc/hosts"
<mbruzek1> adalbas: I am not sure the deployed vm knows where it came from. Lets say you are deploying from your laptop.
<mbruzek1> when you bootstrap you create a Juju server on the cloud that you are using.  The juju client just talks to the bootstrap node which does all the provisioning of the other vms
<adalbas> yes, exactly, i wasn't sure juju had a way to identify who is the deployer.
<mbruzek1> So the VMs don't know about your laptop, and I am not sure how they would get that information because the adalbas bootstrap node is doing all the work.
<adalbas> right. that makes sense.
<mbruzek1> Since your laptop can modify the units that sounds like something that you could do from the laptop, I will check to see if the nodes can get that information, but I kind of doubt it.
 * marcoceppi_ reads back
<marcoceppi_> adalbas: that's not possible within juju
<marcoceppi_> adalbas: if you needed the hostname/ip of the machine that's executing the deployment
<marcoceppi_> you'd have to set it as a configuration value in the charm
<marcoceppi_> and the person would have to `juju set mycharm my-hostname=$(hostname)`
<adalbas> marcoceppi_, tks!
<adalbas> yes, i ll set it on the config
<hazmat> adalbas, latest jujus tie identity to the services/units in play
<hazmat> along with apis for share/unshare env which create additional principals
<hazmat> although perhaps i mistake intent
<skay> I've got juju bootstrapped to an openstack environment rather than a local one, but when I run project-new and then workspace-new it attempts to make lxc containers
<skay> I'm trying to figure out what step I missed
<skay> oh, I see that mojo creates a local container, but when it gets to running the manifest juju behaves as I'd expect it to
<whit> so bundletester doesn't support --upload-tools currently eh?
#juju 2015-01-24
<marcoceppi_> whit: why are you using --upload-tools, there shouldn't really be a scenario when --upload-tools and bundletester are used.
#juju 2015-01-25
<lazyPower> whit: There's actually an open thread on the mailing list about that - that relates to fetching the beta toolkits
<whit> lazyPower, wrt to bt?
<lazyPower> i asked about --upload-tools and found dragons to live in there, it's intended to be useful for developers working on core.
<lazyPower> yeah, it would transpose seamlessly into bt
<lazyPower> since bt just wraps juju
<whit> lazyPower, yeah, I took a look
<lazyPower> i realize i'm 2 days late to the party when that convo took place, just thought i'd throw in my 10 cents :)
 * whit usually bootstrap before running bt locally so wasn't worth it at the time
<whit> it was just annoying
<lazyPower> yeah
 * whit shrugs
<lazyPower> if you're giving demos, probably be worth your while to stop-gap in the current -stable release
<lazyPower> then re-up when you're done giving the demo
 * whit shrugs
<lazyPower> thats my plan for FOSDEM anyway
<whit> I've been working off dev for weeks now
<whit> it's pretty stable and the status output is better
<lazyPower> same here, since it landed as -alpha1
<lazyPower> I'm crossing fingers it lands before I go so i dont have to dance the jig
<whit> haha
<whit> stick it in a container
<lazyPower> i would, but i plan on leveraging the local provider during the dev sessions
<lazyPower> container in container nesting requires some apparmor heinousness
<whit> yeah not worth it
<whit> though that's something we should explore with wwitzel3 in CO
<lazyPower> well its completely possible in LXC already
<lazyPower> you just have to add 3 lines to your apparmor config for LXC to make it happen, then do the magic with routes to reach into the containers in the container
<whit> lazyPower, it just requires divine intervention
<whit> ?
 * whit nods
<lazyPower> well considering it hasn't been fully tested, i dont think i'd recommend it to anyone at this juncture
<lazyPower> unless they want to be the ones that find the broken corner cases
<lazyPower> whit: i found a project i want to start using for my screencasts - https://github.com/seenaburns/dex-ui
<lazyPower> cyberpunk, the UI
<whit> lazyPower, is that windowmanager?
<lazyPower> it's an openFrameworks project that would run in lieu of any DE
<lazyPower> the only functional part of it is the terminal, the rest is for sci-fi hollywood type interface
<whit> aha
<whit> gotcha
<lazyPower> it was a featurette on todays LAS - and i have to admit, it looks pretty gnarly
 * whit needs to take a walk
#juju 2016-01-25
<jamespage> D4RKS1D3, I am
<urulama> jamespage: morning ... fyi, we'll rollout new release of charm store this week, which will support xenial charms that you guys have there already
<jamespage> urulama, awesome...
<jamespage> gnuoy, ^^ will need that for 16.04 release
<gnuoy> ack
<stub> tvansteenburgh: AWS job got stuck over the weekend, unreaped. And lxc is failing with the can't-bootstrap-port-in-use problem again.
<D4RKS1D3> Hi jamespage , morning
<D4RKS1D3> I am having problems with add ml2_odl into the ml2_conf.ini file
<jamespage> D4RKS1D3, ok - so you need to help me out a bit here
<jamespage> D4RKS1D3, are you using the neutron-api-odl charm with neutron-api?
<D4RKS1D3> Yes
<jamespage> D4RKS1D3, ok so the neutron-api-odl charm writes that section to ml2_conf.ini
<jamespage> D4RKS1D3, can you check whether you have
<D4RKS1D3> I have the file but, without ml2_odl config
<D4RKS1D3> In the log, i have "juju-log Writing file /etc/neutron/plugins/ml2/ml2_conf.ini root:root 444"
<D4RKS1D3> and the file has change, but without this section
<D4RKS1D3> Probably is the relation between them?
<D4RKS1D3> I tested with both but neither works
<jamespage> D4RKS1D3, sorry - someone distracted me
<jamespage> D4RKS1D3, right - so can you check
<jamespage> manage-neutron-plugin-legacy-mode=False on the neutron-api charm
<jamespage> otherwise it will keep overwriting the file written by the neutron-api-odl charm
<D4RKS1D3> yes
<jamespage> D4RKS1D3, yes its set like that?
<D4RKS1D3> I am looking
<D4RKS1D3> I have this option put to false
<jamespage> D4RKS1D3, good
<jamespage> so that means that neutron-api-odl should be writing the ml2_conf.ini file
<D4RKS1D3> but even before you told me I had this option set to false
<jamespage> 'should'
<jamespage> ;-)
<jamespage> D4RKS1D3, do you have a bundle you are using?
<D4RKS1D3> no
<D4RKS1D3> I'm using the openstack-charms-next (3 charms available in juju store)
<D4RKS1D3> it seems that the neutron-api-odl plugin is not being executed even if the relation are properly set
<jamespage> D4RKS1D3, well maybe - can I see the juju status output please
<D4RKS1D3> Yes, one second, please
<D4RKS1D3> - neutron-api/0: 10.1.23.165 (started) 9696/tcp  - neutron-api-odl/2: 10.1.23.165 (error)
<D4RKS1D3> - neutron-api/0: 10.1.23.165 (started) 9696/tcp  - neutron-api-odl/2: 10.1.23.165 (started)
<D4RKS1D3> (this is with a juju resolved)
<D4RKS1D3> the configuration is not updated anyway
<D4RKS1D3> all services are "started"
<jamespage> D4RKS1D3, I need the full status output please "juju status" - pastebin is a good idea
<D4RKS1D3> http://paste.ubuntu.com/14662888/
<D4RKS1D3> jamespage, this is the link
<jamespage> D4RKS1D3, problem 1 - don't use neutron-openvswitch and openvswitch-odl
<jamespage> for ODL drop use of neutron-openvswitch
<jamespage> that might resolve your issue - pls try
<D4RKS1D3> Okey, one minute
<D4RKS1D3> jamespage, now works!
<D4RKS1D3> You know if the configuration of the compute needs to be changed?
<D4RKS1D3> In theory I think it should be changed like the other file
<apuimedo> jamespage: ping
<apuimedo> would http://paste.openstack.org/show/484872/ address your review comment?
<tych0> lazypower: pong
<lazypower> tych0 - hey there. is there anything special i need to do when working with lxd remotes?
<lazypower> i spun up 2 VMs to sample lxd migration, and was unable to add the remote lxd instance. i'm unsure if it's networking, ssl keys, or otherwise
<tych0> lazypower: you have to turn on HTTPS listening on the remote, but then it should work as usual
<lazypower> aaahhh
<lazypower> now that explains why i was losing hair over this
<tych0> https://github.com/lxc/lxd#how-to-enable-lxd-server-for-remote-access
<lazypower> ta
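The README tych0 linked boils down to something like the following - the password, remote name, and address are placeholders:

```shell
# On the remote host: make the LXD daemon listen over HTTPS
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password some-secret   # placeholder password

# On the client: add the remote (prompts for the trust password) and verify
lxc remote add my-remote 192.0.2.10              # placeholder address
lxc list my-remote:
```

Without the `core.https_address` step the daemon only listens on the local unix socket, which is why adding the remote fails regardless of networking or keys.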
<jamespage> apuimedo, looking
<jamespage> apuimedo, config is not dict-like I'm afraid - but it does cache so "if config('midonet-origin') and" would be fine
<apuimedo> are you sure? I thought it's just a json.loads
<apuimedo> jamespage: ^^
<jamespage> apuimedo, I am
<jamespage> def config(scope=None):
<jamespage> if you get the entire dict, you could do that
<jamespage> config = config()
<jamespage> config.get('midonet-origin', '')
<jamespage> don't copy my ugly code that overrides the symbol config :-=)
<apuimedo> yeah.. I was now looking at the class
<apuimedo> jamespage: I'm probably misreading, but in hookenv.py:config
<D4RKS1D3> jamespage, could you answer my question?, Thanks
<apuimedo> nothing... I was ignoring the "except" :P
<jamespage> D4RKS1D3, I'd set the same config option on nova-compute as I said for neutron-api (sorry missed you question)
 * jamespage watches too many channels
<lazypower> tych0 one thing i noticed, was that BTRFS performance was night/day comparison with lxd. i assume its the same with zfs?
<apuimedo> jamespage: http://paste.openstack.org/show/484880/
<jamespage> apuimedo, +1
 * jamespage likes defensive code
<apuimedo> cool
<apuimedo> I'll update the proposal
<tych0> lazypower: in comparison with lxd?
<tych0> you mean the default directory backend?
<lazypower> tych0 - lxd using btrfs snapshots vs regular ext4 operation
<tych0> yes, naturally :)
<tych0> should be the same with zfs
<lazypower> it reduced container spinup from ~30 seconds to sub-2 seconds
<tych0> yep
<tych0> should also reduce snapshotting, etc.
<apuimedo> jamespage: https://code.launchpad.net/~celebdor/charm-helpers/liberty/+merge/283707
<apuimedo> updated and ready ;-)
<jamespage> apuimedo, ack
<tiagogomes> Hi, I am trying to boostrap JuJu in OpenStack. Is swift required?
<jamespage> tiagogomes, yes
<tiagogomes>  /o\ Can I get away with using nova object store?
<tiagogomes> Anyone knows what does this means: 2016-01-25 15:47:33 ERROR juju.cmd supercommand.go:429 failed to bootstrap environment: index file has no data for cloud {RegionOne http://10.24.100.10:5000/v2.0/} not found
<rick_h_> tiagogomes: it can't access the streams file. There's a json file that it reads what images/etc are available
<apuimedo> thanks for the merge jamespage
<apuimedo> jamespage: should I request a backport to stable?
<jamespage> apuimedo, needs to all land in /next first
<apuimedo> ok
<jamespage> then we can talk about a backport (we have a release going out on thurs for openstack charms so will need to be after that)
<apuimedo> jamespage: it's already frozen, Thursday release?
<jrwren> anyone know offhand how to import a wily image into lxd for use with juju lxd provider?
<lovea> Hi, Trying to use ProLiant DL160 Gen9 servers in a MAAS set up. They have hpdsa drivers detected. So far so good. Then I use Juju to deploy a 15.10 Wily charm and the deploy fails because the hpdsa drivers are requested from http://downloads.linux.hp.com/SDR/repo/ubuntu-hpdsa/dists/wily/main/binary-amd64/Packages. This doesn't exist. HP only has a 14.04 Trusty repo. Any ideas how I can proceed?
<jhobbs> narindergupta: ^^
<jamespage> apuimedo, yes
<lovea> HP is saying "HPE is supporting only LTS releases (14.04, 16.04). hpdsa driver does use dkms, so, while it
<lovea> may be possible to upgrade to Wily, it has not been tested in our lab."
<narindergupta> lovea, jhobbs: hp is not building hpdsa any more so best way is to enable ahci mode in the controller in BIOS
<lovea> narindergupta: I was wondering about that. Thanks for the tip, AHCI mode it is then.
<narindergupta> lovea: correct
<narindergupta> lovea: in ahci mode it will use SATA driver
<lovea> Which is fine for me
<lovea> narindergupta: Many thanks
<narindergupta> lovea: hope you are aware how to change to AHCI mode?
<lovea> narindergupta: I just need to navigate the DL160 BIOS/iLO config utility!!
<jrwren> ah, i was missing the lxd image alias, added the alias and all is good.
<D4RKS1D3> jamespage, what shall I connect the juju-info interface required by openvswitch-odl to?  p.s: could you share with me a juju status, it would be much appreciated so I can explore this information, thanks a lot in advance.
 * jmalcaraz Hi
<bdx> hey whats up everyone? Is there a maas/next repo for xenial?
<marcoceppi> bdx: probably
<bdx> marcoceppi: from what I've gathered, it should be up next week
<bolthole> Hi!. I'm new to juju, and have a couple questions about getting started on ubuntu 15.10 if someone might help me
<rick_h_> bolthole: ask away
<bolthole> kewl. well, first off... juju-quickstart has a bug, I should mention :-/
<bolthole> among other things, it tries to start up lxc-net service... but that service will fail to start, because dnsmasq makes it fail. but...
<bolthole> if you reboot, then lxc-net gets started before dnsmasq (i guess) and so it starts working from that point.
<bolthole> erm.. i was going to give details about how deploy of juju-gui hangs for me. but.. it might actually be working this time. haha. ha.
<bdx> marcoceppi: check it --> https://github.com/jamesbeedy/layer-django/blob/modularize_and_refactor/reactive/django.py
<bdx> marcoceppi: and here --> http://bazaar.launchpad.net/~jamesbeedy/charms/trusty/django-nginx/dev/files
#juju 2016-01-26
<lazypower> bolthole : well huzzah for random workingness :)
<lazypower> bolthole: sorry about the quickstart papercut though :/
<apuimedo> jamespage: https://code.launchpad.net/~celebdor/charms/trusty/nova-cloud-controller/liberty/+merge/283709
<apuimedo> did you have a chance to look at the merge proposal?
<apuimedo> jamespage: I take it that were waiting for after the release to merge them since the branch is probably frozen (don't know if you do that or you make a frozen separate branch)
<jamespage> apuimedo, not yet
<jamespage> apuimedo, you're tomorrow's train work :-)
<jamespage> so long as things are looking and testing ok I'd hope to land later this week once we unfreeze
<apuimedo> you are going to Brussels by train?
<apuimedo> I finally decided for plane
<jamespage> apuimedo, btw we have plans to move all charm development upstream under openstack - I'm assuming that you might be supportive of that move?
<jamespage> apuimedo, eurostar is too easy :-)
<apuimedo> jamespage: supportive is not enough. I'm enthusiastic about it
<jamespage> apuimedo, planes are ok but I have to transfer Norwich -> Amsterdam -> Brussels
<apuimedo> you'll probably have a downstream release branch though, right?
<jamespage> apuimedo, I'm hoping to sprint on the last remaining technical blockers thurs/friday so can move forwards
<jamespage> apuimedo, nope - all git/gerrit
<apuimedo> jamespage: that makes me happy
<jamespage> well have some sort of branch strategy for consolidated charm release points
<jamespage> apuimedo, charm store is moving away from the bzr based ingestion to a direct publish model so that helps...
<apuimedo> although having a downstream internal canonical clone for hotfixes makes a lot of sense
<jamespage> apuimedo, I'd rather do it all upstream
<apuimedo> jamespage: so would I. I'm just speaking in practical terms
<jamespage> apuimedo, we're just moving everything we do now in launchpad/bzr to gerrit/git
<apuimedo> that is the best news I heard for a while jamespage
<D4RKS1D3> Hi, goodmorning
<D4RKS1D3> jamespage, what shall I connect the juju-info interface required by openvswitch-odl to?  p.s: could you share with me a juju status, it would be much appreciated so I can explore this information, thanks a lot in advance.
<jamespage> D4RKS1D3, just relate it to nova-compute and neutron-gateway
<D4RKS1D3> Thanks jamespage
<jamespage> gnuoy, beisner: not sure that neutron-openvswitch got a charm-helpers sync
 * jamespage checks
<jamespage> beisner, gnuoy: https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/16.01-resync/+merge/283933
<marcoceppi> lazypower aisrael
<lazypower> marcoceppi
<beisner> jamespage, yeah i've been finding onesy-twosey sync misses unfortunately
<lazypower> tvansteenburgh: o/ heyo - have you ever seen an issue like this bubble up when attempting to test a bundle w/ bundletester? http://paste.ubuntu.com/14672009/
<tvansteenburgh> lazypower: no
<tvansteenburgh> lazypower: pondering...
<lazypower> I got this stack trace from anton, a nexenta charmer
 * lazypower is trying to reproduce
<mbruzek> I have a partner getting an error bootstrapping, "ERROR failed to bootstrap environment: cannot start bootstrap instance: No default VPC for this user (VPCIdNotSpecified". I have never seen this before.  Has anyone in #juju seen this problem?
<lazypower> mbruzek: i'm thinking this is an account permissions issue
<lazypower> but i'm not 100% positive on that
<mbruzek> lazypower: OK I will ask
<lazypower> mbruzek: and i'm referring to the AWS API Credentials being provided to juju
<mbruzek> yes I figured that.
<lazypower> right on :) I haven't had my coffee this morning, and if this week is anything like last week, i'm already speaking gibberish
<mbruzek> lazypower: I had not thought of that, so I thank you for responding.  I will send them an email
<tvansteenburgh> it could also be that the aws user is old enough that it predates vpc, and actually doesn't have a default vpc defined - that happened to me once upon a time
<mbruzek> thanks tvansteenburgh!  All good things to check.
<tvansteenburgh> lazypower: are the nexenta charms in lp?
<lazypower> tvansteenburgh they are, 1 sec and i can get you a link
<lazypower> tvansteenburgh https://bugs.launchpad.net/charms/+bug/1515717
<mup> Bug #1515717: Nexenta Edge Charms to deploy edge cluster <Juju Charms Collection:In Progress by nexenta-systems> <https://launchpad.net/bugs/1515717>
<tvansteenburgh> lazypower: ta
<lazypower> urulama: are we in the middle of a deploy?
<lazypower> urulama: looks like our docs are 404'ing all the way down the url tree
<urulama> lazypower: no, no deploy on production going on ...
<urulama> lazypower: will look into it
<lazypower> i take that back, /stable/ is whats 404'ing
<lazypower> looks like /devel/ is still functional
<lazypower> and i spoke too soon
<urulama> :-/
<urulama> weird behaviour!
 * lazypower nods
<lazypower> Want me to file a bug urulama?
<urulama> i'll put a task directly, thanks
<lazypower> np, thanks for taking a look
<urulama> lazypower: i suspect that was an exact moment when docs got rebuilt ...
<tvansteenburgh> lazypower: i couldn't repro using the charm from lp, but, the first thing i would try is to ask him to reinstall bundletester in a venv
<lazypower> ok, i've got you cc'd on these emails. incoming update
<marcoceppi> cory_fu: I've got a use case for helpers.any_states
<marcoceppi> cory_fu: I'd like to add a @hook_any do you think that would work?
<marcoceppi> cory_fu: this is what I'm doing currently: http://paste.ubuntu.com/14672561/
<cory_fu> marcoceppi: Sorry, meeting.  @hook_any?
<cory_fu> Do you mean @when_any?
<marcoceppi> @when_any, yes
<cory_fu> marcoceppi: Ok, sorry, meeting is done.
<cory_fu> Ok, I'm fine with a @when_any in theory, but the issue is that, for states set by relations, you are in a position where a given relation instance might or might not be available, so how do you match up relation instances to handler args?
<marcoceppi> cory_fu: are relations keynamed invoked? or just passed as args?
<cory_fu> And, more importantly, if a state is not set, you have no way of knowing if it would have been a relation state, so no way to know if it *would* have an instance if it were set vs if it's a non-relation state in which case it would not have a value regardless
<marcoceppi> maybe this isn't the best approach
<cory_fu> marcoceppi: They are only passed  by position, because you can name the args whatever you want
<cory_fu> In the handler
<cory_fu> marcoceppi: Really, passing args into the handler is not ideal, because there's already confusion about which states should expect args and which shouldn't.
<marcoceppi> cory_fu: yeah, I might need to rethink this a bit
<cory_fu> It's definitely a problem, though.  I've run into cases where having @when_any would make the code quite a bit cleaner
<cory_fu> marcoceppi: You can always work around it by having multiple handlers, each with their own explicit conditions
<cory_fu> But that can make the list of handlers quite cluttered
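To make the discussion concrete, here is a toy model of what a @when_any decorator could do. This is purely illustrative: it is not the real charms.reactive implementation, and the state names are made up.

```python
# Toy model of reactive state dispatch -- NOT the real charms.reactive API,
# just an illustration of the @when_any semantics discussed here.
active_states = set()   # states currently set
registry = []           # (watched states, handler) pairs

def set_state(name):
    active_states.add(name)

def when_any(*states):
    """Register a handler that fires when at least one listed state is set."""
    def decorator(func):
        registry.append((set(states), func))
        return func
    return decorator

def dispatch():
    """Run every handler whose watched states intersect the active states."""
    return [func() for states, func in registry if active_states & states]

@when_any('action.configure-interface', 'action.delete-user')
def handle_action():
    return 'in an action'

set_state('action.delete-user')
print(dispatch())   # → ['in an action']
```

This sidesteps the relation-instance problem cory_fu raises only because the toy handler takes no arguments; matching relation instances to handler args is exactly what the real @when_any would have to resolve.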
<cory_fu> marcoceppi: For your specific example, it looks like all of those states that you're checking with any_states are really just a way of detecting whether or not you're in an action.  Could you generalize that to a single "in-action" state?
<marcoceppi> cory_fu: possibly, but it's only a subset of actions
<cory_fu> marcoceppi: Also, if we had https://github.com/juju-solutions/charms.reactive/issues/11 then you could just handle that with an @action decorator with some sort of pattern or wildcard
<marcoceppi> cory_fu: sure, but I'm still in a position that I have to mix @action and @when_not
<cory_fu> True
<marcoceppi> does any_states support wildcard?
<cory_fu> Not currently, no
<marcoceppi> cory_fu: interesting
<cory_fu> marcoceppi: How are those states getting set?  Could the code that sets those states also set a common "action-requires-configured" state?
<marcoceppi> cory_fu: :) https://github.com/juju-solutions/vpe-router/tree/master/actions
<marcoceppi> cory_fu: so it's possible that we could set that state
<marcoceppi> which might be easier
<cory_fu> marcoceppi: You could also have the action code check for the vpe.configured state itself and handle the fail instead of doing it in the reactive handler
<marcoceppi> cory_fu: yeah, I was trying to avoid too much logic in the action file, wanted to keep it pretty boilerplate
<marcoceppi> but that is true
<cory_fu> Fair enough
<marcoceppi> this is part strawman to see how actions in reactive would work
<cory_fu> I do think that @when_any would be worthwhile to figure out, and more generally figuring out a better way to address the ambiguity in what states correspond to what params to a handler
<cory_fu> One thought was to always pass in a single "context" arg that has methods for accessing the available instances.  Of course, you can also always call charms.reactive.relations.RelationBase.from_state('state') and we could wrap that in a helper
<marcoceppi> cory_fu: kind of like how frameworks send in a single request object?
<cory_fu> I had rather liked Whit's charm context proposal before he left, but no one picked that up.
<marcoceppi> cory_fu: you still can ;)
<cory_fu> I know.  :p
<cory_fu> Switching to a single context param would be a pretty significant change to the reactive API, though.  Definitely a 2.0 thing
<tertiary> i am currently evaluating juju as a solution, but am unsure of something. we have a use case where our clients would need to install our maas+juju cluster, and ideally i would like this to be done with a simple .deb package. is this possible? would i be able to automate juju in a way that it could autodeploy the services with an installer?
<tvansteenburgh> tertiary: i don't see why not. you can automate via the juju cli pretty easily
<tertiary> ah theres a cli, thanks
<tertiary> trying to install juju on Ubuntu 15.10 Desktop with juju-quickstart, getting the following error: "ERROR there was an issue examining the environment: failure setting config: cannot find address of network-bridge: "lxcbr0": route ip+net: no such network interface". It's strange, this does not happen on Ubuntu 15.10 Server...
<lazypower> tertiary did you install the juju-local package? and are you using the 'local' provider?
<tertiary> yes yes, just confirmed juju-local is installed. yes, using the "local" provider
<lazypower> tertiary:  ifconfig lxcbr0  returns nothing?
<tertiary> yeah: lxcbr0: error fetching interface information: Device not found
<lazypower> one sec, let me boot up a vm and try something before i recommend it
<tertiary> k
<jrwren> tertiary: try `sudo service lxc-net restart`
<jrwren> tertiary: it can't hurt and its easy enough. then see if `ifconfig lxcbr0` returns status instead of error.
<tertiary> yeah same error
<skay> a very small docs comment: https://jujucharms.com/docs/1.25/authors-hook-debug someone has remapped ctl-b to ctl-a and it's written into the docs as though ctl-a is the default
<skay> speaking about the tmux session
<skay> just noticed it today
<lazypower> o/ skay  long time no see :)
<skay> lazypower: hiya
<lazypower> skay : looks like the same issue was ported over into the devel docs - https://jujucharms.com/docs/devel/developer-debugging#the-'debug-hooks'-command
<lazypower> skay can you file a bug? http://github.com/juju/docs/issues/
<lazypower> we'll circle back and add a footnote that the default key binding is ctrl-b
<skay> lazypower: I myself also remap ctl-b to ctl-a due to muscle memory of screen
<skay> change the code to match the docs, ha!
<lazypower> :) Thats possible too
<lazypower> skay - if you use dhx, you can customize all of that, even upload your own tmux config w/ things like powerline and the like.
<tertiary> `service lxc-net status` yields: "dnsmasq: failed to create listening socket for 10.0.3.1: Cannot assign requested address"
<lazypower> ahh, this came up the other day
<lazypower> dnsmasq is preventing the lxc bridge from starting
<lazypower> you can stop dnsmasq, start lxc-net, then restart dnsmasq - or reboot.
<bdx> mbruzek: http://bazaar.launchpad.net/~charmers/charms/trusty/postgresql/trunk/view/head:/hooks/service.py
<bdx> mbruzek: in the configure_sources hook
<mbruzek> bdx: What is up?
<bdx> mbruzek: it looks to me as if the sources are only added if both conditions are met correct?
<bdx> mbruzek: sudo sysctl -w kernel.shmmax=15032385536
<bdx> ooops
<bdx> my bad
<bdx> mbruzek: if config['pgdg'] and config.changed('pgdg'):
<tertiary> lazypower: service status looks good after reboot, working now. thank you much sir.
<mbruzek> bdx: Yeah it looks OK to me, if pgdg exists and it has changed generate the debian install line.
<bdx> mbruzek: all versions of postgres other than 9.3 fail to install the postgresql binary due to sources not being available
<bdx> because for any other version other than whats default in the repos the key and source have to be added
<bdx> mbruzek: what I'm getting at, is that config.changed('pgdg') shouldn't ever be true
<mbruzek> bdx config.changed('pgdg') will be true on first run and if anyone sets something else.
<mbruzek> bdx: If it has a value the first time, if it is none the first boolean check will fail
<mbruzek> bdx: I often use this pattern when doing configuration values: if config['key'] and config.changed('key'):
<bdx> mbruzek: yes, but who wants to modify that config every time they `juju deploy postgresql`?
<lazypower> tertiary glad we got you unblocked :)
<bdx> mbruzek: so it should be a prerequisite to change that param to have postgresql install other versions in the install hook
<bdx> ?
<bdx> mbruzek: because it has no default value?
<bdx> mbruzek: so http://paste.ubuntu.com/14674572/
<bdx> mbruzek: will automatically make config.changed('dbms') true
<bdx> mbruzek: at the time of install hook?
<bdx> mbruzek: I see, hmm, ok. thanks for clarifying that
<mbruzek> bdx: Still checking, I am very confused; I thought we were hung up on the pgdg parameter. Let me look at dbms
<bdx> oooh my bad
<bdx> mbruzek: will automatically make config.changed('pgdg') true
<bdx> typo in the .yaml I pasted you as well
<mbruzek> pgdg is a boolean so it _has_ to have a default value (so it will have "changed" on first run), so I would expect `if config['pgdg'] and config.changed('pgdg'):` to be true on the first run.
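mbruzek's point about changed() being true on the first run can be illustrated with a simplified stand-in for the charm config object. This is an illustrative sketch of the semantics, not charm-helpers' actual implementation.

```python
# Simplified stand-in for the charm config() object -- an assumption for
# illustration, not the real charm-helpers code.
class Config(dict):
    def __init__(self, defaults):
        super().__init__(defaults)
        self._previous = None          # no saved state before the first run

    def changed(self, key):
        # On the first run there is no previous value, so any key with a
        # value counts as "changed".
        if self._previous is None:
            return True
        return self._previous.get(key) != self.get(key)

    def save(self):
        """Snapshot current values, as done at the end of a hook run."""
        self._previous = dict(self)

cfg = Config({'pgdg': True})
print(cfg['pgdg'] and cfg.changed('pgdg'))   # → True (first run)
cfg.save()
print(cfg['pgdg'] and cfg.changed('pgdg'))   # → False (unchanged second run)
```

This matches the behavior under discussion: the `config['pgdg'] and config.changed('pgdg')` guard passes on the first hook run with a true default, and again only when an operator changes the value.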
<mbruzek> I don't know about dbms or the version parameter.
<bdx> mbruzek: ok, no worries, thats all I was after
<mbruzek> bdx: stub wrote this charm code, and hangs out in the channel.  If you have specific postgres questions you can ask him.
<bdx> mbruzek: my bad, I thought I saw you as the contrib for that, I see now it was stub
<lazypower> bdx are you stirring up trouble again ;)
<mbruzek> bdx: I am always happy to help, but I had no context what you were asking.
<mbruzek> bdx: if you have questions stub knows that code much better than I do.  But again I am always happy to help if you have questions.
<bdx> mbruzek, lazypower:  I'm currently replacing our postgres clusters with juju deployed ones, just running into a few issues, thats all
<mbruzek> bdx: Sure.
<bdx> mbruzek: thanks, totally, I see that now
<mbruzek> bdx: postgresql is one of the more complex charms we have I can see that one being hard to debug
<bolthole> Hi. I deployed a second mysql instance, and a gitlab instance, through juju-GUI on azure. They werent happy so I tried to delete them. BUt now they wont go away. How can I force them to go away?
<bolthole> the gitlab status is currently "error", with message;
<lazypower> bolthole if services are in error they will block
<bolthole> hook failed: "Db-relation-changed" for mysqlgitlab:db
<lazypower> bolthole you have 2 options - 1) to resolve every error (without retry) and just resolve it all the way down until it goes away
<lazypower> 2) juju destroy-machine # --force
<lazypower> which is a very very angry way to get rid of a service's units, and you can then juju destroy-service {{service}}
<bolthole> so basically, the "resolve" button actually means "shut up, I know about it already"?
<lazypower> correct. it will forcibly resolve the error (if you dont choose retry) and it will continue to the next hook in progression
<bolthole> that seemed to work. thanks!
<lazypower> np bolthole o/
<bolthole> while i'm on the subject, how can I make a new instance of something with a different name, through the gui? ...
<bolthole> command line of "juju deploy mysql mysqlfoo" works. but cant seem to find equivalent in gui that actually works
<lazypower> when it is dragged onto the gui its "staged"
<bolthole> I tried going to the Configure option, and changing the name, and pressing save... but it doesn't stick?
<lazypower> you should be able to change the service name while its got the blue ring
<lazypower> but once its committed, its committed. You cant change the name
<bolthole> trying again...
<bolthole> adding to canvas...
<bolthole> Configure -> Service name: "Mysql-gitlab" ... Save changes..
<bolthole> but when I click on configure again.. service name is back to "mysql"
<bolthole> seems broken
<lazypower> bolthole: thats unfortunate :/ would you mind filing a bug? https://bugs.launchpad.net/juju-gui/+filebug
<bolthole> okay
<cory_fu> lazypower: I don't know if you saw my edit to my comment on https://github.com/juju-solutions/vpe-router/pull/16 that explains more about the try / else construct in Python
<lazypower> i did
<cory_fu> Ok.  :)
<lazypower> that was all new to me, i had no idea python did try/else, and the else is intended to allow further exceptions to not be caught by the diaper
<lazypower> #TIL
<cory_fu> aisrael: ^ also.  I do think that Exception should be changed to something more specific.
<cory_fu> "except Exception" is generally a bad pattern
<cory_fu> The only thing worse is a bare "except:"
<cory_fu> :)
<lazypower> cory_fu: diaper exceptions ftw i guess?
<aisrael> I'm indifferent about the Exception; it's just passing a string back to the action handler to call action_fail with
<aisrael> ^^ marcoceppi
<cory_fu> The main issue is that it will catch pretty much any type of exception, even ones you are not expecting.  E.g., if the ssh raises something, it will be caught and reported as a validation error, potentially robbing you of some useful stacktrace info
<marcoceppi> .
<marcoceppi> cory_fu: that method raises an Exception, haven't really scoped it yet :)
<marcoceppi> cory_fu: I supposed I should create a new Exception class
<marcoceppi> aisrael: I think the else can be squashed and set_state just moved to the last line in the try block
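The try/else pattern and the narrow exception class cory_fu suggests might look like the following. The ValidationError class, the parameter name, and the return shape are hypothetical, for illustration only.

```python
# Hypothetical sketch of the pattern discussed above: a narrow, purpose-built
# exception class instead of a broad "except Exception" diaper.
class ValidationError(Exception):
    """Raised when action parameters fail validation (illustrative name)."""

def validate(params):
    if 'iface' not in params:
        raise ValidationError('missing required parameter: iface')

def run_action(params):
    try:
        validate(params)
    except ValidationError as e:
        # Expected, well-understood failure: report it to the action handler.
        return ('fail', str(e))
    else:
        # Runs only when the try block raised nothing. Unexpected errors
        # raised here are NOT swallowed by the except clause above, so
        # their stack traces survive.
        return ('ok', 'state set')

print(run_action({}))               # → ('fail', 'missing required parameter: iface')
print(run_action({'iface': 'eth0'}))  # → ('ok', 'state set')
```

Catching only ValidationError means an unrelated failure (say, from ssh) propagates with its full traceback instead of being misreported as a validation error.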
#juju 2016-01-27
<stub> bdx: If you want to deploy a version of PostgreSQL that is not 9.3 with trusty, then you either need to set the pgdg flag when you deploy (or change it when the charm fails and retry), or have it in some other repository pointed to by the install_sources line. That second option is why it isn't on by default - the many installs where there is no egress.
<stub> bdx: For example, we have PG 9.1 packages in our internal archives which get deployed to trusty.
<bolthole_> hi late night juju groupies...
<bolthole_> I'm wondering if anyone knows who specifically invented juju? the various writeups I've found on it did not say
<bolthole_> I would like to contact the person who invented juju, regarding some additional ideas I have.
<blahdeblah> bolthole_: The person who invented it is not really relevant any more; there are several teams of full-time developers working on it now.
<blahdeblah> The best way to discuss those things would be to open a bug on Launchpad, or join the juju or juju-dev mailing list and post your thoughts there.
<bolthole_> is that a code phrase for "that person no longer works at Canonical"? :-/  I would like to know how to contact them, because I have some ideas that, while sort of related to juju, are not actually applicable to normal juju itself.
<blahdeblah> Not code at all; I don't actually know who invented it.
<bolthole_> fair enough :)
<blahdeblah> But if that's the path you want to take, juju was renamed from ensemble around 2010/2011, I think, so you might want to Google for that instead.
<suchvenu> Hi Matt
<mbruzek> suchvenu: hello
<suchvenu> Regarding the DB2 design Doc I sent for your review
<suchvenu> You mentioned, relation-joined should create unique database-name/username/password so multiple charms can connect to the same db2 instance.
<suchvenu> I am creating unique username, password and instance as per config values , but this is done in Config change hook and not in relation hook
<suchvenu> Now to create database name in the relation hooks, the charm which needs the DB to be created on DB2 needs to give the DB name to the DB2 charm.
<suchvenu> Also some products may need only one DB to be created , while others may need multiple DBs
<mbruzek> suchvenu: Yes I think that may be incorrect.  For other database charms I believe this is done in the relation-joined hook so different charm types have different database names.
<suchvenu> So how do we control the number of DBs to be created, and their names?
<suchvenu> Ok, so the username and password should be provided by the related charm to DB2 charm ?
<mbruzek> suchvenu: If you do it per config value, it is clear to me that you can not get multiple databases.  Let us assume mediawiki had a db2 relation and so did dokuwiki.  Your config-changed hook would only create one instance, database, username, and password, but both charms could relate to the ibm-db2 charm.  Both would get the same information and write to each other's database.
<mbruzek> suchvenu: I am not a database expert but I know two different services (mediawiki and dokuwiki) using the same database would be a problem.
<mbruzek> suchvenu: kwmonroe and I looked into this before we made the recommendation I believe you can make multiple databases in the same instance, so your db2-relation-joined hook would use the service name to create a unique database name.  You could hash the name or just replace the invalid characters.  So in my previous example if you get a relation, and the remote service name comes back as 'mediawiki/0' you can convert that to 'mediawiki0' and 'dokuwiki
<mbruzek> suchvenu: that way the DB2 charm could relate to different charms and they would not use the same database name. Check out the mysql or mariadb charms for examples of how this is done.
<lazypower> mbruzek you would really need to just split on '/' otherwise mediawiki/1 would turn into mediawiki1 - just chiming in with a slight addendum there
<suchvenu> ok. But if any service wants more than 1 DB, how do we control it? And what if they need a DB with a specific name?
<mbruzek> lazypower: I don't remember if mysql makes a 'mediawiki1' database or if it makes a 'mediawiki' database for all the units of mediawiki to share.
<suchvenu> Also for each user i am creating a separate instance with same name as user name. This is as per one of the review comment i got previously to have unique username, password and instance
<lazypower> mbruzek - it uses the service name, and strips invalid characters + anything after the / is dropped.
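The naming scheme described (take the service name from the remote unit, drop everything after the '/', strip invalid characters) could be sketched as follows. This is illustrative only, not the actual mysql charm code.

```python
import re

def database_name(remote_unit):
    """Derive a database name from a Juju unit name, per the approach
    described above: drop the unit number after '/' and strip characters
    that aren't valid in a database identifier.  Illustrative sketch,
    not the real mysql/mariadb charm implementation."""
    service = remote_unit.split('/')[0]
    return re.sub(r'[^0-9a-zA-Z_]', '', service)

print(database_name('mediawiki/0'))   # → mediawiki
print(database_name('mediawiki/1'))   # → mediawiki (all units share one db)
print(database_name('my-app/2'))      # → myapp
```

Splitting on '/' first (lazypower's addendum) is what keeps mediawiki/1 from becoming a separate 'mediawiki1' database.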
<mbruzek> suchvenu: That may have been a review comment from someone who didn't fully understand how DB2 worked, and if it was me I apologize.  kwmonroe and I looked into the DB2 documentation last night, and you _can_ create different instances.  What Kevin and I discussed last night was to have one instance but different database names, to make it work more like the other databases we are familiar with.
<suchvenu> ok. So one instance name and have mulitple user names to connect to same is what you are proposing.
<suchvenu> Db2 has the feature to create multiple instances, that's the reason i implemented it in that way.
<suchvenu> I will relook at this one
<mbruzek> suchvenu: After yesterday's read of the documentation, I think different instances may be too complicated, and doing different database names is the way to go.  I didn't understand how a different instance is different from a different database
<mbruzek> suchvenu: What we found from reading the documentation was that you would have to create a different host user for each instance, and you could give different memory constraints to each instance.
<mbruzek> suchvenu: We thought it would be better to give ALL the memory on the host to the one DB2 instance and just create different databases in the one instance
<suchvenu> ok. I will explore this one
<suchvenu> On removal of a db2 relation the username and password should be destroyed - This also i am not sure whether it will cause any problem if we delete the user
<lazypower> suchvenu - those are 2 separate use cases really. consider the db2 charm implements 2 relationships for the 2 styles of use-case, where the regular db: relationship auto-generates the database for the relating charm, based on that charm's name. And each unit joining the conversation gets that database name, and a unique username/password combo per unit. This mimics the MySQL and PostGRES charms' behavior, and many users/applications will fall into this category. The DB2 charm only sends a connection string back to the relating charm.
<lazypower> Consider another relationship db-programmable: where the relating charm can then send the details of what databases it requires, creating 1-n+1 databases and then hands back the multiple database connection strings on the wire. This gives you a pathway to support both use-cases, single charm, and multiple relations enable the behavior outlined.
<mbruzek> suchvenu: The user is overloaded here.  You need to create a user on the host to run the db2 instance, but you can create many database users.  The review comment that we left was to remove the database user when the relation was removed.
<mbruzek> suchvenu: not the host user running the db2 instance.
<kwmonroe> suchvenu: some confusion might be coming because instance users seem to be both linux system users as well as database users.  do you know if you can create separate database users for a given instance?  (for example, can i create 'kevin' as a database user that can access a db in the 'db2inst1' instance?)
<kwmonroe> or does db2 require that database users are also instance users, meaning 'kevin' would have to have his own instance to get his own database?
<suchvenu> I had created linux users for the new users i created. I need to explore whether we can create database users and connect to the single instance
<jacekn> is this a known problem in charms generated using "charm compose"? http://pastebin.ubuntu.com/14679942/
<lazypower> jacekn - its 'charm build' - and thats a new one. which version of charm tools are you on?
<jacekn> lazypower: charm-tools 1.11.1
<jacekn> lazypower: where are the official docs about "charm build"? links I had saved no longer exist ( https://jujucharms.com/docs/devel/authors-charm-composing/ for example)
<lazypower> jacekn  - the devel docs start with building from layers, from the get-started guide - https://jujucharms.com/docs/devel/getting-started
<jacekn> that was after I solved this: http://pastebin.ubuntu.com/14679951/
<lazypower> https://jujucharms.com/docs/devel/developer-getting-started
<lazypower> rather ^ that is the correct link
<tertiary> this is probably more of a maas question, but if i were to use maas+juju to create shared storage with gluster, what happens when we need to upgrade the Ubuntu OS? Won't maas+juju basically destroy the machines and rebuild them with the newest OS version? Consequently destroying the shared data?
<lazypower> and there is a deeper dive into layers under its topic page - https://jujucharms.com/docs/devel/developer-layers
<lazypower> jacekn - sorry about the moving target, the road to 2.0 and the docs has been a bit of a turbulent ride while we re-work and massage the devel docs.
<jacekn> also is it expected that juju-local now uses twice as much space? I think it pulls lxc images to disk and then into mongodb?
<suchvenu> So in relation joined hook, the db2 charm will create username, password and a DB name (as per the service name which tries to connect to it ) and when the relation is broken the username needs to be deleted. Also we have only one instance name which will be shared by all users and multiple DBs can be created in the same instance.
<suchvenu> the username, password will be provided by the related charm in the relation hooks .
<suchvenu> Is that right ?
<jacekn> lazypower: so any ideas what to do about that error? I am testing in a fresh trusty environment so it may be affecting all charm authors
<lazypower> jacekn - i'm attempting to reproduce now in charmbox. seems to work in the :devel flavor when i update charm-tools from pypi - there's a pip 8.x bug that landed in the 1.11.x series but that was a complaint about pyyaml versions
<lazypower> jacekn what layer are you building with? i'd like to keep this as close to your configuration as possible so i can reproduce
<suchvenu> When the relation is broken the db username needs to be deleted without disturbing the DBs.
<lazypower> kwmonroe ^
<jacekn> lazypower: there is no layer yet, I'm writing one on top of 'layer:basic'
<lazypower> tertiary - thats a possibility. Unless you have modeled the storage. And even then i'm not 100% sure. I would certainly bring this up on the mailing list
<tertiary> i see, thanks
<lazypower> tertiary - i feel like this has been answered before, and posting to the list raises visibility, so they can chime in
<suchvenu> If any product needs multiple DBs with specific names (like for example IBM MobileServer), let that service create the required DBs as per its requirements.
<jacekn> lazypower: is there really no way around installing gcc and pulling packages using pypi? I can see that blocking juju deployments due to security policies in many places
<lazypower> jacekn - is this in reference to charm build making the wheelhouse?
<mbruzek> suchvenu: reading your questions  now
<suchvenu> Matt: Kevin: lazypower: Does it sound ok?
<suchvenu> ok
<jacekn> yeah it means software is partially outside of OS security patching coverage
<jacekn> lazypower: plus compilers on appservers are considered a security risk in some places
<lazypower> jacekn - right. I would certainly bring this up on the list as well as a talking point, i dont know that I have a good answer to that question
<jacekn> ack. I'll soon report a bug about that ImportError: No module named 'charmhelpers.cli' problem
<lazypower> jacekn - and using the vanilla charmbox image, i was able to build both - an empty layer with only a layer.yaml containing " includes: [layer:basic]" , and a few other layers i maintain. No issue with charmhelpers.cli :/
<lazypower> are you running in a virtualenv or something similar?
<mbruzek> suchvenu: That looks correct, except for the part about the "the username, password will be provided by the related charm in the relation hooks"  the db2 charm will create the username and password, the username and password would not be provided by say the mediawiki charm.
<mbruzek> the user/pass would be set on the db2 relation, for the mediawiki charm to read
<suchvenu> then how do I take the username and password ? Based on service name ?
<jacekn> lazypower: nope, but who knows, maybe there is something on my test box that would affect "charm build" (kind of another reason to stick to apt-get install in the charm rather than ship deps with the charm)
<jacekn> lazypower: I can publish my WIP charm somewhere if you want?
<lazypower> sure
<suchvenu> I mean how do I select any username for the mediawiki ?
<lazypower> jacekn lets try that, maybe theres something hinky in that layer path thats causing the problem
<jrwren> jacekn: I share your values and I've tried to avoid even installing pip on some of our charms, but most charm authors don't share these values and so many install python-pip and python-dev which pulls in gcc :(
<suchvenu> Also I need to explore whether we can create DB users without creating linux system users. If that is not possible, then I am not sure whether removal of users will cause some issues
<suchvenu> when the relation is broken
<marcoceppi> suchvenu: MySQL creates a unique database schema for every service connected, and a unique user/pass for that schema for every unit connected
<mbruzek> suchvenu: I don't think the linux user is directly related to the database user
<jacekn> lazypower: hmm I wonder...I wonder if "charm build" found some junk in my "charms/trusty/<charm>" subdirectory and composed the charm on top of it. Let me verify that maybe that's the cause
<suchvenu> Matt: I will check this up
<mbruzek> suchvenu: if I remember correctly db2inst1 is running the instance and that is needed to start and stop processes, but the db2admin is not a user on the filesystem.
<jacekn> jrwren: yes understood. One other thing I would point out is that this will happen on all units. So it's fine for small scale, but if you want to deploy 100 units that means dozens of GBs that need to go over the network
<jrwren> jacekn: yup. If you solve any of these issues, I look forward to hearing how you did so.
<jacekn> jrwren: lazypower: hmm OK then so when I started completely fresh it worked. "charm build" must have composed on top of some old files which caused this problem
<jacekn> sorry for the noise
<lazypower> jacekn ah i think i may know the issue then
<lazypower> charm build is additive, so if an older dependency is embedded, you need to rm -rf the output charm artifact.  We have an open bug on this, as it's not obvious and can lead to unexpected behavior
<lazypower> removals are hard :/
<jacekn> lazypower: aha! that would make sense, thanks for the help. If you find that bug # let me know, I'll "me too" it
<lazypower> jacekn https://github.com/juju/charm-tools/issues/83
<jacekn> lazypower: is there anything in LP?
<lazypower> charm-tools has migrated over to github afaict
<suchvenu> Matt : db2inst1 is the username used to install DB2 and it creates the instance and runs it.
<suchvenu> db2admin is a db2 utility
<mbruzek> suchvenu: OK.  I meant db2admin as the administrative user for db2 then.
<mbruzek> I am not an expert in db2, it was just a guess
<suchvenu> No, db2admin user doesn't exist
<suchvenu> After installing db2 , I see the following users only :
<suchvenu> db2inst1:x:1001:1001::/home/db2inst1: dasusr1:x:1002:1002::/home/dasusr1: db2fenc1:x:1003:1003::/home/db2fenc1: db2usr1:x:1004:1001::/home/db2usr1:
<suchvenu> where db2inst1, dasusr1 and db2fenc1 are required for installing db2 and db2usr1 is a new user i created as per the config value in DB2 charm
<kwmonroe> yeah suchvenu, mbruzek wasn't suggesting that a 'db2admin' system user should be created.  he just meant that there would be a "db2 admin" as a user that could interact with the database without being a system user.
<kwmonroe> suchvenu: according to this, there are no "database users".  authenticating to the database requires a system account: http://www.ibm.com/developerworks/data/library/techarticle/dm-0508wasserman/
<kwmonroe> "This is different from other database management systems (DBMSs), such as Oracle and SQL Server, where user accounts may be defined and authenticated in the database itself, as well as in an external facility such as the operating system."
<kwmonroe> suchvenu: this implies that any new service that connects to db2 will need a new system account.  therefore, in the db relation-joined hook, you should call useradd to create a new user (the username can be a random string), and then call 'db2icrt -u <new-user> <new-user>'.  this will create a db2 instance name based on the new system user you just created.
<suchvenu> Kevin : Ok. I was also searching whether we can create database users which are not OS users
<suchvenu> right.. I am actually doing this in config changed hook
<suchvenu> creating a new user and a new instance
<suchvenu> but now that code has to go to relation hook
<suchvenu> so in this case when the relation is broken, and i try to delete the user, it may get into issues
<suchvenu> as the db2 instance will be running using this userid
<kwmonroe> yeah suchvenu, i suggest you do not remove the system user, but simply drop the instance with 'db2idrop <instance>'
<kwmonroe> according to this, the database will remain intact: "Dropping an instance will not drop your databases, so you will not lose data. You can drop your instance, recreate it and then catalog the databases to make them available. You can use the db2cfexp and db2cfimp commands to export and import instance profiles."
<kwmonroe> https://www-01.ibm.com/support/knowledgecenter/#!/SSEPGG_9.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0002058.html
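The relation-joined / relation-broken flow kwmonroe describes could be sketched as command builders. The helper names and return shapes are hypothetical; the useradd, db2icrt, and db2idrop invocations are taken from the discussion above.

```python
import secrets

def joined_commands(username=None):
    """Build the shell commands a db2 relation-joined hook would run:
    create a system user, then a DB2 instance owned by that user
    (db2icrt -u <user> <user>, per the steps described above).
    Hypothetical sketch, not the actual db2 charm code."""
    user = username or 'db2u' + secrets.token_hex(4)  # random name is fine
    return user, [
        ['useradd', '-m', user],
        ['db2icrt', '-u', user, user],
    ]

def broken_commands(instance):
    """On relation-broken, drop the instance but keep the system user;
    dropping an instance does not drop its databases."""
    return [['db2idrop', instance]]

user, cmds = joined_commands('db2usr1')
for cmd in cmds:
    print(' '.join(cmd))
for cmd in broken_commands(user):
    print(' '.join(cmd))
```

A real hook would hand these lists to subprocess.check_call and then set the ip, port, user, password, and db name on the relation for the connecting service to read.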
<cmagina> bundletester is giving me a version conflict error for pyyaml on trusty. it was installed via pip. it's looking for 3.11, but 3.10 is installed. the system is a fresh maas deployment to an x86 host
<lazypower> tvansteenburgh - is this the same symptom we saw emitting from charm-tools 1.11.x series? ^
<suchvenu> ok kwmonroe. I can try this out
<tvansteenburgh> cmagina: curious if you installed it in a virtualenv or not
<cmagina> tvansteenburgh: just a simple pip install bundletester
<tvansteenburgh> cmagina: would you mind trying it in a virtualenv?
<cmagina> tvansteenburgh: i can do that
<tvansteenburgh> my guess is that pip refused to upgrade the pyyaml system dep that was already installed
<suchvenu> kwmonroe: Suppose the related charm needs more than the default db , then its up to that charm to create the DB. DB2 charm will not handle it. Is that approach fine ?
<jacekn> where do I report charms.reactive bugs?
<jacekn> or "where should I" rather
<mbruzek> jacekn: against charm-tools
<mbruzek> let me get you a link
<mbruzek> jacekn: https://github.com/juju/charm-tools/issues
<mbruzek> suchvenu: as long as the database user has the permission to create a database or table that should be fine
<suchvenu> Yes the db user has permission to create DBs. As per the current implementation, where we are not creating any default DB in the DB2 charm, we are returning the db2 user name to the related charm
<suchvenu> Using this db2 user the related charm can create DBs
<kwmonroe> suchvenu: i think after you create an instance, you will still need to create a db.  but your approach seems fine -- a connecting service should receive the ip, port, user, password, and db name.
<suchvenu> yes after creating instance for new user, we need to create DB as well.
<suchvenu> yes, i was not sending db name . I will add that as well
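A sketch of what the DB2 charm's relation hook might publish, per kwmonroe's list (ip, port, user, password, db name). The key names and values are assumptions; inside a real hook juju provides the `relation-set` tool, which is mocked here with an underscore-named stub so the snippet runs standalone:

```shell
# Stand-in for juju's relation-set hook tool (file-backed, illustrative only).
relation_set() { printf '%s\n' "$@" >> /tmp/db2-relation; }

: > /tmp/db2-relation
relation_set host=10.0.0.5 port=50000 \
             user=db2inst2 password=s3cret database=mydb

published=$(tr '\n' ' ' < /tmp/db2-relation)
echo "would relation-set: $published"
```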
<suchvenu> when we remove the relation the instance name is deleted. Now again if we try to set relation, it will create a new username and instance , right ?
<kwmonroe> that's correct suchvenu
<suchvenu> i will check whether the username and instance name exists  and reuse the same if it exists
<suchvenu> instead of creating multiple usernames for the same service
<suchvenu> i think instance will not have issue, as we are deleting it when the relation is broken
<suchvenu> Thanks a lot mbruzek, kwmonroe,lazypower, marcoceppi for your time to clear some of the design related queries i had on DB2 charm. Appreciate your help
<suchvenu> I will send updated design doc with the points we discussed today
<kwmonroe> thanks suchvenu!!
<bolthole> I'd like to write a charm that deploys a webapp to a tomcat service. As such, I think I need to require some kind of relationship between a tomcat charm instance, and my charm, to make sure it gets deployed to the same machine? What should I be looking at for this?
<plars> Hi, I'm trying to add an lxc instance to a maas environment with juju-deployer. I have several other ones deployed this exact same way with 'to: lxc:agent-host' in my deployer bundle, so I just copied one of those as I've done before, and made the necessary changes.
<plars> but when I try to run juju-deployer, I get an error: 2016-01-28 02:46:03 Invalid service placement snappyx86003 to lxc:agent-host
<lazypower> mbruzek - bolthole is asking about tomcat :)
<plars> any ideas where to look to figure out what's wrong?
<mbruzek> hi bolthole: let me dust that section of my memory off
<mbruzek> bolthole: What you are looking for is what Juju calls a subordinate service
<mbruzek> https://jujucharms.com/docs/master/authors-subordinate-services
<bolthole> Sounds good. thanks
<mbruzek> The tomcat charm (written by me) accepts subordinate charms with the juju-info relation.  Look to the openmrs charm for an example.
<mbruzek> https://jujucharms.com/openmrs/precise/1
<mbruzek> bolthole: It was pointed out to me that tomcat should not use the juju-info relation and instead provide a "webapp" relation or something of that kind, but I have not implemented that yet, and I don't see that in the current cs:trusty/tomcat charm
<mbruzek> So that is the way it works now, and if you look at the openmrs code that should help.
<mbruzek> bolthole: please let me know if you have any other questions.
<bolthole> Hm. Is there really a requirement to have a particular relationship defined in juju, to make use of the subordinate designation?
<mbruzek> bolthole: no. But a webapp has some expectations of things (like the directory where it should put the war file).  Currently the openmrs can be a subordinate to any regular charm and that is probably not a good thing.
<bolthole> And if there is.. the page you referenced seems to imply I could just define "a standard client-server relation" for this
<mbruzek> bolthole: it probably makes more sense to have tomcat provide a webapp interface and charms that are webapps only consume that
<bolthole> ah. so the relationship would define the webapp directory, then
<mbruzek> bolthole: yes.  It would be a very basic thing
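A hedged sketch of what the two sides of such a container-scoped "webapp" interface might look like in metadata.yaml. The stanza contents are illustrative, not the actual tomcat/openmrs charm metadata; the snippet writes them to temp files so the shape is visible:

```shell
# Principal side (e.g. tomcat): provides the interface.
cat > /tmp/tomcat-metadata.yaml <<'EOF'
provides:
  webapp:
    interface: java-webapp
    scope: container   # subordinate relation: units share a machine
EOF

# Subordinate side (the webapp charm): requires the same interface.
cat > /tmp/webapp-charm-metadata.yaml <<'EOF'
subordinate: true
requires:
  webapp:
    interface: java-webapp
    scope: container
EOF
```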
<bolthole> I see that openmrs "requires: tomcat-war:". Wouldnt it be more accurate to say that it *provides* tomcat-war ? And does the other side have to be coded to support the same relation
<lazypower> plars - can i see your bundle?
<plars> lazypower: sure, one sec...
<plars> lazypower: http://paste.ubuntu.com/14681702/
<lazypower> plars - try adding the unit number   --to lxc:agent-host/0
<plars> lazypower: actually, I am doing it without the bit for deploying agent-host. It's already deployed, but I include it here for clarity
<plars> lazypower: I did, same error
<lazypower> :-\ hmm... we used the --to operator in our k8s bundle... let me fish that up
<lazypower> contrast/compare
<plars> lazypower: I've used it *exactly* like this in this same environment on lots of other units
<lazypower> yeah, that looks correct
<lazypower> looks like there may be a regression or something going on here
<lazypower> i'm not positive
<lazypower> sorry :(
<plars> lazypower: any tips for debugging? or where to go from here?
<bolthole> mbruzek: erm. it seems like the tomcat module now already looks for webapp: java-webapp
<lazypower> plars not off the top of my head, i've been working with the newer juju deploy bin for the last month or so
<lazypower> a lot of that info is now replaced
<mbruzek> bolthole: I pulled the trusty/tomcat one, I don't see that in metadata.yaml, where do you see this?  Are you on precise/tomcat?
<plars> lazypower: so is there a better way I should be doing this?
<lazypower> plars - aside from upgrading to 2.0-alpha1 and giving it a go
<bolthole> I.... think so?  still new to ubuntu. i'm using ubuntu15.10, which I think is "precise", so I was looking at https://jujucharms.com/tomcat/precise/1
<lazypower> plars - obv. thats not recommended if this is a production environment
<lazypower> bolthole - "wily" - ftfy :)
<plars> lazypower: heh, well it is production, but it's not working
<lazypower> bogus :\
<bolthole> oh wait. my actual deployed stuff, says version of tomcat = cs/trusty/tomcat-1
<mbruzek> bolthole: yes, https://wiki.ubuntu.com/Releases
<bolthole> which is .. neither precise, nor wily. wth....
<mbruzek> bolthole If you deployed "tomcat" you likely got the precise version (12.04) I think that is how juju defaults. I also see the java-webapp in the metadata.yaml for precise/tomcat
<bolthole> whatisthisidonteven
<mbruzek> bolthole sorry for the confusion.  I think this change is not in the more recent version of tomcat
<bolthole> so... precise is the OLDER version ... but it is better for having more logical relations defined?
<mbruzek> bolthole: it appears that someone updated the older version of tomcat with the fix I described earlier, but did not move this to the later version of tomcat.
<bolthole> and this is extra sad, since I cant seem to use the "change version" to switch to the older version, in juju-gui.
<mbruzek> bolthole: Does it matter to you what version of the operating system you have?  precise (12.04) compared to trusty (14.04)?
<mbruzek> bolthole you will still get tomcat7 with either install, both should work very well
<bolthole> i have no idea.
<bolthole> comparing things, I guess my desktop is 15.10...
<bolthole> but i've deployed to azure, which uses  14.04.1 ubuntu images
<bolthole> by default, I guess.
<rick_h_> marcoceppi: with resources, what's the charm you've got in your mind that would use the most number of individual resources? (/cc lazypower aisrael mbruzek)
<lazypower> rick_h_ - easily the big data stack
<rick_h_> lazypower: right, so a single service with how many resources declared? 2, 4?
<lazypower> rick_h_ - considering each install ships w/ a 90MB JVM + each component has a suite of jars.
<lazypower> cory_fu kwmonroe admcleod1  ^ can you chime in on this? like, ballpark
<bolthole> mbruzek I still think whoever "fixed" it, fixed it backwards again. the app should be the thing "providing" webapp. :-/
<marcoceppi> rick_h_: I'd see about 6 at most
<rick_h_> marcoceppi: lazypower k, ty for the feedback.
<mbruzek> bolthole: the webapp charm will need to require the "webapp" relationship because a relationship needs both sides.
<lazypower> rick_h_  - hey i found a good place to look - https://api.jujucharms.com/charmstore/v5/~bigdata-dev/trusty/cloudera-hadoo-yarn-master-0/archive/resources.yaml
<cory_fu> rick_h_: The core Hadoop charms have around 9 resources defined currently, most of which are different arch and versions of the libraries
<bolthole> mbruzek technically speaking, using java language, I believe the proper terminology would be that it "provides: webcontainer"
<cory_fu> lazypower: This is better: https://api.jujucharms.com/charmstore/v5/~bigdata-dev/apache-hadoop-yarn-master/archive/resources.yaml
<cory_fu> rick_h_: ^
<lazypower> woo
<cory_fu> A couple of those are python deps that go away with layers, and there are a couple dupes for handling older charms that don't have  a version specified and will go away
<bolthole> Is there a way to search the store for (Charms that use relation: webapp) ?
<cory_fu> But at a minimum it will be 7, until we add new supported versions
<cory_fu> Or arches
<cory_fu> bolthole: You'd want https://jujucharms.com/provides/webapp or https://jujucharms.com/requires/webapp
<cory_fu> But neither of those return any results
<cory_fu> bolthole: Here's one that has results: https://jujucharms.com/provides/http
<bolthole> got it thanks.
<cory_fu> bolthole: You can discover this on the store by going to a charm that you know of that uses a given interface and clicking on the link in the "Connects to:" section on the right, just above the files list
<bolthole> thanks
<bolthole> so if I want to file a bug "please provide webcontainer", should I do it against tomcat or tomcat7 charm?
<bolthole> seems like the tomcat7 writeup suggests tomcat. but the tomcat code seems in chaos  ;-}
<cory_fu> lazypower, mbruzek: ^
<cory_fu> I think that https://jujucharms.com/tomcat/trusty is the most recent
<cory_fu> But I defer to those who have actually worked on it
<bolthole> contrariwise... if I want to submit code... can I do that? It seems like stuff owned by "charmers" has some kind of extra-special layer where people can't submit proposed stuff through the normal process
<mbruzek> bolthole I did a diff between precise and trusty on the tomcat charm.  Only that webapp relation is different
<mbruzek> bolthole: do not use tomcat7 that is deprecated, use trusty/tomcat and I can move the fix for precise forward.
<lazypower> bolthole - you are correct that "charmer" owned charms have a layer of "special" - they are reviewed and curated charms. You can propose fixes against them through a traditional launchpad merge proposal. This process will change sometime near April, but we will have docs in place for that process change.
<lazypower> bolthole there's docs on contributing here: https://jujucharms.com/docs/devel/authors-charm-store#submitting
<lazypower> bolthole - but if this is only to scratch an itch where you dont want a review to block you, you can always publish to your personal namespace, which is covered in the doc as well
<bolthole> mbruzek i can wait for the official code fix, thats fine.... Just please fix the naming ;)
<mbruzek> bolthole what naming?
<bolthole> the usage is backwards. tomcat needs to require 'webapp', not provide it.  Or, it could provide 'webcontainer'
<mbruzek> bolthole: that is not wrong, that is a Juju convention.
<bolthole> nooo... what i described, matches mysql usage
<mbruzek> bolthole: tomcat provides a webapp relationship, that webapps can require.
<bolthole> i compared with that first ;)
<bolthole> look at it this way.  wordpress is an aplication, that requires a database (mysql) to run. It *requires* mysql
<kwmonroe> i think the confusion here is because we're talking about a subordinate relation to tomcat.
<bolthole> openmrs is a webapp, that requires a webcontainer (like tomcat) to run. therefore, it *requires* webcontainer
<kwmonroe> so bolthole, it is actually correct as written -- tomcat is "providing the ability for a java-webapp to be connected"
<bolthole> tomcat is not providing an ability, it is providing a platform/service. just like mysql provides a platform/service.
<kwmonroe> yeah, i get the overall descriptions you're using, but when you're talking about a juju charm being subordinate to another, the principal provides that interface, the subordinate requires it.
<bolthole> yes. exactly.  tomcat is the principal. it provides webcontainer.   any webapp, Requires a webcontainer to run
<bolthole> so rename webapp to webcontainer, and it's all good
<kwmonroe> oh, sure - i don't have any skin in the naming of the relation.. just trying to explain why provides/requires may seem reversed in a principal/subordinate context.
<bolthole> got it
<cmagina> tvansteenburgh: i tested it in a virtualenv and hit the same issue
<bolthole> awwright... I have used bzr to make myself a local clone thingie, and made the trivial code changes. trying to push it up to my "personal space"... but its giving me permission denied?
<bolthole> ("Cannot create branch at /~s-phil-s/charms/blahblah")
<bolthole> lazypower I did try to carefully follow the instructions at that url. but it doesnt seem to work :(
<kwmonroe> bolthole: is the ssh-key (~/.ssh/id_?rs.pub) listed in your launchpad account?
<bolthole> yes
<kwmonroe> bolthole: what's the push command you gave?  just 'bzr push'?
<bolthole> ( ssh missing, gives an explicit "you havent registered any keys" error ;)
<bolthole> I did a very explicit push as follows:
<bolthole> bzr push lp:~s-phil-s/charms/tomcat/trunk.
<kwmonroe> hmph, that's looks legit
<lazypower> newp
<lazypower> missing series
<kwmonroe> ah
<kwmonroe> bzr push lp:~s-phil-s/charms/trusty/tomcat/trunk
<kwmonroe> bolthole: give that a try ^
<bolthole> tried it. nope.
<bolthole> oh qr
<bolthole> wait
<bolthole> different error
<bolthole> "project s-phil-s does not exist"
<bolthole> oops typo
<bolthole> added tilde back... and it works
<bolthole> thats annoying. i thought launchpad was more free-form
<kwmonroe> i think it's more restrictive with charms because the ingestion process that scrapes launchpad to pick up namespaced charms needs them in a specific structure -- charms/series/foo/trunk
<kwmonroe> alternatively bolthole, if you didn't want to wait for your modified charm to hit the charm store, you can deploy a local charm with "JUJU_REPOSITORY=~/charms juju deploy local:trusty/<newtomcat>"
<bolthole> naw I can wait a bit. and want to do this right anyways
<kwmonroe> cool
<bolthole> so, now, do I do the "proposal to merge" from my branch, or from the main one?
<bolthole> the "propose merge" stuff on the web, is a little.. lacking in documentation
<kwmonroe> bolthole: you'd do it from your branch.. click the "Propose for merging" link here: https://code.launchpad.net/~s-phil-s/charms/trusty/tomcat/trunk
<bolthole> ah okay, now it looks more helpful then :)
<bolthole> although I also pushed a precise branch. which one is better? I would think the precise one since thats where charms/tomcat seems to redirect to, but iunno
<kwmonroe> well, generally speaking, you'd propose your precise branch into lp:charms/precise/tomcat and your trusty branch into lp:charms/trusty/tomcat.. but in this case, mbruzek already has you beat ;)  he's testing a similar change to those branches at this very moment.
<bolthole> kewl. well, good practice either way :)
<kwmonroe> that's the spirit!!  and thanks for bringing this up.. it's the right way to handle webapp subordinates in tomcat. (vs a more generic interface)
<jw4> I'm considering using juju to deploy a few services in mesos/marathon, including a webservice app, front end app, postgres, and a cassandra cluster.  Does this even sound reasonable, or would it be too much hassle to use juju to deploy through mesos/marathon?
<tvansteenburgh> cmagina: weird. i'll try it in a clean container and see if i can figure out what's happening
<cmagina> tvansteenburgh: ok, thanks
<firl> anyone know where a list of "actions" that I can send to my openstack charms for neutron would be?
<rick_h_> firl: "juju action defined $servicename"
<firl> ty
<rick_h_> np
<firl> anyone know how to have neutron recreate qdhcp/qrouter besides uninstall and reinstall with juju?
<lazypower> jw4 - that would be an excellent question on the juju mailing list. one of our isv partners has been working in marathon/juju land for the last 4 months
<jw4> lazypower: cool.. will do
<lazypower> jw4 - they don't idle on irc, but with a mailing list post, i can signpost them to it.
<jw4> +1
<lazypower> :) happy to help. ping me when you've sent so i do it before i forget.
<jw4> will do - is that https://lists.ubuntu.com/mailman/listinfo/juju or https://lists.ubuntu.com/mailman/listinfo/juju-dev?  I assume the former?
<lazypower> juju@
<jw4> perfect
<lazypower> juju-dev is if you're a golang hacker and want to discuss the innards of juju
<jw4> It's all coming back to me slowly :)
 * lazypower eyes narrow... "wait.... you *were* a core hacker weren't you jw4?"
<jw4> uh oh
<jw4> now I'm in trouble
<lazypower> \o/ man its been a while :D
<jw4> :)
<jw4> yep; I miss this team :)
<mbruzek> bolthole: I created 2 Merge proposals for the problems we discussed today, https://code.launchpad.net/~mbruzek/charms/trusty/tomcat/trunk/+merge/284189
<mbruzek> bolthole: https://code.launchpad.net/~mbruzek/charms/precise/tomcat/trunk/+merge/284188
<mbruzek> bolthole: Please take a look and let me know if that was in-line from what you suggested.
<jw4> lazypower: sent: subject "Mesos / Marathon with Juju"
<cory_fu> So, has anyone tried to deal with upgrading an existing deployed service from a non-layered charm to a layered charm?
<marcoceppi> cory_fu: nope >:D
<cory_fu> I'm running into a pretty big road block because the interface states (well, all of the states) need to be reconstructed without the chain of hooks actually being invoked
<cory_fu> Is it reasonable as a community to say that upgrading from non-layered to layered charms just isn't feasible?
<cory_fu> marcoceppi: ^?
<marcoceppi> cory_fu: it's possible. but couldn't upgrade hook do this?
<marcoceppi> cory_fu: the only thing it can't really do is relation stuff
<marcoceppi> can't really do *easily*
<marcoceppi> cory_fu: want to jump on a hangout in like 5 mins?
<marcoceppi> maybe 10 mins
<lazypower> jw4 done and done
<jw4> awesome sauce
<cory_fu> marcoceppi: You up for that hangout?
#juju 2016-01-28
<donu7> Hello, juju is giving me an error in the later stage of bootstrapping the environment. Is there a way to "resume" the bootstrap? I've been releasing the node from maas, deleting the environment and recreating it to try again
<donu7> The node is up and commissioned according to maas; however, i can't ssh into it. if i reboot the node into single user, i see that the passwd file does not have any users and there appears to be no maas keys that were supposed to be set up in juju and maas
<donu7> Also, running `juju status` or any `juju *` commands just hangs with no output - why is that?
<donu7> actually, here's a different direction if anyone has any ideas - is there a resource on configuring network interfaces for the box that juju is trying to bootstrap?
<cloudgur_> is the git repo for juju experiencing issues ?
<cloudgur_> Nice pic of unicorn
<bolthole_> mbruzek yeah looks good thanks :)
<bolthole_> now I need to learn how to access webapp-path
<bolthole_> https://jujucharms.com/docs/stable/authors-charm-writing is unbalanced. it shows bash using an apparent external command "relation-set", but doesnt show the corresponding other side accessing the value
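The missing other half is `relation-get`, run in the corresponding hook on the far side of the relation. Both tools are provided by the juju agent inside hooks; the underscore-named stubs below are a file-backed mock (key name and path are examples) purely so the round trip is visible in one runnable snippet:

```shell
# Mock of the relation-set / relation-get pair from the charm-writing docs.
bag=/tmp/relation-bag
relation_set() { printf '%s\n' "$1" >> "$bag"; }   # provider side, e.g. in webapp-relation-joined
relation_get() { sed -n "s/^$1=//p" "$bag"; }      # consumer side, e.g. in webapp-relation-changed

: > "$bag"
relation_set webapp-path=/var/lib/tomcat7/webapps  # value is an example
path=$(relation_get webapp-path)
echo "deploy the war into: $path"
```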
<bolthole_> mbruzek I just submitted https://code.launchpad.net/~s-phil-s/charms/trusty/tomcat-webapp/trunk
<bolthole_> not fully tested, but figure the earlier I get feedback the better :)
<bolthole_> oh he left. oops. well, feedback from anyone else would be appreciated too :)
<bolthole_> there seems to be something wrong with instructions on https://jujucharms.com/docs/1.24/authors-charm-store
<bolthole_> The instructions for (to main charm store) vs (to personal space), seem to say to push to the exact same URL
<bolthole_> bzr push lp:~<your-lp-username>/charms/trusty/<your-charm>/trunk
<bolthole_> vs
<bolthole_> bzr push lp:~your-launchpad-username/charms/series/nagios/trunk
<bolthole_> which. is. the. same. thing.
<bolthole_> kwmonroe that tomcat code is bugged.
<bolthole_> i updated https://bugs.launchpad.net/charms/+source/tomcat/+bug/1538715 with details
<mup> Bug #1538715: need relation webcontainer <tomcat (Juju Charms Collection):Fix Released by mbruzek> <https://launchpad.net/bugs/1538715>
<bolthole_> lol
<bolthole_> kwmonroe I submitted a merge request for a fix for that syntax err  in tomcat charm
<D4RKS1D3> Hi
<D4RKS1D3> Someone has problems downloading charms?
<marcoceppi> D4RKS1D3: I haven't, what problems are you seeing?
<marcoceppi> cory_fu: do we have virtualenv for layers? instead of installing on the bare machine
<cory_fu> Not yet
<cory_fu> Shouldn't be difficult to add, though
<marcoceppi> stub: your leadership layer is badass! Thanks for uploading it
<stub> You're about to get an apt layer too
<marcoceppi> idk if I can handle all of this
<marcoceppi> it's too much for my heart!
<stub> All the bits of behaviour shared between the PostgreSQL and Cassandra charms in fact.
<marcoceppi> stub: how very exciting
<geetha> mbruzek: Hi, I have got a comment for WebSphere Base product where it's mentioned that we should add configuration options for profile options like profile name, path, and admin username and password. But once the WebSphere product is installed and if the user sets a new username and password through a config option, there is no way to replace the existing WAS admin username and password. So there is no benefit to the user if we provide config options for admin username and password.
<mbruzek> geetha: hrmm...
<mbruzek> geetha: It is strange that you can't change the admin user.  I will have to read the documentation for WAS on that.  The admin password can not be a default; generate a password and save it to a file only root can read in the charm dir, or something like that.  I still think the user will want to change the profile name, path and regular user.
<mbruzek> geetha: The point is we can not have default username/password for network services so people can not guess or know what the password is by reading the charm.
<mbruzek> geetha: The charm had very few configuration parameters and it will be more useful if you provide some of the things a user would change like profile or path and user/pass.  Can you change non-admin usernames and password?
<geetha> mbruzek: It's not the system admin user name and password..it's specific to WebSphere; once we create a profile using a username and password we can not change it, but we can create a new profile with a different username and password
<mbruzek> geetha: https://www-01.ibm.com/support/knowledgecenter/SSYJ99_8.5.0/security/was_filereg.html
<mbruzek> geetha: If someone needs a specific profile, path and username, I think it is reasonable that the old profile is removed.
<mbruzek> If your charm could control those values from the config that would be a useful charm, but setting all those values to defaults is not likely going to work for many users
<geetha> mbruzeK: As per document, "websphere_install_directory/profiles" is the default directory for profiles. so  I am using that only to create profiles. And If we try to delete old profile, before deletion we have to stop the server with old username and password. If that is the case, how can we get old user name and password?
<mbruzek> Again if you wrote the password to a file readable only by root, the charm code could read that and use it to delete the old profile, and after you are done stopping/starting you could write the new password
<mbruzek> This all seems possible with charm code.  I just want this charm to be useful to people; having only the default values will not be very useful.
<jmalcaraz> jamespage, D4RKS1D3 and jmalcaraz have been working hard these days, testing your "odl" charms a lot. We believe you would welcome this feedback. 1) Relationship openvswitch-odl:neutron-plugin nova-compute:neutron-plugin. This relationship should be in charge of "removing any existing bridges in OVS", populating the ODL version of the ml2_conf.ini, and setting the manager and controller to OVS and br-int. However, it only does the "manager in OVS". 2)
<jmalcaraz>  There is a conceptual missing relationship between openvswitch-odl and neutron-gateway. The problem is that the neutron gateway does not provide the interface "neutron-plugin" and thus it cannot be applied. The aim is to achieve exactly what I described in the previous step, but now over the neutron-gateway rather than over the nova-compute. We hope you find this feedback useful and hopefully you may like to address it in order to close a prototypical
<jmalcaraz>  integration using charms. Thanks a lot in advance for the excellent work you are doing.
<kwmonroe> geetha: you won't need the old user name and password.  if the user changes the configurable profile name, you would stop WAS, then run 'was_install_dir/bin/manageprofiles.sh -listProfiles' to get the profile name, then 'manageprofiles.sh -delete -profileName <profile', then create a new profile, then start the WAS with the new profile.  that should happen in config-changed.
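The config-changed sequence kwmonroe outlines can be sketched as a dry run. The WAS install dir and profile names are assumptions, and every command is echoed rather than executed, so this runs without a WebSphere install:

```shell
# Dry-run of the profile-change sequence for config-changed (illustrative paths/names).
was_bin=/opt/IBM/WebSphere/AppServer/bin   # assumed install dir
old_profile=AppSrv01
new_profile=AppSrv02
log=""
run() { log="$log $*"; echo "would run: $*"; }   # dry-run stand-in

run "$was_bin/stopServer.sh" server1
run "$was_bin/manageprofiles.sh" -listProfiles
run "$was_bin/manageprofiles.sh" -delete -profileName "$old_profile"
run "$was_bin/manageprofiles.sh" -create -profileName "$new_profile"
run "$was_bin/startServer.sh" server1
```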
<jmalcaraz> jamespage, sorry, I was refering to the relationship openvswitch-odl:neutron-plugin nova-compute:neutron-plugin
<jmalcaraz> jamespage, notice that in the own documentation of the charms the relationship between openvswitch and gateway is there. Literraly: "juju add-relation neutron-gateway openvswitch-odl", however the interfaces does not match in the metadata.yaml
<geetha> kwmonroe: To stop WAS we need username and password. Because we have enabled security. we have to run command to stop server ./stopServer.sh server1 -username "user" -password "password"
<kwmonroe> oh i see geetha - you're talking about changing the admin username/password.  i was thinking about how it would work to change the profile name.
<kwmonroe> let me think about an admin user/pass change for a minute..
<geetha> kwmonroe: yes, if the user changes just the profile name we can stop WAS, delete the old profile and create a new profile. but if the user changes the admin username/password also, how can we do that? as matt said, we have to store it in a file and we can use it while stopping the server. After stopping, again we have to rewrite the file with the new username/password right?
<kwmonroe> geetha: it seems like you can reset the WAS admin without knowing the old password.  run wsadmin and then perhaps an expect script to execute '$AdminTask changeFileRegistryAccountPassword -userId <newuser> -password <newpass>' (http://ibmdocs.com/2011/11/04/resetting-was-admin-password-when-the-browser-console-does-not-work-anymore/)
<kwmonroe> once the new user/pass has been set, you should be able to stop/start WAS using the newly configured user/pass.
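A dry-run sketch of that $AdminTask reset: the new credentials are placeholders, and the command line is only assembled and echoed (in practice it would be fed to wsadmin, per the article kwmonroe linked):

```shell
# Illustrative only: build the file-registry password reset command.
new_user=wasadmin     # placeholder
new_pass=changeme     # placeholder
cmd="\$AdminTask changeFileRegistryAccountPassword -userId $new_user -password $new_pass"
echo "would feed to wsadmin: $cmd"
```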
<geetha> thank you Kevin & Matt...I will try this way.
<bolthole> i posted a question late last night but got disconnected, so I'll ask again :-/
<bolthole> The url at https://jujucharms.com/docs/stable/authors-charm-store describes "two methods" of putting up charms; one for the store, and one for personal space. however, they seem to tell me to do the same thing. what am I missing
<bolthole> example:
<bolthole> to publish to charm store it says,
<bolthole> bzr push lp:~<your-lp-username>/charms/trusty/<your-charm>/trunk
<bolthole> but to publish to personal name space, use
<bolthole> bzr push lp:~your-launchpad-username/charms/series/nagios/trunk
<bolthole> but... that's the same thing, seems to me.
<marcoceppi> bolthole: yes, they are both published to the store, however one goes on to say how to get it "promulgated" or approved past the personal namespace
<marcoceppi> bolthole: these docs (and process) are pretty confusing, we have a more streamlined process coming on next month
<bolthole> while I remember: docs are out of date. they claim, "Name Spaced charms will be displayed under the other category in the GUI". but there's no such category now, that i can see. I presume they mean in the search widget for the GUI
<bolthole> suggestion for rewrite: make the first section a single unified, "[whether you plan for your charm to be in the store, or just in your personal area, you do the same first step: ... (details)]"
<bolthole> now a followup question...
<bolthole> I dont see what you describe, about something getting approved past the personal namespace.
<bolthole> i'm presuming that getting in the store, is NOT the same thing as a "charmer team recommended charm"
<bloodearnest> is there a way for a tool that is installed on a juju machine to query the units running on it?
<bloodearnest> (this is for logging purposes)
<lazypower|travel> bloodearnest i'm not sure i follow
<lazypower|travel> what are you wanting to do?
<kwmonroe> bolthole: the Recommended Charms section of the doc you linked (authors-charm-store) is the process for going past personal namespace and making it "charmer recommended", which is indeed the same thing as "getting it in the store".
<bolthole> okay thanks
<kwmonroe> bolthole: i like your rewrite suggestion -- i know marcoceppi already mentioned the docs were in transition, but that's an important one to clear up.  thanks!
<bolthole> i previously pushed my own charm, to "lp:~s-phil-s/blahblahblah".  This time, I tried just "bzr push". It said it was pushing to (...)//bazaar.launchpad.net/(...). Is there some difference between bazaar.lp, and.. the other stuff? more confusion :(
<marcoceppi> bolthole: that's just a limitation of bzr. "lp:" is basically bazaar.lp.net/...
<marcoceppi> bolthole: next month you'll be able to use whatever code hosting you'd like and publishing to the charm store is decoupled from launchpad
<bolthole> cool. okay, soo... i finally have a charm that is "charm proof" clean. and I pushed it. that means in around an hour, I will finally be able to use    juju deploy lp:~s-phil-s/Mystuff ?  previously, i pushed it, but didnt have unique icon, so i guess it didnt allow that.
<bloodearnest> lazypower|travel, I want to look in some file or similar to know what unit(s) are deployed on this machine. Would like to differentiate between primary and subordinate units
<bolthole> (side comment.. it's... "interesting".. that I can view the same code tree, through both bazaar.lp.net and code.lp.net)
<kwmonroe> yeah bolthole, if proof passes, it should ingest into the store in about an hour.  note that your invocation will be "juju deploy cs:~s-phil-s/trusty/Mystuff" though.  the 'cs' syntax tells juju that you want a trusty charm from the charmstore (cs), in ~s-phil-s's namespace.
<kwmonroe> you could skip waiting for ingestion with "juju deploy local:trusty/Mystuff", but i think you already knew that.
<kwmonroe> cs:blah is the right way to pull from the store
<bolthole> kmonroe: kewl. done and done. also, officially submitted tomcat-webapp to charms. now, I just need someone to approve my bugfix merge request for the mainline tomcat charm so it actually *works* ;-)
<bolthole> doh... kwmonroe
<bolthole> btw i'm going to try to change my lp id from s-phil-s to match my github name, bolthole
<kwmonroe> bloodearnest: you could do something like this on the machine: sudo find /var/lib/juju/agents/unit-* -name metadata.yaml -exec echo {} \; -exec grep subordinate {} \;
<bloodearnest> kwmonroe, right. I was wondering if there was a better way than sniffing files
<kwmonroe> bloodearnest: even better (in case you don't want to see pathnames)..
<kwmonroe> sudo find /var/lib/juju/agents/unit-* -name metadata.yaml -exec grep name {} \; -exec grep subordinate {} \;
<kwmonroe> ah, i see bloodearnest.  i'm not sure of a utility on the juju deployed machine that can tell you that info (other than looking in /var/lib/juju/x)
<kwmonroe> bloodearnest: on the juju client machine, you can run 'juju status' and perhaps grep/awk your way to the data you'd like
<bloodearnest> kwmonroe, yeah, I was wondering if I could query a juju agent or something
<bloodearnest> kwmonroe, I want to discover it automatically on the unit itself
<bloodearnest> fwiw, the use case is that I want to add a juju-unit tag to all log statements my app generates (and subsequently forwards to logstash)
<bloodearnest> as the complete disconnect between hostname and juju-unit name is real PITA when debugging distributed failures
<bloodearnest> I think I'll just have to get the each charm to set it explicitly in an env var, but I was hoping to avoid having to do that
<marcoceppi> bloodearnest: there's no way to really tell about other units if you're not related to them
<marcoceppi> querying /var/lib/juju directly will only end in pain
<bloodearnest> marcoceppi, I only really want to know which unit I am
<marcoceppi> bloodearnest: what do you mean
<marcoceppi> the subordinates unit number?
<marcoceppi> or the primary attached to?
<bloodearnest> marcoceppi, the primary, in most cases
<bloodearnest> which unit installed/configured me
<marcoceppi> bloodearnest: oh, that's easy. Create a relation for juju-info interface
<marcoceppi> then during that relation you can do `JUJU_REMOTE_UNIT` which will give you the service/unit name you're related to
<bloodearnest> I can't create a relation - I'm on a unit
<bloodearnest> I don't have juju client/creds
<bloodearnest> some context: I already include the hostname automatically, which I don't need to explicitly configure
<bolthole> Hm.. is there some equivalent of "juju query cs:blah" to match "juju deploy cs:blah", where you dont want to deploy, you just want to verify or find out info on a charm in the store?
<bolthole> the juju help stuff doesn't seem to mention anything
<bloodearnest> bolthole, charm search?
<bloodearnest> charm-tools pkg
<bolthole> juju charm search works nicely for my immediate needs. thanks. Although seems like it would be nice to have a "pull more details" variant of that.
<marcoceppi> bloodearnest: I don't understand.
<marcoceppi> you have to create a relation to get a subordinate on to a machine
<marcoceppi|airpl> nick marcoc|airplane
<bolthole> having problems deploying from personal space.
<bolthole> juju charm search tomcat, lists, among other things, lp:~bolthole/charms/trusty/tomcat/trunk
<bolthole> but when I try to do,   juju deploy lp:~bolthole/charms/trusty/tomcat/trunk   it says invalid charm name?
<bolthole> doesnt work if I use cs:~bolthole/charms/trusty/tomcat/trunk either
<bolthole> bzr branch lp:~bolthole/charms/trusty/tomcat/trunk works fine though
<marcoc|airplane> bolthole: cs:~bolthole/trusty/tomcat is the url, that charm search is woefully out of date in how it performs lookups
<marcoc|airplane> bolthole: https://jujucharms.com/u/bolthole/
<marcoc|airplane> bolthole: it can take a few hours before showing up, once it does, you'll see it there
<marcoc|airplane> bolthole: next month we'll have instant publishing! which doesn't help you now...
<bolthole> oh. so regardless of what "juju charm search" says.. it isn't REALLY available, until it shows up in that special url?
<bolthole> that's kind of odd though, because I branched and pushed the thing hours and hours ago.
<marcoc|airplane> bolthole: yeah, charm search searches launchpad, which is instant, but it takes a few hours to go from launchpad to charm store
<bolthole> Is there a magic incantation I can use to deploy direct from launchpad?
<marcoc|airplane> bolthole: not really, as you can see this is one of the many reasons we're moving to the much more concise and quick publish tool next month
<bolthole> mkaaay.... sooo.. how many hours should it take? :-/
<kwmonroe> bolthole: juju deploy cs:~bolthole/trusty/tomcat should work for you
<marcoc|airplane> bolthole: 1-6 :\
<kwmonroe> it's listed in the store at the url marcoc|airplane referenced
<marcoc|airplane> bolthole: oh look! it just showed up
 * marcoc|airplane wipes brow
<kwmonroe> cory_fu: if i have a @hook('start')\n def myfunc(), can i call myfunc from outside the start hook?
<cory_fu> Absolutely
 * kwmonroe wipes brow
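For the curious, this works because a hook decorator typically just registers the function and returns it, so the name still refers to an ordinary callable. A minimal sketch of the pattern (hypothetical registry, not the actual charms.reactive implementation):

```python
# Minimal sketch of a hook-registration decorator: the decorated
# function is recorded in a registry but remains directly callable.
HOOKS = {}

def hook(name):
    def register(func):
        HOOKS[name] = func
        return func  # returned unchanged, so plain calls still work
    return register

@hook('start')
def myfunc():
    return "started"

# Dispatch path (what the framework does when the 'start' hook fires):
#   HOOKS['start']()
# Direct path (calling it from any other code):
#   myfunc()
```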
<adam_g> machine-0: 2016/01/28 11:47:23 http: TLS handshake error from 192.168.122.115:53369: remote error: bad certificate
<adam_g> machine-0: 2016-01-28 19:47:23 ERROR juju.provisioner provisioner_task.go:655 cannot start instance for machine "2": kvm container creation failed: exit status 1
<adam_g> any debug tips for this?
<adam_g> my local env magically broke overnight
<aisrael> cory_fu: lazypower|travel: The devel docs link to https://github.com/juju-solutions/layer-basic (from https://jujucharms.com/docs/devel/developer-layers) but that repo doesn't exist
<cory_fu> Does anyone have an objection to renaming the repo from reactive-base-layer to layer-basic?  That's more in line with the convention we've been using.
<cory_fu> Looks like github does the reasonable thing with repo renames, and I'll update interfaces.juju.solutions, so it won't affect anyone.  I'm gonna do it.
<aisrael> thanks, cory_fu!
<bolthole> marcoc|airplane ha ha, strategic lunch break for the win! :)
<bolthole> i'm writing a subordinate charm. it requires a value to be set by relation-set by the master. But it seems (according to juju debug-log) that its install and config hooks are being called AFTER the primary's relation-joined hooks. What am i supposed to do about that?
<adam_g>   "1":
<adam_g>     agent-state: error
<adam_g>     agent-state-info: 'kvm container creation failed: exit status 1'
<adam_g>     instance-id: pending
<adam_g>     series: trusty
<adam_g> where do i debug this? local provider /w kvm
<bolthole> my connection with juju debug-log keeps breaking if I leave it up. with errors like
<bolthole> ERROR: juju.apiserver debuglog.go:102 .... write tcp ... connection timed out
<bolthole> last time I did a restart of the whole local juju service (juju-agent-$USER-local.service). Is there a shorter way to recover the log capabilities?
<bolthole> okay... anyone awake still? :)   I just did another round of tests for my subordinate class and
<bolthole> primary class... and it would seem that the primary "relation-joined" hook comes after even the SUBORDINATE relation-joined.
<bolthole> Bug, or feature?
<blahdeblah> bolthole: I'm not an expert in this, but I'm pretty sure there is no guarantee on which order hooks will execute in, especially across units.
<bolthole> generally speaking i can understand that. but in the specific case of a subordinate container, I would think having deterministic behaviour is pretty crucial
#juju 2016-01-29
<kwmonroe> bolthole: blahdeblah is correct, you can't rely on order of any hook execution.. so the hooks must handle the potential for nothing being returned by relation-get
<blahdeblah> thanks for confirming, kwmonroe
<kwmonroe> in your case, it would probably be easiest to wire in your webapp using webapp-container-relation-changed
<kwmonroe> you are guaranteed that relation-changed *will* fire anytime data on the relation changes
<kwmonroe> (vs relation-joined)
<kwmonroe> so, a webapp-container-relation-changed hook for you might look like "FOO=$(relation-get webapp-path); if [ -n "$FOO" ]; then wget the webapp and move it to $FOO; else do nothing and wait for the next time relation-changed runs; fi"
<kwmonroe> eventually, relation-get webapp-path will return data (once tomcat sends it), and your -changed hook will get it.
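Spelled out as a full hook script, that idea might look like the following. The relation key webapp-path is taken from the discussion above; the deploy step and messages are illustrative assumptions:

```shell
#!/bin/bash
# Sketch of a webapp-container-relation-changed hook. It must tolerate
# relation-get returning nothing, since hook ordering is not guaranteed.
set -eu

deploy_webapp() {
    local path="$1"
    if [ -z "$path" ]; then
        # Nothing on the relation yet: exit cleanly and wait -- juju will
        # run relation-changed again whenever the remote side sets data.
        echo "webapp-path not set yet; waiting for next relation-changed"
        return 0
    fi
    echo "deploying webapp into $path"
    # wget the webapp and move it into "$path" here
}

# In the real hook, relation-get supplies the value:
#   deploy_webapp "$(relation-get webapp-path)"
```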
<bolthole> thanks. I kinda empirically found out that gets fired when I want it to. Nice to know that it's guaranteed that way.
<bolthole> now that I think about it some more though...
<bolthole> I think that mbruzek is approaching the information exchange the wrong way.
<bolthole> he is using the relationship to share the tomcat webapp directory
<bolthole> but... when and if juju ever gets to the point where subordinate services can be removed....
<bolthole> I'm thinking that the relationship may be severed, before the service gets told to 'stop' itself.
<bolthole> But at that point, it has just lost the information on where to clean itself up, because that information was in the relationship, which doesn't exist
<kwmonroe> you can remove subordinate services today in juju.  relation-departed will fire, which is where you should remove any data that you don't want left on the principal unit.
<bolthole> huh. so relation-departed is inappropriately named? it should be called "relation-deparTING"?  ;-)
<kwmonroe> meh, i guess.  -departed means the relationship is tearing down but you can still access relation data (like webapp-path).  -broken means the relation is gone.
<bolthole> WOOO! I got it to work!
<kwmonroe> nice bolthole!
<bolthole> kwmonroe could you please approve my one-char-syntax-error fix to the tomcat charm?
<bolthole> i submitted it this morning. it was ignored, and someone else worked on the charm so there were conflicting lines.
<bolthole> i just redid it for no conflicts. please push it through before more conflicts happen?
<bolthole> the webapp-path value is completely unusable without the fix.
<kwmonroe> sure bolthole - let me take a look
<kwmonroe> bolthole: i only see this from 18 hours ago.  did you see another commit to trusty/tomcat that made this merge conflict?
<kwmonroe> https://code.launchpad.net/~bolthole/charms/trusty/tomcat/trunk/+merge/284213
<bolthole> yeah just 60 seconds before i wrote that :)
<bolthole> https://code.launchpad.net/~bolthole/charms/precise/tomcat/trunk/+merge/284373
<bolthole> oh wait, it's precise
<bolthole> not trusty. yet.
<bolthole> so i guess there's no conflict for the trusty one, and you could just approve that one too ;)
<bolthole> Thanks
<kwmonroe> bolthole: you talking to me?  if so, yw :)
<bolthole> :)   So.. how does the charm version in the store get derived from the backend code base?
<bolthole> one is at 15, the other is at 3 or something?
<kwmonroe> yeah bolthole - there's no correlation.  at least not one that i can figure out.
<kwmonroe> i mean, i guess you could have 10 commits to LP, push to store making charm rev 1, then 4 more commits to LP, push to make charm rev 2, then 1 more LP commit, push to make charm rev3.
<kwmonroe> but i don't know how you could look at an LP commit number and figure out the charm revno.
<bolthole> okay, so it isnt automatic; you have to make some kind of explicit push each time
<kwmonroe> that's correct
<kwmonroe> well, no, that's not correct.  i don't know how it works.  the numbers just go up ;)
<bolthole> dohhh
<kwmonroe> oh oh oh -- maybe it is how it works.. each commit bumps the LP revision, but only a push triggers the ingestion into the store.
<kwmonroe> and now i have thought too much about this.  thanks.
<icey> does juju on openstack support the storage hooks?
<jamespage> gnuoy, quick review pls - https://code.launchpad.net/~james-page/charm-helpers/lp1537155/+merge/284411
<bloodearnest> marcoc|airplane, the subordinate thing is a red herring, I would just like to be able to discover the units running on this machine, in the same way I can find out the hostname of the machine
<bloodearnest> to avoid having to explicitly configure that info in every charm
<bloodearnest> the context here is *not* charm code. It's a library that is used by our apps to standardise/improve their log output
<bloodearnest> we currently tag each log line with the hostname of the machine, which is easy to discover. But the hostname is kinda useless in a juju world, we are more interested in the juju unit that the app runs under
<bloodearnest> so I was wondering if there's a way to discover the juju units running automatically
<bloodearnest> or else, every charm for each service that uses this library (10+) would need to be manually updated to set an env var or similar with the unit name
<deanman> Hi, I'm trying to learn more about Juju by using the official vagrant boxes and they fail to properly boot up both on windows and MacOSX. I've tried trusty64, trusty32 and wily64. All of them are stuck at "Installing package: cloud-utils"
<stokachu> kadams54: with theblues library how can i pull a specific revision? for example, bundle = cs.bundle('data-analytics-with-sql-like/5')
<stokachu> specifically for a bundle
<rick_h_> stokachu: yes, https://api.jujucharms.com/charmstore/v4/openstack-base-34/archive/bundle.yaml vs https://api.jujucharms.com/charmstore/v4/openstack-base-39/archive/bundle.yaml
<rick_h_> stokachu: the thing is that a new revision might be due to a readme update or the like
<rick_h_> stokachu: so not all revisions will be changes to the bundle.yaml file itself
<stokachu> ah ok
<rick_h_> I was confused because for the openstack 38 and 39 revisions the files were the same
<rick_h_> but the readme was different
<stokachu> so i can generate the archive url with cs.archive_url('data-analytics-with-sql-like-4') but that doesn't contact the charmstore to validate it
<rick_h_> stokachu: not sure on the library itself
<stokachu> i guess i could just generate the api url and check that a 200 is returned
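A sketch of that approach, building the v4 archive URL by hand. The URL shape comes from rick_h_'s openstack-base examples above; the curl existence check is shown commented out since it needs network access:

```shell
#!/bin/bash
# Build the charmstore v4 archive URL for a given entity id and revision.
# URL shape matches the openstack-base examples pasted earlier.
archive_url() {
    local id="$1" rev="$2"
    echo "https://api.jujucharms.com/charmstore/v4/${id}-${rev}/archive/bundle.yaml"
}

# Existence check (expects HTTP 200; needs network, so shown unexecuted):
#   curl -sf -o /dev/null "$(archive_url data-analytics-with-sql-like 5)" \
#       && echo "revision exists"
```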
<rick_h_> stokachu: yea, have to bug jcsackett or some other folks that work on that atm
<stokachu> rick_h_: ok cool, will do thanks
<tych0> marcoc|airplane: https://github.com/lxc/lxd/issues/1477
<stokachu> how do you force a destroy-controller with juju 2.0?
<stokachu> i interrupted a lxd bootstrap and now juju is unable to cleanup after itself
<tych0> marcoceppi: er, https://github.com/juju/juju/pull/4191
<tych0> i am bad at copying and pasting
<marcoceppi> tych0: aren't we all
<cmagina> tvansteenburgh: hey, did you get a chance to reproduce that bundletester issue i've been hitting?
<tvansteenburgh> cmagina: i was unable to repro. would you mind filing a bug here https://github.com/juju-solutions/bundletester/issues
<cmagina> tvansteenburgh: will do, thanks for trying
<tvansteenburgh> please include traceback
<tvansteenburgh> i will have another look soon
<adam_g> does juju / local provider log output of uvtool errors?
<lazypower|travel> adam_g I don't believe so
<lazypower|travel> But i'm not 100% on that.
<bolthole> scalability question: what if we have a 'cloud' where we anticipate 200-300 services? how can we use juju for that and keep it manageable?
<rick_h_> bolthole: we'd suggest a controller with multiple models in HA mode
<rick_h_> bolthole: the controller multiple model work is in the 2.0 alpha work
<bolthole> rick_h_ sounds like you are describing more scalability. however, I am actually trying to focus on manageability
<rick_h_> bolthole: i'm talking about moving those 200 services into 20 more tightly scoped models
<rick_h_> so you're managing things in smaller groups
<bolthole> ah, good
<bolthole> demo or screenshots?
<rick_h_> bolthole: https://jujucharms.com/docs/devel/wip-systems
 * rick_h_ looks for youtube talk
<rick_h_> https://youtu.be/-1aVgnJIwLk
<rick_h_> bolthole: shows the old gui visualizing it
<rick_h_> bolthole: don't have one with the new gui atm
<bolthole> ah
<bolthole> thanks for the video
<bolthole> can't use sound at the moment.... is "model" the thing to the right of juju/admin?
<bolthole> so,
<bolthole> juju/admin/openstack
<bolthole> "openstack" is a model view?
<rick_h_> bolthole: yea the model name is something you give when you create it
<rick_h_> you'll see different names as the video goes on
<bolthole> "it"
<bolthole> you said that was with the "old" gui... but I dont see that option with the trusty/juju-gui screen?
<rick_h_> bolthole: it was a demo
<rick_h_> bolthole: today you use the jes feature flag in that doc link
<rick_h_> bolthole: and use the gui 2.0 deployed into the admin model
<bolthole> oh. um.. waitaminute though...
<bolthole> the feature flag you are referring to, is   for "multiple environments"?
<rick_h_> bolthole: yes
<bolthole> models == environments? or something else?
<rick_h_> bolthole: where we're doing s/environments/models in 2.0
<bolthole> ugh. naming overload:(
<bolthole> but.. I don't want to set up a separate juju .. controller(?) for every model we have
<bolthole> that seems rather wasteful
<rick_h_> no one controller many models
<bolthole> huh
<rick_h_> once you're bootstrapped you can create more and more models
<rick_h_> try it out in that doc link
<bolthole> so what's the new name for what is now called environments? :-D that is to say "my azure environment" vs "my amazon environment" ?
<rick_h_> my azure controller
<rick_h_> and in 2.0 you give them names and can have multiple azure controllers
<rick_h_> so cloud-controllers-models
<bolthole> ah, k.  thanks.  Rough ETA for release of this thing?
<rick_h_> 16.04
<rick_h_> april release
<bolthole> thanks
<rick_h_> i'm showing some of it at the summit next wed
<rick_h_> the charmers summit
#juju 2016-01-30
<yuanyou> hi all, when i deploy openstack with juju, some units are always pending? how can I resolve it?
<yuanyou>          active  false   local:trusty/ceph-105
<yuanyou> ceph-osd              blocked false   cs:trusty/ceph-osd-14
<yuanyou> ceph-radosgw          blocked false   cs:trusty/ceph-radosgw-19
<yuanyou> cinder                unknown false   local:trusty/cinder-136
<yuanyou> cinder-ceph                   false   local:trusty/cinder-ceph-2
<yuanyou> glance                unknown false   local:trusty/glance-150
<yuanyou> heat                  unknown false   local:trusty/heat-12
<yuanyou> juju-gui              unknown false   cs:trusty/juju-gui-45
<yuanyou> keystone              unknown false   local:trusty/keystone-0
<yuanyou> mongodb               unknown false   cs:trusty/mongodb-33
<yuanyou> mysql                 unknown false   local:trusty/percona-cluster-45
<yuanyou> neutron-api           unknown false   local:trusty/neutron-api-1
<yuanyou> neutron-api-onos              false   local:trusty/neutron-api-onos-0
<yuanyou> neutron-gateway       blocked false   local:trusty/neutron-gateway-64
<yuanyou> nodes-api             unknown false   cs:trusty/ubuntu-5
<yuanyou> nodes-compute         unknown false   cs:trusty/ubuntu-5
<yuanyou> nova-cloud-controller unknown false   local:trusty/nova-cloud-controller-501
<yuanyou> nova-compute          blocked false   local:trusty/nova-compute-133
<yuanyou> ntp                           false   cs:trusty/ntp-14
<yuanyou> onos-controller       unknown false   local:trusty/onos-controller-0
<yuanyou> openstack-dashboard   unknown false   local:trusty/openstack-dashboard-32
<yuanyou> openvswitch-onos              false   local:trusty/openvswitch-onos-0
<yuanyou> opnfv-promise         unknown false   local:trusty/opnfv-promise-2016011201
<yuanyou> rabbitmq-server       unknown false   local:trusty/rabbitmq-server-150
<marcoceppi> yuanyou: please use a pastebin in the future, also this is the services, we'd need to see the units output as well. I suggest you install pastebinit (sudo apt-get install pastebinit) then run `juju status --format tabular | pastebinit`
<nagyz> hey
<nagyz> trying to use juju + maas together; managed to bootstrap but after adding a new charm the next maas machine didn't come up from a juju perspective
<nagyz> I can ssh to the machine and I see that cloud-init properly ran, however, the juju agent isn't running
<nagyz> how could I debug further?
<marcoceppi> nagyz: what does cloud-init-output.log have in /var/log ?
<nagyz> funny thing is, after destroying it and re-deploying it, it worked.
<nagyz> so I'll need to recreate to dump the cloud-init-output.log
<nagyz> let me boot up a new one and see if I can get a dump.
<nagyz> ok, managed to reproduce
<nagyz> marcoceppi, http://pastebin.com/jDCcn8vP
<firl> anyone know of a murano charm / a guide to do the install with a juju openstack
<arosales> firl there is no murano charm that I know of put the juju charm store does have a pretty good catalog that you can deploy onto openstack
<arosales> ref = smile.amazon.com
<lazypower|travel> s/put/but
<arosales> I mean https://jujucharms.com/store
<lazypower|travel> arosales - look at you supporting charity :D
<arosales> :-)
<firl> arosales: yeah, I am very pleased with it
<lazypower|travel> firl - is there a specific application you were looking for thats in murano that we dont yet have in the store?
<firl> lazypower|travel: some of the openstack summit videos have murano/heat walkthroughs. I was just hoping to use it. currently right now I am trying to figure out how I want to solve kubernetes/coreos implementation
<lazypower|travel> firl - funny you mention that :) we have kubernetes charms
<firl> I don't care about autoscaling right now, but it would be nice
<firl> yeah I know the charm is there, but I liked the idea of having coreos with the OS updates for security
<firl> I was trying to figure out which bundle to use for now to get there
<lazypower|travel> and our latest revisions (not in the store proper just yet) support in place upgrades
<firl> gotcha
<lazypower|travel> if you're interested, i can get you a bundle in the next few minutes
<firl> that'd be nice
<lazypower|travel> we'll have to wait a 20 minute cycle for the charms to ingest, but i can get you up and running on k8s
<firl> the last thing I found is:
<lazypower|travel> ack, let me ping my main man mbarnett
<firl> https://insights.ubuntu.com/2015/07/30/juju-kubernetes-the-power-of-components/
<lazypower|travel> er mbruzek
<lazypower|travel> hey neat, thats my article
<firl> ( this is all to solve the fact that the meteor charm is outdated )
<mbruzek> what where?
<lazypower|travel> firl you're in good company ;)
<firl> :) yeah hah, I remember meeting you in september for the summit
<lazypower|travel> Nice!
<firl> but yeah a recommended bundle for getting it up and going would be awesome
<firl> does juju have autoscaling implemented yet ( I know ceilometer / heat can )
<lazypower|travel> well, juju itself doesn't implement that, as autoscaling is subject to business intelligence / different requirements
<lazypower|travel> we've got some implementations with zabbix
<firl> gotcha, yeah I was considering just writing some hooks with ceilometer/heat to call juju add-unit essentially
<lazypower|travel> firl - ok, i'm running a quick test, we did some mods to the etcd interface last week that i need to ensure hasn't broken the charm
<firl> hah ok
<lazypower|travel> I'd rather be honest than give you broken software :)
<firl> i appreciate it
<nagyz> while so many devs are around, any idea based on my pastebin why my deployment doesn't pick up juju after the node is started by maas? :-)
<firl> nagyz: have you confirmed that routing / networking is all set up properly?
<firl> all of your interfaces seem to be down, and the datasource ip is not on a private subnet
<firl> when deploying from maas you can physically be on the host and log in via ubuntu:ubuntu I believe to diagnose it
<nagyz> firl, yeah if I destroy the machine and start a new one it has a 50% chance to come up properly
<nagyz> and maas actually tells me the machine is deployed.
<nagyz> so it can do the callback to the maas server
<firl> is it 50% chance on the same machine, or 50% chance in general across multiple nodes?
<nagyz> on the same machine
<nagyz> 9.4.113.0/24 is actually our maas network so that is correct
<firl> so this only happens to a specific machine, and it works on all the other nodes no issues?
<nagyz> no, on any node if I start it via juju (and only then) then I have this issue
<firl> gotcha
<nagyz> the machine (from a maas perspective) can be deployed 100%
<nagyz> and as you can see it can do an apt-get update and fetch packages so the network bonding works
<nagyz> I suspect when juju is changing the networking around to be under a bridge instead of bond0 then something goes wrong
<nagyz> I also have a capture from a run when it actually came up properly if that helps?
<firl> it might
<nagyz> let me paste it
<firl> yeah I wonder if it is because of the networking setup ( what you have chosen for the networking  in maas might be colliding with juju )
<arosales> mbruzek: http://hardening.io/
<arosales> mbruzek: interesting process on security hardening.
<nagyz> firl, basically I have two 10g interfaces and I'd like to bond them together (which is done by maas)
<nagyz> firl, while actually PXE booting from a 3rd, 1g interface
<nagyz> and have a 4th 1g interface that I'm not using
<firl> gotcha, and if you have the 1g interface, or don't do bonding for this server does it work fine?
<nagyz> no idea. all our servers are bonded. :)
<nagyz> haven't tried
<nagyz> here's a good run: http://pastebin.com/5mgiVVPe
<firl> and you are using maas 1.9?
<nagyz> yep
<firl> or 1.8
<firl> k
<nagyz> 1.9
<nagyz> (which actually has a horrible list of bugs but that's a topic for the very dead #maas channel...)
<firl> haha
<firl> yeah, I haven't found much help from maas vs a lot of help from the juju guys
<firl> I am curious
<firl> the known good paste you have
<firl> it is able to verify the ssl cert bundle
<firl> the  bad one isn't
<lazypower|travel> firl - yeah, i've got s'more work to do in here. the refactoring i did broke flannel networking
<lazypower|travel> firl - question for you as a consumer. Would you prefer it with or without sdn bundled?
<firl> I don't know to be honest
<firl> I am using docker UCP at work
<firl> this is for a home project
<lazypower|travel> firl - ack. Are you perchance coming to the Charmer Summit on Monday?
<lazypower|travel> I can reasonably have a fix in place by then :)
<firl> I am not haha
<firl> I can use an older version that is stable though if there is a good bundle to use
<lazypower|travel> sure, let me get you that
<lazypower|travel> mbruzek - can you paste the bundle we built for scale?
<firl> kk thanks man
<mbruzek> http://paste.ubuntu.com/14765250/
<lazypower|travel> firl you can get the layer from http://github.com/mbruzek/layer-k8s
<mbruzek> right, you can charm build that one
<lazypower|travel> you'll need to build the layer in your $JUJU_REPOSITORY, then you should be g2g with that bundle
<mbruzek> to create the k8s charm.
<firl> oh do a local deploy you mean
<firl> from the git repo
<lazypower|travel> right, the bundle references a local charm
<lazypower|travel> and the repository linked is just a layer, you'll need to `charm build`
<firl> hrmm ok, i've deployed locally, haven't done a charm build before
<mbruzek> firl: need to install charm tools
<nagyz> firl, so what could cause it not to be able to verify it?
<nagyz> firl, just started using juju, honestly. :)
<firl> nagyz: the bundle verifies it, both places
<firl> but the known bad is actually having to reach out 2x to the same ip on line 25/26
<lazypower|travel> firl - i'm off or now, headed out to get waffles. If you need anything, dont hesitate to ping and i'll get back to you when i return to the hotel
<lazypower|travel> good luck and cheers o/
<firl> sounds good man thanks again
<firl> nagyz: ok looks like it's just a networking issue
<firl> look at lines  2336 on the known good boot
<firl> and compare that section to 2336 on the known bad
<firl> you will see that the known good is able to wait for the bonding to come up
<firl> my suggestion would be to look into some of the properties of your LACP or link aggregation setup with the switch you are using to do the bonding with the devices
<firl> ( i could be totally off too )
<nagyz> firl, the network is set to do active LACP, so the client needs to send LACP packets actually
<nagyz> but it's set to portfast, so should be fine (although I admit I have no idea how portfast+LACP work together)
<firl> yeah nor do i
<firl> I haven't done bonding via maas yet ( hope to this year )
<firl> only done it via manual config / cisco
<firl> why are you bonding if you don't mind me asking
<firl> depending on what you deploy you could just split the net traffic into 2 ( 10 g ) networks ( ceph on one vs neutron on another for example if you were using openstack )
<nagyz> we can't have enough network bandwidth :_)
<firl> hah
<nagyz> this ceph cluster is ~2PB, and even with bonding on the storage nodes I only have 360Gbit on the storage side
<firl> yeah I am waiting for mellanox support
<nagyz> vs 1.92Tbit on the compute side
<nagyz> if I don't bond, that 360 becomes 180...
<firl> ya
<firl> I hear ya
<nagyz> plus redundancy
<nagyz> I should have really got dual 40gbit instead of dual 10
<nagyz> might put in an other dual 10g card next week
<firl> yeah
<firl> you will have more support with those than mellanox
<firl> yeah I have a 54g setup but I can't use it with charms because of the maas / node configuration
<nagyz> use ethernet :p
<nagyz> ib is dead :p
<nagyz> (isn't it 56gbit, btw? or 52...?)
<firl> nagyz you might be right with the rate
<firl> mbruzek , lazypower|travel : http://pastebin.com/d44UjP9g
<firl> looks like there are some issues with it still; I assume this is what you were talking about with the networking
#juju 2016-01-31
<lazypower|travel> firl yeah :(
<firl> lazypower|travel: gotcha, is there a benefit to doing it this way over having k8s bootstrap the env itself?
<lazypower|travel> The flannel networking is a bolt on for cross host communication. The charm itself does deliver a k8s bootstrapped cluster
<lazypower|travel> we're leveraging hyperkube to standup the cluster, so we're closer in alignment with their reference arch
<lazypower|travel> firl - unless i completely missed the questions intent....
<firl> http://kubernetes.io/v1.1/docs/getting-started-guides/juju.html
<lazypower|travel> which is possible, its 5am
<firl> hah yeah gotta love time zones
<lazypower|travel> firl - ah yeah, dont trust that document :) thats in reference to our older charm work where we were delivering k8's entirely via CM
<firl> hah kk
<lazypower|travel> it was klunky to say the least, but it was an important evolutionary step in our understanding of their stack
<lazypower|travel> thats actually a prime topic in my FOSDEM track today
<firl> gotcha
<firl> too bad I couldn't come haha
<firl> So I should wait until monday? will it get pushed to the charm store
<lazypower|travel> it'll be recorded, and since we're buds, i'll do a special after-session for ya if you're interested :)
<firl> or just fixed so I can deploy the bundle with charm build layer
<firl> It'd be cool to get a better understanding
<firl> I am going through bulletproofmeteor and they are recommending kubernetes
<lazypower|travel> yeah, what I'll do is after FOSDEM lets out and I get settled in Gent i'll dive back into fixing that silly bug.
<firl> and since the meteor charm is broke, I figured why not
<lazypower|travel> i can ping you here or via email once i've got a fix proposed, or you can star the layer-k8s repo
<lazypower|travel> as I gate all that work through mbruzek
<firl> celpa.firl@gmail.com
<firl> ( I will star the github also )
<lazypower|travel> awesome. I'll reach out when i've got the fix in place for the flannel containers :)
<firl> I have a build node all set aside for it
<lazypower|travel> just for good measure
<firl> I can at least smoke test it for ya
<firl> since I replicated it
<lazypower|travel> <3 really appreciate that
<firl> so I should just do a single docker node for now
<lazypower|travel> sure, actually if you remove 'layer:k8s' from the layer.yaml
<lazypower|travel> rebuild and deploy with a single k8s node + single etcd service
<lazypower|travel> you should be in like flynn
<lazypower|travel> sorry, remove 'layer:flannel'
<lazypower|travel> and you can even scale that, you just wont get distributed pods working. the scheduler is intelligent enough to know this in my testing
<firl> lol gotcha I was like ... yeah it's not there
<firl> ok
<firl> I will do that for now and see how it goes
<firl> then once it's ready I can add the overlay networking
<lazypower|travel> yep!
<lazypower|travel> in-place upgrade should take care of that missing component
<firl> any DTR charms you recommend
<lazypower|travel> whats DTR stand for again?
<firl> docker trusted registry
<lazypower|travel> oh!
<firl> ( to use outside of kuber )
<firl> for it to reference
<lazypower|travel> I started on, but haven't finished it yet, we spiked on TLS support instead of the registry
<firl> gotcha
<lazypower|travel> with that being said, I can see having a docker distribution charm in the next 30 days or so
<lazypower|travel> if you're feeling spunky, i can get you early access there as well
<firl> yeah, I donât mind testing/contributing
<firl> all of my charms currently are private-only repos
<firl> I also havenât written anything for 1.25.3
<firl> so status/update/blocking is still a foreign concept
<lazypower|travel> well, 2.0-alpha1 is coming really soon
<firl> ( for dev that is )
<lazypower|travel> so be prepared for nomenclature changes, and even more awesome
<firl> haha
<firl> yeah running trusty-liberty now
<lazypower|travel> as in single controller - many models (thats right, one state server for many environments)
<firl> nice
<lazypower|travel> and thats just the tip of the iceberg :D
<firl> ya, i'm a fan of it
<firl> have fun at the FOSDEM, I will try to deploy the bundle without flannel now
<lazypower|travel> firl - thanks :) feel free to reach out if you encounter any further issues. I'm happy to lend a hand. And I really appreciate the testing/patience while we iron out our next major release of the charms
<firl> hah yeah, it's been a bumpy ride. It's nice though to have configuration around the environments in a private cloud
<nagyz> firl, so there was a maas 1.9 bonding + juju bug but it should be fixed in .3
<bdx_ghent> yoyoyo
<bdx_ghent> ghent-charmers: what hotel are we in?
<icey> is it possible to set a charm config from within an action?
<firl> lazypower|travel: if you messaged me I missed it
<firl> I saw a notification but the scroll history buffer is not big enough to show me what it was
<lazypower|travel> firl - nope but here's a repost - nagyz firl, so there was a maas 1.9 bonding + juju bug but it should be fixed in .3
<firl> hah, k
<firl> lazypower|travel: I am still on for tomorrow for setting up the cluster let me know when you want me to test it
<firl> If I can get it, will deploy over maas and openstack
<lazypower|travel> firl - will do. I'm still preoccupied with prep for the charmer summit, but for sure I'll be hacking on that regression
<lazypower|travel> looking like it's going to be first up tomorrow morning unless i get completely shanghaied
<firl> no worries, I can do tuesday too if that's easier. no worries on my end
<lazypower|travel> Ok, lets shoot for that so i have some breathing room :)
 * lazypower|travel adjusts the todo due-date
<firl> haha k
<firl> just realized I was on the http://summit.juju.solutions/ website lol
<lazypower|travel> :) Yep!
 * lazypower|travel chants "one of us, one of us"
<firl> haha
<nagyz> lazypower|travel, yeah but it's not :(
<nagyz> lazypower|travel, I've been trying with .3 yesterday as well
<firl> nagyz at least you have something to track though
<nagyz> should I reopen the bug?
<lazypower|travel> nagyz - link to the bug?
<nagyz> I think this is the one that firl mentioned: https://bugs.launchpad.net/juju-core/+bug/1516891
<mup> Bug #1516891: juju 1.25 misconfigures juju-br0 when using MAAS 1.9 bonded interface <juju-core:Fix Released by frobware> <juju-core 1.25:Fix Released by frobware> <MAAS:Invalid> <https://launchpad.net/bugs/1516891>
<nagyz> but my bug is that SOMETIMES it has some weird error.
<nagyz> 50/50 :)
<lazypower|summit> :-/ hrm
<lazypower|summit> yeah, if you have logs/data to back up the hinky behavior, it'll go a long way to re-open the bug with logs
<nagyz> I've posted the pastebin yesterday - do you have logs or shall I relink them?
<lazypower|summit> nagyz - same output contained in comment #19? https://bugs.launchpad.net/juju-core/+bug/1516891/comments/19
<mup> Bug #1516891: juju 1.25 misconfigures juju-br0 when using MAAS 1.9 bonded interface <juju-core:Fix Released by frobware> <juju-core 1.25:Fix Released by frobware> <MAAS:Invalid> <https://launchpad.net/bugs/1516891>
<nagyz> I don't run VLANs
<lazypower|summit> ok, yeah - i would most def attach the logs and at bare minimum poke the bug
<nagyz> the thing is it's not that it misconfigures the interface
<nagyz> it's just.. something. I have no idea what. I can ssh to the node and whatnot.
#juju 2017-01-23
<kjackal> Good morning Juju world!
<surf> hii all, I am bootstrapping juju on openstack cloud, how much time will it take to complete bootstrap
<surf> i am suspecting it is stuck at "Running apt-get upgrade"
<surf> please someone help
<sparkiegeek> surf: how long is a piece of string? It's impossible to say how long it will take, but if you're having problems you should paste debug logs from bootstrap (e.g. --debug)
<sparkiegeek> surf: also, look at https://jujucharms.com/docs/2.0/models-config#apt-updates-and-upgrades-with-faster-machine-provisioning to turn off the apt step
<surf> sparkiegeek: here is my bootstrap command and output http://paste.openstack.org/show/596012/
<sparkiegeek> surf: is your OpenStack otherwise healthy? i.e. can you start an instance, assign a floating IP and SSH to that floating IP?
<surf> sparkiegeek: yeah my openstack instance is in active state
<sparkiegeek> surf: ah somehow I missed the last couple of lines - so it successfully reached the instance in the end
<surf> yes
<sparkiegeek> surf: so yeah - I think if you've got up to date images in Glance, then you can safely disable the apt update/upgrade steps
<surf> sparkiegeek: Tell me the way to do this
<sparkiegeek> surf: use the --config option of juju bootstrap and set enable-os-refresh-update and enable-os-upgrade to false
<aisrael> Does any documentation for configuring Juju 2 under centos7 exist? The only page in the docs I can find is for 1.24
<surf> sparkiegeek: could you provide a sample command
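The sample command surf asks for never appears in the log; a sketch along the lines of sparkiegeek's answer would be the following, where the cloud and controller names are placeholders and the exact flag syntax should be checked against `juju bootstrap --help` on your juju 2.x version:

```shell
juju bootstrap myopenstack mycontroller \
    --config enable-os-refresh-update=false \
    --config enable-os-upgrade=false
```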
<lazyPower> bdx - heyo, i know you're not in office yet, but I'll take a look at your Elasticsearch peering code before I leave office today. Thanks for being patient on that request.
<lazyPower> magicaltrout - only a handfull of days left before we go sample the belgium microbrews :D
<bdx> lazyPower: 10-4, thx
<lazyPower> np np
<lazyPower> happy to help where i can
<bdx> lazyPower: is there a kubernetes bundle that includes the monitoring pieces?
<lazyPower> bdx - looking specifically for the beats bits without manually relating?
<lazyPower> ryebot - can you run a quick build and publish to your namespace w/ the monitoring segment? I know we didnt publish that as we haven't settled on naming schema for the addon bundles.
<ryebot> lazyPower: yep, no problem
<ryebot> lazyPower: cdk or core?
<lazyPower> ty ryebot
<lazyPower> bdx ^ preference on which base solution?
<bdx> lazyPower: canonical-kubernetes?
<lazyPower> ryebot ^
<ryebot> ack
<ryebot> lazyPower bdx: https://jujucharms.com/u/ryeterrell/cdk-flannel-elastic/
<ryebot> ^ untested recently
<lazyPower> ryebot - a gentleman and a scholar, ty ty ty
<ryebot> anytime
<bdx> lazyPower, ryebot: thanks!
<CoderEurope> marcoceppi_: You good for Tuesday & getting discourse upto date with a Juju charm ?
<petevg> kwmonroe: if I'm co-locating two services on the same machine, what's the bundle yaml syntax for putting them in lxd containers?
<kwmonroe> petevg: it's "to: blah" to put multiple services on the same machine.. lemme see about lxd directives.
<petevg> kwmonroe: I know about the "to" part. Couldn't find the directive bit in the docs ...
<kwmonroe> petevg: looks like "to: lxd:n"... https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
<petevg> kwmonroe: sweet. Thank you :-)
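Put together, the placement directive kwmonroe found would sit in a bundle roughly like this (application choices are illustrative; key names per the 2.0 bundle docs linked above):

```shell
# Write a bundle that co-locates two services in LXD containers on machine 0.
cat > bundle.yaml <<'EOF'
machines:
  "0":
    series: xenial
services:
  mediawiki:
    charm: cs:mediawiki
    num_units: 1
    to: ["lxd:0"]
  mysql:
    charm: cs:mysql
    num_units: 1
    to: ["lxd:0"]
EOF
# Then: juju deploy ./bundle.yaml
```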
<kwmonroe> np
<cholcombe> do we have any charms out there that manage ca certs?
<kwmonroe> cholcombe: pretty sure lazyPower and mbruzek do that ^^ easyrsa maybe?
<rick_h> cholcombe: and wasn't there some look at vault or something in the openstack team for some of that?
<cholcombe> kwmonroe, thanks
<cholcombe> rick_h, well vault encrypts things at rest
<kwmonroe> cholcombe: https://jujucharms.com/u/containers/easyrsa/6 might be what you want
<lazyPower> cholcombe it uses a single CA, non encrypted at rest, but easyrsa is our go-to for that atm
<lazyPower> its just a bunch of bash wrappers around the openssl cli
<cholcombe> lazyPower, that's exactly what i need.  a single ca
<lazyPower> if you have an existing CA cert, you should be able to side-load that into the charm as well
<lazyPower> consult the readme for specifics
<lazyPower> if anythings funky, ping me or mbruzek  and we'll try to step you through it
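Deploying from the containers namespace is a one-liner; the relation shown here uses a hypothetical consumer application name, and the actual interface is described in the easyrsa readme:

```shell
juju deploy cs:~containers/easyrsa
juju add-relation easyrsa my-tls-consumer  # 'my-tls-consumer' is hypothetical
```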
<cholcombe> lazyPower, much appreciated
<lazyPower> np np
<lazyPower> happy to help where we can
<cholcombe> lazyPower, so is easyrsa by containers what i want?
<lazyPower> cholcombe yep, thats the latest and greatest. i don't think its gotten a review to be promulgated yet
<lazyPower> so for now the namespace is the definitive source
<cholcombe> that's fine
<marcoceppi_> CoderEurope: Yup!
<CoderEurope> marcoceppi_: Cheers - see you (at what time UTC??) tomorrow ...
<bdx> if I am deploying lxd containers via e.g. `juju deploy ubuntu --to lxd:N` is there anyway to have the production considerations for lxd applied to my juju deployed lxd hosts? does the lxd charm do this?
<bdx> should I create a subordinate for this?
<rick_h> "the production consideerations"?
<bdx> rick_h: https://github.com/lxc/lxd/blob/master/doc/production-setup.md
<lazyPower> host var tuning for open file pointers and what not
<lazyPower> yeah, we've talked about this briefly too on the application container side of things
<bdx> rick_h, lazyPower: I have a bundle that deploys 14 lxd to a host ... using the aws provider
<bdx> rick_h, lazyPower: I'm thinking a subordinate that adds the production setup config for lxd might be the answer I need
<lazyPower> bdx - seems like a good work-around until we can tweak/tune that from juju. However i'm not certain how much of that production-setup doc we've encapsulated in the lxd logic of the juju agent.
<bdx> rick_h, lazyPower: I was hoping the lxd charm does this already, but it doesn't look like it
<lazyPower> there's likely some overlap, but hard to say without investigation
<bdx> lazyPower: I don't think any of it has been
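A subordinate like the one bdx proposes would mostly apply the kernel and ulimit tuning from the linked production-setup doc; a rough sketch (values copied from that doc at the time, so check the current upstream doc before relying on them):

```shell
# /etc/security/limits.d/ entries per the lxd production-setup doc:
#   * soft nofile 1048576
#   * hard nofile 1048576
# plus sysctl tuning for many containers per host:
sysctl -w fs.inotify.max_user_instances=1024
sysctl -w fs.inotify.max_user_watches=1048576
sysctl -w vm.max_map_count=262144
```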
<CoderEurope> marcoceppi_: Cheers - see you (at what time UTC??) tomorrow ...
<marcoceppi_> CoderEurope: 3PM EST / 8PM UTC
<CoderEurope> marcoceppi_: gotcha - I shall be there :)
<lazyPower> bdx - ok just finished up with piloting a large PR into the k8s repo. lets context switch to elastic questions and then i'm gonna EOD. Is the elastic-charmer doc updated with the most recent information I should be looking at?
#juju 2017-01-24
<Teranet> ok ruty ruty here how could I check the Juju syslog again on the command line ?
<lazyPower> Teranet juju debug-log
<Teranet> thx
<bdx> lazyPower: yea
<anrah> is there a way to stop agent executing charm upgrade process?
<anrah> I think i managed to create an infinite loop for my charm and now it is just processing the upgrade and I would like to fix that :)
<anrah> Apparently by killing the juju agent process which is handling the charm-upgrade
<kjackal> Good morning Juju world
<stub> huh. Nobody has written a passwordless ssh layer yet. I thought this would be required more often?
<anrah> what do you mean?
<Zic> hi, I'm using canonical-kubernetes bundle charms and I noticed that the default Grafana/InfluxDB packaged in it does not seem to erase graphs from deleted Pods.. do you know where can I do some cleaning?
<Zic> automatic-cleaning, I mean, if that's possible :)
<rick_h> Zic: sounds like an action feature request. I can see the desire to keep some data around for historical context, but definitely also see the desire to clean stuff out.
<anrah> is there a way to react (set state) when metric value is <> something?
<rick_h> Zic: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues please file an issue here
<Zic> rick_h: thanks, I will
<Zic> rick_h: I also notice a strange behaviour, don't know if it's a bug or is it normal: I don't see my new created namespaces in Grafana list of namespace (dashboard: Pods), but if I enter the name of my namespace manually, it works and displayed proper data
<Zic> is it up to me to add namespace in this drop-down list? or it should be auto-completed?
<Zic> for Pods drop-down list, it's auto-filled from what is running in the k8s cluster
<Zic> but the namespace list only contain default and kube-system namespaces :o
<marcoceppi_> Zic: the grafana container is the one run from upstream gcr.io, we don't do much munging for that, that I'm aware of, but most of the k8s team is US based so it's early there
<Zic> ok, if the charms does not manage this area, I will try to poke them at their Slack :) thanks anyway
<rick_h> marcoceppi_: what's the best place to get the kubectl for the bundle?
<rick_h> marcoceppi_: have the core bundle on my maas and wanting to tinker around with it
<marcoceppi_> rick_h: read the readme ;)
<marcoceppi_> rick_h: https://jujucharms.com/kubernetes-core/#yui_3_17_1_2_1485260992490_80
<rick_h> marcoceppi_: oh....I was looking at the readme and it had me get the config, i missed it had me get the binary as well
<rick_h> marcoceppi_: my bad, I'm reading the readme I swear :P
<marcoceppi_> rick_h: yeah, conjure-up makes it nice because it does that automatically, but for manual deploys we don't really have a way to `juju download` or fetch something
<marcoceppi_> rick_h: in the very near future you'll just snap install kubectl
<rick_h> marcoceppi_: yea all good. I thought it was a snap but wasn't sure where/etc
<rick_h> marcoceppi_: ty for the poke on looking harder
<rick_h> bwuhahaha Kubernetes master is running at https://10.0.0.160:6443
<marcoceppi_> \o/
<magicaltrout> hellooooo from bluefin... bit weird sat at a desk in this place
<marcoceppi_> magicaltrout: oi, what are you up to in bluefin?
<magicaltrout> marcoceppi_: went to talk to Tom Calaway about join marketing stuff for Spicule & Juju in 2017
<marcoceppi_> magicaltrout: awesome
<magicaltrout> have a few hours to kill before DC/OS meetup
<magicaltrout> so sitting around taking up space
<marcoceppi_> well, enjoy o/
<magicaltrout> marcoceppi_: can you dig out that big half pull request thing that existed for getting Centos support in Reactive?
<magicaltrout> I want to improve on the DC/OS stuff charms, but I also want them to run on Centos because thats what they support and not my half bodged Ubuntu support
<magicaltrout> i'm doing their office hours in a couple of weeks and it'd be a nice thing to discuss alongside generic Juju support
<lazyPower> Oh yeah, bogdan sent that in didn't he?
<magicaltrout> dunno :)
<lazyPower> i think it withered on the vine, we left some feedback and it wasn't circled back.
<magicaltrout> clearly i could write old school charms, but the hooks make me so sad :)
<lazyPower> but thats from memory so its likely incorrect
<lazyPower> i dont blame you magicaltrout, not one bit
<magicaltrout> no its likely correct, marco showed me the PR ages ago and I said I'd take a look
<magicaltrout> then didn't
<magicaltrout> but i didn't really have a use case 6 months ago
<marcoceppi_> magicaltrout: it landed a while ago
<magicaltrout> landed as in merged or landed as in got pushed as a PR a while ago and is now very stale? marcoceppi_
<magicaltrout> i suspect the latter, which is fine, I was just going to take the spirit of the commit and figure out what needs changing, refactoring and implementing
<marcoceppi_> magicaltrout: it's landed and released
<marcoceppi_> magicaltrout: we don't have centos base layers, or anything there, but charm-helpers is present with basic centos support: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/files/head:/charmhelpers/core/host_factory/
<lazyPower> oh nice
<magicaltrout> interesting!
<lazyPower> magicaltrout - see? I told ya my brain was probably incorrect
 * lazyPower mutters something about scumbag brain
<magicaltrout> alright then
<magicaltrout> so i can take a stab at creating a centos base layer
<magicaltrout> that would make the Mesos guys and the NASA guys happy
 * magicaltrout greps around in the code to see whats going on
<marcoceppi_> magicaltrout: yeah, I think we need to rename basic to ubuntu, tbh. Having a centos layer adds complexity, cory_fu and I chatted breifly about having the notion of a special "base" layer where layer:ubuntu and layer:centos wouldn't be compatible
<marcoceppi_> magicaltrout: then again, snaps will basically save us from ever needing to worry about distros again
<magicaltrout> true that
<magicaltrout> looking forward to snap integration
<magicaltrout> i've cloned layer-basic locally, i'll create a layer-basic-centos to prevent clases
<magicaltrout> clashes
<marcoceppi_> we have a snap layer that stub wrote, I know ryebot and Cynerva have taken a run at it a few times for kubernetes stuff
<magicaltrout> i'll have to check that out when i get time... i'd still get the "does it run on centos" question though ;)
<magicaltrout> SA's are a picky bunch... why can't they just get on with it?
<magicaltrout> I don't care if it doesn't fit their nagios templates ;)
<magicaltrout> marcoceppi_: to save me digging all over the place
<magicaltrout> do you know where you define stuff that ubuntu needs to install when creating a new machine to run the charms, python3 etc
<magicaltrout> bootstrap_charm_deps
<magicaltrout> think i found it
<marcoceppi_> magicaltrout: yeah, it's in lib/layer/basic.py of the basic layer IIRC
<magicaltrout> aye boos
<magicaltrout> boss
<jcastro> http://askubuntu.com/questions/875716/juju-localhost-lxd
<jcastro> any ideas on this one?
<magicaltrout> well
<magicaltrout> you don't do intermodel relations yet
<magicaltrout> so thats not going to fly very far currently
<lazyPower> magicaltrout - sounds like they want 2 controllers, so two completely isolated juju planes on a single host
<lazyPower> i'm not sure if we've even piloted that use case as a POC, as multi-model controllers removed a lot of the concerns here
<magicaltrout> lazyPower: dustin swung by and asked about the DC/OS stuff, said he'd be happy to do some collaboration and pitch some talks at Mesoscon and stuff, now I'm off up to the Mesos meetup in London to go talk to the Mesosphere guys, maybe one day someone will offer to write the code as well! ;)
<lazyPower> magicaltrout - i'd love to do that integration work *with* you
<magicaltrout> anyway the LXD poc is coming along slowly
<lazyPower> i think there's a good overlap in changes in k8s and mesos charms that will make it simple
<magicaltrout> once we have something that semi works i'll get you guys and the mesosphere guys looped in to fill in the gaps
<lazyPower> you're just plugging in api ip's and what not, i think theya re both rest based no?
<lazyPower> and some dinky config flags to replace the scheduler too, dont let me forget those important nuggets
<lazyPower> as a complete oversimplification of the problem domain ^
<magicaltrout> you've lost me now
<magicaltrout> what are you asking?
<lazyPower> oh were you not talking about k8s/mesosphere integration?
<magicaltrout> lxd -> Mesos <-juju
<magicaltrout>       ^
<magicaltrout>       -
<magicaltrout>      k8s
<lazyPower> oic
<lazyPower> yeah i missed the mark there
<magicaltrout> well the DCOS chaps are certainly interested in getting LXD into Mesos which would be a win because then juju can bootstrap against mesos
<magicaltrout> you could run K8S on Juju within Mesos ;)
<magicaltrout> and on your side Dustin says he's interested in getting Juju and DC/OS playing nicely which includes the LXD stuff
<magicaltrout> but! when that starts coming together
<magicaltrout> i shall be stealing your flannel network and stuff
<magicaltrout> not that i have a clue how to monetize such a platform, but in my head juju managed mesos and juju bootstrapped mesos sounds pretty sweet
<lazyPower> i concur
<lazyPower> maybe the monetization is the support of such a platform and not the tech itself
<lazyPower> not that anybody we know does that *whistles*
<arosales> kwmonroe: if you have a sec I am seeing some curious behavior with spark via the hadoop-spark bundle + zeppelin
<arosales> kwmonroe: doesn't seem like sparkpi is running successfully per the action output http://paste.ubuntu.com/23859116/  I also don't see the job in the spark job history server @ http://54.187.77.213:8080/
<arosales> I do see "2017-01-24 17:28:22 INFO sparkpi calculating pi" in the /var/log/spark-unit logs though .  . .
<kwmonroe> arosales: i'm not surprised at the lack of spark info in the spark job history server.. you're probably in yarn-client mode, which means the job will log to the resourcemanger (expose and check http://rm-ip:8088 to be sure)
<kwmonroe> arosales: in your pastebin, search for "3.14", you'll see it.
<kwmonroe> granted, that action output needs some love.. no sense in having all the yarn junk in there
<arosales> kwmonroe: I do see jobs @ http://54.187.77.213:18080/ through
<arosales> and it looks like pagerank ran ok, just issues with sparkpi
<kwmonroe> arosales: i see 3 sparkpi jobs at your link... did you run some after the pagerank that are no longer showing up?
<arosales> I did
<arosales> kwmonroe: resourcemanger = http://54.187.130.194:8088/cluster
<kwmonroe> arosales: spark 8080 is the cluster view which (i think) will only show jobs when spark is running in spark mode (that is, not yarn-* mode).  spark 18080 is the history server that will show jobs that ran in yarn mode -- yarn 8088 also shows these.
<arosales> kwmonroe: so looks like it ~is~ running just the raw output for the sparkpi action may be a little tuning?
<arosales> kwmonroe: sorry, and to your previous question if I am _not_ seeing jobs I submitted --- I think I am seeing all the jobs @ http://54.187.77.213:18080/ and http://54.187.130.194:8088/cluster
<kwmonroe> yeah arosales, from those URLs, it looks like you're seeing all the spark jobs (sparkpi + pagerank) on the spark history server (18080), and all the spark *and* yarn jobs (sparkpi + pagerank + tera*) on the RM (8088)
<arosales> kwmonroe: was http://paste.ubuntu.com/23859116/ the output you were expecting for sparkpi ?
<arosales> to your point "3.14" is in there just amongst a lot of other data
<kwmonroe> so arosales, i mispoke in my 1st reply.  to be clear, you should see all spark jobs in the spark history server (18080) and resourcemanager job history (8088).  you will *not* see  job history in the spark cluster view (8080) while in yarn mode.
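One way to double-check where a job landed, independent of the web UIs, is the REST APIs behind those same ports; a sketch using the IPs from the conversation above:

```shell
# Applications known to the YARN ResourceManager (covers yarn-client/yarn-cluster jobs)
curl -s http://54.187.130.194:8088/ws/v1/cluster/apps
# Completed applications on the Spark history server
curl -s http://54.187.77.213:18080/api/v1/applications
```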
<arosales> kwmonroe: ack
<kwmonroe> yup arosales, as long as it says "pi is roughly 3.1", we promulgate ;)
<kwmonroe> because ovals are basically circles
<arosales> kwmonroe: I was just concerned about the 153 lines of output to tell me a circle is like a oval
<kwmonroe> :)  ack arosales, i'll see if we can clean up that raw output without hiding the meat.
 * arosales will submit a feature on the spark charm ;-)
<arosales> Just glad its working as expected
<arosales> kwmonroe: thanks for the help
<kwmonroe> np arosales - thanks for the action feedback!  if it ever returns "Pi is roughly a squircle", let me know asap.
<arosales> kwmonroe: I'll note that for the failure scenario
<CoderEurope> marcoceppi_: You ready in about an hour ? (#Discourse)
<kwmonroe> arosales: one more thing.. if you've still got the env, would you mind queueing multiple sparkpi/pagerank/whatever actions in a row?  i noticed sometimes the yarn nodemanagers would go away if memory pressure got too high, and yarn would report multiple "lost nodes" at http://54.187.130.194:8088/cluster.   so if you don't mind, kick off a teragen, sparkpi, and pagerank action at the same time so i can watch.
<kwmonroe> iirc, sometimes the RM would wait for resources before firing off those jobs, but sometimes not.  curious if that's easily recreatable.
<arosales> kwmonroe: gah, missed this message before I tore down :-/
<arosales> kwmonroe: easily reproducible though
<kwmonroe> np arosales -- i've been meaning to get to the bottom of that.  i'm like 25% sure we fixed it when we disabled the vmem pressure setting.  i've got a deployment in my near future, so i'll check it out.  just one of those things i keep meaning to try but keep forgetting until someone like you comes along.
<arosales> kwmonroe: np, spinning back up. I'll let you know what I find
<kwmonroe> thanks!
<marcoceppi_> CoderEurope: yup!
<CoderEurope> marcoceppi_: On here ? or do you want me to keep PM'ing you ?
<CoderEurope> Back in ten mins
<CoderEurope> marcoceppi_: 15 to go ....... :)
<rick_h> CoderEurope: heh, you're on a mission eh?
<CoderEurope> rick_h: The weather is with us tonight ! https://www.youtube.com/watch?v=z3TNGDVOMA4&feature=youtu.be
<rick_h> CoderEurope: oh man, did not see that coming
<CoderEurope> rick_h: I have ordered one of these in my budget: http://amzn.eu/e93uNrY
<arosales> kwmonroe: ok got 20+ actions running @ http://54.202.97.95:8088/cluster  from resourcemanager and spark
<CoderEurope> marcoceppi_: you ready ?
<arosales> kwmonroe: ref of actions running = http://paste.ubuntu.com/23859496/
<CoderEurope> I have a nose bleed - hangon marcoceppi_
<rick_h> whoa holy actions arosales
<arosales> kwmonroe: wanted some mem pressure
<arosales> :-)
<CoderEurope> marcoceppi_: Okay at the ready :D
<marcoceppi_> CoderEurope: yo, lets do this!
<arosales> kwmonroe: also in ref to spark sparkpi action  https://issues.apache.org/jira/browse/BIGTOP-2677  (low priority)
<kwmonroe> +100 arosales!  thanks for opening the jira.
<kwmonroe> ugh, i missed an opportunity to +3.14
<lazyPower> kwmonroe fun fact, my apartment number is pi
<kwmonroe> you must have a very long apt number lazyPower
<lazyPower> its a subset of pi, but pi all the same
<kwmonroe> 3?  close enough.  r square it.
<lazyPower> :P
<arosales> kwmonroe: let me know if you need me to run any more tests on this hadoop/spark cluster
<kwmonroe> so arosales, you've hit it.  see your cluster UI http://54.202.97.95:8088/cluster.. 3 "lost nodes".
 * arosales looks
<kwmonroe> arosales: what i'm waiting to see now is whether or not yarn will recover and process the last running job if/when the nodemgrs come bak.
<arosales> ah http://54.202.97.95:8088/cluster/nodes/lost  -- kind of hidden
<arosales> kwmonroe: ack, I"ll let it run
<CoderEuropeLenov> marcoceppi_: Just to keep you in the loop - my chromebook just crashed - 2 mins till re-surface on meet.jit.si
<kwmonroe> arosales: what has happened here is that each of your nodemgrs is only capable of allocating 8gb of physical ram.  when yarn schedules mappers or reducers (which take min 1gb) on a nodemgr that already has a few running, it can "lose" those nodes.  i don't know the diff between an "unhealthy" node and a "lost" node, but i think the latter can come back once jobs complete.  what's interesting to me is that you got through 19
<kwmonroe>  of the 20+ jobs.  surely that's not a coincidence.
<arosales> at 8gb per nodemgr and 3 nodemgr wee should expect to see around 24 jobs at least allocated, correct?
<kwmonroe> arosales: i would expect 24 jobs *possible* because the min allocation to each nodemgr is 1gb.  but jobs may specify mem constraints that differ from the min.  at any rate, 73 of your 74 jobs have completed, which makes me think those nodes get lost and come back once they're freed up -- like "i'm busy, don't allocate to me anymore".
<kwmonroe> arosales: and now i see 82 jobs.  you're not giving this thing a break, are you :)
<kwmonroe> i mean, "big data" is usually just a handful of word count jobs.. #amirite lazyPower?
<arosales> kwmonroe: just for my education once a node is marked as lost does it return?  Seems the three @ http://54.202.97.95:8088/cluster/nodes/lost haven't been "returned"
<lazyPower> trufacts
<lazyPower> but you were probably looking for the reaction: (ﾉ´･ω･)ﾉ ﾐ ┸━┸
<arosales> kwmonroe: I stopped submitting a bit ago, but the submitted jobs are at http://paste.ubuntu.com/23859496/
<CoderEurope> marcoceppi_: Awesome and valuable work there ... high five o/
<marcoceppi_> \o
 * CoderEurope leaves till tomorrow - bye y'all.
<kwmonroe> arosales: i don't know for sure (if lost nodes return).  i think they do once they have > min phys mem available.  i have hope.  when we started this convo, you had 73 of 74 jobs completed with 3 nodes lost.  now you have 89 of 90 jobs complete.  i think that means that your job completion rate has lengthened, but they do appear to be completing eventually.. i suspect that's because a lost node comes back to grab another
<kwmonroe> task.
<kwmonroe> yeah arosales, for sure that's what's happening.  the terasort job that was running with 3 lost nodes has completed.  now 3 are still lost, but a new app (nnbench) is going.
<kwmonroe> so all is well.  your cluster is just kinda small for 90+ concurrent jobs :)
<arosales> kwmonroe: thanks for the info, and here with the hadoop-spark bundle we have 3 separate units hosting the slave (datanode and node manager) correct?
<kwmonroe> correct
<arosales> and how do we map those units to the lost nodes (http://54.202.97.95:8088/cluster/nodes/lost)
<kwmonroe> arosales: they are the same.  your 3 lost nodes are the 3 slave units.
<kwmonroe> so, what makes a node lost?  a failed health check, or a violation of constraints (like < min memory available).  the former means the nodemanager slave unit hasn't told the RM that it's available for jobs; the latter means it can't take jobs because it doesn't have enough resources available.
<arosales> that was what I thought, but then I was wondering where were the 3 lost nodes (http://54.202.97.95:8088/cluster/nodes/lost) at in reference to units
<arosales> logically the 3 lost have different node addresses than the 3 active nodes
<kwmonroe> negative arosales - the node address is the ip of the slave unit.
<arosales> yes, you are correct. Same address different ports
<arosales> which makes me think we start a new process on the unit for a new node
<arosales> _if_ we started with 3
<kwmonroe> yup arosales -- when the RM has a new task, it farms it out to the slaves / nodes and they spawn a new java (hi mbruzek), which opens a new port.
<mbruzek> yes Kevin?
<kwmonroe> just java buddy
<mbruzek> OK
<arosales> kwmonroe: gotcha -- cool thanks for the lesson here
<arosales> kwmonroe: no pending actions on the juju side -- all have completed
<kwmonroe> ack arosales... now let's watch and see if the nodes come back in the cluster view now that the jobs are done.
<kwmonroe> health checks happen every 10ish minutes
<kwmonroe> which should bring them back if there are no jobs
 * arosales will eagerly await kwmonroe
<kwmonroe> arosales: your cluster ui (:8088) shows 3 active and 3 lost nodes.  i'm not sure how to reset the lost count without restarting yarn, or even if the 'lost' count is all that important given the expected nodes are 'active' now.  i'll google around to try and learn more about 'lost'.
<arosales> kwmonroe: given we started with 3 and we have 3 active, I am not sure lost are reclaimable
<arosales> kwmonroe: it seems the cluster did try to keep the 3 "active" albeit starting a new "node" process on the 3 given units
<kwmonroe> yeah arosales, it seems restarting yarn (sudo service hadoop-yarn-resourcemanager restart) resets the lost node count.  maybe the lost node count is supposed to be indicative of how many times the yarn cluster was starved for resources.  i really don't know.  i wish you never asked me this on a public forum because now people know i'm ignorant (on this one thing).
<kwmonroe> but thanks for running this with me -- like i said earlier, i had been meaning to get to the bottom of lost nodes.  it's good to know the cluster can still do jobs (evinced by your 90+ jobs) even if nodes get reported as lost.
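The 8gb-per-nodemanager and 1gb-minimum figures kwmonroe cites correspond to standard YARN settings; a yarn-site.xml fragment with those values (illustrative, not necessarily the charm's shipped defaults):

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>  <!-- physical RAM a nodemanager may hand out to containers -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value>  <!-- minimum container allocation per mapper/reducer -->
</property>
```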
<Teranet> with which command is : juju set been replaced with in juju 2.x ?
<kwmonroe> Teranet: juju config
<Teranet> so syntax I hope right ?
<kwmonroe> juju config <app> <key>=<value>.  pretty sure it's the same, just s/config/set
<kwmonroe> er, s/set/config
<arosales> kwmonroe: np, thanks for the info and keeping with my odd questions :-)
<arosales> I could have tried to look it up, but I was lazy and just pinged you
<lazyPower> ooo someone said my name, but not directed at me :D
<arosales> lazyPower: :-)
<magicaltrout> he was balding... he was pink... they called him kevin....keeevin. He had a swimming pool, with questionable filtration... they called him Kevin Keeeeeevin or just kwmonroe
<magicaltrout> kwmonroe: if i punt you chaps some big data chapters from my little book over the next couple of weeks can you give them the once over?
<arosales> magicaltrout: I was going to catch you in gent, but would love to look over your chapters. Would like to see what you have lined up for testing and layer creation
<magicaltrout> not a great deal yet arosales :) I have some stuff sketched out for layer creation. I didn't do testing yet because a) kjackal_ suggested he might help out there and also I wanted to review what went down in the office hours the other day with the python stuff from bdx
<magicaltrout> to try and aggregate examples
<magicaltrout> i'll be certain to punt stuff your direction though to help suggest gaps
<magicaltrout> i'm crowdsourcing knowledge ;)
<bdx> magicaltrout: tvansteenburgh: https://gist.github.com/jamesbeedy/dad808872e5488b43cf3fa5d5f2db87c
<arosales> magicaltrout: would be good to catch you up on charm-ci in gent if not sooner
<bdx> errrg, tvansteenburgh was just giving me some pointers on that jenkins script I've been working on
<arosales> magicaltrout: but yes kjackal is also an excellent guy to help out with the charm ci bits as well
<magicaltrout> bdx: someone on the book feedback stuff specifically asked for CI and testing examples
<magicaltrout> so I figure we should probably solve that conundrum ;)
<Teranet> Hey do we have anyone here who knows rabbitmq-server setup for a cluster ? I got 3 nodes but somehow they won't peer right.
<Teranet> Log output and OpenSTACK overview : http://paste.ubuntu.com/23860574/
<arosales> Teranet: not sure what folks are around atm, but you also may have some luck posting in #openstack-charms
<Teranet> thx will do
<balloons> wallyworld, the osx bug is on us indeed. You're free :-)
<wallyworld> yay, ty
#juju 2017-01-25
<Teranet> question can you move in juju a container from 1 machine to the next via command line by chance ?
<Budgie^Smore> so I am about to put a PoC using Juju onto AWS and am trying to figure out what the min instance size should be?
<marcoceppi_> Budgie^Smore: t2.medium is pretty good for most things, depends on the workload
<Budgie^Smore> well I was just going to put the juju controller on it and maybe a kubernete-master node
<Budgie^Smore> it is probably only going to manage a few other slave nodes
<marcoceppi_> Budgie^Smore: so, t2.medium should be okay for controller, m3.medium might be more stable
<marcoceppi_> as for kubernetes-master / worker the master doesn't need too much if you're going to be doing a small number of worker nodes. The workers are really up to you depending on how many containers you want to pack in per node
<Budgie^Smore> yeah I am looking at doing c4.4xlarge or c4.8xlarge for the slaves
<Budgie^Smore> I should be able to put the master on the same instance as the juju controller right/
<Budgie^Smore> ?
<Budgie^Smore> lol managed to blue screen the work laptop! new windows 10!
<Budgie^Smore> is there a way for models to share machines?
<Budgie^Smore> hey marcoceppi_ just came across your name on a GitHub "issue" in relation to using AWS ELB as a substitute for kubeapi-load-balancer, definitely would get a thumbs up from me for that :)
<kjackal> Good morning Juju world
<fang64> I have been trying to deploy 10 nodes with Juju/MAAS and I've gotten to the point where I bootstrap juju onto MAAS as a cloud, but it doesn't indicate machine status when HA is enabled. Is this normal?
<marcoceppi_> fang64: could you elaborate on machine status?
<fang64> marcoceppi: when I type juju machines it indicates ha status is 1/3
<fang64> this is after I've enabled ha, and the hosts are done being deployed.
<fang64> marcoceppi: I mean controllers sorry, wrong command
<fang64> it does show the controller has 3 machines but HA is yellow with 1/3
<fang64> I assume it means high availability isn't working? because it's not indicating 3/3? maybe it's my lack of understanding
<fang64> marcoceppi_: this is what I see, http://i.imgur.com/onIDdtL.png
<rick_h> fang64: yes, it should work its way to 3/3 as HA kicks in and the db is replicated/etc
<fang64> it's been a day
<fang64> I don't think it takes that long for ha to enable?
<rick_h> fang64: k, so something is up. You'll have to check the debug-log or at this point the logs on the machines
<rick_h> fang64: no, definitely not
<fang64> ok, so it's broken
<fang64> I just wanted to make sure I wasn't misunderstanding.
<rick_h> no, definitely not. That's why it shows yellow in that you've asked for 3 controller nodes in HA, but only have one functioning
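A hedged sketch of the HA flow being debugged here, using juju 2.x commands:

```shell
juju enable-ha -n 3           # ask for three controller machines
juju controllers              # the HA column should go 1/3 -> 3/3 as the db replicates
juju status -m controller     # watch the extra controller machines come up
juju debug-log -m controller  # if it sits at 1/3, start with the controller logs
```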
<fang64> alright well I'll take a look, really I am trying to get to a point where I can deploy openstack, but I kept running into issues with bootstrap; I suspect I have some network issues.
<rick_h> fang64: oh, ok.
<fang64> I don't know if anyone can answer this question, if I am deploying to MAAS as I am now, is juju using lxc containers or is it just installing the charms to physical hosts?
<rick_h> fang64: so it depends on how you're installing
<fang64> I just created maas as a cloud
<rick_h> fang64: the bundles that the openstack folks use use lxd containers on the physical hosts to help spread things out
<fang64> in juju and then told it to bootstrap
<rick_h> fang64: so nothing there is lxd centric
<fang64> ah ok
<rick_h> fang64: you'll see in the bundle things getting told to go to 0:lxd or the like
<rick_h> fang64: to help colocate the openstack services on deploy, but until you do that nothing is lxd unless told to be
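The placement rick_h mentions looks like this in a bundle. This is an illustrative fragment, not taken from the openstack bundle itself; the application names are made up, and the actual directive syntax is `lxd:0`:

```yaml
applications:
  mysql:
    charm: cs:mysql
    num_units: 1
    to: ["lxd:0"]      # a fresh LXD container on machine 0
  keystone:
    charm: cs:keystone
    num_units: 1
    to: ["lxd:0"]      # a second container on the same physical node
machines:
  "0":
    series: xenial
```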
<fang64> ok, so in my case with juju bootstrap as it stands now it's provisioning the hosts as the failover
<fang64> no containers being used to do that?
<rick_h> so juju bootstrap will go ask MAAS for one node, download the jujud binary to it, and start the service
<rick_h> fang64: no, no containers being used to do that
<fang64> ah ok, that's something I didn't understand initially when I looked at this, so juju is running on the host os, and when I enabled ha it grabbed 2 more
<rick_h> fang64: exactly
<rick_h> fang64: to have HA you need three machines so that you're not going to fall over if a disk dies/etc
<fang64> that makes more sense, I was just a little confused what it's actually doing with MAAS
<fang64> Another thing I was curious about which is responsible for networking configuration? Juju or MAAS?
<fang64> or is it a combination of both, or it depends on what bundle or charm you are using?
<fang64> rick_h: I appreciate the help, I'm going to try and figure out what's borked on my networking
<anrah> Is there a way to react when the controller loses connection to an agent?
<anrah> for example now it says: agent lost, see 'juju show-status-log my-instance/3'
<stub> anrah: If it doesn't recover, I think it means either your network is blocked between the controller and that unit, or one or more jujud agents has crashed and you have a Juju bug to deal with
<stub> anrah: If you find a dead juju service, you can try restarting it to see if sorts itself out.
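On a systemd host the agents stub refers to run as `jujud-*` services, so the restart-and-watch loop looks roughly like this (service names vary with machine and unit numbers; `my-instance/3` is the unit from the example above):

```shell
systemctl list-units 'jujud-*'           # one machine agent, plus one per deployed unit
sudo systemctl restart jujud-machine-0   # kick a dead machine agent
juju show-status-log my-instance/3       # then watch the agent come back
```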
<anrah> I mean that if I have a case where the server dies for reason X
<anrah> meaning some sort of self-healing where juju would spin up a replacement unit
<anrah> I think that requires external monitoring and some magic with jujulib for python..
<stub> Yes. I don't think juju ever spins up new units without user input.
<stub> If you run juju recursively, you can even charm that :) I think some people do it for autoscaling.
<stub> (probably easier now with 2.0 - you just need your monitoring charm given credentials for the main controller and it can administer itself and all the other models)
<tvansteenburgh> anrah: there's a basic autoscale/autoheal demo here: https://github.com/juju/python-libjuju/blob/autoscaler/examples/autoscale.py
<tvansteenburgh> you could extend it to deal with more healing conditions
<anrah> stub: Yeah, I have built something like that for autoscaling
<anrah> next step is to make it work with autohealing
<stub> tvansteenburgh: Do you know if there is a best practice for adding unit tests to a layer without messing up the main charm's tests?
<tvansteenburgh> stub: sorry, i dunno
<stub> I think I either need to exclude them from charm build, or stick them in a non-standard directory
<beisner> stub, should be able to exclude unit/tests/file-by-name iirc
<beisner> or some such
<Mmike> Hi, lads. How do I use 'Login with USSO' in juju2's gui? I bootstrapped a 2.0 environment, did 'juju gui --show-credentials', opened the juju-gui URL in the browser, and when I click on 'Login with USSO' I get 'authentication failed: no credentials provided'
<Mmike> i am logged within ubuntu single-sign-on
<Mmike> I am also logged into charmstore
<lazyPower> Mmike thats a great question.
<lazyPower> rick_h - is there any feedback/guidance here re USSO login support? or who should i be pinging to find out for mmike?
<rick_h> Mmike: hmm, that's only available if it's configured to use an external identity provider.
<rick_h> Mmike: not useful for most cases unless you've bootstrapped that way
<Mmike> rick_h: oh
<rick_h> Mmike: the --show-credentials should show you the username/password to use in the gui
<Mmike> rick_h: yup, those work, I just wanted to see how 'USSO' would work
<rick_h> Mmike: honestly, I think that button should be hidden unless the controller supports it.
<rick_h> Mmike: I'll bring it up with the team
<Mmike> rick_h: that was my thinking too!
<Mmike> rick_h: how do I bootstrap with external identity provider configured?
<Mmike> that's a controller option, or?
<rick_h> Mmike: it's a config. I'll have to see if I can find it sec
<Mmike> thnx!
<rick_h> Mmike: https://lists.ubuntu.com/archives/juju/2016-September/007843.html
<rick_h> Mmike: beware "here be dragons" as it works but has side effects and such
<rick_h> Mmike: we were just working on things like show-model listing users with access/etc this morning
<rick_h> Mmike: so it's there but there's some things that it causes to act a bit wonky
<Mmike> I see
<Mmike> rick_h: thank you for that info
<rick_h> Mmike: np, hope that helps
<lazyPower> thanks rick_h
<Mmike> I'm asking because on https://blog.jujugui.org/ it's mentioned that if you log in with sso you get additional options, etc...
<Mmike> so I was wondering how to get there
<Mmike> but, yea - disabling that button if the controller doesn't support it would be excellent
<rick_h> Mmike: what additional options?
 * rick_h skims
<rick_h> Mmike: not sure I can think of any additional options you get tbh
<rick_h> it's the same juju/gui/etc
<Mmike> rick_h: the video says that 'you can log in into controller using sso, and then you get additional canvas to select models, etc, etc'
<rick_h> Mmike: oh hmm, will have to watch that I guess.
<rick_h> Mmike: I mean it just shows models you have access to/etc
<Mmike> rick_h: yup, maybe the video is confusing or giving information that's specific for a particular type of a controller
<aisrael> What's the bash equivalent of charmhelper.config's .changed?
<cory_fu> aisrael: You using reactive?  There is a state set that you can check with `charms.reactive is_state config.changed.foo`
<cory_fu> aisrael: I don't know if there's a CLI for the actual method on Config.changed
<aisrael> cory_fu, Yeah, this is a pure-bash charm, but I'm going to convert it to a layer, I think
<Zic> lazyPower: (I prefer IRC over Slack :D) the last time you said to me that Canonical Kubernetes was not compatible with Helm, is that so?
<Zic> lazyPower: about my issue with Vitess posted in their Slack, the guys from Vitess pointed me to https://github.com/youtube/vitess/tree/master/helm/vitess (work-in-progress)
<Zic> it's a Helm package :s
<Zic> helm charts* don't know the right word :)
<lazyPower> Zic  - we certainly do support helm, but there is a known issue with the current incantation of the kube-api-loadbalancer
<lazyPower> the supported workaround is to either update or clone your kubeconfig and point it directly at one of your kubernetes-master units, and expose the kubernetes-master unit. Proxying through the master load balancer will cause you heartburn and trigger false-positive failures with helm.
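The workaround lazyPower describes, sketched with assumed names: the CDK kubeconfig's cluster entry is assumed here to be called `juju-cluster` and the apiserver to listen on 6443; substitute your master unit's address for the placeholder.

```shell
juju expose kubernetes-master                 # make the master reachable
cp ~/.kube/config ~/.kube/config.master       # clone, don't clobber, the kubeconfig
kubectl --kubeconfig ~/.kube/config.master config set-cluster \
    juju-cluster --server=https://<master-ip>:6443
KUBECONFIG=~/.kube/config.master helm list    # helm now bypasses the load balancer
```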
<Zic> oh, I can do that
<lazyPower> Zic - we have an open PR to actually put that in the upstream docs for the CDK
<lazyPower> thanks to SaMnCo for that submission *hattip*
<Zic> I already access directly to my kubernetes-master on my LAN
<Zic> and even the kube-api-loadbalancer is exposed only on LAN
<lazyPower> ok, if you're already pointing directly at your master, you should be g2g
<Zic> so I just need to upgrade my kubeconfig
<lazyPower> and if thats not the case, i want all your bugs and feedback around this so we can triage accordingly
<Zic> hehe :D
<lazyPower> yep, just point kubeconfig at the correct port/ip for a master and you should be g2g
<lazyPower> Zic - not sure if you saw but we just landed *everything* upstream yesterday around 7pm CST
<lazyPower> https://github.com/kubernetes/kubernetes/pull/40324
<Zic> lazyPower: as my Canonical Kubernetes cluster runs perfectly fine, I threw all of my ninja-power into making Vitess work in K8s... except I discovered today that their config (used in their official deployment guide) is not resilient at all \o/
<Zic> so... no, I didn't see anything from the tech-world except my rage and tears with Vitess these last 2 days :)
<Zic> but noted, I will take a look :)
<lazyPower> well, i'm sorry to hear about the tears, but its awesome to hear that we are empowering you
<lazyPower> makes my own day to day tears worth while :)
<Zic> huhu :)
<Zic> simply put: canonical-kubernetes does the job perfectly, and when I saw some guys raging against kubeadm which can't do the same, or even trying to build K8s from scratch, I thank Juju :)
<Zic> building K8s from scratch is very instructive anyway
<Zic> but I need a tool to industrialize it in my company
<lazyPower> I appreciate that feedback, we've been cycling hard to remove the barrier to going to prod with kubernetes
<Zic> that's for the right part o/
<lazyPower> so much so that we're going hard in the paint, my nice grey paintjob has streaks of red and white all up and down the side
<Zic> for the wrong part (and that's totally offtopic here): Vitess is not ready for production-grade experience at this time
<Zic> it will need some tricks to be
<Zic> the good news is that this K8s cluster is not entirely dedicated to Vitess :p
<lazyPower> we're looking to leave beta soon
<lazyPower> so, be prepared for bulletproofing in the coming iterations
<lazyPower> Zic - did you perhaps update to 1.5.2 with last weeks update?
<Zic> my main coming-feature is haproxy replacing nginx in kube-api-loadbalancer :)
<lazyPower> we just had a brief meeting about that this morning, we're looking to do layer 4 routing instead of layer 7 via nginx, so that haproxy replacement can't come soon enough
<Zic> nope, I think I'm on 1.5.1 for now, I didn't have the time to upgrade with my Vitess problems :(
<lazyPower> Ack. when you do i'm highly interested in a) which approach you took to do the upgrade and b) how that experience went for you
<Zic> lazyPower: layer 4 is OK also, in fact it's just the possibility to make a master offline easily which I need
<lazyPower> so capturing any feedback for us would be highly useful in that context
<Zic> (without touching the nginx vhost)
<lazyPower> thats our goal, to not have the master exposed at all, so you can isolate it in some network segment and sleep easy
<Zic> to describe our infra for now: I'm using 3 masters, 5 etcd, 1 kube-api-loadbalancer (with the easyrsa charm also on it), 6 physical workers, 3 AWS EC2 instances, 2 DRBD filers for PVs with NFS, all on a private LAN
<Zic> to expose publicly, I set up a public haproxy which has Ingress as backend
<Zic> (3 haproxy, with a heartbeat VIP)
<lazyPower> oh hey thats a nice setup
 * lazyPower nods
<lazyPower> i also see you took etcd durability very seriously and gave it proper fault tolerant pooling
<Zic> yes, as you advised me the first time I came here :)
<lazyPower> :D
<Zic> (was for the first pizza i owe you :p)
<Zic> but yeah, I don't use public Ingress because if I bring up a public ethernet interface on my kubernetes-worker, all my NodePorts will be exposed, as K8s does not have a setting like "NodePort only binds on private address"
<Zic> so I preferred to put a public LB in front of Ingress and NodePorts
<lazyPower> right
<lazyPower> i think there was discussion on the k8s project around that
<lazyPower> NodePort on Interface
<lazyPower> but i haven't been tracking that too closely, and it may have been tabled
<Zic> yes, I saw an issue on GitHub but not so much news since 2015
<lazyPower> yeah, that sounds about right
<lazyPower> might be worth poking it to see if there can be some renewed interest around it
<Zic> the only negative effect of that is that our customer has full capability with kubectl to expose his service privately through Ingress and NodePorts
<Zic> but for the public parts, it's HAProxy VMs which are fully managed through Puppet
<lazyPower> so, we're going to augment ingress with configmaps, which should make it more durable for you
<Zic> so our customer needs to file a ticket for every public expose
<lazyPower> as in, you can expose interesting things like ssh services for a private gogs instance
<Zic> (not so important as he only expose 80 and 443, and we do the ssl offloading with a wildcard cert)
<lazyPower> and it'll proxy that ssh connection through the ingress controller
<Zic> cool :)
<lazyPower> as it stands today we're only concerned with web traffic on that ingress controller
<Zic> yep
<lazyPower> but we do realize and understand there is another class of workloads that need to be supported
<Zic> I use NodePorts for all other concerns
<lazyPower> and its a bit slower going but i'm tracking that
<lazyPower> yeah
<lazyPower> i do the same in my homelab
<Zic> but not as practicle as Ingress
<Zic> practical*
<lazyPower> yep
<Zic> I discovered something like https://traefik.io/ also
<lazyPower> there's been some effort around an haproxy ingress controller as well
<lazyPower> i've used traefik, its great
<lazyPower> i'm not positive that it's been updated with socket support yet though
<Zic> which could empower my customer for public exposing
<lazyPower> are you aware if they added that in recent revisions?
<Zic> no not at all, I'm at the point of just reading their homepage :D
<lazyPower> i'd like to include that in a workloads repository so end users can mix/match their ingress controllers via namespace
<lazyPower> eg: namespace=customer  you get the default nginx controller,    namespace=alpha1  - you launch and use a traefik ingress controller until you're ready to promote to namespace=customer
<lazyPower> and can move all that stuff with it, its just kind of a nice to have, and would be great to be tuneable with some curated manifests
<lazyPower> but thats all pie in the sky at this time, i haven't had time to dedicate to it
<Zic> I'm planning to finish this resilient Vitess cluster, upgrading to K8s 1.5.2 through Juju and after that, poweroff/rebooting randomly all the parts :D
<SaMnCo> @lazyPower pleasure :) just the first of (I hope) a long list
<SaMnCo> @Zic, let me know if you have issues around helm
<Zic> if all stay OK (I already did some tests :p) I will owe you 3 pizzas total
<Zic> if all is wrong, I will jump through the window of the ground-floor
<Zic> (and then, debug :p)
<SaMnCo> also, if you are looking into scaling SQL, we have a Charm Partner (ScaleDB) who does just that
<SaMnCo> not sure they have a lot of k8s stuff, but still worth looking at.
<Zic> noted :)
<SaMnCo> and, last but not  least, I'm trying to get good low level sysadmin feedback on Juju usage to document
<SaMnCo> so I'd be happy to discuss your XP, in French or English ;)
<Teranet> anyone here who knows ceph a bit: how do I create a directory without creating a drive mount point? http://paste.ubuntu.com/23864768/
<Teranet> for some reason ceph-disk keeps pushing a drive creation, but I only want a directory to be used
<lazyPower> ping cholcombe and icey  ^
<icey> Teranet: can you share a bundle that you're using to deploy that? from the bit of logs, it looks like it _should_ just work so would be a bug
<Teranet> it's not a bug I was looking now on the container and I do see /srv/osd is created and has data already in it
<Teranet> I am thinking it's looking still for an empty one
<Teranet> ok I am deploying a bigger OS.yaml which I custom built already in a lot of ways
<Teranet> want me to share the whole OS.yaml ?
<icey> ah Teranet, ceph-osd doesn't behave too well inside of a container
<Teranet> so I should redeploy it outside of the container ?
<Teranet> or can I just redeploy the 3 ceph-osd somehow to the boxes instead of containers ?
<jrwren> what is wrong with ceph-osd in a container? if your lxc/lxd is on zfs, make sure to set use-direct-io: false and don't expect /dev osd-devices to work. Other than that it seems to work for me for dev/test.
<jrwren> Teranet: is your lxd zfs or btrfs backed?
<jrwren> Teranet: I get those errors with zfs backed container and the default use-direct-io value. Set the use-direct-io to false for the ceph-osd charm.
<icey> jrwren: I suppose that's relevant too; I wouldn't run ceph-osd inside of a container unless it _is_ just for testing
<Teranet> let me doublecheck
<Teranet> how can I check if it's zfs again ?
<Teranet> fdisk doesn't help much there
<jcastro> lxc info | grep storage
<jcastro> should tell you
<Teranet> zfs
<Teranet> ok so if I put them now to the host how can I move them ?
<Teranet> or do I need to kill those 3 containers with ceph-osd on it first and than redeploy them ?
<jrwren> Teranet: probably, yes, or you could set the config, check in /etc/ceph for 'journal dio = false'
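jrwren's fix as commands, a hedged sketch; `ceph-osd` is the application name used in this deployment:

```shell
lxc info | grep storage                    # confirm the LXD backend is zfs
juju config ceph-osd use-direct-io=false   # zfs can't do direct I/O in a container
# the charm should then render 'journal dio = false' into the ceph config:
juju ssh ceph-osd/0 -- sudo grep -ri 'journal dio' /etc/ceph/
```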
<bdx> concerning the hosted controller
<bdx> I don't seem to be able to connect to a model on the hosted controller via libjuju
<bdx> see http://paste.ubuntu.com/23865066/
<bdx> ^ succeeds when ran against my own controller, but fails when ran against the hosted controller
<bdx> tvansteenburgh
<bdx> ^^
<bdx> tvansteenburgh: have you tried using libjuju against the hosted controller?
<tvansteenburgh> bdx: yeah. it *can* work, but it relies on valid macaroons existing on the host
<tvansteenburgh> bdx: do you have juju cli installed on the host?
<bdx> tvansteenburgh: yea, does the fact that juju cli can access the model validate that the macaroons are valid?
<tvansteenburgh> yeah
<tvansteenburgh> bdx: can you deploy to the model with the cli?
<tvansteenburgh> if you can, then libjuju should work too
<jcastro> heya Zic
<jcastro> we should find time this week or next to have you sync with the team in a hangout, would love to get a laundry list of feedback from you
<tvansteenburgh> bdx: this is a limitation of libjuju until we add support for obtaining and discharging macaroons
<bdx> tvansteenburgh: I run that script ^ against a lxd controller and it does not error, then I switch controllers via `juju switch jujucharms.com` following which select one of my models, and run the script again and it fails
<tvansteenburgh> bdx: is there a traceback?
<bdx> due to auth errors, see http://paste.ubuntu.com/23865120/
<tvansteenburgh> bdx: are you logged in, i.e. juju login
<bdx> yea
<bdx> tvansteenburgh: http://paste.ubuntu.com/23865139/
<tvansteenburgh> bdx: do you have a ~/.go-cookies file?
<bdx> yeah ... want me to squash it and try again?
<tvansteenburgh> not yet
<stormmore> so I just created a controller and ran "juju ssh -m controller 0". while it appears that it is logged in, it doesn't give me a prompt, and hitting enter gives me "-bash: line 1: $'\r': command not found". any ideas what is going on?
<tvansteenburgh> bdx: http://pythonhosted.org/juju/narrative/model.html#connecting-with-macaroon-authentication
<bdx> tvansteenburgh: YES! that did it
<lazyPower> stormmore - that's most definitely a bug, but i'm not certain what happened
<bdx> tvansteenburgh: thank you
<icey> stormmore: are you on windows?
<tvansteenburgh> bdx: \o/ np
<stormmore> lazyPower I came across https://bugs.launchpad.net/juju-core/+bug/1468752 so you are right it is bug :)
<stormmore> icey yes
<lazyPower> oo fantastic find
<lazyPower> bummer that it's a bug, but glad it's not pioneering territory
<icey> stormmore: yeah, that was the bug I was curious about :)
<stormmore> so what it's telling me is that this is a stupid Windows bug without a workaround, but the fix is in 2.2 which is currently in alpha!
<stormmore> I am going to see if I can install and bootstrap from bash / Ubuntu on Windows 10
<stormmore> that is, if I can figure out how to get it upgraded to Xenial
<bdx> tvansteenburgh: I may have been a bit premature in my rejoice
<bdx> tvansteenburgh: I'm getting "Fatal error on SSL transport" now ...
<tvansteenburgh> bdx: that's almost always a side effect of another problem, got a traceback?
<bdx> tvansteenburgh: my simple script http://paste.ubuntu.com/23865377/
<bdx> tvansteenburgh: traceback <- http://paste.ubuntu.com/23865383/
<tvansteenburgh> bdx: can you set logging level to DEBUG and paste the full output?
<bdx> tvansteenburgh: http://paste.ubuntu.com/23865408/
<tvansteenburgh> bdx: did you change line 41 or 43?
<bdx> tvansteenburgh: do you think I need to be supplying a 'cacert'?
<bdx> tvansteenburgh: both
<tvansteenburgh> bdx: yeah, try sending the cert
<tvansteenburgh> i'm not convinced that's the problem but it won't hurt
<bdx> tvansteenburgh: http://paste.ubuntu.com/23865441/
<bdx> sad
<tvansteenburgh> heh
<tvansteenburgh> nevermind!
<vmorris> any word on this bug? https://bugs.launchpad.net/juju/+bug/1614364
<bdx> tvansteenburgh: does libjuju use the asyncio ssl protocol for ssl connections?
<tvansteenburgh> bdx: no
<tvansteenburgh> well it might indirectly
<bdx> tvansteenburgh: this https://github.com/python/asyncio/blob/master/asyncio/sslproto.py
<tvansteenburgh> no
<bdx> tvansteenburgh: can you replicate this on your end?
<tvansteenburgh> bdx: not yet. it works for me. still looking
<tvansteenburgh> bdx: change line 41 to: logging.basicConfig(level=logging.DEBUG)
<tvansteenburgh> then paste me the full output
<tvansteenburgh> bdx: also, print the value of your model_uuid to make sure it's actually set
<bdx> tvansteenburgh: I'm setting MODEL_UUID inline in my testing
<tvansteenburgh> ok i thought you were getting it from the env
<bdx> http://paste.ubuntu.com/23865573/
<bdx> thats better
<tvansteenburgh> bdx: yeah, so that means whatever macaroons you have locally aren't sufficient
<tvansteenburgh> bdx: which is strange since you can run juju status on the model
<bdx> tvansteenburgh: I launch a fresh xenial container `lxc launch ubuntu:16.04 libjujutest`, exec in, `su ubuntu`,  install juju, libjuju, asyncio via `sudo apt install juju python3-pip && sudo -H pip3 install juju asyncio`, run the script and get the error
<bdx> tvansteenburgh: should the jujuclient version <-> juju controller version mismatch have anything to do with this?
<tvansteenburgh1> bdx: when using macaroon auth (shared controller), libjuju can currently only connect if there are valid macaroons in ~/.go-cookies already
<bdx> * login to the controller:model via juju cli, then run the script and still get the error
<bdx> correction
<bdx> my bad
<bdx> I am logging in first
<tvansteenburgh1> bdx: i'm trying to figure out what's different. when i run your script against jimm, it works. on every model i try
<tvansteenburgh> bdx: try `juju deploy ubuntu`, then run your script
<stormmore> I keep getting "ERROR cannot update credentials for aws: timeout acquiring mutex" when trying add credentials or change the default aws region :-/
<bdx> tvansteenburgh: deployed ubuntu, no change, except that now the script will error every other try, and just hangs on 50% it seems ... when I Ctrl+C out of what seems to be a hang, I get this http://paste.ubuntu.com/23865667/
<bdx> which has an interesting error at the bottom "juju.errors.JujuAPIError: unknown version (1) of interface "Client""
<tvansteenburgh> yeah
<tvansteenburgh> bdx: my models are 2.0.1, yours are 2.0.2.1
<tvansteenburgh> that shouldn't matter, but...
<tvansteenburgh> nope, works for me on 2.0.2.1 too
<mskalka> anyone familiar with the openstack base bundle available to answer a few questions?
<stormmore> looks like I am going to just spin up an Ubuntu VM since Bash on Windows has problems with mutex when trying to do stuff with juju and the windows version of juju has problems with ssh to the controller
<zeestrat> mskalka: What's up? I've got some minutes before EOD.
<zeestrat> mskalka: If no one is around here, make sure to check out #openstack-charms
<mskalka> zeestrat: thanks for the reply. I'm running into an odd issue deploying openstack on aws. It looks like the deployment is hung up cinder and glance connecting to the mysql db
<mskalka> I had it working last week just as a POC after a few tweaks to where it landed the charms (the ceph-mon charms did not like being in lxd for example) but never ran into this issue
<zeestrat> mskalka: Hmmm. Unfortunately I only have experience with deploying it to MAAS on bare metal so I probably won't be too helpful. Ping the guys in #openstack-charms with the output of "juju status --output yaml" and perhaps some logs.
<mskalka> zeestrat: No worries! Thanks for the help, I'll ping #openstack-charms tomorrow morning
<zeestrat> Cool. I think most of them are on a EMEA timezone.
<mskalka> copy, thanks
<Teranet> do we have any JUJU OPENSTACK guys here ? I am looking for some logging from OpenStack but now since it's all on JUJU / MAAS not sure where it would be logging to for neutron
<marcoceppi_> Teranet: #openstack-charms channel might be best
#juju 2017-01-26
<stormmore> hmmm that is odd, for some reason the flannel charm won't let me add a relation to the kubernetes-worker charm :-/
<kjackal> Good morning Juju world!
<stormmore> *sigh* I forgot to set up an ssh key for my user before deploying the kubernetes bundle :-/
<stormmore> I am guessing my only option is destroy and recreate, anyone still awake who might know of another option?
<aisrael> stormmore, sorry, I don't have scrollback atm. What's the issue?
<Budgie^Smore> I forgot to add an ssh key when I deployed the kubernetes bundle so couldn't scp anything from the master node
 * Budgie^Smore was being a dumb-a** 
 * Budgie^Smore is stormmore on his "other" computer 
<aisrael> Budgie^Smore, stormmore, Ahh, sorry. I'm afraid I'm not up to speed on k8's enough to help with that. :(
<Budgie^Smore> the kubernetes part is a red herring in this instance, to simplify I couldn't use juju scp or juju ssh to access juju managed systems
<Budgie^Smore> not a huge deal, decided to destroy the model and rebuild it tomorrow
<Budgie^Smore> I did come across what might be a bug after that... I created another user in juju, then deleted it cause I couldn't figure out how to add a display name, and then it wouldn't let me re-add the same user even though it wasn't listed
<Budgie^Smore> maybe I was being impatient?
<Budgie^Smore> the user wasn't listed in juju users
 * Budgie^Smore does some weird things when he is learning new toys 
<deanman> morning
<Zic> "SaMnCo | and, last but not  least, I'm trying to get good low level sysadmin feedback on Juju usage to document"
<Zic> ping back SaMnCo, glad to help if you need my review :)
<Zic> (cc jcastro also, same answer :D)
<Budgie^Smore> care to define "low level sysadmin"?
<Zic> Budgie^Smore: I take it to mean "core level", like C is a low-level language, but maybe I'm mistaken
<Budgie^Smore> so someone who handles the infrastructure then is what I would think as low level sysadmin
<SaMnCo> Zic:  Budgie^Smore right, what I want to capture is how Juju represents the world vs. how you view it
<SaMnCo> So not the charms themselves but the tool
<SaMnCo> Core is probably better wording
<Budgie^Smore> The one big thing I love about juju is its GUI, as it is a great way to visibly show PHBs the "world"
<SaMnCo> As well as the key questions you asked yourself when you started
<SaMnCo> And eventually struggled to answer
<SaMnCo> Or find answers for
<Budgie^Smore> I will be honest and say the only reason I came across Juju was cause I was looking for tools to manage machines from a powered off state and found MaaS
<Budgie^Smore> OK I will see you guys in a few hours, need to get some zee
<junaidali> Hi blahdeblah : the openstack-base bundle has ntp charm without ntpmaster. The auto_peers is also not set. Will we be good with clock sync when the internet is disconnected?
<Zic> lazyPower: http://paste.ubuntu.com/23868491/ is it normal from etcd?
<Zic> all is working actually, but I noticed this by chance
<Zic> lazyPower: also, normally there is a "etcd-4" but I don't see it :(
<Spaulding> hello juju world!
<junaidali> Hi Spaulding
<rick_h> stormmore: you can add SSH keys to juju with juju add-ssh-key. Now, I don't think that will retroactively add to previous units, but using add-unit should get you new ones with the key there.
<CoderEurope> marcoceppi_: Hows it going with Discourse charm ?
<lazyPower> Zic yeah, the component status doesn't use the TLS certificate to verify
<lazyPower> Zic i've only ever gotten etcd to display properly in component status when its non tls secured
<ryebot> Anyone else hitting 502 errors trying to `charm attach`?
<rick_h> uiteam ^
<lazyPower> ryebot - it's been ages since i've encountered that
<lazyPower> whats the size of your upload?
<ryebot> lazyPower: 9.2MB
<lazyPower> ryebot that seems unusually small to be throwin 502's
<lazyPower> ryebot it might have been temporary load against the charm store, is it still throwing errors at you?
<ryebot> let me try again
<lazyPower> i've seen that 502 with like, 1GB attempted uploads
<lazyPower> it simply times out during transfer
<ryebot> nope, still fails :|
<lazyPower> but if you hit it during a deployment, or under extreme load, you'll also see that
 * lazyPower weeps silently
<ryebot> I must be doing something stupid
<lazyPower> jrwren imma poke you with some <3 to see if you can poke the right pplz about a 502 issue on upload? the last time we hit this it was a prodstack deployment underway
<jrwren> you can try.
<lazyPower> oooo
<lazyPower> you wanna go jrwren? :D want some of this hot 502 error action? :D
<lazyPower> sorry i've had too much coffee this morning
<stormmore> howdy juju world
<stormmore> thanks rick_h I figured as much, it would be nice to be able to add keys retroactively though since it would allow adding / updating keys for users
<stormmore> for instance, in my last job we had to "rotate" our ssh keys every 90 - 180 days (depending on access type)
<rick_h> Yea... thinking: as long as you have the old key you can update/add the new one
<rick_h> Juju run ... across machines, I guess.
<rick_h> It's something that'd be great to support better
<stormmore> admittedly if I am going to use ssh for much going forward I really want to use client certs instead of keys
<stormmore> so I just created a new superuser account on my controller, switched to it, added a credential and set the default cred and region, but when I run juju add-model <model name> it keeps saying that I didn't provide a credential. is that expected?
<lazyPower> stormmore - yep, you have to set the default credential if you want a default credential.
<stormmore> lazyPower I did run juju set-default-credential
<lazyPower> oooo
<lazyPower> i missed that, sorry
<lazyPower> that seems wrong indeed
<lazyPower> well this bit of our docs seems oddly specific to that behavior
<lazyPower> Setting a default credential means this will be used by the bootstrap command when creating a controller, without having to specify it with the --credential option.
<stormmore> no worries, I am wondering if there is a default cred that is separate for models vs controllers
<lazyPower> stormmore - did you juju set-default-credential "credential-name" "username" ?
<lazyPower> or for the cloud rather
<lazyPower> juju set-default-credential aws carol -- is the example from the doc
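The flow under discussion, sketched as a sequence of juju 2.x commands. The cloud name "aws" and credential name "carol" come from the doc example quoted above; the model name is a placeholder, and these commands of course need a live controller:

```shell
# Default credential for a cloud (the doc example lazyPower quotes);
# used by `juju bootstrap` when no --credential is given:
juju set-default-credential aws carol

# Workaround for the behaviour stormmore hit: name the credential
# explicitly when creating the model instead of relying on the default:
juju add-model test-model --credential carol
```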
<stormmore> yeah so I set the default credential for aws
<lazyPower> yeah i would have expected that to set it for every request unless overridden with --credential
<lazyPower> i too pass --credential on a hosted controller i've been using for the past month and haven't been bothered with looking into why that's the case. On my next deploy i'll try to replicate a successful configuration where it doesn't require the --credential. I'm fairly certain we support this
<stormmore> it seems odd when the "admin" user doesn't require it
<stormmore> that is why I am thinking I did something wrong
<lazyPower> yeah, i'm mostly certain its a local config thing that you can run a command to set it like a context and it "just works"
<stormmore> it gets even weirder, I just noticed when I added a "test" model using the admin account that it wasn't using the default credential I set, so I deleted the model, deleted the cred and re-added the model, and it used the "deleted" cred! http://paste.ubuntu.com/23871129/
<lazyPower> stormmore yeah, thats definitely bug worthy. Would you mind capturing the steps you outlined in a bug so we can reproduce and get some engineering eyes directed at that?
<stormmore> of course I wouldn't mind :) just don't like to file "bugs" that aren't bug worthy ;-) hence check here first
<stormmore> I am finally having to register for a Ubuntu One account! Woot!
<lazyPower> #nailed-it-aced-it-cant-be-stopped
<lazyPower> look @ you go stormmore :)
<stormmore> lol
<stormmore> I am guessing we should consider this a security vulnerability (or at least the potential to be one) since it's about credentials too
<lazyPower> I think thats a reasonable classification
<stormmore> filed
<lazyPower> Thanks stormmore
<stormmore> I am debating whether I should also file one for the fact that switching to a non-"admin" user doesn't seem to use the default credential too
<rick_h> stormmore: there's an existing bug around that being an option
<rick_h> stormmore: typically a different user doesn't mean the admin wants to be on the hook for expenses
<rick_h> stormmore: but there are cases where that's legit.
<stormmore> rick_h: i get that but the non-"admin" user account I created was given the same level of access - superuser - so my assumption is that it should work the same way as the "admin" user account
<rick_h> stormmore: I'm not sure I agree there. Trusting someone with your running bits is different than your credit card
<rick_h> stormmore: but I understand. As I said, there's an existing bug for the admin to make that an option
<jrwren> uiteam: for review, a util I just ran on jujugui.org https://github.com/juju/charmstore/pull/704
<stormmore> rick_h I agree, however I am kinda modeling it after the Linux security model of no one logging in as root, so maybe a juju sudo type command would be a good option
<rick_h> stormmore: hmm, I tend to look at Juju more like a database server or the like.
<rick_h> stormmore: I may trust you to help add databases/etc but am not going to give you root on the machine so you can do other things with it
<stormmore> rick_h ah that kinda makes sense but the problem I am having is creating the "db" (model)
<stormmore> rick_h seems odd that I can set a default credential for the user but still have to provide that credential using --credential when I am running commands like add-model
<rick_h> stormmore: oh I'm +1 on making it better for the users there like that. I'm just -1 on auto leveraging the admin's credentials in any way
<stormmore> rick_h I am definitely not suggesting sharing credentials across accounts, just that if you add and set a default credential in any account it should behave the same way
<stormmore> now I am having problems adding ssh keys :-/
<stormmore> keeps giving me invalid key!
<stormmore> oh, you have to cat out the key content instead of it reading the file.
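The gotcha stormmore hit is that juju add-ssh-key takes the key material itself, not a filename. A minimal local illustration — the key path and comment are arbitrary, and the juju call itself needs a live controller, so it is shown commented out:

```shell
# Generate a throwaway key pair purely for illustration.
ssh-keygen -t rsa -b 2048 -N '' -f ./demo_key -C demo@example

# Passing the filename gets rejected as an invalid key; pass the content:
#   juju add-ssh-key "$(cat ./demo_key.pub)"
cat ./demo_key.pub
```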
<Teranet> ok everyone I do have a little issue ........  how do you grab in JUJU a trunk interface and get those multiple VLANs mapped into charms properly? Any docs or help is welcome, thank you
<marcoceppi_> Teranet: it sounds like you want extra-bindings
<Teranet> sorta yes
<Teranet> so far I had eth0 also bridged but now I want to add eth1 as an additional bridge and with VLANs
#juju 2017-01-27
<stub> cory_fu: Should I do a followup MP that tears out the Python2 import machinery completely? Is there any Python2 code still out there using charms.reactive?
<SaMnCo> @Zic thanks. Today is a bit busy for me, but can we do a call like next week?
<kjackal> Good morning Juju world
<BlackDex> hello :)
<BlackDex> Can i upgrade a charm which is installed via cs, but i now want it to use a local version?
<BlackDex> or use code.launchpad.net for its source?
<aisrael> BlackDex, yes. check out the --switch flag of upgrade-charm
<BlackDex> oke
<BlackDex> i think i need --path :)
<Ankammarao> Hi juju world
<Ankammarao> do we need to create terms each time we push to the charm store ..
<Ankammarao> or is it enough to create them one time
<Zic> SaMnCo: I'm also very busy at this time at the office because of Vitess (the Canonical Kubernetes was one of the quicker parts :D) as we're late on the deadline, but I'm available through IRC all the time (France, UTC+1 o/). If you prefer an audio call I will try to find a solution :)
<Zic> feel free to pm me if you need
<Zic> I saw in the Ubuntu Newsletter the blogpost of jcastro : so conjure-up is the now-official way to install Kubernetes through Juju? I personally used the "manual provisioning" of Juju as I'm on bare-metal servers and don't use Ubuntu MAAS
<Zic> I will surely bootstrap new k8s clusters, so I'm asking myself if I should continue this way or begin to use conjure-up for the next ones
<Zic> (I know that conjure-up is just an ncurses-like GUI for Juju, but I don't know if this install path does exactly the same vs. what I did)
<aisrael> BlackDex, Aha. I was close!
<BlackDex> aisrael: You indeed were, and it worked :)
<BlackDex> --switch --revision and --path are mutually exclusive :)
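A sketch of the three mutually exclusive upgrade paths BlackDex mentions; the application name, revision number, local path, and charm URL below are placeholders, and each command needs a deployed application to act on:

```shell
# Pin the deployed application to a specific store revision:
juju upgrade-charm mediawiki --revision 5

# Upgrade to a locally built copy of the charm (BlackDex's case):
juju upgrade-charm mediawiki --path ./build/xenial/mediawiki

# Crossgrade to a different charm URL entirely, e.g. a personal namespace:
juju upgrade-charm mediawiki --switch cs:~someuser/mediawiki
```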
<SaMnCo> Zic are you using MAAS?
<SaMnCo> for bare metal management?
<SaMnCo> if you do, then conjure-up will help. If not and you are on full manual provisioning then I guess you'll be good with your current method.
<SaMnCo> conjure-up is a wizard to provide some help
<Zic> SaMnCo: ok, yeah we don't use MAAS as we have a roughly similar homemade product here, so I bootstrap Ubuntu Server from it and add the machines via juju add-machine over SSH
<Zic> and when I want to deploy the canonical-kubernetes bundle's charms, I delete all the "newX" machines Juju wants to spawn, and reassign the charms to the machines already added manually
<Zic> (just via drag'n'dropping)
<Zic> at this step, I personally scale etcd to 5 instead of the default 3, and put the EasyRSA charm on the same machine as kube-api-load-balancer
<Zic> (and scale kubernetes-master to 3 also, I forgot to mention)
<Zic> even if we have a MAAS-like in our company, maybe I will try in the future to set up a MAAS here just to automate everything with Juju :)
<SaMnCo> Zic that or write a juju provider for your tool. Is it all in house development or another product like crowbar ?
<Zic> SaMnCo: completely homemade, it lets our customers reinstall their VMs or physical servers from our information system
<BlackDex> how can i define a local charm in a bundle file?
<BlackDex> or at least, which directory does it look in? "local:xenial/charm-name" should be enough i think?
<anrah> BlackDex: charm: "./build/xenial/my-charm"
<anrah> for example
<BlackDex> so instead of local i can just input the full path?
<anrah> yep
<BlackDex> oke :)
<BlackDex> nice
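anrah's suggestion as a minimal bundle file. This is a sketch: the application name, series, and path are illustrative, and bundles of this juju 2.x era use a top-level services: map:

```shell
# Write a minimal bundle that points at a charm by local path, then
# confirm the charm line looks right. Names and paths are illustrative.
cat > bundle.yaml <<'EOF'
series: xenial
services:
  my-charm:
    charm: ./build/xenial/my-charm
    num_units: 1
EOF
grep 'charm:' bundle.yaml
```

It would then be deployed with `juju deploy ./bundle.yaml`, the path being resolved relative to the bundle file.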
<BlackDex> thx
<BlackDex> in juju 1.25 it took some hassle
<anrah> if you download the bundle through the GUI you must change those manually
<anrah> I haven't found a better way to do that
<BlackDex> thats no prob
<BlackDex> i have a bundle file already :)
<BlackDex> using the export via the gui makes a messy bundle file in my opinion
<anrah> that is true
<BlackDex> i don't need the placements for instance
<BlackDex> or annotations as they are called
<BlackDex> oh strange, i see a lot of "falsetrue" in the exported file
<BlackDex> those are values which should be default
<jcastro> Zic: yeah for first use we went with conjure-up because it's a better user experience, especially for those getting started, it's all juju under the hood though so it's all good.
<SaMnCo> Zic: whaow. This is a significant engineering effort, congrats on building that.
<Zic> hmm, my kubernetes-dashboard displays a 500 error with "the server has asked for the client to provide credentials (get deployments.extensions)"
<Zic> did you already see that? I just apt update & upgrade & rebooted kubernetes-master and etcd, one by one
<Zic> the juju status is all green
<jcastro> that one sounds like a bug
<jcastro> but mbruzek and lazypower aren't awake yet :-/
<cory_fu> stub: There is not.  Other parts of the framework, mainly the base layer due to the wheelhouse, require Python 3.  So, +1 to pulling out py2 support
<Zic> jcastro: I also have some errors when running commands that create or delete resources, but they are random compared to the kubernetes-dashboard error:
<Zic> kubectl create -f service-endpoint.yaml
<Zic> Error from server (Forbidden): error when creating "service-endpoint.yaml": services "cassandra-endpoint" is forbidden: not yet ready to handle request
<Zic> this kind of error
<jcastro> ok as soon as one of them shows up we'll set aside some time and get you sorted
<Zic> thanks a lot
<Zic> I will try to debug and collect some logs
<Zic> http://paste.ubuntu.com/23875089/
<Zic> "has invalid apiserver certificates or service accounts configuration" hmm
<lazyPower> Zic - thats a new one to me, hmmmm
<Zic> many pods are in CrashLoopBackOff such Ingress also :/
<lazyPower> sounds like something botched during the upgrade. you ran the deploy upgrade to 1.5.2 correct?
<Zic> W0127 15:01:40.848867       1 main.go:118] unexpected error getting runtime information: timed out waiting for the condition
<Zic> F0127 15:01:40.850545       1 main.go:121] no service with name default/default-http-backend found: the server has asked for the client to provide credentials (get services default-http-backend)
<Zic> I just upgraded the OS via apt update/upgrade
<lazyPower> Did the units assign a new private ip address to their interface perhaps?
<Zic> and reboot the machine which host kube-api-load-balancer, kubernetes-master and etcd
<Zic> lazyPower: hmm, to the eth0 interface?
<lazyPower> Zic - correct. The units request TLS certificates during initial bootstrap of the cluster, and we dont yet have a mechanism to re-key with new x509 data, such as if the ip addressing changes
<lazyPower> which would yield an invalid certificate if the ip addresses changed
<lazyPower> i'm trying to run the gamut of what might have happened to cause this in my head
<Zic> I only use one private eth0 interface (static) for management VMs like master, etcd and kube-api-loadbalancer/easyrsa
<Zic> for worker, I use bonding on two private interface
<Zic> but nothing change at this area :(
<lazyPower> ok i dont think that's the issue then if the addressing hasn't changed
<lazyPower> hmmm
<SaMnCo> lazyPower, Zic would maybe removing the relation to easyrsa and adding it again fix?
<Zic> for info, I rebooted the VM which hosts the juju controller also
<lazyPower> Zic - i dont think its a juju controller issue, its an issue with the tls certificates it seems. Something changed that's causing them to be invalid which is causing a lot of sick type symptoms with the cluster
<Zic> let me check the date of cert files
<Zic> is it in /srv/kubernetes root?
<lazyPower> yep, the keys are stored in /srv/kubernetes
<Zic> 16 January
<Zic> :(
<Zic> SaMnCo: do I risk losing the PKI if I do that?
<SaMnCo> that's what I am asking myself, if it would just regen the certs for the whole thing or not
<SaMnCo> lazyPower would know better
<lazyPower> SaMnCo - i'm mostly certain there's logic to check if the cert already exists in cache and will re-send the existing cert
<lazyPower> we have an open bug about rekeying the infra but haven't taken an action on it yet
<Zic> and there is some strange behaviour: via kubectl, I can do read action (get/describe) without any problem
<Zic> but write, like create/delete sometime return a Forbidden
<Zic> (I posted the exact message above)
<Zic> but for Ingress or dashboard, it's a strong "nope"
<lazyPower> Zic - have you upgraded the kube-api-loadbalancer charm? we changed some of the tuning to disable proxy-buffering which was causing those issues
<SaMnCo> I have seen that behavior in clusters where the relation with etcd or etcd itself was messy
<SaMnCo> k8s seems to keep a state as long as it can
<SaMnCo> so if you break etcd, it will keep returning values for its current state, but will refuse to change anything
<Zic> lazyPower: I didn't upgrade any juju charms, just classical .debs via apt
<Zic> oh, in the apt upgrade, I saw an etcd upgrading
<Zic> can it be...?
<lazyPower> :| i sincerely hope this is not related to the deb package doing something with what we've done to the configuration of etcd post deployment
<lazyPower> if it is, i'm going to be upset and have nobody to complain to
<Zic> I run an etcdctl member list on etcd machines
<Zic> seems OK
<Zic> but I don't know what more to do to check the health
<lazyPower> member list and cluster-health are the 2 commands that would point out any obvious failures
<SaMnCo> etcdctl cluster-health
<SaMnCo> and tail the log, it tells if a member is out of sync
<Zic> http://paste.ubuntu.com/23875201/
<SaMnCo> ok so not that issue then
<lazyPower> so that doesn't seem to be the culprit
<Zic> I did a etcdctl backup before the upgrade also, just in case
<lazyPower> excellent choice
<Zic> hmm, so it seems to be tied to the CA
<Zic> can I run some manual curl --cacert against an endpoint of the API to check it?
<lazyPower> Zic - yeah, so long as you use the client certificate or server certificate for k8s, you should be able to get a valid response if the certificates are valid
<lazyPower> the server certificates are generated with server and client side x509 details. meaning the k8s certificates on the unit can be used as client or server keys.
<Zic> lazyPower: it's what is strange : kubectl get/describe commands always work, kubectl create/delete on the contrary work only 1 time in 3, returning a Forbidden message
<Zic> and for Ingress/default-http-backend/kubernetes-dashboard side, it's just CrashLoopBackOff :(
<lazyPower> Zic - can you check the log output on the etcd unit to see if there's a tell-tale in there?
<Zic> yep
<lazyPower> Zic - it does sound like the cluster state storage is potentially at fault here
<Zic> the etcd cluster doesn't return any weird logs, it just saw that I upgraded the etcd package :s
<SaMnCo> Zic what are the logs of the Ingress/default-http-backend/kubernetes-dashboard pods?
<Zic> Unpacking etcd (2.2.5+dfsg-1ubuntu1) over (2.2.5+dfsg-1) ...
<Zic> (was the update)
<Zic> the '1ubuntu1' part
<Zic> seems that the etcd from Ubuntu archive installed over the Juju charm one, no?
<Zic> SaMnCo: (I'm pasting you the log shortly)
<lazyPower> Zic - thats expected. the etcd charm installs from archive
<Zic> yeah but as it didn't have an "ubuntu" tag in the deb version, I thought it was from outside of archive.ubuntu.com
<Zic> http://paste.ubuntu.com/23875254/
<Zic> SaMnCo: ^
<lazyPower> Zic - from your kubernetes master, can you grab the x509 details and pastebin it? openssl x509 -in /srv/kubernetes/server.crt -text
<lazyPower> i dont need the full certificate output, just the x509 key usage bits so i can cross-ref this info w/ whats in the cert
<lazyPower> i'm expecting to find IP Address:10.152.183.1, in the output
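The same SAN inspection can be rehearsed locally against a throwaway certificate. The self-signed cert below merely stands in for /srv/kubernetes/server.crt, the SAN list is borrowed from the one expected above, and the -addext flag needs OpenSSL 1.1.1 or newer:

```shell
# Create a throwaway self-signed cert carrying the SANs lazyPower expects.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=kubernetes" \
  -addext "subjectAltName=DNS:kubernetes,DNS:kubernetes.default,IP:10.152.183.1"

# Same check as on the kubernetes master, just against the demo cert:
openssl x509 -in demo.crt -noout -text | grep -A1 "Subject Alternative Name"
```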
<Zic> oki
<lazyPower> Zic - additionally, if you could run juju run-action debug kubernetes-master/0  && juju show-action-output --wait  $UUID-RETURNED-FROM-LAST-COMMAND
<lazyPower> it'll give you a debug package you can ship us for dissecting the state of the cluster and we can try to piece together whats happened here
<Zic>             X509v3 Subject Alternative Name:
<Zic>                 DNS:mth-k8smaster-01, DNS:mth-k8smaster-01, DNS:mth-k8smaster-01, IP Address:10.152.183.1, DNS:kubernetes, DNS:kubernetes.cluster.local, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local
<lazyPower> i think i transposed debug and kuberetes-master
<lazyPower> yeah, the certs valid, it has all the right SAN's i would expect to see there
<Zic> that part of the certificate?
<Zic> ok
<Zic> let me run this juju command
 * lazyPower sighs
<lazyPower> this is a red herring, its something else thats gone awry
<Zic> error: invalid unit name "debug"
<Zic> hmm?
<Zic> maybe I need to swap the two args :D
<Zic> juju run-action kubernetes-master/0 debug ?
<Zic> Action queued with id: 99267d59-f3aa-467d-8686-130e90dc47a0
<Zic> seems to be that :)
<Zic> # juju show-action-output --wait 99267d59-f3aa-467d-8686-130e90dc47a0
<Zic> error: no action ID specified
<lazyPower> :|
<lazyPower> juju y u do dis
<lazyPower> Zic  if you omit the --wait, it'll give you what you're looking for now
<lazyPower> the debug action doesn't take long to run
<lazyPower> its just aggregating information and then offers up a tarball of files
<Zic> http://paste.ubuntu.com/23875303/
<Zic> but at that path, I don't have any debug-20170127153807.tar.gz
<Zic> am I missing something? :o
<lazyPower> Cynerva - have we encountered any situations where the debug package isn't created?
<lazyPower> 1 sec, cc'ing the feature author
<Zic> if I run the proposed juju scp it's ok
<Cynerva> lazyPower: I haven't seen anything like that, no
<lazyPower> wait so it did create?
<Zic> lazyPower: if I run the juju scp manually, don't know if that's what you wanted from me :)
<Zic> or if the show-action should exec it
<SaMnCo> Zic, lazyPower other people have had that: https://github.com/kubernetes/minikube/issues/363
<lazyPower> Zic - I'm looking for the payload from that juju scp command that showed up in the action output
<lazyPower> Zic - that tarball will have several files which includes system configuration, logs, and things of that nature
<Zic> yeah, I have it
<Zic> just untared
<lazyPower> Do you have a secure means to send that to us? if not i can give you a temporary dropbox upload page to send it over
<Zic> yeah, I can generate you a secure link
<lazyPower> excellent, thank you
<mbruzek> Hello Zic, sorry I am late to the party. I heard you were having trouble with the Kubernetes cluster.
<Zic> yeah :(
<Zic> lazyPower: I pm-ed you the link with its password
<lazyPower> Zic - confirmed receipt of the file
<Zic> mbruzek: just ran apt update/upgrade over all the different canonical-kubernetes machines, one by one, and the API began to refuse some requests for unknown reasons
<lazyPower> i'll take this debug package and we'll dissect it to see if we can discern whats happened post apt-get upgrade. i can't for the life of me think what went wrong but i suspect there's clues in here.
<Zic> (for TL;DR :))
<mbruzek> Thanks for bringing me up to speed.
<lazyPower> Zic can you also send me the output from a kubernetes-worker node as well?
<lazyPower> same process to run the debug action
<Zic> lazyPower: just before the upgrade, I kubectl delete ns <a_large_namespaces> and it was still in Terminating when I kubectl get ns
<Zic> don't know if it can help
<lazyPower> Zic - it might be trash in the etcd kvstore, but i'm not positive this is the culprit yet
<Zic> the goal was to delete all the large namespaces used for PoC, upgrade the whole cluster, reboot it, and begin some prod; but it seems it will not be the right day :p
<Zic> (I'm generating you other logs)
<mbruzek> Zic: I am sorry you ran into this problem
<mbruzek> Zic Have you verified that kube-apiserver is running on your kubernetes-master/0 charm?
<Zic> I'm running a permanent watch -c "juju status --color"
<Zic> it should be red if it's not working, correct?
<Zic> because all is green atm :)
<mbruzek> Zic not necessarily
<Zic> oh
<Zic> let me check directly so
<Zic> but even if it was that, no queries would work at all; here I have some success via kubectl get/describe, random success with kubectl create/delete (resulting in a Forbidden error sometimes, working only on the 2nd try...), and 0 success with Ingress & dashboard
<Zic> (yeah it's running fine)
<lazyPower> Zic  - ok we're going to need a bit to sift through this data and see what we come up with
<lazyPower> i have the whole team looking at these debug packages, i'll ping you back when we've got more details
<Zic> thanks for all your help!
<mbruzek> Zic: You rebooted the nodes after apt-get update?
<Zic> yep
<Zic> all of it
<mbruzek> Zic: Do you remember what time about? Looking at the logs I see some connection loss about 2017-01-25 10:35
<Zic> hmm, I began with the kube-api-load-balancer, 3 kubernetes-masters and the two etcd at ~14:15 (UTC+1)
<Zic> and finished the 3 more etcd and all the kubernetes-workers about 1 hour later I think
<mbruzek> Zic: OK that does not appear to be the problem then
<Zic> but on 25th january, all was fine
<Zic> (didn't check the date, sorry)
<Zic> the exact timeline is : I deleted 4 large namespaces, which stayed forever in Terminating state; no pods or other resources were stuck in Terminating, so I had deleted them one by one (without --force or --grace-period=0, just normally)
<Zic> all pods & svc were terminated, but the namespaces always showed "Terminating" in "kubectl get ns"
<Zic> as I needed to upgrade and reboot the whole cluster anyway, and saw an issue concerning this fixed by rebooting the masters, I did it
<lazyPower> DELETE /apis/authorization.k8s.io/v1beta1/namespaces/production/localsubjectaccessreviews: (698.088µs) 405 -- this seems to be dumping stacks in the apiserver log
<lazyPower> 405 response
<lazyPower> undetermined if this is the root cause, but it is consistent
<Zic> yeah so it's maybe this large deletion which is the root cause :/
<Zic> was 4 namespaces hosting 4 Vitess Cluster labs
<lazyPower> logging error output: "{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"the server does not allow this method on the requested resource\",\"reason\":\"MethodNotAllowed\",\"details\":{},\"code\":405}\n"
<lazyPower> which is interesting, i know for a fact you can delete namespaces
<lazyPower> i believe what might be the cause, is it caused some kind of lock in etcd
<Zic> yeah
<Zic> for my previous labs, I just delete ns and all was clean
<lazyPower> and k8s is stuck trying to complete that reqeuest and etcd is actively being aggressively in denial about it
<Zic> but never deleted 4 large ones at the same time...
<lazyPower> but not positive this is the root cause, we're still dissecting
<mbruzek> Zic our e2e tests do large deletes of namespaces so that should be fine.
<Zic> ok
<Zic> atm, this namespace are still in "Terminating"
<Zic> I checked that rc, pods, services, statefulsets, all the resources were terminated
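Beyond checking that a namespace's resources are gone, a namespace is only removed once its finalizer list empties, so inspecting the finalizers is a common next step for a namespace stuck in Terminating. A sketch against a live cluster ("production" is one of the namespaces from the log; adjust as needed):

```shell
# Show the deletion phase and any finalizers still blocking removal.
kubectl get namespace production -o jsonpath='{.status.phase}'
kubectl get namespace production -o jsonpath='{.spec.finalizers}'
```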
<mbruzek> Zic Did you reboot the etcd node(s) while this was still trying delete? Was there an order of reboot?
<Zic> mbruzek: I just checked that the rc/pods/svc/statefulsets of these namespaces were correctly terminated, but the namespaces were still stuck at Terminating
<Zic> I rebooted the etcd node one by one
<Zic> (and try etcdctl member list after each reboot)
<lazyPower> yeah
<Zic> I have a previous backup of this morning for etcd
<Zic> (and one after the upgrade)
<lazyPower> the more we think this through, i think etcd is the core troublemaker here
<lazyPower> i think teh client lost the claim on the lock
<Zic> because of the high amount of delete requests or because of the upgrade of its package via apt?
<lazyPower> combination of the operation happening and then being rebooted during the op
<lazyPower> etcd is still waiting for that initial client request to complete
<Zic> :s
<lazyPower> i hear you, etcd is very finicky, and this is exactly why we label it as the problem child
<lazyPower> i'm looking up how to recover from this
<Zic> all my troubles were with etcd all that time :D with K8s or Vitess
<lazyPower> Zic - can you curl the leader unit's leader status in etcd?
<lazyPower> eg:  curl http://127.0.0.1:2379/v2/stats/leader
<lazyPower> the leader is identified with an asterisk next to the unit-number in juju status output
<Zic> hmm
<Zic> I have a non-printable character in return
<Zic> I have a bad feeling about this
 * lazyPower 's heart sinks a little in his chest
<Zic> https://dl.iguanesolutions.com/f.php?h=1mvhf5F9&p=1
<Zic> oh wait
<Zic> it's not the master
<Zic> etcd/0*                   active    idle   5        mth-k8setcd-02             2379/tcp        Healthy with 5 known peers.
<Zic> I will try here
<Zic> same non-printable-character :(
<mbruzek> Zic: juju run --unit etcd/0 "systemctl status etcd" | pastebinit
<lazyPower> Zic - etcdctl ls /registry/namespaces
<Zic> http://paste.ubuntu.com/23875518/
<Zic> mbruzek: ^
<Zic> http://paste.ubuntu.com/23875523/
<Zic> lazyPower: ^
<Zic> jma, production, integration, development were the namespaces I deleted
<Zic> (which is still locked to "Terminating" status)
<Zic> hmm, lazyPower I run the same curl with https instead of http
<Zic> root@mth-k8setcd-01:~# curl -k https://127.0.0.1:2379/v2/stats/leader
<Zic> curl: (35) gnutls_handshake() failed: Certificate is bad
<Zic> even with the "-k"
<lazyPower> Zic - etcd is configured to listen to http on localhost
<Zic> oh ok, so it was correct
<lazyPower> you'll need https if you poll the eth0 interface ip
<Zic> I try
<Zic> # curl -k https://10.128.74.205:2379/v2/stats/leader
<Zic> curl: (35) gnutls_handshake() failed: Certificate is bad
<mbruzek> Zic: juju run --unit etcd/0 "journalctl -u etcd"  | pastebinit
<Zic> http://paste.ubuntu.com/23875565/
<Zic> the 14:17-14:32 interval is the upgrade/reboot I think
<lazyPower> Zic - etcdctl ls /registry/serviceaccounts/$deleted-namespace
<lazyPower> do you have 'default' listed in there in any of those namespaces?
<Zic> root@mth-k8setcd-02:~# etcdctl ls /registry/serviceaccounts/production
<Zic> /registry/serviceaccounts/production/default
<Zic> yep
<Zic> sorry, I will be afk for 1 hour (breaking the K8s cluster was not a sufficient punishment, I'm also on the on-call rotation tonight... need to go home before it begins... double-punishment :D)
<mbruzek> Zic we were about to offer some face to face support. We can wait until you get home.
<mbruzek> Zic ping us when you are back
<bdx> is there a reason juju automatically adds a security group rule to every instance that allows access on 22 from 0.0.0.0/0?
<bdx> I'm guessing juju just assumes you will always be accessing the instance via public internet and not from behind vpn?
<lazyPower> Zic - we think we've narrowed it down to the one area we dont have visibility into at the moment, we're missing debug info from etcd, and there's no layer-debug support in the etcd charm at present. When you surface and have a moment to re-ping, we'd like to gather some more information from the etcd unit(s) in question and i think we can then successfully determine what has happened.
<stormmore> howdy juju world
<lazyPower> o/ stormmore
<stormmore> so I got my first k8s cluster up and running yesterday, woot!
<lazyPower> AWE-SOME
<lazyPower> doing anything interesting in there yet stormmore?
<stormmore> not yet, still teaching the Devs how to create containers
<stormmore> it will get more interesting when I migrate out of AWS to our own hw
<stormmore> I just love all the "pretty" dashboards that I can "show off" to management
<stokachu> stormmore, it's what promotions are built on :)
<stormmore> stokachu not in a startup where you already report to the CEO
<stormmore> job security maybe
<Zic> ping back lazyPower and mbruzek
<Zic> sorry, the commute was hell
<stokachu> stormmore, hah, maybe some nice hunting retreats
<stokachu> stormmore, that may just be here in the south though
<stormmore> yeah that is a southern thing stokachu, definitely not a bay area thing
<Zic> lazyPower: what do you need from etcd, just the journalctl entries?
<lazyPower> Zic - can you grab that, the systemd unit file /var/lib/systemd/etcd.service   and the defaults environment file   /etc/defaults/etcd
<lazyPower> er
<lazyPower> sorry /lib/systemd/service/etcd.service
<lazyPower> i clearly botched the systemd unit file location. herp derp
<Zic> /lib/systemd/system/etcd.service?
<Zic> because /lib/systemd/service does not exist :)
<lazyPower> correct
<Zic> 'k
<Zic> http://paste.ubuntu.com/23876230/
<Zic> etcd.service date of Dec 18th Jan 16th, if it can help
<Zic> oops, missing copy/paste
<Zic> etcd.service is Dec 18th and /etc/default/etcd is Jan 16th
<lazyPower> ok these unit files appear to be in order. We found some issues that also look related to the core problem regarding flannel not actually running on the units
<lazyPower> it failed contacting etcd
<mbruzek> Zic are you able to hangout with us for a debug session?
<Zic> so sorry but I can't, my wife and my child will kill me if I run into a debugging-session with audio :/ but really appreciate your kindness, thanks
<Zic> I'm out of the office right now but I can do some IRC discreetly :)
<Zic> lazyPower: yes, we already discussed this, but I did `systemctl start flannel` on every rebooted kubernetes-worker
<Zic> or maybe the window while Flannel is not running causes the problem?
<Zic> (and I also did a `juju resolved` on the flannel unit which was in error)
<Zic> as you taught me :)
<lazyPower> Zic - seems like flannel is having an issue contacting etcd per the debug output from kubernetes-master
<lazyPower> which in turn is causing the kubernetes api to not be available to pods, which is causing the pod crashloop
<mbruzek> Zic: Can you pastebin the /var/run/flannel/subnet.env also ?
<Zic> hmm, I remembered to start the flannel service after every kubernetes-*worker* reboot
<Zic> not on master
<Zic> (as Juju doesn't tell me anything is on error after master reboots)
<mbruzek> Zic Were the flannel services not autostarting?
<Zic> yes, on worker
<ryebot> fwiw, that issue was fixed in 1.5.2
<Zic> nice, noted
<Zic> I will plan an upgrade if I'm able to recover from this crash
<mbruzek> Zic: Can you get the /var/run/flannel/subnet.env  file for us?
<Zic> oh sorry, I missed your message
<Zic> http://paste.ubuntu.com/23876307/
<ryebot> Thanks, Zic
<Zic> lazyPower: at this point, you can tell me the truth, do you think I will be able to recover from this crash? :D not so important because it's not in prod yet, and it's easy to redeploy from scratch
<Zic> but to know of what mu monday will be made :)
<Zic> s/mu/my/
<lazyPower> Zic - we have some ideas, but nothing definitive for the root cause so its hard to point at what  a fix would be without access to real time debug.
<lazyPower> Zic - we're trapped in a meeting atm thats starting to wind down, but we've been scrubbing through the logs you sent all morning, it all seems to point back to flannel + etcd as the core of the issue
<lazyPower> mbruzek - any ideas left to try before we call it DOA?
<Zic> don't hesitate to ping me for more info, I'm @home but lurking at IRC as usual
<lazyPower> Zic - will do. just pending feedback from the remainder of the team actively cycling on the issue.
<lazyPower> i did the same operations you outlined on my home cluster running 1.5.1 and i got the intermediary flannel connection issue
<lazyPower> it resolved once i restarted the master however
<Zic> I rebooted in this order : juju controller, kube-api-loadbalancer+easyrsa, kubernetes-master, kubernetes-worker
<lazyPower> That all sounds correct to me in terms of ordering
<Zic> I haven't tried to reboot anything since then
<lazyPower> would you mind terribly trying to reboot the kubernetes-master unit one last time to see if it "unsticks" the error?
<Zic> but I can if it's needed
<Zic> yeah, I can
<Zic> the 3 ones?
<Zic> one by one?
<lazyPower> Zic - i would pick one, and restart it yes. identified as the leader
<lazyPower> start there, and lets see what results we get back from that single reboot
<lazyPower> if it looks promising, then cycle one by one the other two nodes
<lazyPower> s/nodes/units/
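The rolling restart lazyPower outlines can be sketched as a dry run (unit names are assumed, not confirmed in the log; `echo` prints each step instead of running it against a live controller):

```shell
# Dry run of a one-at-a-time master reboot; drop the `echo`s to run for real.
for unit in kubernetes-master/0 kubernetes-master/1 kubernetes-master/2; do
    echo juju ssh "$unit" -- sudo reboot
    echo "# check that 'juju status' settles before moving to the next unit"
done
```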
<Zic> reboot launched on the active master
<lazyPower> Zic - i need to cycle into another role at the moment, but i'm leaving you in the very capable hands of mbruzek, ryebot, and Cynerva - they're going to keep the stream alive and ask for details about post-reboot
<ryebot> Ready and waiting, Zic :)
<Zic> oki, thanks anyway lazyPower for your great help :)
<lazyPower> Zic - no, thank YOU for the patience during this debugging session. I know it's unnerving
<Zic> ryebot: o/, reboot finished, do I start the flannel.service?
<lazyPower> if we can fix it we'd like to do so
<ryebot> Zic: yes, please
<mbruzek> Zic: Yes
<Zic> started, systemctl status seems correct
<ryebot> great, and /var/run/flannel/subnet.env exists?
<lazyPower> mbruzek ryebot - the flanneld unit not being up before kube-apiserver/scheduler/controller-manager might be a bigger portion of the error set as well.
<lazyPower> during bootstrap that happens after flanneld has indicated its running and available
<stormmore> Hey lazyPower just out of curiosity, do you know why the k8s bundle adds a flannel connection to the k8s master that takes a full /24? seems like a bit of a waste to me
<lazyPower> stormmore - expedience, that seems like an area we can optimize
<Zic> http://paste.ubuntu.com/23876577/
<ryebot> lazyPower ack, we'll investigate
<Zic> ryebot: ^
<ryebot> Zic great, thanks
<Zic> do I mark the flannel/0 unit as "resolved" with juju cli?
<Zic> it's in error atm
<stormmore> lazyPower - seems like it, still not a deal breaker for me ;-)
<ryebot> Zic, yes, please
<Zic> done, it's green
<ryebot> great
<ryebot> one sec
<lazyPower> stormmore glad to hear it :) As you can see per the channel logs, we take deployments seriously and value all feedback. keep it comin. if you'd like to file a bug against github.com/kubernetes/kubernetes regarding the master service CIDR range we can angle to get it on the roadmap in the future.
<mbruzek> Zic: Can you also reboot the other masters and start flanneld as well.
<Zic> ok
<Zic> ok for the 2nd master
<Zic> also for the 3rd one
<ryebot> okay, and you resolved the errors?
<Zic> yep
<mbruzek> Zic can you try to create a simple pod to see if that works?
<stormmore> lazyPower this channel is one of the reasons I am driving adoption in my company of MAAS and Juju to underpin the k8s environment instead of other options
<mbruzek> Are you still in a Crash Loop back off?
<lazyPower> stormmore <3
<lazyPower> we appreciate you too
<mbruzek> Thanks stormmore!
<Zic> mbruzek: I will deploy a simple nginx pod, give me a few seconds
<ryebot> Thanks, Zic
<Zic> $ kubectl run my-nginx --image=nginx --replicas=2 --port=80
<Zic> Error from server (Forbidden): deployments.extensions "my-nginx" is forbidden: not yet ready to handle request
<Zic> same as before
<ryebot> alright, thanks, Zic, one moment
<Zic> I ran it a 2nd time, same error
<Zic> but the 3rd time the operation was executed
<Zic> so, same as earlier
<stormmore> so on a less work related topic, when is the next convention that I can try and twist the boss's arm into letting me attend?
<mbruzek> Zic what master does your kubectl point to?
<Zic> the kube-api-load-balancer
<Zic> I can test from a master directly
<mbruzek> Zic: Do you have a bundle or description of how you deployed your cluster?
<Zic> yeah : 1 machine for kube-api-loadbalancer and easyrsa (identified by mth-k8slb-01 in my infra), 3 machines for kubernetes-master (mth-k8smaster-0[123]), 8 kubernetes-worker (mth-k8s-0[123], mth-k8svitess-0[12], mth-k8sa-0[123])
<Zic> and one Juju controller of course : mth-k8sjuju-01
<mbruzek> Zic: Can you pastebin juju status?
<Zic> of course
<Zic> http://paste.ubuntu.com/23876663/
<lazyPower> mbruzek - one thought just occured to me as well, if Zic  hasn't updated to our 1.5.2 release, the api-lb stil has the proxy buffer issue which reared its ugly head on delete/put requests as well.
<lazyPower> but not certain if thats pertinent
<Zic> (oh, I forgot to mention mth-k8setcd-0[12345] which run the etcd parts)
<Zic> mbruzek: ^
<Zic> all was added manually ("cloud-manual" provisioning in Juju, even the AWS EC2 instances)
<Zic> I don't use the AWS connector since I have baremetal servers and EC2 instances in the same juju controller
<mbruzek> Zic would you be able to upgrade this cluster to see if our operations code corrects this problem?
<Zic> yeah, it will be the first upgrade I conduct through Juju since the bootstrapping of this cluster :)
<Zic> what is the recommended way?
<mbruzek> We have that documented in this blog post: http://insights.ubuntu.com/2017/01/24/canonical-distribution-of-kubernetes-release-1-5-2/#how-to-upgrade
<mbruzek> Since you don't seem to be using the bundle, I would recommend the "juju upgrade-charm" steps
<mbruzek> We can walk you through it.
<Zic> I use the canonical-kubernetes but just scale etcd from 3 to 5 and master from 1 to 3 in the Juju GUI
<Zic> (and make easyrsa to be on the same machine as kube-api-loadbalancer)
<Zic> the canonical-kubernetes bundle*
<mbruzek> Zic: Ah I see, still would recommend the upgrade-charm path
<Zic> ok, I just read the how-to-upgrade section, never used the upgrade-charm one by one :(
<mbruzek> There is a first time for everything!
<Zic> :)
<Zic> so I run `juju upgrade-charm <on_every_charm_one_by_one>` ?
<mbruzek> Zic: Just the applications in the cluster, kubernetes-master, kubernetes-worker, etcd, flannel, easyrsa, and kubeapi-load-balancer
<mbruzek> The _units_ will upgrade automatically when the _application_ does
<mbruzek> So you don't need to use the /0 /1 /2 /3
<ryebot> Zic: Right, so literally as it is in those docs :)
<Zic> ok
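The sequence mbruzek describes, as a dry run over the applications named above (the ordering mirrors the conversation; `echo` prints the commands rather than running them against a live controller):

```shell
# Print one `juju upgrade-charm` per application; the units upgrade
# automatically when the application does, so no /0 /1 /2 suffixes.
for app in easyrsa kubeapi-load-balancer etcd flannel kubernetes-master kubernetes-worker; do
    echo juju upgrade-charm "$app"
done
```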
<Zic> just so I know, for the future (~yay~), is this the classic recommended way for a cluster like mine?
<Zic> so I will try to memorize this :)
<ryebot> Zic: As far as I know, for custom deployments, yes.
<ryebot> You might be able to use your own bundle without version numbers, but I'm not sure. I'll have to investigate.
<mbruzek> Zic: Technically you could export a bundle of your current system and it would be reproducible, as well as you could edit the version numbers of the charms and upgrade your cluster in one step
<mbruzek> Zic: But we are just trying to fix the cluster here, we can add automation and reproduction as following steps
<Zic> ok, so since I modified the number of units and replaced the easyrsa charm, the out-of-the-box way is via juju upgrade-charm, that's all I wanted to know; it's not the time for automation, I agree :)
<Zic> (upgrade is running btw)
<Zic> s/replace/place in the same machine of kube-api-loadbalancer/
<mbruzek> Zic: Actually you can export the deployment from the Juju GUI in one step, and it will have the machines and everything just as you have it now.  We could then copy that yaml and edit it.
<Zic> smooth, noted, I will study this after :)
<Zic> (easyrsa, kube-api-load-balancer already OK, kubernetes-master in progress, I will follow then with etcd and kubernetes-master)
<Zic> worker*
<Zic> hmm
<Zic> I had a `watch "kubectl get pods --all-namespaces"` running
<mbruzek> Zic: The documentation describes this process: https://jujucharms.com/docs/2.0/charms-bundles
<Zic> and suddenly all the pods switched to Running after the kubernetes-master upgrade
<Zic> \o/
<mbruzek> Excellent!
<Zic> let me check carefully
<lazyPower> @!#$%^@!#$%!@#$%
<lazyPower> AWESOME mbruzek  GREAT WORK!
<Zic> yeah, all is running
<mbruzek> sweet
<Zic> I continue the upgrade-charms
<lazyPower> i think this calls for a tradmarked WE DID IT!
<Zic> (I will try to schedule a new pod deployment just after)
<Zic> clap clap in any case o/
<Zic> oh maybe "clap clap" is not sounding like a STANDING APPLAUSE in English, sorry :)
<mbruzek> Zic: It translated in my head just fine
<Zic> :)
<Zic> etcd charm is upgrading
<Zic> <crossfinger>
<Zic> done for etcd, I'm ending with kubernetes-worker charm
<Zic> all upgrades are finished
<Zic> let me run a kubectl deployment
<mbruzek> Zic: Can you create a small test deployment?
<Zic> yep
<Zic> (oh btw, kubectl get ns no longer returns any of the old locked Terminating namespaces)
<Zic> it's clean now
<lazyPower> yassssss
<ryebot> great
<Zic> so, it works, but something weird though maybe normal: some Ingress pods go to CrashLoopBackOff but they seem to come back to Running atm
<Zic> yeah, they are all running now
<Zic> some strange flap
<lazyPower> if the default-http-backend pod is being re-created it'll crash loop the ingress controller until the backend stabilizes
<lazyPower> thats known and normal
<Zic> oh, so it's normal
<Zic> COOL :D
<lazyPower> phwew
<lazyPower> man, fan-tastic
<Zic> so your new version amazingly corrects everything
<Zic> and I don't know if it's tied to it, but Scheduling is much quicker
<lazyPower> thats a 1.5.2 fixup :)
<Zic> I sometimes waited up to 3 min between Scheduling and ContainerCreating
<lazyPower> plus you probably have less pressure on the etcd cluster without those large namespaces
<Zic> here it was done in 10s
<mbruzek> Zic: The upgrade process installed new things and reset the config files to what we would expect, that is why I think you are having so much success here.
<mbruzek> Zic: Is everything ok? do you have any other problems?
<Zic> I'm running some more tests but all seems OK now !!
<mbruzek> Zic: There was also a fix for LB where we turned off the proxy buffering in the 1.5.2 update.
<Zic> hmm, some Ingress stayed in CrashLoopBackOff and others are running correctly:
<Zic> http://paste.ubuntu.com/23876828/
<mbruzek> Zic it is possible that those are still being "restarted" by the operations
<mbruzek> But good information.
<Zic> hmm, kubernetes-dashboard is staying in CrashLoopBackOff, I forgot to check the kube-system namespace :/
<Zic> (kube-dns was also Crashed, but it is Running since the upgrade)
<ryebot> Zic, is your juju status all-green right now?
<Zic> yep
<ryebot> thanks
<Zic> heapster also was CrashLoopBackOff and just Running fine now
<mbruzek> Zic: so just the ingress ones are in CLBO ?
<Zic> the only crashloopbackoff now is the dashboard and some of the Ingress
<Zic> oh I anticipated the question :)
<Zic> I'm trying to relaunch the pod of kubernetes-dashboard, maybe it will help
<Zic> 2m 1s 163 {kubelet mth-k8svitess-01} Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "kubernetes-dashboard-3697905830-qv6hv_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kubernetes-dashboard-3697905830-qv6hv_kube-system(8785f143-e4cc-11e6-b87d-0050569e741e)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"
<mbruzek> Zic: Did you get to juju upgrade-charm flannel?
<Zic> no, I just realized that; I thought it was part of the kubernetes-worker charm
<Zic> can I run its upgrade now?
<mbruzek> Zic: Yes please
<Zic> done
<mbruzek> Zic: Can you please pastebin `kubectl logs nginx-ingress-controller-jlxr5` ?
<mbruzek> We want to see why that is in a CLBO
<Zic> http://paste.ubuntu.com/23876915/
<Zic> hmm
<Zic> I didn't have this error before
<mbruzek> Are they still in crash loop back off?
<Zic> yep
<Zic> oh
<Zic> dashboard is running
<Zic> and 2 Ingress are back in Running
<ryebot> 6 total ingress running now?
<Zic> 2 are still in CLBO but I think they will come back
<Zic> hop, all Running
<ryebot> 8 total running ingress?
<Zic> yep
<ryebot> great
<Zic> all is Running now, and I'm checking --all-namespaces
<Zic> (I forgot kube-system earlier :/)
<Zic> dashboard is working effectively
<Zic> hmm, 2 Ingress are returning in CLBO atm
<mbruzek> Zic: Send us pastebin logs for those
<Zic> http://paste.ubuntu.com/23876940/
<Zic> I have another log from the RC of Ingress:
<Zic> Liveness probe failed: Get http://10.52.128.99:10254/healthz: dial tcp 10.52.128.99:10254: getsockopt: connection refused
<ryebot> thanks, looking
<Zic> so the only CLBO part now is that 2 Ingress:
<Zic> default       nginx-ingress-controller-vg9qc            0/1       CrashLoopBackOff   17         13h
<Zic> default       nginx-ingress-controller-w1dhl            0/1       CrashLoopBackOff   92         10d
<lazyPower> Zic - which namespace is this in?
<Zic> default, it's the builtin nginx-ingress of the canonical-kubernetes bundle
<lazyPower> ack, ok
<Zic> (I don't add any Ingress myself)
<Zic> ingress controller* to use the right terminology
<lazyPower> can you kubectl describe po  nginx-ingress-controller-vg9qc | pastebinit
<Zic> (Actually, I add Ingress, but not Ingress controller)
<Zic> http://paste.ubuntu.com/23876964/
<ryebot> thanks
<lazyPower> Zic - thats totally fine
<lazyPower> hmm nothing is leaping out at me from the pod description.... it did say it was failing health checks
<lazyPower> from earlier pastes it looked like it was running out of file descriptors
<Zic> in fact, my Ingress serves my testing website fine
<Zic> but these two controllers stay in CLBO :/
<Zic> the 6 others are Running fine
<cholcombe> is it possible to tell juju to give my instance 2 ip addrs when it starts them?
<lazyPower> cholcombe - you have to sacrifice 40tb of data and do the chant of "conjure-man-rah"
<Zic> lazyPower: if I reboot (yeah, it's a bit brutal) the nodes which host these two CLBO pods, maybe they will come back as Running?
<cholcombe> man i need to brush up on that chant haha
<lazyPower> in short, i dont know but i think extra-bindings and spaces is what would introduce that functionality
<Zic> lazyPower: it's what I did for kubernetes-dashboard
<lazyPower> s/know/so/
<lazyPower> no no, nevermind that last edit, it was right
<lazyPower> Zic - its worth a shot.
<stormmore> hmmm that is curious for an "idle" cluster, one of my nodes just dropped 4GB memory usage and increased network tx for no obvious reason!
<Zic> lazyPower: it seems to work, weird but I like it xD
<Zic> I will wait few minutes before confirming
<lazyPower> Zic - i'm going to blame gremlins for that one
<Zic> default       nginx-ingress-controller-7qcsn            0/1       CrashLoopBackOff   10         13h       10.52.128.135   mth-k8sa-01
<Zic> default       nginx-ingress-controller-lx6kt            0/1       CrashLoopBackOff   16         13h       10.52.128.253   mth-k8sa-02
<Zic> it didn't work for long :)
<ryebot> Zic, another option is destroying them; the charm should launch new ones to replace them on the next update (<5 mins)
<Zic> with the same error: F0127 21:11:44.648841       1 main.go:121] no service with name default/default-http-backend found: the server has asked for the client to provide credentials (get services default-http-backend)
<Zic> ryebot: thanks, I will try
<Zic> it pops again and they are Running
<Zic> let a few minutes pass to confirm :)
<ryebot> +1
<Zic> CLBO for one of them
<mbruzek> Zic: Please try one more thing
<mbruzek> juju config kubernetes-worker ingress=false
<mbruzek> *wait* for the pods to terminate.
<Zic> the two newly brought up ingress-controllers are now in CLBO again
<Zic> mbruzek: ok
<mbruzek> Then you should be able to juju config kubernetes-worker ingress=true
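The toggle mbruzek describes, sketched as a dry run (the watch step in the middle is an assumption about how you'd confirm the pods are gone; `echo` prints the steps instead of executing them):

```shell
# Dry run of the ingress toggle: tear down, wait, bring back up.
echo "juju config kubernetes-worker ingress=false"
echo "kubectl get pods --watch   # wait for the nginx-ingress-controller pods to terminate"
echo "juju config kubernetes-worker ingress=true"
```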
<Zic> mbruzek: (it reminds me that at the end, I need to shut off the debug I enabled earlier on the kubernetes-master?)
<Zic> all pods are terminated
<Zic> reswitched to true
<Zic> all is Running, let's wait some minutes
<Zic> they are all in CLBO now xD
<Zic> but they switch back to Running
<lazyPower> it will register as running when they initially come up, but have to pass the healthcheck
<lazyPower> Zic - can you juju run --application kubernetes-worker "lsof | wc -l"   and pastebin the output of that juju run command?
<Zic> http://paste.ubuntu.com/23877086/
<Zic> All Ingress are flapping between CLBO and Running now
<lazyPower> i'm not positive, but k8s worker/2 and k8s-worker/1 seem to have a ton of open file descriptors
<lazyPower> looks like something is leaking file descriptors :|
<lazyPower> which would explain the crash loop backoff on a segment of the ingress controllers vs the handfull that succeed
<lazyPower> Zic - at this point we'll need a bug about this, and can look into it further, but we're not in a position to recommend a fix at this time.
<lazyPower> we have encountered this before but the last patch that landed should have both a) enlarged the file descriptor pool, and b) hopefully corrected that. we might be behind a patch release on the ingress manifest that fixed this
<lazyPower> i'll take a look in a bit, but i think we're 1:1
<Zic> in fact the Ingress is working even though the Ingress controllers are flapping
<lazyPower> do you have any ingress controllers listed as running that are not in CLB?
<Zic> strange
<Zic> yeah, since they flap between the two states, not all at the same time
<lazyPower> if thats the case, kubernetes will do its best to use a functional route through the ingress API
<lazyPower> as it round-robin distributes them. you'll likely find some requests that get dropped if you run something like apache-bench or bees-with-machine-guns against it
<lazyPower> but typical single user testing likely looks fine
<ryebot> Zic, one last thing - can you pastebin /lib/systemd/system/kubelet.service on a kubernetes-worker node?
<Zic> http://paste.ubuntu.com/23877117/
<lazyPower> juju run --application kubernetes-worker "sysctl fs.file-nr" | pastebinit -- as well would be helpful in ensuring it is indeed related to file descriptors. this will list the number of allocated file handles, the number of unused-but-allocated file handles, and the system wide max number of file handles
<Zic> http://paste.ubuntu.com/23877122/
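For reference, the three fields lazyPower describes can be pulled apart like this (the sample line is hypothetical, modeled on the kind of values seen in the pastes; on a live unit you would feed in the real `sysctl fs.file-nr` output instead):

```shell
# fs.file-nr reports: allocated handles, unused-but-allocated handles, system max.
line="fs.file-nr = 10976 0 6573449"            # hypothetical sample value
set -- $(printf '%s' "$line" | cut -d= -f2)    # split the three fields
echo "allocated=$1 free=$2 max=$3"
```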
<lazyPower> well those fd numbers went way down
<lazyPower> however there are very different configurations listed there
<lazyPower> where some have 400409 and some have 6573449 listed as max
<ryebot> They might be his baremetal/ec2 machines?
<lazyPower> ah, you are correct
<lazyPower> different substrates
<lazyPower> different rules
<Zic> yeah, only mth-k8s-* and mth-k8svitess-* are robust physical servers
<lazyPower> all the units appear to be well within bounds of those numbers though
<Zic> mth-k8sa-* are EC2 instances
<lazyPower> so maybe its not FD Leakage
<Zic> http://paste.ubuntu.com/23877130/ <= and this error about credentials reminded me of some bad hours earlier today
<Zic> (which was the same error returned by some kubectl commands and the dashboard earlier)
<Zic> 21:28 is @UTC
<Zic> so few minutes ago
<mbruzek> Zic: Can you please file a bug about this problem on the kubernetes github issue tracker? https://github.com/kubernetes/kubernetes/issues
<mbruzek> Maybe someone else knows why the ingress would be in CrashLoopBackOff.
<mbruzek> Please list if you added any ingress things and what manifest you used for that.
<Zic> hmm, about this, maybe I can delete my Ingress
<mbruzek> Put Juju in the title and your best estimation on how to reproduce these errors.
<mbruzek> Zic: The ingress=false would have deleted them no?
<mbruzek> Did you put in your own _different_ ingress objects?
<Zic> if you talk about Ingress controller, no, I stayed with the default nginx-ingress-controller of the charms bundle
<Zic> but I create two Ingress yep
<lazyPower> ingress objects are related to, and depend on the ingress controller, but have very little to do with ingress controller operations
<lazyPower> unless i'm misinformed
<Zic> ok, so I promise you I'll file a bug tomorrow morning, it's late now and if I want to file a well-written/described bug I prefer to do it seriously :) I will post the Issue link here tomorrow
<Zic> the step-to-reproduce part will be the most difficult part
<mbruzek> Zic yes
<mbruzek> Zic, I just don't know how to reproduce this
<mbruzek> I realize it is late for you, sorry about the problems.
<Zic> no worries, you were all wonderful, helping me and focusing on this problem for hours today, I can at least pursue the debugging on IRC even if I'm out of the office!
<Zic> thanks a lot mbruzek, ryebot, lazyPower, jcastro
<ryebot> Happy to help, Zic!
<Zic> I will pop again here tomorrow about the issue I will report
<ryebot> thanks, feel free to me ping when you post it, I'd like to track it
<ryebot> ping me* :)
<Zic> huh
<Zic> I deleted and recreated my Ingress, and the controllers have now been Running without flapping for 6 min
<Zic> engineering is SO 2016, magic is the new way
<Zic> :|
<ryebot> lol
<ryebot> Zic, when you say you deleted your ingress, can you provide the command you executed?
<Zic> kubectl delete ing <my_two_ingress>
<Zic> (one was exposing a nginx-deployment-test with nodeSelector on machine labelled at our datacenter/Paris (baremetal servers))
<Zic> (and one another on EC2 only)
<Zic> 8min without flapping, lol
<ryebot> I wonder if there was a conflict with our automatic ingress scaling. Shouldn't be, but I should probably make sure.
<Zic> I'll try to keep all traces to file the bug anyway tomorrow
<ryebot> Hmm, you said you recreated them, too, so I guess that can't be it.
<ryebot> I guess you're right... magic!
<Zic> and add how I resolved it, if it doesn't flap by tomorrow
<ryebot> +1 sounds good, thanks Zic.
<Zic> ryebot: yeah, and it's really simple Ingress, with only one rule on the hostname
<Zic> like baremetal.ndd.com for the first, ec2.ndd.com for the second
<Zic> 11min without flapping \o/
<ryebot> \o/
 * Zic will buy some magic powder before sleeping
<ryebot> heh
<Zic> hmm, I discovered some more information in kubectl describe ing
<Zic> all the operation that the controller did
<Zic> I didn't know where to find those
<Zic> I have a lot of: 42m 42m 1 {nginx-ingress-controller} Warning UPDATE error: Operation cannot be fulfilled on ingresses.extensions "nginx-aws": the object has been modified; please apply your changes to the latest version and try again
<Zic> during the CLBO period
<Zic> and now it's working, juste MAPPING action
<Zic> just*
<Zic> (no flapping for 38min :p I'm going to bed, g'night and thanks one more time to all of you :))
<lazyPower> Zic : excellent news. have a good sleep and enjoy your weekend o/
<Zic> I will ping back with the GitHub issue tomorrow o/
<stormmore> wish there was a good recommendation guide for SDN hardware :-/
<stormmore> (still think running some of the maas services on an sdn switch would rock!)
<stormmore> since it is Friday and I am not doing anything to affect my cluster but I want to learn more about juju, is it possible to run your own charm "store"?
<lazyPower> stormmore - not at this time
<stormmore> lazyPower another "future work" thing then :)
<lazyPower> stormmore - you can keep a local charm repository which is just files on disk, but as far as running the actual store display + backend service(s), thats not available for an on-premise solution
<bdx> yo whats up with the charmstore ?
<bdx> ERROR can't upload resource
<bdx> will we ever fix this?
<bdx> killin' me here
#juju 2017-01-28
<blahdeblah> BlackDex: hi - best to keep charm Qs in the channel here.  It's a team effort and someone else may be able to help you in a more timely fashion due to time zone differences. :-)
<stokachu> bdx, is that the proxy error message again?
<Zic> re here
<Zic> all my Ingress are fine since yesterday :)
<Zic> lazyPower: do I need to disable something to shut the debug off from the `juju run-action debug kubernetes-master/0` command of yesterday?
<Zic> hmm, I rebooted all the kubernetes-worker to test the resilience, all Ingress controller are CLBO, kube-dns and kubernetes-dashboard also :/
<Zic> http://paste.ubuntu.com/23879175/
<Zic> so I identified the step-to-reproduce in my cluster: rebooting the node which hosts the default-http-backend-0wt64 pod causes this kind of error
<Zic> http://paste.ubuntu.com/23879186/ <= kube-dns logs
<Zic> Killing container with docker id e5f551e889bc: pod "kube-dns-3216771805-w2853_kube-system(854c0971-e4cc-11e6-b87d-0050569e741e)" container "dnsmasq" is unhealthy, it will be killed and re-created
<Zic> Liveness probe failed: HTTP probe failed with statuscode: 503
<Zic> if you have any reviews before I file the issue on Kubernetes' GitHub
<marcoceppi_> Zic: Most of the k8s team is asleep, unfortunately
<marcoceppi_> hell, I don't even know why I'm awake
<junaidali> Hi marcoceppi , you still there? :)
<marcoceppi> junaidali: o/ yes
<junaidali> marcoceppi: have you got time to look into the issue?
<marcoceppi> junaidali: which issue?
<junaidali> aws credentials
<marcoceppi> junaidali: oh, jeeze, ofc
<marcoceppi> junaidali: give me 10 mins and I'll be back with some more info
<Zic> marcoceppi: no problem, and it's the weekend btw :)
<ryebot> Zic: The debug action is a one-shot thing, no need to shut it off. Did your ingress controllers eventually come back up?
<Zic> ryebot: nope, and they have been stuck in CLBO since this morning
<Zic> same for kube-dns and kubernetes-dashboard
<ryebot> Zic: Sorry about that. Please move forward with your bug. I'll see if I can repro your issue soon.
<ryebot> Zic: In the meantime, you might try destroying them all and seeing if they come back up (our charm should reinstate them automatically).
<Zic> ryebot: about Ingress controller or also the kube-dns and kubernetes-dashboard pods?
<Zic> ryebot: (I did nothing for now) after 109 restarts (seen in the Restart column of a `kubectl get pods -o wide --all-namespaces`), all my Ingress and kubernetes-dashboard/kube-dns are running
<Zic> but I suspect that if I reboot some nodes again they will stay in CLBO for hours :/
<Zic> I'm preparing the issue for GitHub
<Zic> https://github.com/kubernetes/kubernetes/issues/40648 (cc ryebot lazyPower)
<ryebot> Zic: Excellent, thanks! We'll be tracking this.
<Zic> sadly the default-http-backend pod has no logs :/
<Zic> (kubectl logs on it return nothing)
<bdx> having issues with `juju attach` ... it doesn't seem my resource ever makes it to the charm
<bdx> even though uploading to the controller is successful
<bdx> my charm just doesn't get the resource
<bdx> I wonder if it has something to do with the hosted controller
<bdx> in all honesty .... I can't upload resources to the charmstore, can't upload to the controller .... I guess resources are just broken right now?
<bdx> thats the impression all my new users have at least
#juju 2017-01-29
<sdfsfg> https://www.youtube.com/watch?v=njul5sUC8rg
<BlackDex> blahdeblah: I did that several times, also posted a bug report, there is no answer; since I saw you did some activity, I hoped you could at least help me a bit.
<blahdeblah> BlackDex: What's the bug number?
#juju 2018-01-22
<Utking> Hi!
<Utking> I'm having a hard time with juju
<Utking> Getting an error : current model for controller not found
<Utking> any fix for it? i've tried google already ^^
<pmatulis> Utking, can you provide more info? like how you created the controller and any models. also what command is giving that error?
<Utking> Sure!
<Utking> juju status Openstackmaas
<Utking> gives the error
<Utking> Openstackmaas is the name of the controller :)
<Utking> also stuff like juju sync-tools gives the same error
<Utking> juju status --model admin/default works though
<Stormmore> so I am back to architecting environments, woot! Trying to determine how I want to build my build env
<Stormmore> no to figure out how to get it to actually give me a string and not a list of ascii codes!
<Stormmore> now*
<kwmonroe> Cynerva: you were the last objector to https://github.com/juju-solutions/layer-filebeat/pull/32.  with the decorator swizzle, any objection to me merging that?
<Cynerva> kwmonroe: no objection here, +1 to merge. thanks for following up on that
<kwmonroe> np
<Stormmore> My odd question of the day, does anyone know how CDK would handle workers being stopped / started depending on load?
<jose-phillips> hi anyone know how to setup manually the lxc network with juju?
<jose-phillips> i just want to manually assign the ip address of each container
<jose-phillips> how can i do that?
<knobby> Stormmore: I don't understand your question. Are you asking about automatic scaling of machines in CDK?
<Stormmore> knobby, short answer - yes and no
<Stormmore> knobby, long answer, I want to build a dynamic build environment where the underlying machine can be "powered off" during quiet periods and spun up when needed
<Stormmore> I am thinking of using jenkins which can handle it appropriately but I want my build slaves to be containers not machines
<knobby> Stormmore: you can certainly instruct juju to add and remove workers whenever you want, but I'm not sure about any sort to automatic scaling. It might be a good idea to have the jenkins slave run as a daemonset with some tag on the kubernetes cluster and then create machines via juju and add that tag via kubectl whenever you want to grow. Jenkins would then have to be cool with slaves coming and going.
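knobby's idea could be sketched roughly as the manifest below. All names here (the `jenkins-slave` DaemonSet, the `jenkins/jnlp-slave` image, the `role: build` node label) are illustrative assumptions, not anything shipped by the CDK charms:

```yaml
# Hypothetical sketch: run the Jenkins slave as a DaemonSet that only lands
# on nodes carrying a label you add/remove via kubectl.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: jenkins-slave
spec:
  selector:
    matchLabels:
      app: jenkins-slave
  template:
    metadata:
      labels:
        app: jenkins-slave
    spec:
      nodeSelector:
        role: build           # kubectl label node <node> role=build
      containers:
      - name: slave
        image: jenkins/jnlp-slave   # illustrative image name
```

To grow, you would `juju add-unit kubernetes-worker`, then `kubectl label node <new-node> role=build`; the DaemonSet schedules a slave there automatically, and Jenkins has to tolerate slaves coming and going, as knobby says.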
#juju 2018-01-23
<jose-phillips> hi, someone know how to persist the changes to /etc/network/interfaces after adding a second network interface?
<jose-phillips> or keep the configuration on /etc/network/interfaces on  a lxc container deployed by juju
<jose-phillips> or change the dns
<kjackal> hi rick_h, I got a question you might be able to help with. On a network-restricted deployment we would like to have in the model config no-proxy a set of IPs of a few subnets. These IPs are propagated to the snapd configuration and systemd is failing to load it. https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/413
<kjackal> Any ideas how we could work around this?
<pmatulis> kjackal, the goal is to get the snapd system to have no_proxy declared in its /etc/environment file
<kjackal_> pmatulis: Mostly yes. /etc/environment should have the same line limit, but I hear /etc/default/profiles/whatever.sh does not
<kjackal_> however, i think juju is updating the snapd configuration without giving me an option.
<kjackal_> probably related: https://bugs.launchpad.net/juju/+bug/1639494
<mup> Bug #1639494: juju model-default no-proxy setting invalid for CIDR (172.16.0.0/16) <juju:New> <https://launchpad.net/bugs/1639494>
#juju 2018-01-24
<jose-phillips> question
<jose-phillips> I have a charm locally and I need to redeploy it with the changes I made locally
<jose-phillips> but upgrade-charm --path ./path and resolved don't upgrade the charm on the destination
<hloeung> jose-phillips: maybe upgrade-charm --switch ?
<manadart> window
<manadart> Oops :)
<rick_h> kwmonroe: you feeling up to the Juju Show today?
<kwmonroe> sure thing rick_h.  i've got some conjure-up/k8s news to share.
<rick_h> kwmonroe: woot
<tychicus> Hi, I have an issue where my controller is stuck in an error loop, I attempted to destroy kibana, elasticsearch, filebeat, packetbeat, and top beat from the juju-gui
<tychicus> the machine-0.log shows an endless stream of the following type of messages
<tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#filebeat/16#charm": <nil>
<tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#topbeat/7#charm": <nil>
<tychicus> 2018-01-24 16:01:30 ERROR juju.state unit.go:339 cannot delete history for unit "u#filebeat/13#charm": <nil>
<tychicus> 2018-01-24 16:01:35 ERROR juju.state unit.go:339 cannot delete history for unit "u#packetbeat/13#charm": <nil>
<tychicus> followed by
<tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
<tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
<tychicus> 2018-01-24 16:04:59 ERROR juju.rpc server.go:510 error writing response: write tcp 10.110.0.111:17070->10.110.0.117:34072: write: broken pipe
<tychicus> mongodb is maxing out all available processors, is there anything I can do to reset my controller to a healthy state
<mlbiam> hello, i'm trying to configure a canonical k8s distro with oidc.  I found this medium blog (https://medium.com/@tvansteenburgh/the-canonical-distribution-of-kubernetes-weekly-development-summary-49274b78b5c2) that says it's there but we can't find where we configure the api server flags
<kwmonroe> mlbiam: i can't find a reference to that oidc interface in the k8s charms or bundles, so i'm guessing that was more of a foundation that is meant to be exploited later.  i do, however, see a config option for api server flags.  you'd set them like: juju config kubernetes-master api-extra-args="foo=bar ham=burger"
<mlbiam> kwmonroe - awesome, that confirms what we were thinking
<mlbiam> here's my next question, whats the right way to import our OIDC ca cert?
<kwmonroe> mlbiam: i'm not really sure because i typically let the easyrsa charm handle all my CA/cert needs.  Cynerva or ryebot, do you guys know if/how/where you might inject your own ca cert?
<mlbiam> this is only the CA for our oidc provider, we're not changing any of kubernetes internal CA information
<mlbiam> (we're trying to figure out how to set the --oidc-ca-file parameter on the API server)
<ryebot> ah
<mlbiam> or even better get juju to install our CA cert into the trust store on the api server
<ryebot> mlbiam if that's all you need, you can set it with the api-extra-args config option on kubernetes-master
<mlbiam> right, but where do i put the cert so its accessible by the API server?
<ryebot> of course, you'd need to maintain it on each master node yourself, unfortunately
<ryebot> mlbiam: anywhere accessible by root should do; apiserver is run by root
<mlbiam> ok, so juju won't over write it?  i see that the easyrsa files are in /root/cdk. would that be as good a place as any?
<Cynerva> it needs to be in a non-hidden folder in /root 'cause it's a confined snap with the home interface
<mlbiam> got it
<mlbiam> does juju have any way of updating the os cert store?
<mlbiam> (ie if i want to distribute a cert to be trusted by multiple servers)
<mlbiam> that way i could skip that parameter entirely
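Putting the thread together, a rough sketch of the workflow discussed above. The file names and paths are assumptions based on the conversation (Cynerva's point about the confined snap suggests a non-hidden directory under /root such as /root/cdk), not something verified against the charm:

```shell
# Sketch, with assumed paths: copy the OIDC provider's CA cert onto a master,
# place it somewhere the confined apiserver snap can read, and point the
# apiserver at it. Repeat the copy on every master, per ryebot's caveat.
juju scp ./oidc-ca.pem kubernetes-master/0:/tmp/oidc-ca.pem
juju ssh kubernetes-master/0 'sudo mkdir -p /root/cdk && sudo mv /tmp/oidc-ca.pem /root/cdk/oidc-ca.pem'

# api-extra-args takes space-separated key=value pairs (see kwmonroe's example)
juju config kubernetes-master api-extra-args="oidc-ca-file=/root/cdk/oidc-ca.pem"
```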
<ybaumy> hi. is there some good documentation on how to use juju and kubernetes persistent storage somewhere
<ybaumy> i want to setup nfs shares
<ybaumy> to use
<rick_h> kwmonroe: do you have something for ybaumy ? ^
<ybaumy> rick_h: im still trying to convince ppl that redhat openshift platform, well, let's put it that way, falls short of expectations
<ybaumy> rick_h: but its a long and hard way
<rick_h> ybaumy: heh, always is more work to change minds than to do the thing
<ybaumy> rick_h: you wont believe .. i did already tech demos and stuff and presentations ... compared both ways to roll out clusters across multicloud providers
<ybaumy> rick_h: they still think because its redhat it cant be that bad
<ybaumy> ;)
 * rick_h whistles to the wind
<ybaumy> same goes for SuSe CaaS platform
<ybaumy> if you guys have a piece of documentation for me i can use, it would be nice if you sent it as a PM. i have to eat now and drink beer
<knobby> ybaumy: Once you get cdk installed you can just use nfs persistent volumes and persistent volume claims. Are you trying to get juju to manage your nfs or something?
<ybaumy> knobby: yes thats what im trying to accomplish
<ybaumy> knobby: juju should handle the nfs shares if that is even possible
<ybaumy> else show me a way around
<ybaumy> to scale out
<ybaumy> for workers
<ybaumy> im talking kubernetes
<knobby> I'm not sure I follow. Once you set up a pv and pvc you just run pods. There isn't any work outside of getting the nfs server up and reachable by the cluster.
<ybaumy> but how do i do that initialy when setting up the cluster
<knobby> are you trying to put the nfs server on the cluster?
<ybaumy> maybe i missed the options
<knobby> I just set the pvc in the deployment description and specify the mount point inside the pod and magic happens. There isn't anything to do.
<knobby> it's all in k8s land
<ybaumy> ok maybe im overcomplicating this
<ybaumy> let me try that tonight
<ybaumy> i have an application that needs a nfs volume across all pods so i thought
<ybaumy> i could set up the cluster with that nfs volume from the start
<ybaumy> but then again i dont have the pods when adding a unit
<ybaumy> though im stupid
<knobby> once you get kubectl you just add the pv with some yaml and then make a pvc for it and then specify the pvc in the deployment. I can try to find a tutorial for that if you want.
<knobby> it's not related to juju at all. Juju doesn't know about your nfs or the workloads on k8s
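The flow knobby describes could look roughly like the manifest below. The server address, export path, sizes, and names are placeholders; the NFS server itself lives outside of juju and k8s:

```yaml
# Minimal NFS PV + PVC sketch (placeholder values throughout).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany             # shared across all pods, as ybaumy wants
  nfs:
    server: 10.0.0.5          # placeholder: your NFS server's address
    path: /exports/shared     # placeholder: your export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
```

The deployment then references `claimName: shared-nfs-claim` in its `volumes:` section and mounts it in each container; juju never sees any of this.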
<ybaumy> knobby: thats what i understand now
<ybaumy> knobby: sorry for asking stupid questions
<knobby> if you have trouble with it, ybaumy, I have a bare metal k8s with nfs beside me and I'm happy to answer questions.
<knobby> ybaumy: not at all. This stuff is complex and it is so easy to get lost in the trees.
<ybaumy> knobby: if i cant get it to work i would be happy to get back to you
<ybaumy> let me ask you guys one last question about scaling pods across vsphere. one colleague of mine says we should put one pod on one worker on one VM
<ybaumy> for performance reasons in vsphere
<ybaumy> what do you think of that
<ybaumy> small VM's with only one pod ?
<ybaumy> why do i need kubernetes then i dont get it
<ybaumy> he says vmware guys recommend it that way
<knobby> I agree with you. The point of k8s is to help utilize those machines without having to pick your VM size to match your workload. K8s will move things around and make sure things fit while giving guaranteed resources to a pod.
<ybaumy> knobby: thats what my picture is
<ybaumy> im trying to understand .. and im falling short
<ybaumy> on friday some vmware guy will be with us in a meeting i hope he can clarify this. i would like to understand it before but ...
<knobby> I can't help you there. You could certainly put a single pod per node, but it seems like an odd choice. You'd spend your life adjusting nodes to scale out.
<ybaumy> and down... its like a neverending stream of create/delete VM's
<ybaumy> well i hope he will shed light on this.
<ybaumy> let me check my beer status and eat something ... then i try the nfs stuff
<ybaumy> bbl
<rick_h> 34min until the Juju Show!!!!! get ready.
<rick_h> kwmonroe: hml agprado magicaltrout bdx and anyone else that's been holding their breath ^
<hml> w00t!
<bdx> oh yeaaa
 * rick_h needs to refill this glass pre-show
<agprado> rick_h: yep! holding my breath! for my first jujushow!
<agprado> rick_h, I can't see the hangout link in the calendar for the juju show
<rick_h> https://hangouts.google.com/hangouts/_/ygz6alasjvg4lf3fuurxu4hprye for joining the call and https://www.youtube.com/watch?v=K9x3Xhlnl1s for watching the stream
<rick_h> bdx: you coming this week?
<kwmonroe> the cloudinit bug rick_h is referring to is https://bugs.launchpad.net/juju/+bug/1535891
<mup> Bug #1535891: Feature request: Custom/user definable cloud-init user-data <cpe-onsite> <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1535891>
<EdS> hi Juju people :)
<EdS> I have just run into a failing update charm operation
<EdS> I have connected to the affected unit and tailing the log of the juju unit shows some python stuff about "INFO upgrade-charm TypeError: 'NoneType' object is not subscriptable"
<EdS> I'm trying to update a production k8s cluster and this is a bit of a surprise
<EdS> the units affected seem to be sitting in a loop trying to update and repeatedly failing at the same step.
<EdS> any ideas what I should do?
<bdx> `juju model-config | grep proxy` - why are these proxy configs not applied to the containers?
<bdx> per conversation in juju show
<bdx> shouldn't a container and a machine get the same proxy config if defined at the model level?
<rick_h> bdx: so they should be on containers but apt caches and such aren't in there I believe
<kwmonroe> monitoring a k8s cluster: https://medium.com/@kwmonroe/monitor-your-kubernetes-cluster-a856d2603ec3
<rick_h> bdx: http/https proxies are
<bdx> the apt proxies aren't then?
<rick_h> bdx: so I think it's less proxy but more cache
<rick_h> bdx: or custom repos
<bdx> ahh
<bdx> I see
<EdS> any ideas how I kick it to try a new version and forget about failing, please?
<kwmonroe> conjure-up spell addon repo: https://github.com/conjure-up/spells/tree/master/canonical-kubernetes/addons
<rick_h> EdS: this is for a charm you're updating?
<rick_h> EdS: just mark it "juju resolved xxxx --no-retry" so that it will go green but not rerun anything
<EdS> not as such - I'm updating our canonical kubernetes setup
<rick_h> EdS: and then run your upgrade-charm command (if it's a charm you mean)
<EdS> all I did was (some time ago) deploy https://api.jujucharms.com/charmstore/v5/canonical-kubernetes-117/archive/bundle.yaml
<EdS> after editing the charm versions mentioned in my local copy, I have updated the deployment
<EdS> I did several minor versions no trouble
<EdS> now all my kubernetes worker and master nodes are stuck in a loop with the message: hook failed: "upgrade-charm"
<rick_h> hml: do you have that bug link again please for the cloud-init stuff for the show notes?
 * rick_h hangs head in shame he didn't copy it from the hangout chat
 * hml looking
<hml> rick_h: https://bugs.launchpad.net/bugs/1535891
<mup> Bug #1535891: Feature request: Custom/user definable cloud-init user-data <cpe-onsite> <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1535891>
<EdS> ok thanks...
<rick_h> hml: ty!
<hml> rick_h: it only defines the first set of work, that's already out there
<rick_h> hml: right, bdx was wondering about that so I wanted to have that link in the notes
<hml> not the replication from machine to container piece
<rick_h> EdS: sorry, I'm not sure to be honest. You upgrade the charms by doing an upgrade-charm command and editing the yaml file doesn't do anything. So I'm assuming you did an upgrade-charm on each application name and it errored in some way?
<rick_h> EdS: if so, the folks that work on those charms would want to see what the juju debug-log looks like for those errors and if you're wanting to try again you'd run the "juju resolved xxxx --no-retry" and then the "juju upgrade-charm xxx" as I noted
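rick_h's recovery sequence, per affected application, as a concrete sketch (the `kubernetes-worker` names stand in for whichever application/unit is erroring):

```shell
# Clear the hook error without re-running the failed hook, then retry the
# upgrade; repeat for each affected application (rick_h's suggestion).
juju resolved kubernetes-worker/0 --no-retry
juju upgrade-charm kubernetes-worker

# Gather what the charm authors will want to see for the failing unit
juju debug-log --include kubernetes-worker/0 --lines 1000
```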
<EdS> ok thanks very much. I'm trying to work out what happened and agree I'll try and gather some info.
<knobby> I'd think the first step is to look at the debug-log and figure out why it doesn't want to upgrade
<knobby> EdS: `juju debug-log --replay | pastebinit`
<EdS> knobby: ok :)
<EdS> I've carried on working on it. I'm assuming log will be massive by now
<knobby> no doubt
<knobby> if it is too big for pastebinit, we might have to get to it another way
<EdS> *Blammo* https://api.jujucharms.com/charmstore/v5/canonical-kubernetes-117/archive/bundle.yaml
<EdS> ah crud
<EdS> Failed to contact the server: [Errno socket error] [Errno socket error] The read operation timed out
<EdS> please ignore link
<EdS> ;)
<EdS> yeah pastebinit died
<knobby> timeout smells like too large of a file to me
<EdS> yes
<knobby> maybe like `juju debug-log --replay|tail -n 10000|pastebinit`
<kwmonroe> EdS: you can limit the log to just one failing instance with "juju debug-log --include kubernetes-worker/0 --replay" or "--lines 1000" as an alt to --replay.
<knobby> or that is even better
<kwmonroe> knobby: you rascal.. --lines (-n) makes the tail unnecessary.
<EdS> haha just dumped to a file
<EdS> 36MB
<EdS> :/
<knobby> just do the --lines 10000 and pastebinit that
<knobby> or tail that file to it or something
<EdS> http://paste.ubuntu.com/26454287/
<EdS> line 127-142
<EdS> I was seeing a lot of that sort of "nonetype is not subscriptable" errors.
<EdS> does "watch juju status" being left in a terminal make a load of log noise anywhere?
<kwmonroe> nah EdS, i'm not aware of juju status increasing log noise anywhere.
<EdS> phew.
<knobby> so there's supposed to be a dictionary in the unit with information like the cluster credentials. Your credentials don't exist.
<knobby> investigating
<EdS> interesting. that may be one stage of the brokenness but I think that was all fine at several points (was a running cluster, was working fine until update failed)
<EdS> \o/ happy cluster again
<EdS> I ended up killing the workers but keeping headroom.
<EdS> now I have "updated" workers ;)
<ryebot> \o/
<EdS> not pretty but it's late and that was weirdness I would like to be without
<EdS> thanks for taking a look :) please buzz if you want any more info (but I'm going away soon and will be back tomorrow)
<kwmonroe> EdS: i opened this to make sure we have required creds in place before doing things like "start_worker":  https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/474
<EdS> thanks :)
<kwmonroe> np EdS - thanks for breaking stuff to make it better ;)
<EdS> I'm still here trying to work out how to upgrade nginx too ;)
<EdS> I want new ingress controller, but cannot figure that -_-
<EdS> haha no worries. any time.
<EdS> still trying to get this bit straight and it's juju related, I'm sure - I wonder if you could help shed some light on it?
<EdS> using juju and maas, I set up canonical kubernetes
<EdS> I have nginx as the ingress controller in kubernetes
<EdS> it's on the workers
<EdS> it all got updated
<EdS> but nginx version is the same
<EdS> I don't quite know where it comes from, when setting it up with juju since I'm so far removed from the setup
<knobby> is it beta.13?
<knobby> EdS: I just added something to the charm to allow changing that. It was hard-coded in the charm to a specific version and we weren't tracking releases well. I updated the version and made it a config option to specify the image to use, but that isn't in stable yet.
<EdS> :) yep it is NGINX 0.9.0-beta.13 git-949b83e
<EdS> this has been driving me nuts, I just didn't know where it was from ;)
<knobby> I bumped it to beta.15. Unfortunately it is hard-coded in the charm, so unless you want to fork the charm to update it you'll just have to live with it for a bit longer.
<knobby> EdS: the code is around here: https://github.com/kubernetes/kubernetes/blob/cab439b20fbb02cc086bf63b6dd7d51f1908067c/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py#L658
<EdS> not to worry - I now know I'm not going crazy...
<EdS> when do you expect that to be available?
<EdS> thank you
<EdS> thanks all :) I gotta get some sleep
#juju 2018-01-25
<zeestrat> jamespage: Got some spare time for some administrative work on charm-helpers? https://github.com/juju/charm-helpers/issues/27
<elmaciej> Hi!
<elmaciej> Hi! I have a charm nova-compute with lvm storage backend. There is no option to provide images_volume_group = my-volumes; I have to put it in the libvirt section of nova.conf myself. How can I disallow juju from rolling back my changes in this file?
<elmaciej> please let me know, it is actually a very serious problem.
<zeestrat> elmaciej: You can use the https://jujucharms.com/nova-compute/#charm-config-config-flags option in the nova-compute charm to set options in nova.conf
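zeestrat's suggestion as a command. Treat it as a sketch rather than a confirmed fix: per the rest of the discussion, the charm may only render these flags into the default section of nova.conf, not into [libvirt]:

```shell
# Set an arbitrary nova.conf flag via the nova-compute charm's config-flags
# option (comma-separated key=value pairs).
juju config nova-compute config-flags="images_volume_group=my-volumes"
```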
<magicaltrout> hello folks
<magicaltrout> whats the Canal support like for CDK if I swap out flannel?
<rick_h> tvansteenburgh: might have an idea ^
<magicaltrout> i see a couple of issues on github, are they the edge cases, or the norm? :)
<magicaltrout> rick_h: i haven't forgotten the CLI feedback stuff, just been mega busy recently
<magicaltrout> honest!
<rick_h> magicaltrout: all good, sorry for the bugging but wasn't sure you were back from holiday/got lost in the holiday mailbox, etc
<rick_h> magicaltrout: and I know of everyone out there you had the biggest eye for it :P
<magicaltrout> yeah, its pretty epic, I did get a 2 minute demo of it via hangout a few months ago, it looked great. I'm looking forward to getting to tinker
<rick_h> magicaltrout: nice. When you get time we're eager if this is solving your issues and doing what you need. Thanks for checking it out
<magicaltrout> no problem, we've got some new stuff coming to Juju soon, our new platform called Anssr, which is a scalable natural language processing platform aimed at discovering personally identifiable information for the new GDPR legislation coming into force in May
<magicaltrout> Juju will be great at dealing with the server components for those companies who use Cloud services
<rick_h> magicaltrout: very cool, when you're ready I'd love to see a demo and what's up sometime
<magicaltrout> indeed, indeed!
<SaMnCo> hi there
<SaMnCo> anyone from the K8s team around here?
<petevg> SaMnCo: hi there, long time no chat. :-) It looks like kjackal and kwmonroe are both logged in, and they're both doing k8s stuff now. Not sure if either of them is paying attention, though.
<SaMnCo> heeyyy!! Yes, been busy on other stuff
<SaMnCo> I am trying to get the HPA working with custom metrics and I am STRUGGLING big time
<SaMnCo> wanted to discuss a few things
<petevg> Cool. My k8s knowledge is still pretty basic, so I'm probably not useful. Hopefully, one of those two cats will see my ping.
<magicaltrout> petevg: as a technical sales pro... surely you must know everything
<magicaltrout> how else can you sell stuff?
<magicaltrout> oh openstackers :)
<magicaltrout> i forgive you
<magicaltrout> now learn some kubernetes and help the man out
<petevg> magicaltrout: I'm working on it :-p
<knobby> what is your question, SaMnCo?
<SaMnCo> knobby:  a github issue says a 1000 words: https://github.com/DirectXMan12/k8s-prometheus-adapter/issues/12
<SaMnCo> last comments I made
<SaMnCo> For some reason I cannot get the controller manager to read the metrics from the Metrics API Server nor the Custom Metrics API Server
<SaMnCo> both are registered correctly and I can see the values right from calling the API
<SaMnCo> but the HPA cannot leverage them
<SaMnCo> and I cannot start to figure out what is going on
<SaMnCo> it seems that the HPA in the Controller Manager keeps trying to hit http://heapster as a resource metrics value despite using  --horizontal-pod-autoscaler-use-rest-clients
<SaMnCo> but even at max log level I do not have any error anywhere but in the HPA events
<SaMnCo> So that is for one
<SaMnCo> the other is that I have weird RBAC errors which do not match the RBAC profile of CDK:
<SaMnCo> https://www.irccloud.com/pastebin/advBNfeG/
<SaMnCo> All these rights are covered by the RBAC for system:node but they keep coming
<SaMnCo> have you guys started working on Custom Metrics?
<knobby> so you have a horizontal autoscaler and you're trying to reach into the heapster pod to ask about request counts so it can scale, right?
<knobby> am I reading it correctly that your controller manager is outside the cluster?
<knobby> and it's trying to use an internal cluster ip to hit heapster?
<SaMnCo> yeah exactly
<SaMnCo> btw for rbac: https://github.com/kubernetes-incubator/bootkube/issues/483
<SaMnCo> update the default rbac manifest for system:node binding to:
<SaMnCo> https://www.irccloud.com/pastebin/PQrRs7zm/
<SaMnCo> will solve this problem, I am guessing others have it
<SaMnCo> opening github issue now
<knobby> I always use services to get to things from outside my bare-metal cluster. It's either that or a nodeport really. I think you'll have to expose the pod via a service and then use that service ip.
<knobby> ip routing to your cluster for your service addresses would be required
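knobby's exposure suggestion could be sketched like this. The pod label, ports, and namespace are assumptions for illustration, not taken from the CDK bundle:

```yaml
# Sketch: expose heapster via a NodePort Service so a controller-manager
# outside the cluster can reach it (names and ports assumed).
apiVersion: v1
kind: Service
metadata:
  name: heapster-external
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: heapster        # assumed pod label
  ports:
  - port: 80
    targetPort: 8082         # assumed heapster listen port
    nodePort: 30082
```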
<SaMnCo> but in theory according to the docs, using the flag horizontal-pod-autoscaler-use-rest-clients on the controller-manager should tell it to talk to the API server
<knobby> I have an appointment now, but I'll check back in an hour or so. kjackal would be the one to talk about RBAC. The issue will help track it.
<knobby> SaMnCo, how does it auth?
<SaMnCo> but for some reason it keeps hitting the heapster
<SaMnCo> filled in https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/475
<SaMnCo> knobby: I am using a specific Service Account with a custom RBAC profile
<SaMnCo> OK I finally nailed the issue (it is my 4th day on this)
<SaMnCo> It all goes  back to a bug in the scaling mechanism of masters
<SaMnCo> WIll explain in a another GH issue
<SaMnCo> for those interested: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/476
<knobby> thanks SaMnCo, glad you got it figured out and thanks for the bug
<admcleod> im using the manual provider, have bootstrapped a controller, controller is behind a firewall (nat) - can connect to the machine
<admcleod> however, the client is then trying to download the tools from the controllers internal address instead of its external ip - is there any way to specify this?
<kwmonroe> admcleod: try adding the public-ip first in .local/share/juju/controllers.yaml for the api-endpoint.
<admcleod> kwmonroe: hmm yeah its there first.. maybe i should make it the only one
<kwmonroe> admcleod: yeah, try that, but don't forget whatever was there before you remove it ;)
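The file kwmonroe is pointing at looks roughly like the fragment below; the change being suggested is to list the public address first (or alone) under `api-endpoints`. Controller name and addresses here are placeholders:

```yaml
# ~/.local/share/juju/controllers.yaml (fragment, placeholder values)
controllers:
  my-controller:
    api-endpoints:
    - 203.0.113.10:17070     # public/NAT address, listed first (or alone)
    - 10.0.0.5:17070         # internal address juju keeps re-adding
```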
<admcleod> kwmonroe: ha, thanks
<admcleod> kwmonroe: has something like this worked for you?
<kwmonroe> admcleod: yeah, forcing an endpoint ip has worked for me in the distant past (like 2.0 timeframe).  it's been months since i've been in that kind of environment though.
<admcleod> kwmonroe: k cool
<admcleod> kwmonroe: something is adding the other ip back in automatically
<kwmonroe> hrm... admcleod, could you use sshuttle to give your client access to the controller's subnet?  sshuttle -r user@firewall a.b.c.d/24
<kwmonroe> s/firewall/nat machine
#juju 2018-01-26
<admcleod> kwmonroe: i just added an outbound static route
<elmaciej> zeestrat: it is only for the [General] section - this item should go to the [libvirt] section, so I even tried injecting this but it does not work
<magicaltrout> i have some units stuck with the agent lost and workload unknown
<magicaltrout> is is possible to remove them without destroying the application
<magicaltrout> ?
<magicaltrout> goooooo kjackal
<magicaltrout> oh remove-machine --force did it
<magicaltrout> cancel that
<kjackal> hi magicaltrout, destroy the machine holding the broken agents?
<ybaumy> question. what happens if my controller is unavailable or dead and gone forever. and i dont have a second one. will there be a chance to manage the models still?
<ybaumy> is it possible to create a new one and say that this controller should manage model xxx
<ybaumy> those are questions i took from our architecture meeting
<ybaumy> since controllers are not HA when is it planned to implement this
<pmatulis> ybaumy, the model is lost if its controller is gone
<gbc> can anyone out there help with a juju charm deployment issue?
<gbc> ceph charm deployment issue
<ybaumy> pmatulis: hmm what is the recommendation for how many controller units i should have
<ybaumy> does it even make sense to distribute controllers across zones or is this only good for datacenters in vsphere
<ybaumy> also what i noticed is that when deploying machines are setup in sequential order
<ybaumy> not parallel
<ybaumy> why is that
<ybaumy> when i do juju status i see only one doing something
<pmatulis> i'm sorry but i'm not fluent in juju ha but i gather the more ha units, and distributed, the better, in a sense.
<pmatulis> not sure what you mean by 'only one doing something'
<ybaumy> pmatulis: i see only one VM downloading an image
<akern07> How do I successfully deploy my first juju charm?
<pmatulis> akern07, with the 'juju deploy' command. `juju deploy mysql`
<rick_h> akern07: is this a charm you wrote or just any charm?
<rick_h> akern07: the best thing to do is walk through the first tutorial: https://jujucharms.com/docs/stable/getting-started
<elmaciej> Hello
<rick_h> howdy elmaciej
<elmaciej> Does anyone knows how to add cinder volumes with different volume-type thru juju ? so I have profile ssd and hdd but want to do it thru juju
<elmaciej> also simpler question - how to block juju from rolling back my changes in config file for example nova.conf
<rick_h> elmaciej: so Juju has a config for the charm that as it's updated it writes out the file. If you want to override it you'll have to see if the charm allows special injection as well or if there's a second file you can edit/leave that is slurped into the generation of the final nova.conf
<elmaciej> so I have to modify the charm if it's not provided? I'm talking about the situation I posted before - deploying nova-compute, changing instance disk type to lvm. this charm does not provide the images_volume_group property. you can inject only into the GENERAL section through the charm
<elmaciej> and when I add this manually juju reverts the nova.conf after some time
<elmaciej> so have to either modify the charm or after deployment shutdown juju
<rick_h> elmaciej: right, because juju is syncing things when the charm config is changed and it's written to rewrite the file. The best bet would be to chat with the #openstack-charmers and see what provisions they've got in the charms to help with what specific changes you want to make around it
<rick_h> elmaciej: Juju can't know the current state and make things repeatable (e.g. dump out the same config that should work again on a second install) if it's done manually around Juju's control.
<elmaciej> ok, thanks for confirming this - I was not sure . it would be good if you can for example set kind of unmanage flag for specific charm in the model
<elmaciej> :) but it's just a wish to wishlist
<rick_h> elmaciej: yea, I don't have insights into the charm and the various control knobs it exposes but I know we use these charms for real prod customer openstack deploys so I'm assuming there's some method to the madness there just might take an ask to the mailing list or irc for the openstack folks to find it.
<elmaciej> well, there are so many variations, and my case is a rare situation. Mostly people use ceph for storage and here I have just lvm on SAN, so it's hard to predict everything. but as I used mirantis before in production on +100 node clusters and also redhat, I can say that juju is far better and more user friendly than all the other distros.
<elmaciej> and debugging is so much easier
<elmaciej> so I love it really
<rick_h> elmaciej: nice to hear, sorry I don't have a firm answer for you there on the last bit you're hitting.
#juju 2018-01-27
<elmaciej> Hi Again! Question about charms - how does the charm parse the records from config.yaml into the template file? I added the property in the template and the property in config.yaml. But when I set it, it does not go into the final rendered file. There is a new line with the property, but it does not parse the value which I'm putting in
<ybaumy> can i use constraints to datastore and primary and external network? when deploying a charm
<ybaumy> can i resize a vsphere VM with juju commands
<ybaumy> also the kubernetes elastic charm does not start elasticsearch since java 8 is not in the repos
<ybaumy> and they interface is also broke
#juju 2018-01-28
<kwmonroe> ybaumy: fyi, the k8s-elastic bundle should be fixed "real soon".  the java8 issue you're seeing has been resolved in https://jujucharms.com/elasticsearch/25. once that makes its way to stable, i'll be updating the bundle. until then, feel free to adjust the bundle.yaml to point to revision -25 of the charm to get that unblocked.
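Pinning the bundle to the fixed revision, as kwmonroe suggests, is a one-line edit in the local bundle.yaml. The surrounding keys are a sketch of a typical bundle entry (older bundles use `services:` instead of `applications:`), not the literal k8s-elastic bundle:

```yaml
# bundle.yaml fragment: pin elasticsearch to revision 25 per kwmonroe.
applications:
  elasticsearch:
    charm: cs:elasticsearch-25   # pinned revision with the java 8 fix
    num_units: 1
```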
<ybaumy> kwmonroe: thanks will try today
<ybaumy> kwmonroe: is this -25 still trusty or xenial
<ybaumy> kwmonroe: used -25 with xenial. topbeat filebeat and kibana cannot connect to elasticsearch
<ybaumy> kwmonroe: elastic is running but all other services are waiting
<ybaumy> kwmonroe: do i need to use other versions too of kibana for example
#juju 2020-01-20
<nammn_de> manadart: morning, want to go with another cr? https://github.com/juju/juju/pull/11088 i renamed the files as you suggested
<manadart> nammn_de: I will look today if I get the chance. Got meetings/CI/Release stuff blocked out for a big chunk.
<nammn_de> manadart: ahh sure
<nammn_de> manadart: its ci day so nvm
<manadart> Need a review for LXD dep update: https://github.com/juju/juju/pull/11126
<intrepidsilence> Hi all
<intrepidsilence> i have a system with a very broken juju so i removed the snap with a  --purge
<intrepidsilence> has that gotten rid of everything or are there some files that I should go delete before trying to reinstall and configure everything from scratch
<rick_h> intrepidsilence:  you can check .local/share/juju/* and see if there's anything there
<rick_h> intrepidsilence:  with it being the snap make sure you're looking in your snap tree around that
<rick_h> intrepidsilence:  but should be ok
<intrepidsilence> rick_h thx
<intrepidsilence> looks like there may be stuff in .cache, too
<intrepidsilence> ok
<intrepidsilence>  i am having crazy problems
<intrepidsilence> i just did a new bootstrap of juju
<intrepidsilence> now everything i try except status gives me some variation of saying that access to https://x.x.x.x:8443/1.0 is forbidden
<intrepidsilence> the IP is the IP of the bridge
<intrepidsilence> ok, i figured out that issue - needed my local bridge vlan in juju-no-proxy in config.yml when bootstrapping
<intrepidsilence> now i have another issue
<intrepidsilence> how can i set a static IP on the juju controller
<intrepidsilence> lxc config device set does not work as it does not see eth0 as a device for some reason
<intrepidsilence> even though i see it in lxc list and when i login to the container
<intrepidsilence> nm, set the proxy against 0.0.0.0 rather than needing a static IP
<nammn_de> manadart: I just read your comment. This makes a lot of sense to me, thanks!
<nammn_de> But
<nammn_de> ah no but
<nammn_de> thanks!
<nammn_de> manadart: around for quick ho?
<manadart> nammn_de: Gimme 10 mins. Just feeding kids.
<nammn_de> manadart: sure gimme a ping if you are ready
<manadart> nammn_de: In DAILY
<nammn_de> manadart: I also mocked subnets and spaces instead of using apiservertesting.fakespaces and fakesubnets. Wdyt?
<nammn_de> now everything is built on mocks in the spaces_test for the new suite
#juju 2020-01-21
<nammn_de> manadart: i've reworked the mocking tests and don't use the stubs anymore. In for another review round? https://github.com/juju/juju/pull/11088
<manadart> nammn_de: Yep. will look.
<achilleasa> can I get a quick CR on https://github.com/juju/juju/pull/11132 ?
<stickupkid> achilleasa, done
<achilleasa> stickupkid: thanks. I will push a 2.7->develop once this lands
<soumplis> Hi all... For some time now we are having an issue with juju export-bundle and I was wondering if anybody could help. We run the command "juju export-bundle -m openstack" and it stops at the same point every time with no error, as if it had run completely. I do not know if it is related, but the point where it stops is an inline yaml config option. It definitely worked with v2.6.8. Now we are running 2.7.1
<stickupkid> soumplis, can you add --debug and see if that reveals any more information
<soumplis> stickupkid I have run it and the last lines are: "        12:01:50 DEBUG juju.api monitor.go:35 RPC connection died
<soumplis> 12:01:50 INFO  cmd supercommand.go:525 command finished
<soumplis> "
<soumplis> stickupkid Please also check  https://github.com/juju/python-libjuju/issues/384 it is related
<achilleasa> soumplis: if you run 'juju debug-log -m controller --replay | grep ERROR' do you see any messages related to the export-bundle command?
<soumplis> achilleasa no, there is nothing relevant. I would say that the problem is that it cannot handle inline configs (or big values), because it also fails in another model when it comes to a gpg-key config
<soumplis> achilleasa, for example "     gpg-key: |
<soumplis>         12:10:24 DEBUG juju.api monitor.go:35 RPC connection died
<soumplis> 12:10:24 INFO  cmd supercommand.go:525 command finished
<soumplis> "
<achilleasa> soumplis: is juju cli generating some output for the bundle and then getting stuck when it reaches that particular section?
<soumplis> achilleasa yes, it starts to print the bundle and when it reaches the point that fails, it just stops and returns to the shell
<achilleasa> soumplis: can you try running the same command with '--filename bundle.yaml' and check if you get the full bundle dumped in that file?
<soumplis> achilleasa the command reports "Bundle successfully exported to test.yaml" but the file has the same incomplete content as in the cli
<achilleasa> soumplis: sounds like a potential bug in the bundle export code running on the controller side. Can you please open a bug?
<soumplis> achilleasa, sure I'll do it now, thx
<achilleasa> stickupkid: can you take a look at https://github.com/juju/juju/pull/11133 (2.7 -> dev)
<nammn_de> manadart: for the rename-space: do we have some  old code lying around where we also touch multiple collections at once? Want to make sure to follow conventions and thinking about whether this should be done synchronous as this may take time (?)
<manadart> nammn_de: There are many examples. One is in the state code for committing a generation. But the way I think we should do this in the future is using ModelOperation as per https://github.com/juju/juju/pull/10630.
<manadart> I have not had time to realise that in proper fashion.
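The ModelOperation pattern manadart points at is a build/complete pair: the operation returns the transaction ops for a given attempt, and gets a completion callback afterwards. The real interface is Go code in juju/juju's state layer; this is a schematic Python stand-in with illustrative collection and field names, not Juju's actual API:

```python
class RenameSpaceOperation:
    """Illustrative stand-in for Juju's ModelOperation (Build/Done) pattern."""

    def __init__(self, space_id, new_name):
        self.space_id = space_id
        self.new_name = new_name

    def build(self, attempt):
        # Return ops touching every affected collection so they commit
        # (or abort) together as one transaction.
        return [
            {"collection": "spaces", "id": self.space_id,
             "update": {"$set": {"name": self.new_name}}},
            {"collection": "controllers", "id": "controllerSettings",
             "update": {"$set": {"juju-ha-space": self.new_name}}},
        ]

    def done(self, err):
        # Called once the transaction runner finishes; annotate errors here.
        return err


ops = RenameSpaceOperation("6", "dmz").build(attempt=0)
```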
<nammn_de> manadart: thats perfect. Just want to follow our future plans instead of maybe running into legacy and using it. Thanks
<manadart> To do your part would require altering settings in some cases, so I might put a patch up with changes that allow that.
<achilleasa> manadart: can you take a look at 11133?
<nammn_de> How do I write a query to update the space ID in every collection matching a query? I only found code where we do it per collection, for collections we defined ourselves. Is there a kind of cross-collection search?
<manadart> achilleasa: Yep.
<achilleasa> stickupkid: thanks
<manadart> Patch I mentioned in stand-up: https://github.com/juju/juju/pull/11134
<nammn_de> manadart: i can take a look, but I don't think I am the correct person to approve this
<nammn_de> manadart: I took a look at your comment. I cannot 100% revert the stub_network change, as it stubs `Backing`, which has more interface methods than the one I created. So I still need to implement the interface in stub_network, right?
<stickupkid> nammn_de, manadart I think you should understand it, I'll mark it as approved
<manadart> nammn_de: That imports the spaces facade. The general cannot depend on the specific.
<nammn_de> manadart: nvm me, my ide was slow and showed an error which did not exist in the first place
<nammn_de> stub is using networkBacking not general backing
<nammn_de> manadart: quick ho?
<nammn_de> stickupkid: kk, didnt look at it yet, didn't work a lot with our mongo things yet
<manadart> nammn_de: I'm in DAILY.
<nammn_de> manadart: done and pushed
<manadart> nammn_de: Doing QA. The command output includes the error member. I.e. "error: null".
<nammn_de> ahh, yeah, makes sense. I just put the default param into the output. I should parse it again to exclude the error. Anything else?
<manadart> nammn_de: Exclude.
<stickupkid> manadart, having fun trying to get microstack running in lxd
<stickupkid> - Start snap "microstack" (198) services ([start snap.microstack.nginx.service] failed with exit status 1: Job for snap.microstack.nginx.service failed because a timeout was exceeded.
<stickupkid> fun, fun, fun
<manadart> stickupkid: Yes, I expect there will be tribulation with that.
<nammn_de> manadart: something else beside excluding the error output, in case there is no error?
<manadart> nammn_de: You check the payload for errors in the API client. They will bubble up to error output at the CLI. We want to exclude the error member from the command output.
<manadart> nammn_de: Commented on the patch.
<nammn_de> manadart: thanks! The output is directly put into yaml (no formatter option) as in the spec, therefore I would  add an additional kind of result param which does not have error fields.
<manadart> nammn_de: Use a type specifically for display. Branches do this.
<nammn_de> manadart: yeah was planning to do so. Still thinking where the best place to put it. Not param, maybe in show-space command itself or core/network/types ?
<nammn_de> as branches puts them into core/model/generation
<manadart> nammn_de: If you want to return that type from the API client itself, it can go in core/network/space.
<nammn_de> manadart: ahh great, will do
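The display-type pattern being agreed on here (so "error: null" never reaches the rendered YAML) boils down to stripping the transport-level error member before formatting. A language-neutral sketch in Python with hypothetical field names; in Juju itself this would be a dedicated Go type in core/network/space:

```python
def for_display(api_result):
    """Strip the transport-level 'error' member before rendering command output.

    Errors are checked in the API client and surface via the CLI's error
    path, so a successful payload should not carry an empty error field.
    """
    return {k: v for k, v in api_result.items() if k != "error"}


shown = for_display({"space": {"name": "alpha"}, "error": None})
```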
<nammn_de> i reworked the output to something like this: https://pastebin.canonical.com/p/wbpNzTWmdw/
<nammn_de> manadart:^
<manadart> nammn_de: OK. We'll finish it tomorrow.
<nammn_de> manadart: sure! Thanks for going through with me
<stickupkid> achilleasa, one for tomorrow https://github.com/juju/juju/pull/11135
<stickupkid> I went mental with the test to ensure that it works :)
<achilleasa> stickupkid: nice! Will take a look in the morn
<stickupkid> wicked
<skay> why is my postgresql unit stuck waiting for peers?
<skay> previously it was not stuck, but I deployed a pristine environment and it is stuck
<tlm> Hi skay, off the top of my head it sounds like it's trying to connect to another instance. Are you able to provide exact error/app status strings ?
<skay> tlm: the status message is 'Waiting for peers' and the status is 'waiting'.
<skay> I'm looking at the charm src now, but I do not know what changed since the last time i did this
<anastasiamac> skay: what about 'status --format=yaml'?
<skay> I am just deploying one unit
<anastasiamac> skay: diff juju versions?
<skay> anastasiamac: it's in a private environment
<anastasiamac> skay: format yaml ususally ahs more details
<anastasiamac> has*
 * skay nods
<anastasiamac> skay: so u've deployed charm previously and are just adding another unit now and that unit does not come up?
<skay> anastasiamac: no, I've deployed a bundle previously (using a mojo spec) that included postgres. I ripped it down to verify my spec, and this time the postgres unit is stuck in the waiting state
<tlm> I am wondering if it is possibly because a relation is hanging around or it's waiting on storage ?
<anastasiamac> tlm: that's possible
<anastasiamac> maybe not storage.. but sunds like a relation maybe?..
<tlm> anastasiamac: is there a way to list relations ?
<anastasiamac> status --relations
<skay> these are the cases where it is stuck in that state https://paste.ubuntu.com/p/QdJzPN4dHs/
<skay> a snippet from teh charm source
<skay> it's related to pgbouncer
<skay> and nrpe
<skay> oh, there are relations I didn't set up in there
<anastasiamac> skay: is it possible that whatever u did to reap the original deployed postgresql was not clean? and some things were lingering that r preventing this new deployment?
<anastasiamac> skay: yes, so mayb clean the relations :)
<skay> it's weird though. I ran remove-application on everything
<anastasiamac> altho m a bit surprised that they r not removed when u did the original clean
<skay> I'll remove everything again and list relations on the empty model just to be sure
<anastasiamac> sounds good
#juju 2020-01-22
<anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/11131 as discussed
<babbageclunk> T'ing an L
<anastasiamac> :)
<stub> skay: If you remove all the units, but don't remove the application, then the Juju leadership state hangs around. When you add new units to the application, they have no way of telling if they need to wait for what the leadership settings tell them is the master, or if the data is stale. Well, they can now thanks to the goal state feature, but that didn't exist when that code was written.
<stub> It might be a mojo (or juju-deployer) bug, where it assumes removing all the units is equivalent to removing the application. It has come up a few times with mojo.
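The staleness problem stub describes can be sketched abstractly: a freshly added unit should only wait for the master recorded in leadership settings if that unit still appears in the goal state. The helper and data shapes below are hypothetical; real charms would read these via the leader-get and goal-state hook tools:

```python
def should_wait_for_master(recorded_master, goal_state_units):
    """A new unit trusts the recorded master only if it is still expected.

    Before the goal-state feature there was no way to distinguish a live
    master from stale leadership settings left behind by removed units.
    """
    return recorded_master is not None and recorded_master in goal_state_units


# Stale case: the old master's unit is gone from the goal state.
stale = should_wait_for_master("postgresql/0", {"postgresql/3"})
# Healthy case: the recorded master is still expected to exist.
live = should_wait_for_master("postgresql/3", {"postgresql/3"})
```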
<anastasiamac> stub: ah! thanks for explanation - that makes sense :D
<babbageclunk> anastasiamac: approved
<anastasiamac> babbageclunk: \o/
<stickupkid> anyone know if these files are still generated?
<stickupkid> https://github.com/juju/juju/blob/2.7/service/windows/zservice_windows.go
<stickupkid> and https://github.com/juju/juju/blob/2.7/service/windows/zpassword_windows.go
<stickupkid> both are flagging up with import errors in the linter, but I'm trying to work out how they're generated
<stickupkid> turns out the new goimport in latest golang just finds they're not correct
<achilleasa> stickupkid: were the linter errors in your PR caused by the lint_go.sh bits that your last commit tweaks?
<stickupkid> only in tests
<stickupkid> very strange
<nammn_de> stickupkid achilleasa: just for my understanding: the settings collection is responsible for saving a model's config settings? E.g. changes to mysql's config are reflected there?
<skay> stub: I've removed all of the applications and re-run my spec, but it still can't tell it's the leader. Is there a bug about that? Meanwhile, I guess someone could delete and recreate the model for me?
<skay> when I run remove-application on postgresql it's not removing anything. I have to call it with --force and --no-wait
<skay> oh wait, it just takes freaking forever.
<rick_h> skay:  yes, it could take forever depending on the underlying cloud, state of things, etc
<rick_h> skay:  --force will tell it to stop trying to do things right and start cheating, in potentially bad ways
<skay> rick_h: might that explain why I got the problem with the messed up leader state?
<skay> rick_h: in the scrollback I asked about my situation. I've been testing a mojo spec and when I want to rerun it I remove all the applications. I thought postgresql was stuck somehow because of how long it was taking
<skay> rick_h: so I've been using --force --no-wait on it and other things
<nammn_de> manadart: was double-checking with achilleasa. It seems that the method `AllSpaces()` uses `AllAddresses`, which uses the addresses collection, which on aws does not contain the public IPs. Therefore I would change that to use `machine.Addresses()`
<nammn_de> does that make sense?
<manadart> nammn_de: The public IPs are shadow addresses. Those are not reasoned about in terms of spaces, so it shouldn't matter for this purpose.
<nammn_de> manadart: quick ho?
<manadart> nammn_de: I'm there.
<skay> ok, the answer to my problem was that there were still instances running even though juju status did not show them
<skay> I deleted the instances and the postgresql unit became leader
<achilleasa> nammn_de: that is ipaddresses collection
<nammn_de> rick_h: I updated the description. Talked to manadart about why a machine-count=0 can happen. That's because I happen to bind after a deploy.
<nammn_de> Additionally, it seems that we do not care about the public part of the addresses, therefore we just skip that for space information. I updated the QA section in the PR and added a comment. Seems like no change to the code regarding this is needed.
<achilleasa> hml: what do you think about adding a dirty check to the in-memory state map in the uniter's context? We can set it each time we write/delete something and clear when flushing. We could avoid needless roundtrips to the controller if the state has not been mutated
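The dirty check achilleasa proposes is a classic write-guard on a cache: mark the map dirty on mutation, clear on flush, and skip the round-trip when nothing changed. A minimal, Juju-agnostic sketch (class and method names are hypothetical; the real uniter context is Go):

```python
class CachedUnitState:
    """In-memory state map that only flushes to the controller when mutated."""

    def __init__(self, initial=None):
        self._state = dict(initial or {})
        self._dirty = False
        self.flush_count = 0  # stands in for round-trips to the controller

    def set(self, key, value):
        self._state[key] = value
        self._dirty = True

    def delete(self, key):
        if key in self._state:
            del self._state[key]
            self._dirty = True

    def flush(self):
        if not self._dirty:
            return  # nothing changed; skip the round-trip
        self.flush_count += 1  # a real implementation would call the controller API
        self._dirty = False


s = CachedUnitState({"k": "v"})
s.flush()        # no-op: nothing mutated yet
s.set("k", "v2")
s.flush()        # one real flush
s.flush()        # no-op again
```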
<skay> I have a puzzling situation with the autocert charm. I'm using mojo to deploy it, and the charm is not getting configured with values in the services file
<skay> mojo uses juju-deployer
<skay> I can assign the values by hand using juju config
#juju 2020-01-23
<rick_h> skay:  "values in the services file"?
<nvalerkos> hi
<stickupkid> nvalerkos, hi
<nvalerkos> I've deployed a controller for kubernetes charm
<nvalerkos> i got a problem which seem a bit odd
<nvalerkos> Stopped services: kube-proxy
<nvalerkos> status of the kubernetes master is blocked
<nvalerkos> I am using LXC on a single machine, localhost
<nvalerkos> I am not sure why this is happening because I just deployed a charm which should work without any problems
<stickupkid> nvalerkos, can you pastebin a juju status here
<nvalerkos> App                    Version  Status   Scale  Charm                  Store       Rev  OS      Notes
<nvalerkos> containerd                      active       5  containerd             jujucharms   54  ubuntu
<nvalerkos> easyrsa                3.0.1    active       1  easyrsa                jujucharms  296  ubuntu
<nvalerkos> etcd                   3.3.15   active       3  etcd                   jujucharms  488  ubuntu
<nvalerkos> flannel                0.11.0   active       5  flannel                jujucharms  468  ubuntu
<nvalerkos> kubeapi-load-balancer  1.14.0   active       1  kubeapi-load-balancer  jujucharms  704  ubuntu  exposed
<nvalerkos> kubernetes-master      1.17.2   blocked      2  kubernetes-master      jujucharms  792  ubuntu
<nvalerkos> kubernetes-worker      1.17.2   active       3  kubernetes-worker      jujucharms  634  ubuntu  exposed
<nvalerkos>
<nvalerkos> Unit                      Workload  Agent  Machine  Public address  Ports           Message
<nvalerkos> easyrsa/0*                active    idle   1        10.38.12.8                      Certificate Authority connected.
<nvalerkos> etcd/0                    active    idle   3        10.38.12.103    2379/tcp        Healthy with 3 known peers
<nvalerkos> etcd/1*                   active    idle   0        10.38.12.236    2379/tcp        Healthy with 3 known peers
<nvalerkos> etcd/2                    active    idle   5        10.38.12.60     2379/tcp        Healthy with 3 known peers
<nvalerkos> kubeapi-load-balancer/0*  active    idle   7        10.38.12.39     443/tcp         Loadbalancer ready.
<nvalerkos> kubernetes-master/0       waiting   idle   9        10.38.12.10     6443/tcp        Waiting for 6 kube-system pods to start
<nvalerkos>   containerd/4            active    idle            10.38.12.10                     Container runtime available
<nvalerkos>   flannel/4               active    idle            10.38.12.10                     Flannel subnet 10.1.4.1/24
<nvalerkos> kubernetes-master/1*      blocked   idle   2        10.38.12.113    6443/tcp        Stopped services: kube-proxy
<nvalerkos>   containerd/3            active    idle            10.38.12.113                    Container runtime available
<nvalerkos>   flannel/3               active    idle            10.38.12.113                    Flannel subnet 10.1.93.1/24
<nvalerkos> kubernetes-worker/0       active    idle   8        10.38.12.54     80/tcp,443/tcp  Kubernetes worker running.
<nvalerkos>   containerd/1            active    idle            10.38.12.54                     Container runtime available
<nvalerkos>   flannel/1               active    idle            10.38.12.54                     Flannel subnet 10.1.11.1/24
<nvalerkos> kubernetes-worker/1       active    idle   6        10.38.12.237    80/tcp,443/tcp  Kubernetes worker running.
<nvalerkos>   containerd/2            active    idle            10.38.12.237                    Container runtime available
<nvalerkos>   flannel/2               active    idle            10.38.12.237                    Flannel subnet 10.1.45.1/24
<nvalerkos> kubernetes-worker/2*      active    idle   4        10.38.12.89     80/tcp,443/tcp  Kubernetes worker running.
<nvalerkos>   containerd/0*           active    idle            10.38.12.89                     Container runtime available
<nvalerkos>   flannel/0*              active    idle            10.38.12.89                     Flannel subnet 10.1.96.1/24
<nvalerkos>
<nvalerkos> Machine  State    DNS           Inst id        Series  AZ  Message
<nvalerkos> 0        started  10.38.12.236  juju-0a24b5-0  bionic      Running
<nvalerkos> 1        started  10.38.12.8    juju-0a24b5-1  bionic      Running
<nvalerkos> 2        started  10.38.12.113  juju-0a24b5-2  bionic      Running
<nvalerkos> 3        started  10.38.12.103  juju-0a24b5-3  bionic      Running
<nvalerkos> 4        started  10.38.12.89   juju-0a24b5-4  bionic      Running
<nvalerkos> 5        started  10.38.12.60   juju-0a24b5-5  bionic      Running
<nvalerkos> 6        started  10.38.12.237  juju-0a24b5-6  bionic      Running
<nvalerkos> 7        started  10.38.12.39   juju-0a24b5-7  bionic      Running
<nvalerkos> 8        started  10.38.12.54   juju-0a24b5-8  bionic      Running
<nvalerkos> 9        started  10.38.12.10   juju-0a24b5-9  bionic      Running
<stickupkid> nvalerkos, to here https://pastebin.canonical.com/
<stickupkid> :)
<nvalerkos> https://pasteboard.co/IRhbIoA.png
<nvalerkos> I dont have access to that one
<stickupkid> nvalerkos, it seems like there is an issue with kube-proxy which is blocking the kube-master
<stickupkid> nvalerkos, I'm not sure why that would be
<stickupkid> nvalerkos, it might be worth asking here https://discourse.jujucharms.com/
<manadart> stickupkid nvalerkos: pastebin.canonical.com is internal pastebin.ubuntu.com is public.
<stickupkid> d'ho
<nammn_de> rick_h:  morning, by "provider" the clouds are meant, right? (MAAS, aws ..)
<nvalerkos> localhost
<nvalerkos> Warning  FailedCreatePodSandBox  3m48s (x125 over 31m)  kubelet, juju-0a24b5-6  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: failed to mount rootfs component &{overlay overlay
<nvalerkos> [workdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/work upperdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1745/fs lowerdir=/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}: invalid argument: unknown
<nvalerkos> All pods are stuck at containercreating
<nvalerkos> something to do with the storage volume
<nvalerkos> I am using lxd with zfs
<stickupkid> manadart, achilleasa CR please https://github.com/juju/juju/pull/11140
<manadart> stickupkid: Yep.
<rick_h> nammn_de:  right, I'm hinting there's a cloud-based specific trick to that PR
<nammn_de> rick_h: ahh, yes I was planning to write code where specific cloud providers support renaming (aws does, maas does not)
<nammn_de> something along the line
<rick_h> nammn_de:  ok cool just tossing a hint your way since I didn't see anything in the PR around that
<nammn_de> rick_h: yeah thanks!
<nammn_de> better safe than sorry =D
<nammn_de> my pr is still in the early phase, just out for people to give fundamental hints. Like you did.  Still constantly updated
<rick_h> nammn_de:  understand
<nammn_de> rick_h: i answered your concerns in my old pr (show-space).  As well here in chat, late though. Does that answer your concers?
<nammn_de> concerns
<rick_h> nammn_de:  I think so, I'm just trying to find time to play with it myself then
<skay> rick_h: services file == yaml file that mojo uses. anyway, I figured out the problem. The yaml was not correct. all good. The magic thing happened where I found the error right after asking for help
<rick_h> skay:  ah ok awesome
<manadart> Patch for systemd file relocation: https://github.com/juju/juju/pull/11147
<stickupkid> manadart, I'll give it look now
<manadart> stickupkid: Thanks.
<stickupkid> manadart, i'm assuming that this needs regression testing
<manadart> stickupkid: I tested on Focal and Bionic. I assume that covers back to Xenial as well.
<stickupkid> manadart, yeah, what's supported under ESM
<stickupkid> manadart, answering my own questions - 14.04 is still in esm, but that's not systemd right, that's upstart?
<stickupkid> manadart, so we need to test xenial onwards?
<manadart> stickupkid: Series upgrades definitely now need rework. I will follow with a patch for that. At the same time I will test/fix the old upgrade steps.
<manadart> stickupkid: Trusty is Upstart IIRC.
<stickupkid> yeah, thought as much
<stickupkid> manadart, so upgrading series will now need to move files right?
<manadart> stickupkid: Actually thinking about it, upgrading from Trusty should work out of the box (I will test it). Aside from that, we handle in a Juju upgrade step.
<manadart> I think. Testing to follow.
<manadart> Series upgrade leaves the files alone if the upgrade is systemd -> systemd.
<stickupkid> but that's not what we want if a restart happens though right?
<manadart> It should be OK. If you do a series upgrade with Juju < 2.7.2, you will not be able to go up to Focal.
<stickupkid> interesting
<manadart> So you upgrade Juju, then do it.
<manadart> If you are > Trusty and you upgrade Juju, it just needs to relocate the files.
<stickupkid> > sudo: setrlimit(RLIMIT_CORE): Operation not permitted
<manadart> stickupkid: That is a symptom of sudo on Focal (at least on LXD). it doesn't come from Juju.
<stickupkid> yeah, really didn't like that
<manadart> stickupkid: You can test it on the container directly - just sudo something.
<stickupkid> anyway, it finished :D
<stickupkid> manadart, done
<nammn_de> achilleasa manadart: is there an easy way to bootstrap a controller on aws with --config juju-ha-space=<any space>? I thought alpha should work, but it fails for me
<manadart> nammn_de: I think there might be chicken/egg thing there with AWS, because you are trying to land a controller on a space that doesn't come from the provider. Juju hasn't loaded the subnets and created the space yet, so it can't satisfy.
<nammn_de> manadart: hmmm makes sense. Just wanted to set the config setting so that i can test the rename-code
<manadart> nammn_de: Try the other config value (juju-mgmt-space).
<nammn_de> I could, maybe, just add a key-value to the collection :D?
<nammn_de> manadart: same error
<manadart> nammn_de: OK, both should work if you set it post bootstrap with `juju controller-config juju-ha-space=alpha`
<nammn_de> manadart: oh yes, hmm tried before. now works after trying again.. weird. Thanks!
<nammn_de> manadart: regarding your comment
<nammn_de> didnt we discuss last time that we cannot remove them and need to add them as empty https://github.com/juju/juju/pull/11088#discussion_r370208805
<nammn_de> because we need to fulfill the interface?
<manadart> Wrap network backing stub in a new type for the old facade tests and put them on that.
<manadart> The old facade tests will be strangled out in favour of the new mock suite, and they will ultimately disappear. Those methods have nothing to do with the common backing.
<achilleasa> hml: can you take a look at https://github.com/juju/description/pull/72 ?
<hml> achilleasa:  looking
<nammn_de> manadart: not sure if I can follow 100%; do you have an example for me?
<nammn_de> because afaict spaces.NewAPIWithBacking(...) takes a backing interface. That's why it initially used those implementations. But actually there is no networkBacking usage in that test anymore
<hml> achilleasa:  approved
<achilleasa> hml: tyvm
#juju 2020-01-24
<lucidone> Is it possible to define multiple service accounts in the k8sspec? It looks like it's not, as the serviceAccount key takes no name and requires a list of rules
<lucidone> I'm also needing to bind multiple roles to the service account. Natively this is a ClusterRole and a Role. In k8sspec spec language this specific use case could be done if there were global:true rules and global:false rules
<lucidone> Ooo, just saw kubernetesResources.serviceAccounts !
<lucidone> That looks like my ticket to freedom
<lucidone> Actually, that solves the first issue, but not the second. as each service account still takes a global key and a list of rules - does it allow specifying the same serviceAccount name twice?
<kelvinliu> lucidone: so SA is always namespaced I think, but Role/ClusterRole could be different
<kelvinliu> so u can't have two SA having same name in same namespace.
<tlm> lucidone: Are you able to provide any details on the integration you are doing, for future reference?
<kelvinliu> lucidone: and u can see `kubernetesResources.serviceAccounts` is actually an array, so u can create multi SA but they should have different names or K8s will give u an already exist error.
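Going by the discussion, a hedged sketch of the array form in a pod spec: field names follow what is described above (name, one global flag, and a list of rules per entry); the rule contents here are illustrative, not taken from the actual nginx-ingress chart:

```yaml
kubernetesResources:
  serviceAccounts:
    - name: nginx-ingress        # names must be unique within the namespace
      global: true               # one flag per account, hence the limitation below
      rules:
        - apiGroups: [""]
          resources: ["configmaps", "endpoints", "pods"]
          verbs: ["list", "watch"]
```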
<lucidone> Yeap, the 'kubernetesResources.serviceAccounts' solves the first issue. I'm looking at charmifying the nginx-ingress helm chart. It has an nginx-ingress service account, and it binds a ClusterRole to it aswell as a Role - which I believe is currently unsupported as you define a serviceAccount with a global flag and a list of rules
<tlm> ah yeah I see the problem. The global rule shouldn't be defined on the SA but the rules
<lucidone> One possible fix would be to remove the global flag, and add a globalRules list instead .. so the user can specify either
<lucidone> But in general you can bind N roles to a given service account
<lucidone> So separating rules from the SA is a more general solution
<tlm> yeah. What about if each rule could be set global or not ? That way juju would make one clusterole and role and split them based on the flag ?
<tlm> but sounds like rules should not be tied so hard to sa's
<lucidone> global flag per rule would work for this use case. But yea, I think splitting is the better solution. Would allow you to define rule lists, and then bind as many as you wish to a set of service accounts
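tlm's per-rule flag idea amounts to partitioning one rule list into a ClusterRole and a Role. A small sketch of that split, assuming a hypothetical rule shape with a boolean "global" key:

```python
def split_rules(rules):
    """Partition service-account rules into cluster-scoped and namespaced sets.

    Cluster-scoped rules would go into a ClusterRole, the rest into a Role,
    both bound to the same service account.
    """
    cluster_rules = [r for r in rules if r.get("global")]
    namespaced_rules = [r for r in rules if not r.get("global")]
    return cluster_rules, namespaced_rules


cluster, namespaced = split_rules([
    {"resources": ["nodes"], "verbs": ["list"], "global": True},
    {"resources": ["configmaps"], "verbs": ["get"], "global": False},
])
```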
<lucidone> It looks like multiple services are also not supported? .. nginx-ingress needs to create 'nginx-ingress-controller' and 'nginx-ingress-default-backend' services
<lucidone> That might be more challenging with the juju model .. Perhaps the default backend would be another charm that is then related
<kelvinliu> lucidone: yeah, we used to support the main SA linking to multi role/cluster but later we changed to current spec because we wanted to make it simpler, but we can have a discussion on Tue.
<lucidone> Ah right, yea sounds like a good one to discuss :)
<kelvinliu> lucidone: so yeah, one charm one pod
<kelvinliu> but u can have many containers in the pod
<kelvinliu> so in this case, they would be two charms
<lucidone> Yea, that one makes a lot of sense with the juju model
<stickupkid> manadart, you looking into the focal stuff from the PR?
<manadart> stickupkid: Which stuff in particular?
<stickupkid> manadart, ignore me, I just saw your comment
<stickupkid> haha
<nammn_de> manadart:  morning, some time for a quick HO?
<manadart> nammn_de: Sure.
<nammn_de> manadart: heading daily
<nammn_de> manadart: ahhh now I understand what you meant. Wrapping the type in spaces_test so we don't touch the stub_network as we don't plan to change it anymore. Thanks!
<nammn_de> manadart: as you approve it, I will rebase those commits down
<manadart> nammn_de: I will give it a last look now.
<nammn_de> manadart: regarding rename-space provider support: is adding a method to the networking interface, called something along the lines of "supportsspacesrenames", enough? Then implementing it on the supporting provider; in this case only ec2.
<manadart> nammn_de: Call it SupportsProviderSpaces with good comment differentiation between that and SupportsSpaces.
<nammn_de> manadart: great, will do
<manadart> Pre-real-work refactoring patch for service file writer: https://github.com/juju/juju/pull/11149
<stickupkid> manadart, looking
<hml> wallyworld: ping
<nammn_de> manadart: shouldn't the method `DeltaOps` take a collection param? As the collection we need to update in rename-space should be controllers?
<stickupkid> nammn_de, he's gone :)
<nammn_de> stickupkid: yeah, not time pressure anyway. Gonna ask him next week=D
<madsage> juju make sandwich
<madsage> greetings :)
<addyess> frankban: any chance you can address https://github.com/go-macaroon-bakery/py-macaroon-bakery/issues/80
<addyess> frankban: it's causing issues in https://zuul.opendev.org/t/openstack/build/e6195988f5374039a7645f1d1363c18a
#juju 2020-01-25
<Tetta> aloha
#juju 2020-01-26
<evhan> lucidone: Note the default-backend service is optional, so you may be able to skip it for now and the controller will still work.
<lucidone> evhan: I've charmified the backend service alright. The controller itself needs a ClusterRole _and_ a Role .. although I wonder if it can live with just the ClusterRole rules
<evhan> Right, it might be that it creates both but only needs one depending on how you decide to deploy the controller.
<evhan> Looks like it creates both when RBAC is enabled.
<evhan> It only* creates both.
<evhan> And when the controller isn't scoped.
<evhan> So I'd guess you can do just the Role if you restrict the controller, and vice versa. But, totally guessing.
