#juju 2012-02-27
<danwee> hello people, i need help with juju. i'm stuck in a corner and can't find a solution. can anyone help?
<SpamapS> danwee: did you ask on askubuntu.com yet?
<SpamapS> danwee: most of the regular juju users/contributors are generally in here between 9am and 5pm in the US (so 1400 UTC - 0800 UTC)
<danwee> SpamapS, i didn't, i figured here is the official support for juju
<danwee> SpamapS, can you help ?
<SpamapS> danwee: definitely, but it seems you've asked at very off times for us, so we keep missing you . :-p
<SpamapS> danwee: I have about 10 minutes then I need to go to bed... I'm all yours for the 10 minutes though. What seems to be the trouble?
<danwee> SpamapS, thanks a lot, can you take a look at this and tell me what you think http://paste.ubuntu.com/857589/
<SpamapS> danwee: first question, what version of juju?
<SpamapS> danwee: second question, does your cobbler setup include full power control or are you turning systems on/off manually?
<danwee> SpamapS: mmm... latest i suppose, i downloaded juju via the orchestra server, and i manually turn the systems on and off
<SpamapS> danwee: apt-cache policy juju will show you the version of juju
<SpamapS> danwee: has the system you're trying to connect to completed its installation?
<danwee> SpamapS: juju:   Installed: 0.5+bzr398-0ubuntu1   Candidate: 0.5+bzr398-0ubuntu1   Version table:  *** 0.5+bzr398-0ubuntu1 0         500 http://jo.archive.ubuntu.com/ubuntu/ oneiric/universe amd64 Packages         100 /var/lib/dpkg/status
<SpamapS> danwee: ok, that explains some of it.
<SpamapS> danwee: I'd recommend you enable the juju PPA, and use the latest version from the PPA. The one in Ubuntu 11.10 has a lot of bugs and we haven't been able to update it yet.
<danwee> SpamapS: hmm, so you think it might be a bug
<danwee> SpamapS: can you please explain to me what exactly is going on between juju, the orchestra server and the machines?
<danwee> SpamapS: where do you recommend i should start digging
<danwee> SpamapS: i still have two minutes :)
<SpamapS> danwee: the first step, bootstrap, has your client machine interrogating cobbler about available systems, and then allocating one, and feeding data into cobbler to boot and install that system.
<SpamapS> danwee: until that machine boots and runs the juju agent (which it will do after installation), juju will basically be unaware of its progress installing.
<SpamapS> danwee: so, I need to go, but basically if you have cobbler (the main piece of orchestra-provisioning-server) setup right, then on rebooting the system that you've allocated, it should net-boot via PXE, and start installing Ubuntu
<danwee> SpamapS: on the cobbler web interface, i edit the system and mark it as orchestra-juju-available, then i juju bootstrap. it shows a msg that bootstrap completed successfully, and when i check the cobbler web interface, it shows the system as orchestra-juju-acquired
<SpamapS> danwee: once it is done installing and reboots, it should run the agent, and juju status should be able to connect to it.
<danwee> SpamapS: thanks for your time, and have a good night's sleep, i'll try to catch you guys at a better time
<SpamapS> danwee: good. Cobbler also should have generated a PXE configuration (based on the MAC address of the first network interface) that will cause it to boot into the installer.
<danwee> SpamapS: hmm i'll check this also, it doesn't boot by itself
<SpamapS> danwee: if you don't have a PXE capable system, you can also just boot with url=http://cobblerserver/cobbler/kickstart/systemname  .. I forget the exact url.. but you can see it by viewing the system's kickstart file.
<SpamapS> danwee: good luck. Be encouraged, what you're attempting has many many pages of documentation that need to be written on top of the few that are already in existence, and a huge effort is under way to make the whole thing much simpler!
 * SpamapS heads to bed
<danwee> SpamapS: peace out and thanks again :)
<koolhead17> hello SpamapS
<TeTeT> using juju/lxc my machines do not get a public ip, see http://pastebin.ubuntu.com/858965/
<TeTeT> anything I need to enable this?
<TeTeT> guess state: pending is not too good
<koolhead17> TeTeT: it does take sometime :)
<TeTeT> hmm, ok
<TeTeT> any place to monitor progress on setting up the container, e.g. logs or so?
<TeTeT> ~/juju/ubuntu-sample it seems
<TeTeT> koolhead17: thanks, was not patient enough
<koolhead17> TeTeT: yes thats where things are
<koolhead17> you have logfiles there
<TeTeT> koolhead17: yeah, they are somewhat populated now, some d/l seem to have happened.
<koolhead17> and also you can go inside state and  units to see more!! :)
<TeTeT> koolhead17: though in units there's a log file that points to a non-existent log under lxc.
<TeTeT> hmm, state is actually empty too
<koolhead17> TeTeT: ooh :(
<koolhead17> first state gets populated and then unit
<koolhead17> are you on oneiric
<TeTeT> koolhead17: no, precise
<koolhead17> TeTeT: i have not played with juju on precise yet, am still on oneiric
<TeTeT> koolhead17: I have two new nics, vethw7LG25 and vethAluklw
<TeTeT> koolhead17: guess they are used for the containers
<koolhead17> hmm possible
<koolhead17> TeTeT: http://paste.ubuntu.com/858979/  this is what i used to get my juju environment working without any pain on a physical machine running oneiric
<koolhead17> also in precise i think default-series: oneiric will change to default-series: precise
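The default-series change koolhead17 mentions would land in the client's environments.yaml. A minimal sketch for the local (LXC) provider of that era — the key names besides default-series are assumptions for illustration, not the authoritative schema:

```yaml
# Hypothetical sketch of ~/.juju/environments.yaml for the local
# (LXC) provider; field names other than default-series are assumed.
environments:
  sample:
    type: local
    data-dir: /home/ubuntu/juju/ubuntu-sample
    admin-secret: some-secret
    default-series: precise   # was "oneiric" on Ubuntu 11.10
```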
<TeTeT> koolhead17: looks about the same, I use the containers in oneiric as well, maybe that's the problem, no oneiric containers inside precise?
<koolhead17> TeTeT: i don`t know much about juju/precise :(
<TeTeT> koolhead17: let me issue a juju destroy-environment and start with all precise again
<koolhead17> TeTeT: sounds cool :P
<koolhead17> TeTeT: also you have not used PPA correct?
<TeTeT> koolhead17: correct, just the precise repo
<koolhead17> hmm.
<TeTeT> gonna install oneiric in another vm and see if it works there. would you recommend a ppa for use with oneiric?
<koolhead17> TeTeT: i have used the default repo and it just worked with the config and manual i shared with u  via ubuntu paste
<koolhead17> i have configured it all on a physical machine with oneiric on it not in a Vm, so i have no idea how will it work/behave in the VM env
<TeTeT> koolhead17: ok, we'll see then if the virt env makes any difference
<TeTeT> koolhead17: with an oneiric vm juju bootstrap on the virtual net start :( I have seen this before in precise, guess it's fixed there now
<TeTeT> koolhead17: installed juju on my precise running laptop, lxc worked just fine, must be a prob with the vms then
<TeTeT> I wonder if it's a generic problem with lxc inside a vm or a juju specific one
<_mup_> Bug #941873 was filed: upgrading charms that use symbolic links fails <juju:New> < https://launchpad.net/bugs/941873 >
<koolhead17> TeTeT: congrats!! :P
<jcastro> SpamapS: we should probably work on our slides today
<SpamapS> jcastro: I am free after 14:00 EST
<jcastro> SpamapS: ok
<jcastro> it's a date, also, we have a rehearsal tomorrow so we can't keep procrastinating.
<jcastro> even though I enjoy not writing slides.
<SpamapS> jcastro: :)
<jcastro> that we will put in U1
<jcastro> and then lose.
<jcastro> again.
<SpamapS> they're not lost, they're "merged"
<jcastro> marcoceppi: nijaba: FYI, m_3 is going to strata this week and gets on a plane soon, so any help reviewing incoming charms would be <3
<jcastro> anyone else getting an install error on the mysql charm?
<andrewsmedina> jcastro: what error?
<jcastro> I am looking now, I was just hoping someone knew before I had to do the work myself. :)
<jcastro> ok weird, it installed this time.
<jcastro> hazmat: where's the code to the charm browser?
<hazmat> jcastro, on the site
<danwee> hello, anybody free to talk about juju ?
<SpamapS> danwee: did you get any further?
<SpamapS> jcastro: 5 minutes?
<jcastro> SpamapS: sure
<danwee> SpamapS: hello SpamapS, hmm yeah kinda. ubuntu doesn't explain deploying openstack with juju using orchestra; their page is really messed up. so after i talked to you, i found out that you have to install juju first on orchestra, then bootstrap the environment and boot the machine. i did, and it didn't work yet, maybe because i manually booted it, i don't know. and i installed juju from the ppa
<danwee> SpamapS: but at least i'm getting the idea now
<danwee> check it out, it's really upside down. if someone doesn't know what he's doing, he will get nowhere with these steps https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<jcastro> our docs on that are pretty horrible unfortunately
<danwee> very horrible man. first i had to go back to the fedora docs to read about cobbler, then i hit the RSA key thing, so i had to read even more on ssh keys, and then i found out that you have to add it in environments.yaml by hand, and that i have to make it available on cobbler
<danwee> and now i find out that you have to make it available to juju and then bootstrap it, just by luck
<danwee> and i still didn't manage to deploy juju yet, i'm at a standstill
<SpamapS> danwee: part of the reason there's not more information available on this is that it is being so heavily changed
<SpamapS> danwee: in many ways, the 11.10 story for this is a proof of concept, and a tech preview.. and the preview has revealed that we had to change a few things :)
<SpamapS> jcastro: by 5, I meant 30 ;) almost ready
<jcastro> SpamapS: it's ok, I am cloud scale, I am used to elastic time.
<danwee> SpamapS: you know, if i manage to deploy juju, i will write it up step by step so even windows guys will find it easy to deploy
<danwee> with explanatory pictures
<SpamapS> danwee: so where are you stuck now?
<SpamapS> danwee: when you say you bootstrapped it.. did you then boot it into the installer with the pre-seed URL?
<danwee> SpamapS: ok here's what i did: i installed orchestra with all the distros and profiles it comes with, then i installed juju, made an environments.yaml, went to the cobbler web interface and made a new system with the precise profile, say. from management i chose > juju-orchestra-available > then juju bootstrap > i went back to the web interface and it was juju-orchestra-acquired > ok now the node machine is still powered off but configured for PXE
<danwee> the whole thing started, then when the machine finished installing, i ran juju status on it, with a big disappointment
<SpamapS> danwee: when it booted up, did it show juju starting on the console?
<danwee> which console? the orchestra ?
<danwee> on orchestra just connecting to environment
<danwee> then connection refused
<SpamapS> danwee: no on the machine
<danwee> although i chose a profile with juju preseed
<danwee> on the machine i just saw it PXE booting the OS
<danwee> i didnt notice a msg about juju
<SpamapS> danwee: did you upgrade to the latest juju?
<danwee> yes i did
<danwee> but i didnt boot it from the URL, couldnt find it, so i manually powered on the machine
<SpamapS> danwee: the url is critical
<SpamapS> danwee: if you PXE booted though, then the installer booted with that url
<SpamapS> danwee: the install proceeded automatically, right?
<danwee> yes
<danwee> and there is a post-installation preseed i saw at the end of installation
<danwee> and the last commands were executed
<SpamapS> danwee: ok, you should be able to ssh to the machine
<danwee> checked the kickstart for the system , it was all executed
<danwee> yes, but only after i add authorized_keys to the machine
<danwee> otherwise it will ask for a password
<SpamapS> danwee: ok thats evidence that something went wrong then
<SpamapS> danwee: juju installs your key on all the machines
<SpamapS> danwee: once you ssh to the machine, is juju installed already?
<danwee> ok where is that key located exactly? /var/lib/orchestra/.ssh/id_rsa.pub ?
<danwee> you mean on the machine? there is no sign of juju whatsoever
<danwee> what user should i use on orchestra when i juju bootstrap? the root user?
<SpamapS> danwee: ok, so something failed during the automatic install
<SpamapS> danwee: ubuntu
<danwee> on the orchestra server ?
<SpamapS> danwee: no, from your machine, to the bootstrapped machine
<SpamapS> danwee: what profile is the machine in?
<danwee> precise-juju
<danwee> tried oneiric also with and without juju
<danwee> i have two machines , orchestra with juju , and the node(machine )
<danwee> on the node i have ubuntu user
<danwee> on the orchestra/juju what user should i use when i juju stuff
<danwee> SpamapS: when the orchestra server is installed by default, it creates an RSA key in /var/lib/orchestra/.ssh
<danwee> and a system user: orchestra
<danwee> i gave orchestra a password and logged in with it, juju bootstrapped with it, and i also tried root and an ordinary user, no luck
<danwee> since you have to create a .juju folder in the home directory of a user, which user?
<bac> hi m_3, could i ask you some questions about your review of the buildbot charms?
<SpamapS> danwee: you actually don't need to use the orchestra server for anything.
<SpamapS> danwee: it just hosts cobbler and the webdav server that juju uses to install/boot machines
<SpamapS> danwee: everything should work fine from your client
<danwee> SpamapS: where is juju installed? on what machine?
<SpamapS> danwee: your client machine, and all machines that are bootstrapped or provisioned using your client machine
<SpamapS> danwee: bootstrap installs the "juju server" so to speak
<danwee> SpamapS: that's where you lost me completely
<SpamapS> danwee: juju consists of about 5 parts
<SpamapS> danwee: juju the client CLI program, and then some agents
<SpamapS> danwee: juju bootstrap provisions a machine to run the 'provisioning agent' ..
<SpamapS> danwee: all machines that are provisioned by juju run a 'machine agent'
<SpamapS> danwee: and any service units that are deployed run a 'unit agent'
<SpamapS> danwee: at the center of all of that, lies zookeeper, which is not really "part" of juju, but is used as the system of record that the agents inspect to find out what they should do
<_mup_> Bug #942179 was filed: Spec needed for subordinates <juju:In Progress by bcsaller> < https://launchpad.net/bugs/942179 >
<SpamapS> danwee: so your first step in running juju is to run 'juju bootstrap' to allocate a machine to run the provisioning agent and zookeeper
<SpamapS> danwee: with EC2, you do this by making EC2 API calls
<SpamapS> danwee: with orchestra, you do this by making Cobbler API calls
<SpamapS> danwee: the cobbler API call you  make tells cobbler to allocate the machine, and sets some values that will be fed into the installer
<SpamapS> danwee: those values install juju on the server
<SpamapS> danwee: *and* they install your SSH key
<SpamapS> danwee: so, if your SSH key is not appearing on the server (under the ubuntu user) then its likely the automated install failed in some way.
<SpamapS> danwee: make sense?
<SpamapS> danwee: and also, does it then make sense why we would try to change this so its not so mind-blowingly complicated?
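SpamapS's diagnostic chain above (no SSH key under the ubuntu user means the automated install never completed, since the install is what lays down both juju and the key) can be sketched as a tiny triage helper — purely illustrative, the function name and messages are made up, not juju code:

```python
# Hypothetical triage based on SpamapS's explanation; not juju code.
def diagnose(ssh_key_installed: bool, juju_agent_running: bool) -> str:
    """Map post-install observations to the likely failure point."""
    if not ssh_key_installed:
        # juju feeds your key in via the installer preseed, so a
        # missing key means the automated install itself failed
        return "automated install failed"
    if not juju_agent_running:
        return "install finished but the juju agent did not start"
    return "node looks healthy; juju status should connect"
```

In danwee's case, needing to add authorized_keys by hand puts him in the first branch.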
<danwee> SpamapS: i'm gonna have to read that a couple of times and think about it
<danwee> SpamapS: would we see changes on 12.04 ?
<SpamapS> danwee: yes
<danwee> SpamapS: well what are you waiting for, go and do your thing, you only have till spring :)
<danwee> SpamapS: thanks for the explanation, wish i could've helped somehow
<SpamapS> danwee: you have helped.. you've helped confirm that it needed work. :)
<danwee> SpamapS: thanks mate, i'm still not giving up on juju, i'll try different things tomorrow. time to take my leave, sleep time. you take care and thanks again really
<danwee> peace out
<hazmat> SpamapS, re doc merges, people seem to be having trouble hitting the right places for merges, i'm thinking we should just merge them for people if needed
<SpamapS> hazmat: Sure. I think that one was just done before the switchover
#juju 2012-02-28
<_mup_> juju/enhanced-relation-support r11 committed by jim.baker@canonical.com
<_mup_> Usage and impl details
<m_3> bac: might be easier to hit me up wth email this week for questions about the buildbot review
<_mup_> juju/enhanced-relation-support r12 committed by jim.baker@canonical.com
<_mup_> Completed impl plan and details
<jcastro> SpamapS: charm school rehearsal!
<jcastro> SpamapS: 3 minutes!
<ivoks> can someone help me out here? what's exactly failing here? http://paste.ubuntu.com/860691/
<ivoks> juju destroy-environment otoh works :)
<james_w> does someone have a pointer to this summit charm I'm hearing about?
<robbiew> m_3: ^^^^^^?
<robbiew> ivoks: hazmat jimbaker or bcsaller are your best bets for help there
<bcsaller> ivoks: the last line of your paste: 2012-02-28 11:08:53,092 DEBUG Environment still initializing. Will wait.
<bcsaller> ivoks: sounds like its still doing the bootstrap
<bcsaller> takes a while
<ivoks> bcsaller: it's not
<ivoks> bcsaller: i've dug into it a bit more
<ivoks> bcsaller: none of the juju services are started within the instance
<hazmat> ivoks, are you on/ssh'd into the machine?
<ivoks> hazmat: i'm not now, but yes, i can ssh into it
<hazmat> ivoks, which version of juju are you running?
<ivoks> hazmat: in an hour :)
<ivoks> hazmat: all precise
<ivoks> host and guests
<hazmat> ivoks, from ppa or the distro version?
<ivoks> distro
<hazmat> ivoks, cool, if you can log in and get the cloud-init logs that should be helpful
<ivoks> hazmat: i've noticed it complains about bad syntax for juju-admin
<ivoks> and also both juju services said that --nodaemonize (or --nodaemon) is a bad option
<hazmat> ivoks, that would imply a client vs node version mismatch
<hazmat> ie different juju versions
<ivoks> hazmat: i'm reimplementing openstack as we speak, so i'll be able to reproduce this in 60-90 minutes (or less)
<hazmat> there were some large changes that landed last week, and they caused some incompatibilities between versions
<hazmat> ivoks, reimplementing or redeploying? ;-)
<hazmat> ivoks, cool
<ivoks> hazmat: both :/
<ivoks> hazmat: hardware and software :D
<ivoks> hazmat: ok, i have system up and ready
<hazmat> ivoks, the file i'd be interested in is.. /var/log/cloud-init-output.log
<ivoks> give me a second, i need to bootstrap it
<ivoks> hazmat: http://paste.ubuntu.com/860965/ <- that's from console
<ivoks> hazmat: well, same thing is in the log
<m_3> james_w: summit's still in progress... it's a combo of pgsql, memcache, and a fork of lp:~michael.nelson/charms/oneiric/apache-django-wsgi/trunk with summit-specific setup
<james_w> m_3, interesting, I'd like to take a look when it's nearing completion
<james_w> I have another django thing I want to charm, so I'm keen for some convergence
<m_3> james_w: sure, I'll ping you for summit help... I understand most of what's going on so far
<m_3> but there are a couple of places where it assumes interactive installation
<james_w> in the init-summit stuff?
<m_3> james_w: I'll push up to lp:~mark-mims/charms/oneiric/summit/trunk and ping you when there's more to look at
<james_w> thanks
<m_3> I'm planning on keeping with the puppet-based django charm for now... might back off of that later, but haven't decided yet
<james_w> I think it's good as an example, but a set of helpers (puppet modules?) to make writing django charms easier would be better
<ivoks> hazmat: machine-agent.log shows: juju.state.errors.MachineStateNotFound: Machine 0 was not found
<ivoks> hazmat: i didn't look into juju internals, but does it ask nova-compute for the state of the node? i just noticed "AttributeError: 'dict' object has no attribute 'state'" in nova-compute's log
<hazmat> ivoks, sorry was in a meeting
<ivoks> hazmat: no worries
<hazmat> ivoks, can you pastebin the cloud-init/user-data .. its at /var/lib/cloud/instance/user-data.txt
<hazmat> this sounds vaguely familiar
<ivoks> hazmat: http://paste.ubuntu.com/861000/
<hazmat> SpamapS, m_3 does that ring any bells.. juju-admin: error: unrecognized arguments: 2007-01-19 2007-03-01 2007-08-29 2007
<hazmat> its like its getting junk back from the metadata server
<hazmat> ivoks, what version of openstack?
<ivoks> hazmat: 2012.1~e4~20120224.12913
<hazmat> ivoks, if you do curl http://169.254.169.254/1.0/meta-data/instance-id  what do you get back?
<ivoks> 1.0
<ivoks> and then bunch of dates
<hazmat> that's the problem
<ivoks> hazmat: http://paste.ubuntu.com/861002/
<hazmat> smoser, adam_g does that sound familiar.. metadata server returning back junk
<hazmat> i wonder if i can reproduce with devstack
<SpamapS> hazmat: yes I've seen that.. IIRC its from broken metadata service.
<ivoks> fwiw, this is keystone only auth
<smoser> why are you using 1.0 ?
<smoser> i think it doesn't matter actually, but i wouldn't use that.
<ivoks> er... 1.0 of what?
<smoser> in that url
<smoser> '1.0/meta-data/instance-id'
<hazmat> smoser, because its stable and well known
<smoser> dont use that and it works.
<smoser> hazmat, 1.0 is probably missing all sorts of things you want anyway.
<ivoks> http://169.254.169.254/sdfgsdfg/meta-data/instance-id
<ivoks> that gives the same output
<smoser> right
<smoser> its basically unknown to openstack
<smoser> http://169.254.169.254/2009-04-04/meta-data/instance-id
<smoser> use that
<hazmat> smoser, but it's returning what it knows, and it returns 1.0 as a valid value
<ivoks> now that's right
<ivoks> i-00000001
<smoser> hazmat, yeah, you're right. it does list it in the index.
<smoser> which is obviously wrong.
<hazmat> ie. it should work, but its a bug in ostack imo
<hazmat> we can update it to a newer version.. but really we don't need much else besides pub/priv address and instance id from the md server
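The workaround smoser gives above (query the 2009-04-04 API path rather than 1.0, and treat anything that doesn't look like an instance id as the junk the broken index serves) can be sketched like this — a hedged illustration, not juju's actual code; only the URL layout and the "i-00000001" id shape come from this thread:

```python
import re

# EC2-compatible metadata service address, as discussed above.
METADATA_BASE = "http://169.254.169.254"

def instance_id_url(version: str = "2009-04-04") -> str:
    # smoser's advice: use the 2009-04-04 API version, not 1.0
    return f"{METADATA_BASE}/{version}/meta-data/instance-id"

def looks_like_instance_id(text: str) -> bool:
    # Good answers look like "i-00000001"; the buggy nova build
    # answered the 1.0 path with version strings and dates instead.
    return re.fullmatch(r"i-[0-9a-f]+", text.strip()) is not None
```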
<ivoks> hazmat: is there anything i can do on a live system... i guess i need to change the path somewhere?
<smoser> ivoks, please open a bug on nova.
<ivoks> smoser: ok
<ivoks> not sure how to phrase it :D
<smoser> metadata service reports to support 1.0 but serves incorrect data
<ivoks> thanks
<smoser> then show your wget's
<hazmat> ivoks, pls paste a link to the bug here, i'm going to explore it a little more
<ivoks> https://bugs.launchpad.net/nova/+bug/942868
<_mup_> Bug #942868: metadata service reports to support 1.0 but serves incorrect data <OpenStack Compute (nova):New> < https://launchpad.net/bugs/942868 >
<hazmat> ivoks, thanks
<ivoks> hazmat: np
<ivoks> 2012-02-28 15:22:47,958 INFO Connected to environment.
<ivoks> finally :)
<hazmat> smoser, does devstack work on precise?
<hazmat> there's a bunch of oneiric only labeling on it
<ivoks> zul's packages work
<ivoks> from the archives
<smoser> hazmat, yes, it works.
<smoser> FORCE=yes
<hazmat> ivoks, smoser thanks
<smoser> hazmat, by works, i mean maybe
<smoser> :)
<smoser> it has worked. and has been broken.
<_mup_> Bug #942868 was filed: metadata service reports to support 1.0 but serves incorrect data <juju:Confirmed> <OpenStack Compute (nova):New> < https://launchpad.net/bugs/942868 >
<zul> hazmat: yes it does
<ivoks> hazmat: note that bootstraping juju with e4 will fail
<ivoks> is there a way to pass additional info to cloud-init while doing juju bootstrap (like apt proxy)? i've seen a bug about this issue...
<hazmat> ivoks, there isn't, although that one in particular is worth exposing i think
<jcastro> SpamapS: 51 charms, did you promulgate something?
<SpamapS> hazmat: I'd love to see the full power of cloud-init exposed to people
<SpamapS> hazmat: by and large, we should abstract the things that make sense across all providers, but being able to slide your own cloud-config in would be quite useful.
<niemeyer> SpamapS: This would render charms dependent on such cloud-init config..
<SpamapS> niemeyer: I hadn't thought about it until you said charms, but really.. subordinate charms almost give us the same capability as cloud-init..
<SpamapS> niemeyer: the only thing missing is the ability to do interesting things to the system in early-boot
<SpamapS> which I don't think most people want
<niemeyer> SpamapS: Agreed
<SpamapS> what I do think people want is the ability to have their data on a RAID of EBS volumes, or the ability to install a bunch of extra stuff... both of those will work fine with subordinates
<SpamapS> Though the EBS thing will still require manual intervention to attach the EBS volumes.. that would be true of cloud-init too actually.
<niemeyer> SpamapS: That'll eventually be handled internally by juju itself
<SpamapS> niemeyer: I've been kicking some ideas around in my head about how it might work. I feel like we're still a world away from addressing it though.
<niemeyer> SpamapS: As far as focus goes, agreed
<SpamapS> niemeyer: like, I don't even know what to ask for. ;)
<niemeyer> SpamapS: That's been on the table since first tech sprint, though
<niemeyer> SpamapS: It's about management of volumes and relationship between volumes and units
<niemeyer> SpamapS: This must be a first-class feature
<SpamapS> niemeyer: yeah, I feel like the simplest thing to do is to just track the volumes that get attached to EBS-rooted machines.. and offer a way to say "add-unit --volume-id d-203494950"
<SpamapS> niemeyer: but I know thats glossing over a lot of details
<jcastro> SpamapS: launchpad workflow email on the list, I mention you explicitly to explain something so if you could that would be swell. :)
<hazmat> we need juju eating its own tail first
<hazmat> then we can launch juju environment infrastructure services like volume management
<hazmat> they would be provider specific in the case of volume management
<hazmat> ie. perhaps ceph for maas/orchestra, ebs for ec2
<hazmat> and then have per unit volumes against the storage service, or in some cases against direct/ephemeral disk
<SpamapS> hazmat: at a simple level, those are all doable as subordinates now.
<SpamapS> or
<SpamapS> when it lands
 * SpamapS notes that Ubuntu 12.04 beta1 is dropping any minute.. w/o subordinates.. :(
<hazmat> SpamapS, subordinates get a little messy here, and subordinates live in their principal's environment
<hazmat> so they'd be coming in after the principal, the sequence is wrong.. its much simpler if its in core imo.. yes it could be hacked between cooperating charms with intimate details..
<SpamapS> hazmat: the subordinate can be related to other services though right? So the ceph-client subordinate will be able to ask services where to mount their ceph-backed-block device
<hazmat> SpamapS, first subordinate branches just landed
<hazmat> SpamapS, yes
<SpamapS> Ordering, once again, rears its ugly head
<hazmat> SpamapS, but what's the lifecycle.. the subordinate can be added to the principal at any point in the principal's lifecycle.
<hazmat> so the principal already has a bunch of data, and then you're copying data around volumes and coordinating the unit's underlying service and relations during that migration.. its pretty messy imo
<SpamapS> hazmat: the principal needs a way to delay full configuration until certain relations are established.. in this case.. the subordinate "where do I put my data" relation. :)
<SpamapS> hazmat: forget subordinates, this is true of other services too like nova api that have two options (local or remote database storage)
<hazmat> SpamapS, again this is much simpler if something as intrinsic as storage is in the core
<hazmat> SpamapS,  required relations are orthogonal, but also of import
<hazmat> deploying subordinates when the principal isn't a known running state, complicates their communication imo.. a unit doesn't respond to relation events till its running/started
<SpamapS> I see the two use cases as equals until we have any kind of real discussion about storage.
<SpamapS> hazmat: so, the answer I have for the nova problem and ceph (it has this problem too) is to just have a config option giving the relations hints about when to store data.
<SpamapS> So for ceph, you just tell ceph "start-nodes=4" and it delays data storage until there are 4 peer units.
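The "config option as a hint" workaround SpamapS describes could look like this inside a peer-relation hook — a hypothetical helper, not taken from the actual ceph charm:

```python
# Hypothetical peer-relation hook helper implementing the
# "start-nodes" hint: hold off initializing storage until the
# configured number of peer units (including this one) have joined.
def decide(config: dict, peer_units: list, self_unit: str) -> str:
    """Return 'initialize' once enough peers have joined, else 'wait'."""
    cluster_size = len(set(peer_units) | {self_unit})
    if cluster_size >= config.get("start-nodes", 4):
        return "initialize"
    return "wait"
```

With the default of 4, a three-unit cluster waits; the fourth unit's join tips it into initializing.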
<hazmat> SpamapS, so i think we've walked around two solutions for required.. one is per the subordinate implementation: don't deploy actual units till the required relations are satisfied.. ie. in much the same way that subordinate units aren't actualized till the subordinate relation is established
<hazmat> the other is a unit doesn't join provider relations till its requires are satisfied
<hazmat> which is actually a bit better/needed imo, else you get additional ordering issues around when something is fully operational.. ie deploy with dep relations doesn't necessarily solve the ordering around consumers of the service attempting to use it pre-initialization.. but waiting till its dependencies are solved and running does.
<hazmat> SpamapS, i don't think config option is the right solution
<SpamapS> hazmat: no, its the "current" solution
<hazmat> fair enough
<_mup_> juju/enhanced-relation-support r13 committed by jim.baker@canonical.com
<_mup_> Reworked docs on changes to relation commands
<jcastro> niemeyer: hey we did this already right "[niemeyer] drive dicussion about interface documentation on juju mailing list"?
 * jcastro is cleaning up spec work items
<niemeyer> jcastro: Kind of..
<niemeyer> jcastro: I did this before, maybe a couple of times even, but I wouldn't consider the problem as solved
<jcastro> my bp is that we have a discussion, a solution or not isn't in my scope. :)
<niemeyer> jcastro: LOL
<hazmat> niemeyer, did you see clint's email to the list re env vars.. i asked him to post there for your feedback
<niemeyer> hazmat: I missed that.. looking
<jcastro> ok I'll put "inprogress" until I am more desperate to close it
<hazmat> jcastro, it wouldn't be hard to do some bespoke interface analysis for documentation
<hazmat> its kind of hacky.. but we could track all the keys being get/set
<jcastro> hazmat: hey speaking of, we need to figure out how to style the generated sphinx docs
<hazmat> via charm introspection
<jcastro> say something magical like "bootstrap can do all that for us easily"
<hazmat> jcastro, ugh.. that means another rt... better start now if you know what you want..
<jcastro> heh ok
<hazmat> jcastro, i'd look around/google for sphinx themes
<hazmat> and see if there's something reasonable out there
<jcastro> I'd like to see if anyone else in the project is styling their sphinx
<jcastro> maybe we can steal
<hazmat> i don't think bootstrap wholesale is going to work with sphinx without some dev time
<hazmat> jcastro, yup.. that's a good plan
<hazmat> jcastro, i kind of like the sphinx theme the celery folks did
<jcastro> I don't see other ones styling it, I am debating if it's even worth the effort right now
<james_w> jcastro, I'm pretty sure http://developer.ubuntu.com/packaging/html/ is sphinx
<hazmat> oh.. nevermind that's the old one
<jcastro> james_w: oh nice, where'd you get that, dholbach?
<hazmat> jcastro, actually it looks we don't need an RT we can just mod the source..
<hazmat> the makefile for the cron is the same
<james_w> jcastro, yeah, he'll know for sure
<hazmat> here's some.. http://pythonic.pocoo.org/2010/1/8/new-themes-in-sphinx-1-0
<james_w> http://bazaar.launchpad.net/~ubuntu-packaging-guide-team/ubuntu-packaging-guide/trunk/files/head:/themes/ubuntu/
<hazmat> these are the built in ones.. http://sphinx.pocoo.org/theming.html
<hazmat> james_w, cool.. it kind of looks ugly but we can grab the colors ;-)
<hazmat> m_3, ping
<m_3> hazmat: hey
<hazmat> m_3, hey just wanted to check and see how charmrunner is working out
<m_3> hazmat: working through some recursion problems with the planner
<hazmat> m_3, oh.. do tell?
<hazmat> m_3, which charm?
<m_3> on ganglia for some reason... lemme get you the logs... link coming
<hazmat> m_3, typically that means a semantic error in the charm, but not always
<m_3> also need to coerce max_time into float for watch
 * m_3 grrrr latency
<hazmat> m_3, i switched the project maintainer to juju-jitsu hackers.. if you need to merge something
<m_3> http://ec2-50-16-5-188.compute-1.amazonaws.com:8080/job/oneiric-local-charm-ganglia/
<m_3> hazmat: ^^
<m_3> I'd like to take it down soon so lemme know when you're done digging through the workspace
<m_3> hazmat: thanks for the ownership... I'll push the float commit
<niemeyer> hazmat, SpamapS: Responded
<hazmat> m_3, what's the channel for test notifications?
<m_3> hazmat: ##charmbot-test
<hazmat> ah it is a double hash
<m_3> trying to be polite with made-up channels :)
<m_3> I'll figure out the problem (like you said, prob just syntax somewhere in the interface)
<m_3> but that's pretty much status atm
<hazmat> m_3, so besides ganglia its been okay?
<hazmat> m_3, are you copying the juju-record output into the workspace?.. is that logs.zip?
<hazmat> its more than just the logs but fair enough
<hazmat> cool!
<m_3> hazmat: just walking through the list of fails... ganglia was the third type of fail
<m_3> I need to re-run to test other timeout fails with new max_time... that was my next step
<m_3> yeah, it's coming along nicely in general though
<hazmat> m_3, yeah.. the service watcher timeout code was a little suspect i think
<hazmat> it had too many ways of going at it
<m_3> hazmat: there was another problem with the runner generating and reading plans from different places, but I'll push that back too
<hazmat> m_3, sounds good
<m_3> there's a bit of strange behavior in the watcher beesides timeout, but one at a time
<m_3> cool... I'm gonna tear down the ganglia run then
<hazmat> niemeyer, responded
<niemeyer> hazmat: re-re-responded
<hazmat> jamespage, wow that's seriously insane about jenkins master/slave
<hazmat> that's like the craziest design impl i've heard all year.
<niemeyer> hazmat: Do you realize the irony there? ;-)
<hazmat> niemeyer, yeah.. i'm a slave, no i'm the master ;-)
#juju 2012-02-29
<hazmat> m_3, all the build links on the ##charmbot-test channel are broken
<hazmat> is that to be expected?
<hazmat> m_3, its the cyclical dependency between hadoop-master and hadoop-mapreduce that's throwing it off i think
<hazmat> m_3, hadoop-master shouldn't be depending on hadoop-mapreduce
<hazmat> yup.. removing that and it works fine
<hazmat> the planner is a nice lint on charm metadata semantics ;-)
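The planner recursion hazmat diagnoses above comes from a provides/requires cycle between two charms. A pair of hypothetical metadata.yaml fragments (relation and interface names invented purely for illustration, not taken from the actual hadoop charms) shows the shape of the problem: if each charm requires an interface the other provides, a dependency planner walking requirements will recurse forever.

```yaml
# hadoop-master/metadata.yaml (illustrative fragment; names made up)
provides:
  namenode:
    interface: hadoop-master
requires:              # removing this block breaks the cycle
  mapreduce:
    interface: hadoop-mapreduce

# hadoop-mapreduce/metadata.yaml (illustrative fragment; names made up)
provides:
  mapreduce:
    interface: hadoop-mapreduce
requires:
  namenode:
    interface: hadoop-master
```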
<_mup_> juju/enhanced-relation-support r14 committed by jim.baker@canonical.com
<_mup_> Use cases
<_mup_> juju/upgrade-sym-link r468 committed by kapil.thangavelu@canonical.com
<_mup_> support extracting bundle symlinks in place over an existing charm for upgrades
<hazmat> jimbaker, its probably worthwhile to send an email to the list re the relation support
<jimbaker> hazmat, that's what i'm working on :)
<hazmat> jimbaker, awesome.. i should have mentioned that earlier
<jimbaker> it's taking a while to get the proposed api change in place
<hazmat> jimbaker, its probably better to do the email first, just in case..
<jimbaker> given the number of bugs and interlocking concepts
<hazmat> yeah.. its got to hit a few places
<jimbaker> hazmat, i have not worked on impl. just the spec
<hazmat> jimbaker, ah.. cool
<hazmat> jimbaker, i'm probably going to take over on that purge-queue-hook thing then
<jimbaker> hazmat, sounds cool
<hazmat> great
<jimbaker> hazmat, i expect at least 6 branches will be needed for this work
<hazmat> jimbaker, sounds about right
<hazmat> jimbaker, the most important i think is just getting unambiguous relation identities
<jimbaker> hazmat, yes, that's the first branch
<hazmat> jimbaker, after that hook cli mods, and status.. what else?
<jimbaker> hazmat, out of band hook command execution
<hazmat> oh.. yeah.. and the hook context as well
<jimbaker> yes, that's the second branch :)
<jimbaker> i'm not planning to do any changes for juju status, i think it's possible to do everything with an enhanced relation-list
<jimbaker> trying to keep the changes as small as possible
<hazmat> jimbaker, there's a problem with status relating to it.. say for example you have mysql with relations to wordpress and mediawiki..
<jimbaker> hazmat, yes, that's a reasonable scenario
<hazmat> jimbaker, status only shows the one rel because its using a dict by rel name, instead of identity
<jimbaker> hazmat, i see what you mean. well... i was going to delete the juju status changes from the proposed spec because i couldn't show a use case from the bug reports/mailing list, but it looks like you have a reasonable one here
<jimbaker> that's at least one more branch then
<jimbaker> hazmat, on the same vein: do you see a need for a juju-status (a version of juju status that can be run from a hook)?
<jimbaker> right now, i cannot justify it, so it's pending deletion...
<jimbaker> but that would be 2 more branches for that support
<hazmat> jimbaker, huh.. hooks and status are on remote ends
<hazmat> i don't follow
<jimbaker> hazmat, indeed they are... might be useful for a brain. so i guess we can delete.
<jimbaker> it can always be done in some subsequent api change, just trying to keep this work reasonably limited
<hazmat> bcsaller, jimbaker .. for small branches, i'm thinking we should just  move to approved on one review, if the original reviewer feels the change is small/simple..
<jimbaker> hazmat, +1 on that
<bcsaller> hazmat: I'm good with that
<hazmat> the review queue has been flooded for a while it feels like
<hazmat> cool
<hazmat> flacoste, suggested it
<hazmat> he mentioned lp did an experiment with no reviews (ie. self-reviews), and then went back after a cycle and looked back over the commits, to see if they would have benefited from a review.. for the most part they didn't.. i'm not quite feeling that one yet ;-)
<jimbaker> hazmat, ;)
<_mup_> juju/enhanced-relation-support r15 committed by jim.baker@canonical.com
<_mup_> Removed juju-status, other cleanup
<hazmat> interesting.. first 'real' google go project i've seen http://code.google.com/p/vitess/
<jimbaker> hazmat, definitely good to see, especially the pros/cons of go discussed http://code.google.com/p/vitess/wiki/ProjectGoals
<jimbaker> i naively thought they were using a better gc, but guess the project just needs work on that
<hazmat> jimbaker, gc at scale is dark arts
<hazmat> a generational gc would work for a cache
<hazmat> i just hope they don't get into the  thousand gc  knobs of java.. looking at all the gc hints in programs like cassandra or elasticcache needs a book to reference
<jimbaker> hazmat, well that's certainly a perverse result of large market share
<jimbaker> and of course the fact that java code is not necessarily written by top notch devs...
<hazmat> or what scale your using it at
<hazmat> fwereade, g'morning
<matsubara> hello there. I'm having a problem starting a precise instance on ec2 using juju. I'm trying to deploy the oneiric jenkins charm on a precise instance. The problem is that the ec2 instance doesn't seem to come up online
<matsubara> as in, I can't ssh into it to debug what's wrong with the deployment
<hazmat> matsubara, do the ec2 tools show the instance is running? did you have problems with bootstrapping or just the charm?
<matsubara> hazmat, not sure where the problem is
<matsubara> hazmat, and yes, ec2 console shows the instance as running and with a public ip assigned
<hazmat> matsubara, okay.. why don't we walk through the steps.. you bootstrapped, and then deployed a charm?
<matsubara> hazmat, I had a environment bootstrapped previously
<hazmat> ah
<matsubara> where I deploys a jenkins instance on oneiric
<matsubara> I deployed, I mean
<hazmat> matsubara, how old is the environment?
<matsubara> then, using the same bootstrapped environment, I tried to deploy the precise jenkins instance
<matsubara> not older than 2 weeks
<matsubara> should I have bootstrapped another environment?
<hazmat> matsubara, we had an incompatible change in the ppa version of juju a little over a week ago
<hazmat> fwereade__, we probably should have sent an email out about that one
<fwereade__> hazmat, damn, yes we should :(
<matsubara> hazmat, is there a workaround? what do I need to do?
<hazmat> matsubara, are you using the ppa?
<matsubara> AFAICT, no
<matsubara> hazmat, ii  juju                          0.5+bzr457-0ubuntu1           next generation service orchestration system
<hazmat> matsubara, do you have any value specified for juju-origin in environments.yaml
<hazmat> matsubara, i think that implies you're using the ppa
<hazmat> oh.. maybe not.. sorry
<SpamapS> hazmat: is that the version of juju on your bootstrapped machine too?
<SpamapS> hazmat: juju-origin defaults to distro if you are using the distro version. :)
<hazmat> matsubara, can you pastebin the console output of the instance, it should be available from the ec2 console or and also via the ec2 cli tools
<hazmat> SpamapS, just wanted to check that's the latest version in precise.. and it appears to be
<hazmat> the restart support started landing right after 457
<SpamapS> hazmat: DOH
<SpamapS> hazmat: whats really going to break things is when people try to deploy an older version of juju with juju-origin: distro .. as in.. an oneiric instance.. that will be r398
<SpamapS> hazmat: I'm convinced now, we can't do it this way anymore. We have to start storing juju in file storage and distributing it to the nodes.
<hazmat> SpamapS, agreed, its insane as it is
<hazmat> SpamapS, would you mind bringing it up on the list? if not i can
<SpamapS> hazmat: I've been holding off applying for a FFe because of subordinates landing.. would rather not bother the release team with 2
<SpamapS> hazmat: I'll email the list, I've been thinking about this for a while.
<hazmat> SpamapS, thanks
<flaviamissi> hey! I'm writing a charm for django, and I want to export some environment variables in the `db-relation-changed` hook, but when I try to access this from a django app, after relating the service with mysql, the environment variables exported by the hook are not available. I could just write them to a file, but I like the approach of using environment variables for that... does anyone have a tip for this problem?
<SpamapS> flaviamissi: you really *should* be writing them to disk, in a place where your django app's startup can read them.
<SpamapS> flaviamissi: if you reboot, or have to restart the service manually, you won't have access to the environment that the charm had access to.
<flaviamissi> SpamapS: yeah, you're absolutely right. Do you know how Heroku solves that issue? because they use environment variables for database information..
<SpamapS> flaviamissi: I don't think Heroku has the same structure as Juju.. perhaps those are stored somewhere and re-executed every time?
<flaviamissi> SpamapS: that's what I thought, I think you're right, the best approach with Juju is writing those to a file.
<SpamapS> flaviamissi: right, juju is just an event framework with a conduit for sharing configuration.
<flaviamissi> SpamapS: but the environment variables are so clean... :~
<flaviamissi> SpamapS: thank's for the clarification :)
<SpamapS> flaviamissi: sounds to me like Heroku requires you to give up more control though.
<flaviamissi> SpamapS: yeah, I have that impression too.
<SpamapS> flaviamissi: care to share your charm? I'd be happy to suggest a clean way to do what you're intending.
<flaviamissi> SpamapS: Sure! https://github.com/timeredbull/charms
<flaviamissi> SpamapS: we are integrating Juju with openstack
<SpamapS> flaviamissi: great start, it looks nicely organized
<SpamapS> flaviamissi: juju_charm="/var/lib/juju/units/${juju_unit}/charm"
<SpamapS> flaviamissi: thats a no-no
<SpamapS> flaviamissi: $CHARM_DIR will always be set
<flaviamissi> SpamapS: great!
<SpamapS> flaviamissi: also the cwd when charms are run will always be the root of the charm
<SpamapS> flaviamissi: in some cases, that may not be the right path. :)
<SpamapS> rather, the one you've explicitly chosen, may not be the right path
<flaviamissi> SpamapS: hmmmmm, changing it now
<flaviamissi> SpamapS: thanx :)
<SpamapS> flaviamissi: Other than that, you should move *all* of joined into changed.
<SpamapS> flaviamissi: and instead of just exporting them, just write them to a file that 'start_gunicorn' sources.
<flaviamissi> SpamapS: hmmm
<SpamapS> flaviamissi: I'd also recommend putting start_gunicorn and stop_gunicorn in /usr/local/bin so you can use them without the charms.
<flaviamissi> SpamapS: great! I'll do that.
<flaviamissi> SpamapS: but... why should start_gunicorn source the file with the variables?
<flaviamissi> SpamapS: sorry if it's a noob question.. :/
<SpamapS> flaviamissi: I assume that the django app running inside gunicorn is the bit that needs those variables.
<flaviamissi> SpamapS: that's right
<flaviamissi> SpamapS: oooh, right, I got it :)
<flaviamissi> SpamapS: you've helped a *lot*! thank you ^^
<SpamapS> flaviamissi: I'd just put it in the helloworld root.. 'settings.sh' or something.. and then just 'source /home/app/helloworld/settings.sh'
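The advice in this exchange could be sketched roughly as follows. This is a hypothetical `db-relation-changed` hook, not the actual charm under discussion; the variable names, file location, and relation keys (`host`, `database`, `user`, `password`) are assumptions. The idea is the one SpamapS describes: persist the relation settings to a file instead of exporting them, so the start script can source them even after a reboot or manual restart.

```shell
#!/bin/sh
# Hypothetical db-relation-changed hook sketch (names are assumptions).
# Persist relation settings to a file rather than exporting env vars,
# so they survive reboots and manual service restarts.
CHARM_DIR=${CHARM_DIR:-.}              # juju sets CHARM_DIR inside hooks
SETTINGS_FILE="$CHARM_DIR/settings.sh"

write_settings() {
    # $1=host $2=database $3=user $4=password
    cat > "$SETTINGS_FILE" <<EOF
export DATABASE_HOST='$1'
export DATABASE_NAME='$2'
export DATABASE_USER='$3'
export DATABASE_PASSWORD='$4'
EOF
}

# Inside a real hook the values come from relation-get; the guard lets
# this sketch also run outside a juju unit without failing.
if command -v relation-get >/dev/null 2>&1; then
    write_settings "$(relation-get host)" "$(relation-get database)" \
                   "$(relation-get user)" "$(relation-get password)"
fi
```

A start script like the `start_gunicorn` mentioned above would then begin with `. "$CHARM_DIR/settings.sh"` before launching the app.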
<SpamapS> flaviamissi: no, thank you. :)
<flaviamissi> SpamapS: xD
<SpamapS> we appreciate that juju is new and weird, and that you are willing to play with us. :)
<flaviamissi> SpamapS: it has been a great experience
<flaviamissi> SpamapS: are you a juju developer?
<SpamapS> flaviamissi: heh.. sort of. :)
<SpamapS> flaviamissi: I'm not in the core dev team, more of a power user. :)
<flaviamissi> SpamapS: great, i've been thinking in send a patch
<frankban> hey SpamapS: do you have a minute? I was looking at juju-jitsu, it seems that a shell alias is used to override the juju command. AFAICT this sure works well for tests using bash, but not so easily for python tests.
<frankban> I've seen the proposal to use env vars to set up charms repository and namespace, and IMHO that should solve the problem. In the meanwhile, what do you suggest? I could just continue using the juju_wrapper script.
<flaviamissi> SpamapS: because we use proxy here, and juju doesn't seems to work properly behind one
<matsubara> hazmat, hey there, you were helping me up with that juju issue on precise. it seems I'm using the packaged version rather than the ppa's.
<matsubara> would the console output of the instance help debug the problem?
<SpamapS> frankban: I have been thinking about that..
<SpamapS> frankban: I think just need to create a temporary PATH override until the env vars are available. So just put 'juju-jitsu-wrapper' in $PATH somewhere as juju
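The temporary PATH override SpamapS suggests could look something like this; the wrapper's install path is an assumption (adjust it to wherever juju-jitsu actually put `juju-jitsu-wrapper`).

```shell
# Shadow the real juju command with the jitsu wrapper for this session.
# The wrapper path below is an assumption; adjust to your install.
mkdir -p "$HOME/bin-override"
ln -sf /usr/lib/juju-jitsu/juju-jitsu-wrapper "$HOME/bin-override/juju"
export PATH="$HOME/bin-override:$PATH"
# Python test suites inherit PATH, so subprocess calls to "juju" now hit
# the wrapper without relying on a bash-only shell alias.
```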
<SpamapS> flaviamissi: I think there may already be a bug report. Its a common request. :)
<flaviamissi> SpamapS: I saw it :)
<hazmat> matsubara, yes.. very much so..
<frankban> SpamapS: thanks.
<SpamapS> frankban: I'd expect the environment variables to land quite soon though. :)
<frankban> SpamapS: great!
<matsubara> hazmat, https://pastebin.canonical.com/61266/
<matsubara> hazmat, machine 6 is the first one I started where I noticed the problem. machine 7 is the second one but that one didn't have any output in the system log
<hazmat> matsubara, this appears to the libc6 upgrade problem when installing juju
<hazmat> matsubara, at the moment the work around is to manually specify a machine image to use
<hazmat> ie. picking one of the latest nightlies
<hazmat> from http://cloud-images.ubuntu.com/precise/current/
<hazmat> and putting it in ~/.juju/environments.yaml for the ec2 environment as default-image-id
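The workaround hazmat describes would look roughly like this in ~/.juju/environments.yaml. All values below are placeholders; in particular the AMI id is not real, you would pick a current one from cloud-images.ubuntu.com/precise/current/.

```yaml
environments:
  sample:                                  # placeholder environment name
    type: ec2
    control-bucket: juju-sample-a1b2c3     # placeholder
    admin-secret: some-secret              # placeholder
    default-series: precise
    default-image-id: ami-00000000         # placeholder: a current precise nightly
```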
<matsubara> hazmat, cool. thank you. let me try that
 * hazmat hopes it works
* jcastro changed the topic of #juju to: Interested in writing a charm? Join the contest! http://juju.ubuntu.com/CharmContest | http://j.mp/juju-florence | http://j.mp/juju-docs | http://j.mp/irclog | #juju-dev | Office Hours (1600-1900UTC)
<matsubara> hazmat, do I need to choose the one ebs root? (or it doesn't matter?)
<matsubara> s/one/one with/
<matsubara> looks like the daily build images don't even start
<jcastro> hazmat: do we handle Amazon spot instances in juju?
<m_3> hazmat: re: build links...yes broken until I flick the switch on publishing
<m_3> hazmat: thanks, I'll fix the hadoop cycle
<hazmat> matsubara, after the environments.yaml change, it takes a moment for the provisioning agent to notice the change
<hazmat> jcastro, no
<mchenetz> good afternoon…
<matsubara> hazmat, does it matter that the bootstrapped environment was bootstrapped with oneiric now that I changed the environments.yaml file for the precise release?
<hazmat> matsubara, at the moment it shouldn't.. as those versions should be compatible. otoh you wouldn't be having these problems if you had a fresh environment
<hazmat> the next upload to precise will be incompatible with the oneiric version
<matsubara> right
<hazmat> at the moment the best option to utilize a single version of juju across multiple distro releases is to utilize the ppa
<SpamapS> hazmat: or use a branch
<hazmat> but even that's not a guarantee as the ppa is a trunk build, SpamapS and i were discussing this earlier right before you left, namely that juju should be distributing itself to all machines
<hazmat> SpamapS, yes, using a stable branch would work well (juju-origin: lp:branch_location)
<matsubara> hazmat, right. I'm bootstrapping a new env to test.
<hazmat> but in terms of moving towards juju upgrading itself, having juju distribute itself would work a bit better
<SpamapS> hazmat: indeed
<_mup_> juju/enhanced-relation-spec r16 committed by jim.baker@canonical.com
<_mup_> Rework juju status output, consistency guarantees, and other finalization
<_mup_> juju/enhanced-relation-spec r17 committed by jim.baker@canonical.com
<_mup_> Merged docs trunk
<matsubara> hazmat, bootstrapping a new environment worked, but I lost access to the old bootstrapped environment. Is there a way to access that one? Even if I change my environments.yaml back to what it was, it still refuses connection to the oneiric bootstrapped env
<marcoceppi> o/ hi everyone
<hazmat> matsubara, ? if you want to have multiple environments, you should have multiple entries in environments.yaml one per environment.. if you change the details of an environment (things like control bucket or name) then juju won't know how to contact the original environment.. if you need to destroy a previous environment you can make sure it has the same name and do juju destroy-environment.
<hazmat> matsubara, ie.. when you say you bootstrapped a new environment, did you destroy the old one or use a new entry in environments.yaml.. if you need to reference a non-default environment pass -e env_name on the cli after the subcommand
<hazmat> marcoceppi, greetings!
<matsubara> hazmat, that's what I did (and I didn't change the control bucket, access-key, secret-key for them) just created a new environment with the same values plus the ami type and series
<matsubara> then I bootstrapped this new one and lost access to the old one
<matsubara> when I try juju -e $env it says it can't connect to the environment
<matsubara> so, I think I kinda broke everything
<matsubara> is there any way to find the key used to log into the ec2 instance?
<hazmat> matsubara, it sniffs a public key from the ~/.ssh/  if one isn't specified in environments.yaml
<hazmat> matsubara, the previous one was working, so its not clear why access would have been lost
<matsubara> hazmat, yeah, it's unclear to me as well. I'm trying to connect to the instance to get the data but the instance is not accepting my ssh key
<matsubara> hazmat, managed to connect to the instance. I was using the wrong username.
<hazmat> ah
<matsubara> (it's still unclear why juju can't see both environments...)
<SpamapS> matsubara: do they have the same control-bucket ?
<hazmat> that would be bad
<SpamapS> like crossing the streams
<matsubara> yep
<matsubara> SpamapS, where do I get that value from?
<SpamapS> matsubara: that has to be unique to every environment
<SpamapS> matsubara: it stores the seed information that tells clients where to find the bootstrap node
<matsubara> SpamapS, right. that makes sense. how do I generate a new one?
<SpamapS> matsubara: lately I attach the date to the environment name when I bootstrap.
<matsubara> SpamapS, so it's just a random string? it's not a hashed value from somewhere else?
<matsubara> btw, isn't it a bug that juju doesn't tell me I have two environments with the same control-bucket value?
<hazmat> matsubara, hmm.. yeah.. it is
<hazmat> matsubara, its just a random string
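Putting the advice from this exchange together: each environment gets its own entry in ~/.juju/environments.yaml and its own random control-bucket string, never shared between entries. All names and values below are invented placeholders.

```yaml
environments:
  oneiric-env:
    type: ec2
    control-bucket: juju-oneiric-9f2c51   # random string, unique per environment
    admin-secret: oneiric-secret
  precise-env:
    type: ec2
    control-bucket: juju-precise-4e7ab0   # a different bucket, never reused
    admin-secret: precise-secret
    default-series: precise
```

A non-default environment is then addressed explicitly on the cli, e.g. `juju status -e precise-env`, as hazmat notes above.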
<matsubara> cool. I'll file the bug once I'm done with my experiments
<matsubara> thanks for all the help hazmat and SpamapS
<hazmat> matsubara, incidentally even with the control bucket change, destroy-environment will do the right thing (it works by env name)
<matsubara> right. I was being careful to not run destroy-environment before I can get my data out of it
<_mup_> juju/enhanced-relation-spec r18 committed by jim.baker@canonical.com
<_mup_> Finish TODOs
<_mup_> Bug #943610 was filed: Specification for enhanced relation support <juju:In Progress by jimbaker> < https://launchpad.net/bugs/943610 >
<jimbaker> with that and the merge proposal, about to generate the corresponding proposed api change email to juju mailing list... hold on please :)
<hazmat> jimbaker, cool
<jimbaker> hazmat, hopefully it will delight everybody. fingers crossed ;)
<ejat> ping jcastro
<SpamapS> ejat: its usually safest to start with somebody's name, like
<SpamapS> jcastro: ping ^^
<jcastro> hi
<jcastro> oh sorry I was burning AWS CPU time with boinc
<ejat> SpamapS owh okie
<ejat> just want to check .. i was just looking at the bugs in the charm target list
<ejat> line 125
<jimbaker> and email sent on the proposed api change. time to walk the dog! :)
<jcastro> ejat: that's symfony right?
<jcastro> that's in review right now right?
<ejat> jcastro : not sure .. but i already tag new-charm
<jcastro> ah right
<jcastro> SpamapS: marcoceppi: review?
<jcastro> who else is around ...
 * ejat trying to write another charm … go go go ..
<SpamapS> ejat: whats the status of the new-charm tagged bug?
<SpamapS> I only look at New/Confirmed/Triaged
<SpamapS> In Progress and Incomplete I ignore since they usually mean its not ready for review
<ejat> so i need to change to confirm / triaged ?
<m_3> hazmat: snapshot restore is hanging about 5-10% of the time ( http://ec2-107-22-3-212.compute-1.amazonaws.com:8080/job/oneiric-local-charm-cloudfoundry-server-dea/1/console )
<SpamapS> ejat: bug #?
<m_3> hazmat: changing it to use verbose and catch stderr properly... hopefully I'll catch more info next time
<ejat> bug #940140
<_mup_> Bug #940140: Charm needed: Symfony <new-charm> <Juju Charms Collection:In Progress by fenris> < https://launchpad.net/bugs/940140 >
<m_3> hazmat: (just fyi)
<hazmat> m_3, i think i see the issue, can you file a bug against charmrunner
<m_3> sure
<ejat> since m_3 already help me to debug and test
<hazmat> m_3, i'm in the middle of a problematic relation bug atm
<m_3> np... thanks!
<ejat> brb
<jcastro> nice he got that one
<marcoceppi> jcastro: o/
<jcastro> he was stuck on some java thing before that wasn't packaged and it was generally not fun
<jcastro> marcoceppi: https://launchpad.net/bugs/940140
<_mup_> Bug #940140: Charm needed: Symfony <new-charm> <Juju Charms Collection:Confirmed for fenris> < https://launchpad.net/bugs/940140 >
<jcastro> We know you love php dude
<marcoceppi> and I <3 Symfony
<matsubara> is it possible to pass the instance-type to juju deploy at run time?
<ejat> SpamapS: so i need to change to confirm / triaged?
<marcoceppi> jcastro: looking at it now
<ejat> thanks marcoceppi
<jcastro> SpamapS: see if we could sort merge proposals we wouldn't have this bug status confusion
<jcastro> just sayin'
<SpamapS> jcastro: what confusion?
<SpamapS> jcastro: In Progress would be like Work In Progress in a merge proposal.. same problem. :)
<ejat> is there something in juju school mentioning the status ....
<SpamapS> ejat: no, it was not clear, so I have updated https://juju.ubuntu.com/Charms to make it more clear.
<ejat> \0/
<SpamapS> ejat: also just noted that you need to point reviewers to your charm.. no mention of it in the bug
<ejat> noted + thanks
<ejat> updated the branch
<marcoceppi> Any idea if PEAR uses HTTPS?
<marcoceppi> Or does payload verification?
<SpamapS> marcoceppi: it does not do either
<SpamapS> marcoceppi: I looked into it at one point. :(
<SpamapS> neither pecl nor pear do anything to protect the user
<marcoceppi> onedayiwillwriteabetterphpdeploymentservice
<SpamapS> "pear is a dev tool"
<marcoceppi> "pear is a pos"
<SpamapS> marcoceppi: better to just dump any unpackaged PEAR modules you need into the charm.
<marcoceppi> I agree, considering how tedious the update and release cycle is for module owners
<SpamapS> "pear don't care, pear don't give a s***"
 * marcoceppi concludes review
<marcoceppi> Is there an idea of "optional interfaces"
<marcoceppi> It's been a while :)
<marcoceppi> Would that be a peer?
<ejat> :)
<ejat> pear fruit :)
<SpamapS> marcoceppi: all requires are actually optional
<marcoceppi> SpamapS: I noticed that actually
<SpamapS> marcoceppi: at some point we may provide some mechanism for hardening or softening that
<SpamapS> marcoceppi: we're seeing some ordering problems because they're optional... so a  discussion needs to happen.
<marcoceppi> I think that responsibility should lie within the charm/hooks itself though
<marcoceppi> Or, if there are problems, then a conversation should happen
<SpamapS> marcoceppi: for instance, there are times where you want to store your data locally in sqlite, and others where you want mysql. That shouldn't be two charms IMO, we should be able to have that choice declared, and a simple way to change the default behavior.
<hazmat> SpamapS, re ordering around relations, outside of them being established, notional ordering supposes a steady state/goodness, whereas dependency fulfillment is done by existence.
<marcoceppi> SpamapS: I agree, and am struggling with that ATM
<marcoceppi> But that's something that happens in the config-changed hook, nothing related to interfaces, correct?
<SpamapS> hazmat: somehow you always manage to make me go cross eyed
<hazmat> SpamapS, sometimes i have that effect. in english: just because a relation is established and thus a dependency satisfied doesn't imply that it's ready to use.
<SpamapS> marcoceppi: thats how you deal with it today.. but it would be a better experience if there were a uniform way to say "don't serve X until you have satisfied Y"
<marcoceppi> SpamapS: interesting idea
<hazmat> yeah.. a nice way for a charm to say.. i don't provide anything till X
<SpamapS> hazmat: right, I'm looking for a better answer than the one that I proposed since my proposal requires steady state.
<SpamapS> as hazmat pointed out to me yesterday, if we make disk storage a 1st class abstraction in juju, that may handle the whole thing properly
<SpamapS> since you could, in theory, declare that you need disk storage | mysql storage..
<SpamapS> but, forget what I'm saying, I get off in the weeds sometimes.
<jimbaker> SpamapS, i'd appreciate if you'd take a look at the proposal i sent to the mailing list, it should satisfy this need
<SpamapS> jimbaker: ooo I didn't know it was something actively being looked at. Will read ASAP :)
 * SpamapS is only about 6 hours behind on email ATM.. woot
<jimbaker> also, i'm curious if the potential tongue twister of `juju do` will be what we end up implementing ;)
<jimbaker> any hook command from anywhere juju runs would seem to be some goodness
 * jimbaker understands email backlog, for sure
<jimbaker> hazmat, can you respond to the api change email re your observation "juju do is out of scope and dangerous"? i guess the statement that mysql having multiple relations is complex may be overstating these types of scenarios. certainly more complex than 1 instance of a wordpress blog + mysql
<jimbaker> just trying to keep the discussion in the appropriate channel, per api changes
#juju 2012-03-01
<marcoceppi> heh, so I did a stupid thing. I bootstrapped on EC2 and deployed a crap load of charms to test something. Then I left the office and went home: ERROR Invalid SSH key
<marcoceppi> Can I destroy from EC2 without Juju getting sad faced
<_mup_> juju/hook-alias-expansion-redux r468 committed by kapil.thangavelu@canonical.com
<_mup_> scheduler stop values are ignored if the scheduler is running
<jamespage> morning all
<jcastro> anyone have the spec for the automagic dependency check handy?
<jcastro> lp:~hazmat/juju/auto-magic-dependency-spec seems like it's the canonical location
<hazmat> jcastro, that's not quite the same thing..
<jcastro> ah ok
<hazmat> jcastro, i mean well.. it depends on what you mean ;-)
<jcastro> do we have a spec for juju recommending dependencies on a charm?
<jcastro> I know we've talked about it and the answer was "some day", just wondering if we had gotten beyond that
<hazmat> jcastro, if you mean deploy dependencies automatically.. that's what its doing.. but that isn't practical it turns out
<jcastro> hazmat: oh, ok, I am trying to answer this guy's question:
<jcastro> http://askubuntu.com/questions/109042/how-would-one-know-a-charms-dependencies
<jcastro> and my answer so far is "yeah we don't do that yet." but I wanted to link him to something.
<jcastro> or in this case, I guess explain why we don't want that anymore?
<hazmat> jcastro, so knowledge is easy to satisfy.. i'd like to put the charmrunner plans onto the charm browser, so its more clear
<jcastro> ok so the tldr; is "it'll be in the charm browser?"
<hazmat> jcastro, but in terms of deploying it into an environment.. say you want to deploy mediawiki into an environment, and you have 3 mysql services already there.. which one do you use to satisfy the dep, or do you deploy a new one.
<jcastro> ah, right
 * hazmat needs some coffee
<jcastro> indeed, caffeine time
<jcastro> ok I left the guy an answer, any one feel free to submit an edit to clarify or correct.
<hazmat> jcastro, re the charm browser currently, you can go to any charm page (probably via search); on a given charm page, you can click an interface and it takes you to a page that shows who provides and requires it
<jcastro> yeah but we can improve that a bit
<hazmat> also the search works directly from the chrome/chromium's url bar if you tab on the site name .. which is magically awesome ;-)
<jcastro> like, mediawiki says it requires memcache, but it doesn't really
<jcastro> the real issue I think here is that we don't have READMEs for the basic older charms
<jcastro> that would make all of this go away
<hazmat> true, mediawiki doesn't mark any of those as optional
<hazmat> https://bazaar.launchpad.net/~charmers/charms/oneiric/mediawiki/trunk/view/head:/metadata.yaml
<hazmat> but that's partly because the default is optional anyways at this point..
<jcastro> hmm, maybe we should highlight the Configs and Interfaces more in the web UI than the hooks
<jcastro> if I'm consuming a charm do I care about the hooks?
<jcastro> I would think I'd care about the config options I can set and if I need something else
<jcastro> anyway, for now I think just asking people to go back and README their charms is a good 90% fix
<jcastro> and we're heavily recommending it on the new charms
<jcastro> still, the guy asked a good question
<hazmat> jcastro, docs need some love ;-)
<jcastro> I know
<_mup_> Bug #944038 was filed: Inform users they have an environment.yaml with the same control bucket value <juju:New> < https://launchpad.net/bugs/944038 >
<SpamapS> hazmat: :(
<SpamapS> hazmat: you're suggesting dropping my favorite part of Jim's spec
<SpamapS> hazmat: juju do is not an anti-use case. I've been asking for it for a *long* time.
<SpamapS> hazmat: and it has always been suggested that these commands would be usable by cron jobs, ssh, etc.
<hazmat> SpamapS, cron jobs can execute hooks directly and use the cli api.. but the very notion of juju do is pretty crackful imo
<hazmat> SpamapS, post the team meeting currently ongoing i was hoping to do a g+ with jimbaker if your up for it
<SpamapS> hazmat: cron jobs would have to set the socket path.. and other weird stuff. wrapping that is nice.
<SpamapS> hazmat: hrm.. I'm interested but I'm under the gun for a few other things.. so I'll pass for now.
<hazmat> SpamapS, fair enough
<hazmat> SpamapS, socket path has a default known address, it doesn't need to be set
<hazmat> SpamapS, the problem is the other side isn't really listening outside of a hook execution atm
<SpamapS> hazmat: I think you misunderstood the desire to have juju do.
<SpamapS> hazmat: From what I see, its so you can make it easy to interact with juju outside hooks.
<SpamapS> but I digress
<SpamapS> I think its two different things
<SpamapS> and should be discussed as such
<hazmat> SpamapS, it is two different things, using the hook cli api outside of hooks is the one we need to support
<hazmat> having juju client support for executing arbitrary hooks.. is not
<SpamapS> its not so much the client on your remote machine though..
<SpamapS> right?
<SpamapS> Its just so that while you're ssh'd into the machine, you can use relation-get or call a hook
<SpamapS> anyway, I don't want juju do to delay the introduction of all the other goodness
<hazmat> SpamapS, 'juju' is the client.. regardless of where it is..
<SpamapS> because the other stuff.. being able to relation-list one relation from another.. is *critical*
<hazmat> SpamapS, agreed the entire hook cli api offered by hooks should be executable easily outside of hooks
<hazmat> SpamapS, what's a use case for executing a hook directly?
<SpamapS> hazmat: re-asserting configuration
<SpamapS> hazmat: admin logs in, screws everything up debugging.. wants to leave it back in the state the charm asserts
<hazmat> SpamapS, the problem is not every hook is idempotent.. calling something like -depart could mess things up more
<SpamapS> hazmat: I don't have a use case for departed
<hazmat> SpamapS, if we enable the capability.. we enable it for everyone.. a user can just execute config-changed directly at that point
<SpamapS> *that* would be fantastic
<hazmat> jimbaker, SpamapS g+ invites out
<jimbaker> ok
<jcastro> m_3: Hey, how's the node charm these days?
<jcastro> https://github.com/thedjpetersen/subway#readme
<jcastro> saw this on HN and I wanted to chase them down, it looks cute
<SpamapS> jcastro: that does look cool
<jcastro> especially since it uses node and mongo
<jcastro> that's 2 other charms it can interact with
<jcastro> SpamapS: you have any idea how complete the node charm is?
<SpamapS> jcastro: Its a framework, like django and rails.. needs subordinates to be really amazing.
<jcastro> ah dude I get it
<jcastro> i'd need 3 instances for IRC
<jcastro> bootstrap, subway, and mongo right?
<SpamapS> jcastro: yes and no
<SpamapS> jcastro: Since this is almost certainly a "low scale" app, you can just have it install mongo locally.
<SpamapS> jcastro: I was doing the same thing w/ mediagoblin before they decided to rip out mongodb ;)
<jcastro> ah
<jcastro> ok I'm going to go just join their channel
<jcastro> they seem pretty cool
<SpamapS> jcastro: when are we going to start working on project fido again? Next week?
<jcastro> yeah
<jcastro> we totally forgot to plan for Strata running into this week
<adam_g> hmm is there some hard limit on the # of settings that may be passed via relation-set in a single hook?
<adam_g> can't seem to get more than 7 of 10 to the other side
<SpamapS> adam_g: shouldn't be
<SpamapS> adam_g: are the values large?
<SpamapS> adam_g: zk does have a 1MB limit on any single node, and I believe relation-set sticks all of them in a single node.
<SpamapS> But I'd think you would have something much more horrible if you hit that.
<SpamapS> dunno if it truncates or errors
<adam_g> SpamapS: actually, nevermind. verified by flooding the relation with a thousand settings before i realized i fat fingered something in the charm :{
<SpamapS> adam_g: lol
 * SpamapS sends adam_g's fingers to The Biggest Loser to get yelled at
<adam_g> fat finger summer camp
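SpamapS's point about ZooKeeper's ~1MB node limit can be sanity-checked from a hook before calling relation-set. A minimal sketch, not documented juju behavior: the payload, key names, and the exact 1MB figure are all assumptions for illustration.

```shell
# Illustrative pre-check before relation-set: ZooKeeper rejects znodes
# over ~1MB, and (per SpamapS above) relation-set puts all settings in
# a single node. Payload and limit here are assumptions for the sketch.
payload="dbhost=10.0.0.5 dbname=wiki dbuser=mw"
size=$(printf '%s' "$payload" | wc -c)
limit=$((1024 * 1024))
if [ "$size" -lt "$limit" ]; then
  echo "payload ok: $size bytes"
  # relation-set $payload    # only valid inside a hook context
else
  echo "payload too large: $size bytes" >&2
fi
```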
<m_3> jcastro: the node charm is actually in good shape
<jcastro> good to hear
<m_3> it's pretty much ready for review... had to revamp it a couple of weeks ago for the mongodb talk
<jcastro> SpamapS: pastebin/lodgeit charm is ready for round #2 of review. Fight!
<flaviamissi> Hey! Is there a way to execute a command in a unit with `juju ssh`? I've tried ssh's approach of passing the command at the end, but juju doesn't interpret it the way I expected..
<robbiew> SpamapS: m_3: -> http://ubuntuone.com/3fNyLF45rp016mWud0KMBc
<robbiew> juju-jitsu!!
 * robbiew returns to work now
<SpamapS> robbiew: nice, is that hong kong phooey?!
<robbiew> yup
<m_3> robbiew: awesome!
<m_3> I'm sitting in a talk entitled "Data Jujitsu"
 * m_3 harumph!
<_mup_> Bug #855989 was filed: Document usage with orchestra <juju:Confirmed> < https://launchpad.net/bugs/855989 >
#juju 2012-03-02
<hazmat> flaviamissi, that should work in the ppa
<hazmat> jimbaker, ^ executing commands after juju ssh works?
<jimbaker> hazmat, it should
<jimbaker> there's specific testing for it, and i understand it's been used by people like m_3
<jimbaker> (not to mention myself)
<_mup_> juju/refactor-machine-agent r457 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/refactor-machine-agent r458 committed by jim.baker@canonical.com
<_mup_> Addressed review points
<_mup_> juju/robust-test-removed-service-unit r458 committed by jim.baker@canonical.com
<_mup_> Merged upstream and resolved conflict
<_mup_> juju/refactor-machine-agent r459 committed by jim.baker@canonical.com
<_mup_> One last rename
<_mup_> juju/robust-test-removed-service-unit r459 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<_mup_> Bug #873907 was filed: Security group on EC2 does not open proper port <juju:Confirmed> < https://launchpad.net/bugs/873907 >
<flaviamissi> jimbaker: I saw the test in juju's code
<flaviamissi> jimbaker: also put a pdb in to check what's going on; the extra params never get to the open_ssh function, the error is raised before that
<jml> hi
<jml> ISTR there's a guide somewhere for setting up a machine to do charm development locally using lxc.
<jml> I've had a shallow poke around the juju.u.c site and couldn't find it
<jml> I'll just plod away on the charm school page and see where that gets me
<marcoceppi> jml you can check out this page: https://juju.ubuntu.com/CharmSchool
<marcoceppi> Nevermind, you already have that URL
<jml> marcoceppi: :)
<jml> marcoceppi: thanks for the confirmation though
<jml> what does data-dir get used for?
<jml> specifically, why is it in /tmp?
<marcoceppi> jml: data-dir is just the directory for the zookeeper state and various log files
<marcoceppi> It's only used on local environments
<jml> marcoceppi: so the chroots don't live there?
<marcoceppi> To my knowledge, no
<marcoceppi> jml: http://readthedocs.org/docs/juju-docs/en/latest/provider-configuration-local.html
<jml> marcoceppi: when I run 'juju bootstrap' I am told "SSH authorized/public key not found."
<marcoceppi> You need to make sure you have both a private key and a public key in ~/.ssh/ folder
<jml> marcoceppi: Hmm. I bet I need to make sure they are called either id_dsa(.pub) or id_rsa(.pub)
<marcoceppi> Yeah, it uses your id_(r|d)sa(.pub) file
<jml> marcoceppi: cool. some quick symlinking addressed that.
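The key-name requirement jml hit ("SSH authorized/public key not found.") can be checked up front. A small sketch based on marcoceppi's id_(r|d)sa note; the helper name is invented for the example.

```shell
# Sketch: verify an SSH keypair with a name juju's local provider
# recognizes (id_rsa or id_dsa) exists in a given .ssh directory.
# has_juju_ssh_key is a hypothetical helper, not a juju command.
has_juju_ssh_key() {
  for k in "$1/id_rsa" "$1/id_dsa"; do
    [ -f "$k" ] && [ -f "$k.pub" ] && return 0
  done
  return 1
}

if has_juju_ssh_key "$HOME/.ssh"; then
  echo "key found; juju bootstrap should get past the key check"
else
  echo "no key: generate one, or symlink an existing pair as jml did"
fi
```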
<jml> I'm now getting "error: internal error Network is already in use by interface virbr0" errors. I wonder if that's because I didn't log out & log in again
<marcoceppi> jml:  That's exactly why :)
<jml> marcoceppi: I'd like to know the underlying reasons for that.
<marcoceppi> jml: I'm not 100% sure since I've never used the LXC containers, but I believe it's because of how libvirt sets up the network interface and the permissions associated with that; logging in and out (or restarting) is essentially the same thing as running newgrp libvirt for your user
<jml> hmm.
<marcoceppi> I've always just known: Installed libvirt for the first time? Restart for success
<jml> marcoceppi: interesting. I wonder how that works, since I can't actually see any configuration anywhere that would put my user into the libvirtd group
<jml> I'm mildly reluctant to restart because I'm in the middle of doing other things :)
<marcoceppi> Yeah, I detest restarting as well. You could just try `newgrp libvirtd`
<jml> marcoceppi: yeah, but my user isn't added to that group yet. Am I supposed to explicitly do that? The docs don't say to do so.
<marcoceppi> jml: You shouldn't have to do it
<marcoceppi> let me dig up the doc I read that on
<jml> marcoceppi: thanks. I really appreciate your help.
<marcoceppi> jml:  It's on the same page as the data-dir stuff: https://juju.ubuntu.com/docs/provider-configuration-local.html
<jml> marcoceppi: yeah, but I'm pretty sure that only works if something has done 'adduser jml libvirtd' first.
<jml> marcoceppi: and afaict, that has not been done
<jml> which makes me _want_ to restart, just to see if something magically does that :)
<marcoceppi> Installing libvirt should have done that, at least on my machine I'm in the libvirtd group :)
<jml> marcoceppi: well, this is a pretty darn fresh precise install, so I'm thinking that maybe there's a packaging or documentation bug here.
<jml> anyway, I'll re-log now and see what happens.
<jml> that's a reboot later and still no libvirtd membership
<jml> but 'adduser jml libvirtd; newgrp libvirtd; juju bootstrap' succeeds
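jml's fix boils down to group membership, which can be checked before bootstrapping. A sketch of that check; the group name follows this discussion (on some releases it is `libvirt` instead of `libvirtd`).

```shell
# Sketch of the check behind jml's fix above: the local provider drives
# libvirt, so your user needs libvirtd group membership, picked up via
# newgrp or a fresh login. Group name is an assumption from this thread.
if id -nG | tr ' ' '\n' | grep -qx libvirtd; then
  status=yes
else
  status=no    # fix: sudo adduser "$USER" libvirtd && newgrp libvirtd
fi
echo "libvirtd membership: $status"
```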
<marcoceppi> jml: I'm not sure if that's a bug in libvirt on Precise or not. I don't typically use local containers. But if juju bootstraps successfully then \o/
<jml> hmm. also 'bootstrap' doesn't do anywhere near as much downloading as the documentation suggests
<jml> marcoceppi: well, yeah, getting further makes me happy. If my experience makes it smoother for others, that would make me happier still.
<marcoceppi> jml: it should take about 10 seconds to bootstrap on local
<marcoceppi> does juju status show you a machine running?
<jml> "To speed up the process, you will want to juju bootstrap before the meeting where you have a decent internet connection, which will take a while because it must debootstrap the operating system."
<jml> yep
<marcoceppi> that sounds outdated, I know bootstrapping to EC2 can take some time because of startup lag on Amazon
<james_w> jml, are you in the office today?
<jml> "The bootstrap command will prompt for sudo access as the machine agent needs to run as root in order to create containers on the local machine." -- I also didn't get prompted
<jml> james_w: no, I'm at home.
<marcoceppi> jml: that's odd, I do get prompted
<jimbaker> flaviamissi, i assume you have tried quoting the command? also it would be helpful to have the actual juju ssh you are trying to make
<jml> so I've deployed a few charms (wordpress, mysql, oops-tools)
<jml> and have got 'juju status |grep public-address' on a watch
<jml> but so far no public addresses, and no evidence that it's doing anything
<jelmer> hi jml
<jelmer> jml: what is the status of the machines?
<jml> oh no, apt-cacher-ng is active, so I guess it's downloading stuff in some background process somewhere
<jml> jelmer: 'pending'
<jml> yay, it works!
<jml> "Local provider environments do not survive reboots of the host at this time, the environment will need to be destroyed and recreated after a reboot." -- does this mean I'll have to bootstrap again after rebooting?
<james_w> jml, yeah, assuming that is still current
<jml> james_w: ta
<jml> I'm guessing that the packages downloaded by apt-cacher-ng are all still present though
<james_w> should be
<james_w> unless they are stored in data-dir
<SpamapS> they're in apt-cacher-ng's normal location
<SpamapS> /var/cache I think
<SpamapS> jml: perhaps you didn't get sudo prompted because you had recently done some kind of sudo action?
<jml> SpamapS: entirely possible. I don't recall that I did, but memory is a slippery thing.
<jml> oh wait
<jml> duh
<jml> sudo adduser jml libvirtd
<jml> that'll be the reason.
<hazmat> jml, reboot should work now (w/ juju ppa).. data-dir needs to be in a non-temp location
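hazmat's point about a non-temp data-dir maps to the local-provider section of environments.yaml. A sketch of such a config, assuming the field names from the local-provider docs linked earlier; every value besides `type` and `data-dir` is an illustrative placeholder, and the temp dir here only stands in for `~/.juju`.

```shell
# Sketch: a local-provider environments.yaml whose data-dir is outside
# /tmp, per hazmat's note that reboot survival needs a non-temp data-dir.
conf_dir=$(mktemp -d)            # stand-in for ~/.juju in this demo
data_dir=$conf_dir/local-data    # on a real setup: somewhere persistent
mkdir -p "$data_dir"
cat > "$conf_dir/environments.yaml" <<EOF
environments:
  local:
    type: local
    data-dir: $data_dir
    control-bucket: juju-local-demo
    admin-secret: change-me
    default-series: oneiric
EOF
echo "wrote $conf_dir/environments.yaml"
```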
<jml> hazmat: thanks.
<jml> http://code.mumak.net/2012/03/local-juju.html fwiw.
<SpamapS> hazmat: how does reboot work with local provider?
<SpamapS> hazmat: is it putting upstart jobs on my box?
<hazmat> SpamapS, yes
<SpamapS> hazmat: COOL
<SpamapS> hazmat: does it also make the containers autostart now?
<SpamapS> I know I could read it
<SpamapS> but I'm gearing up for fix it friday
<SpamapS> no time to hunt down answers. :)
<jamespage> jcastro, m_3: hbase and refactored hadoop charms ready for review!
<jamespage> I'll blog something next week about using them...
<jcastro> jamespage: nice!
<jcastro> hey ping me before you do it
<jcastro> I'd like to have it just natively on cloud.ubuntu.com
<m_3> jamespage: whoohoo!
<jamespage> jcastro: not sure what that means but I'll draft it up somewhere first
<jamespage> jcastro, m_3: I've spent quite a bit of time on the READMEs so hopefully that should help a lot....
<jcastro> jamespage: hey so duplicate content on the web = bad, so I was thinking of just having you write it right on cloud.u.c instead of on your personal blog
<jcastro> make it a bit more Pro looking
<jamespage> jcastro, sure
<jamespage> jcastro: you mean javacruft.wordpress.com is not pro looking? :-)
<jcastro> I appreciate the work on the READMEs
<jcastro> those will be very useful
<jcastro> jamespage: I am trying not to say "your blog makes it look like you are a walking java developer stereotype."
<jamespage> lol
<jamespage> jcastro, no problemo
<SpamapS> wow.. tons of deep, well thought out email threads today. I may never get to my part in fix-it friday
<negronjl> m_3, SpamapS: https://bugs.launchpad.net/charms/+bug/944989
<_mup_> Bug #944989: charm needed: distcc <Juju Charms Collection:Fix Committed by negronjl> < https://launchpad.net/bugs/944989 >
<jcastro> SpamapS: nathan's 2 charms are ready for round 2
<SpamapS> jcastro: Its fix it friday not review it friday! ;) (ok ok, I'll take a look once I get < 138 unread emails ;)
<m_3> negronjl: cool...  I'll hit the review queue first thing next week
<negronjl> m_3: cool
<m_3> SpamapS: you can leave stuff for me for next week if you want
<m_3> negronjl: in your neck of the woods atm actually!
<SpamapS> m_3: well I already did the 1st round on those two, so I will finish/promulgate them. :)
<negronjl> m_3: really ? where ?
<m_3> Santa Clara at strata
<negronjl> m_3: ahh ... I'm about 4 hours south today .... lol ... I guess I'm just avoiding you :P
<m_3> ha!
<SpamapS> m_3: I hate that conference center. Its so close to Great America.. I can never stop thinking about how great it would be to just have a conference talk where you say "Screw it guys, no talk, we're riding the big dipper *NOW*"
<m_3> yeah, you can see it from the hotel room
 * m_3 haven't been on roller coaster in years!
<negronjl> m_3, SpamapS: I think the park is closed this time of year
<m_3> yeah, didn't pay attention to see if it was running or not
<SpamapS> negronjl: that sux
<m_3> grrrr... really need to be able to open-port manually without having to reconfigure or upgrade
<SpamapS> all the more reason to rent it out and go on the dipper though
<SpamapS> m_3: why did your port change?
<m_3> SpamapS: ha!  yes... it would be legendary
<negronjl> SpamapS: Yup. that makes it worse ... you get to see all of the fun you _could_ be having if the park were open ... :/
<negronjl> UDS At Great America :D
<m_3> jenkins is on 8080, want to go to 80 and remove the proxy b/c I'm cheap
<SpamapS> m_3: oh, you *can* actually call open-port w/o a context, IIRC
<SpamapS> m_3: You just have to set the socket path
<m_3> it wouldn't take it
<m_3> oh,
<SpamapS> bummer
<flaviamissi> jimbaker: yes, I've quoted the command, I am trying to execute the following command: $ juju ssh django/0 ls
<jimbaker> flaviamissi, i would expect that to work
<flaviamissi> jimbaker: and it outputs the juju --help screen, with "juju: error: unrecognized arguments: ls" at the end
<flaviamissi> yeah, me too
<jimbaker> flaviamissi, does it work with ls * ?
<flaviamissi> nope, it executes the ls command in my machine
<jimbaker> flaviamissi, hmmm
<flaviamissi> jimbaker: look at the error output: juju: error: unrecognized arguments: ls charms _charms data_dir env
<SpamapS> flaviamissi: actually it evaluated the * on your machine
<SpamapS> flaviamissi: what version of juju?
<jimbaker> SpamapS, right, you do need to protect from sh
<jimbaker> by quoting it
 * SpamapS REALLY thinks its time for juju to have a --version
<flaviamissi> jimbaker: from dpkg -l: 0.5+bzr398-0ubuntu1
<SpamapS> flaviamissi: AHA!
<flaviamissi> SpamapS: ?
<SpamapS> flaviamissi: arguments to ssh were added as a feature after 11.10 released
<SpamapS> flaviamissi: I'd suggest using juju from the PPA
<jimbaker> all makes sense now
<flaviamissi> SpamapS: aaaaaahm
<SpamapS> flaviamissi: sudo add-apt-repository ppa:juju/pkgs && sudo apt-get update && sudo apt-get install juju
<flaviamissi> yeah, shit
<flaviamissi> sorry for that
<flaviamissi> rs
<SpamapS> its ok
<SpamapS> we should have shipped a man page
<flaviamissi> and I would love to have a --version
<flaviamissi> :B
<SpamapS> though --help most likely does not show args to ssh ;)
<jimbaker> SpamapS, correct, it does not in the old version of juju ssh --help
<flaviamissi> yeah, it doesn't
<flaviamissi> I had to read the source
<SpamapS> usage: juju ssh [-h] [-e ENV] unit_or_machine [command]
<SpamapS> in current juju, it shows [command]
<SpamapS> should really be [command [args...]]
<flaviamissi> just for the record, I installed from the ppa and it worked
<flaviamissi> thank you, guys :D
<SpamapS> flaviamissi: woot
<SpamapS> flaviamissi: note that your environment is going to continue deploying the old juju because of the way juju works.
 * SpamapS is reminded he is supposed to send an email to the list about this
<flaviamissi> hmmm
<flaviamissi> SpamapS: I should re-bootstrap it, then, right?
<flaviamissi> be right back
<SpamapS> flaviamissi: thats probably the best option yes
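The glob pitfall SpamapS and jimbaker diagnosed above can be demonstrated without juju at all: the local shell expands an unquoted `*` before juju ever parses its arguments, which is exactly where the "unrecognized arguments: ls charms ..." error came from. A self-contained demo (file names are illustrative):

```shell
# Demo of the failure mode above: with `juju ssh django/0 ls *`, the
# *local* shell expands the glob before juju sees it. Quoting the
# command keeps it as one argument for the remote side.
workdir=$(mktemp -d)
cd "$workdir"
touch charms data_dir env
set -- ls *                 # unquoted: glob expands locally
unquoted=$#
set -- 'ls *'               # quoted: a single argument survives
quoted=$#
echo "unquoted: $unquoted args, quoted: $quoted args"
```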
<jono> jcastro, meet alienth
<jcastro> hi alienth!
<alienth> jcastro: hola :)
<jcastro> hey so, jono says you'd dig to learn some juju
<jcastro> I can give you a rundown on how everything works
<alienth> yeah!
<jcastro> heh ok, so I guess I'll start with something familiar.
<jcastro> https://github.com/reddit/reddit/wiki/Install-guide
<jcastro> let's pretend I want my own reddit and I want to deploy this.
<jcastro> the tldr is juju takes all these steps here, and lets you script them (in whatever language you want).
<alienth> gotcha
<jcastro> so that when I want to deploy on EC2, bare metal, or OpenStack it's deployed the same
<jcastro> so ...
<alienth> here is the existing script we point folks to: https://gist.github.com/922144
<jcastro> I'm going to pseudo command this page you have here.
<alienth> but, it installs everything on one machine (not ideal)
<jcastro> oh perfect dude
<jcastro> this is basically that script 2.0
<jcastro> except we collect them all together and make it so people can build
<jcastro> so we'd do like this:
<jcastro> juju bootstrap
<jcastro> juju deploy reddit (this basically is the same as your script)
<jcastro> juju deploy postgres
<jcastro> juju deploy memcached
<jcastro> then we do:
<jcastro> juju add-relation reddit postgres
<jcastro> this fires off hooks, which are basically the db commands on your install page
<jcastro> juju add-relation reddit memcached
<jcastro> hooks to whatever you need memcached and reddit to do fire off
<alienth> would the add-relation bit be where the reddit ini file is configured with the proper PG server?
<jcastro> the deploys fire off instances
<jcastro> yeah
<alienth> kk
<jcastro> the add-relation hooks are basically "I am memcached, when I talk to postgres I need to do foo, bar, and baz."
<jcastro> and then the hooks are done
<jcastro> you do: juju expost reddit
<jcastro> juju expose reddit I mean
<jcastro> which opens the port, then you just go to the IP.
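jcastro's walkthrough, gathered into one runnable sketch. The `reddit` charm is hypothetical (it doesn't exist in the store); the `run` wrapper and `DRY_RUN` flag are inventions for this example so the sequence can be printed without a bootstrapped environment — drop `DRY_RUN` to execute it for real.

```shell
# Dry-run sketch of the deploy sequence above. DRY_RUN=1 makes run()
# echo each command instead of executing it; the reddit charm is
# hypothetical.
run() { ${DRY_RUN:+echo} "$@"; }
DRY_RUN=1

run juju bootstrap
run juju deploy reddit
run juju deploy postgres
run juju deploy memcached
run juju add-relation reddit postgres
run juju add-relation reddit memcached
run juju expose reddit
```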
<SpamapS> alienth: relations are 2-way configuration channels between the individual units of a service.
<jcastro> I missed rabbit and cassandra
<jcastro> but, we have charms for those two
<SpamapS> wow, we have all those already :)
<alienth> awesome :)
<alienth> out of curiosity, where does juju install reddit from? i don't think it is in the ubuntu ppa yet.
<alienth> cassandra
<alienth> err
<alienth> we actually package it ourself on our ppa
<jcastro> yeah so basically, TL;DR; you could adapt your install script to basically be the install hook
<SpamapS> alienth: "wherever you want it to" :)
<jcastro> alienth: that would fire off a "charmed" version of your install script
<alienth> gotcha
<jcastro> which is basically your install script + some meta data
<SpamapS> alienth: we have charms installing from git, ppa's, the distro, and even embedding tarballs inside the charm
<jcastro> alienth: ok so now here's the cool part.
<jcastro> once you've defined how the reddit charm talks to all the other charms via hooks
<jcastro> you can scale out
<jcastro> so "I need more cassandra" becomes "juju add-unit cassandra" fires off another instance
<jcastro> and it already knows what it needs to do
<alienth> so, does juju have a central store that knows what servers are where?
<jcastro> right so when you start juju
<jcastro> the "juju bootstrap" fires off a node with zookeeper
<alienth> awesome, we actually already use zookeeper
<jcastro> and each subsequent launched instance talks to it.
<jcastro> right so here's what we can deploy today: http://charms.kapilt.com/charms
<jcastro> and there's about 50 more branches of people hacking on stuff that isn't really in there yet.
<jcastro> so when someone writes a charm, we put it in this charm store
<jcastro> and then basically anybody can deploy anything in the store in a similar manner
<jcastro> here's an inprogress hbase one as an example: http://charms.kapilt.com/~james-page/precise/hbase
<jcastro> jamespage: hey, the cassandra charm needs a README. :)
<jcastro> alienth: the "hooks" here for cassandra give you an idea on what the charm does: http://charms.kapilt.com/charms/oneiric/cassandra
<alienth> so, right now, we use haproxy for loadbalancing. If we were to define a relation between the reddit app and haproxy, would we just tell juju to fire off that relation, and then it would modify haproxy to include the new servers once they are built?
<alienth> also, are there any tests that juju can perform before bringing something in to the infra?
<jcastro> so you would do "juju add-relation reddit haproxy"
<SpamapS> alienth: the haproxy charm is pretty simple, and probably needs work, but in its most basic form yes thats how it works.
<SpamapS> alienth: for tests we are just now defining "charm tests" to make sure the charm works as you intend it to. In the past we've also had "tester" charms that would relate to the actual service as deployed and verify it, so you could do that, and once that relation passes, then relate it to the real one. Thats a cool idea actually.
<alienth> interesting
<alienth> yeah, we have to test that a new app server actually serves the app before we can roll it into haproxy
<SpamapS> alienth: Probably the simplest way to do that would be to have the website relation joined hook kick off tests before it sends its port/hostname to haproxy.
<jcastro> that is an interesting idea I've never even thought of
<SpamapS> alienth: another way to do it is to have each app server as its own "service" in juju, and relate it first to the tester, and then to haproxy after those tests pass
<SpamapS> jcastro: gets problematic because of ordering though. :-/
<jamespage> jcastro, on my list to refresh
<jamespage> needs upgrading to 1.x series as well
<SpamapS> once relations can affect other relations, this becomes a breeze. For now, the 2nd method would be better.
<SpamapS> alienth: how "encapsulated" do your tests need to be? Like, is an app server allowed to verify itself, or is the test done remotely?
<alienth> test probably should be remote
<alienth> we test remote manually, at this time
<alienth> to verify the new app server will actually load reddit
<SpamapS> alienth: so another option is to have your check_url return a negative result until tests have passed.
<alienth> gotcha
<SpamapS> alienth: yeah so you could just test a new "unit" manually and when it looks good, flip the switch on it to let haproxy know its ready to go
<SpamapS> alienth: I'd think that would be a huge PITA though
<SpamapS> alienth: automation FTW ?
<alienth> hehe
<SpamapS> alienth: anyway.. it seems like we've accidentally implemented your whole stack, to some degree, in charms already. Perhaps you should challenge one of your interested users to enter our charm writing contest?
<SpamapS> alienth: or even, you could enter it. :)
<alienth> perhaps! i think most of our opensource folks use ubuntu
<jcastro> yeah so the cool bit is the charm store comes with ubuntu
<jcastro> so anybody can just deploy a reddit if they want to
<hazmat> jml, thanks for the blog post
<SpamapS> alienth: we won't hold it against anybody who normally uses fedora.. ;)
<jcastro> hey so we also happen to be running a contest right now for cool charms: http://cloud.ubuntu.com/2012/02/juju-charm-contest-help-bring-free-software-into-the-cloud/
<jcastro> yeah so I think what makes reddit so cool is how much other stuff it can consume
<jcastro> it'd be a wicked demo
<jcastro> alienth: hey so, a ton to consume all at once, but that's basically it.
<alienth> awesome
<alienth> thx for the info
<alienth> jcastro: so, we use puppet pretty heavily right now. Do you guys see juju existing alongside puppet?
<alienth> jcastro: for example, when we kick a server, a puppet recipe does most things at this time
<SpamapS> alienth: we've experimented with embedding puppet in charms.
<SpamapS> alienth: and I think there's a use case for having juju be a source of config data and/or classification for puppet
<SpamapS> alienth: in fact there may be a good argument to use your puppet recipes as the basis for your charms.
<SpamapS> alienth: I see the relationship of puppet <-> juju as the same as autoconf <-> dpkg ...
<SpamapS> alienth: so in theory, juju + tools should be able to just take a set of puppet recipes, and turn that into a charm that is then able to relate to other charms maybe written in chef, or shell.
<benji> hi guys, I just got this error when doing a deploy-service:
<benji> DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -5] No address associated with hostname.
<SpamapS> benji: you need to use 'juju deploy --repository path/to/charms local:charmname'
<SpamapS> benji: your directory where your charms are also needs to have a sub-dir corresponding with the release of ubuntu you want to deploy, such as 'oneiric'
<benji> yep, I forgot the "local:"
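SpamapS's two requirements — the `local:` prefix and a series subdirectory under the repository — can be sketched concretely. The charm name, metadata fields, and directory here are all illustrative, not an existing charm:

```shell
# Sketch of the layout `juju deploy --repository` expects: a series
# subdirectory (oneiric) containing each charm. Names are illustrative.
repo=$(mktemp -d)
mkdir -p "$repo/oneiric/mycharm/hooks"
cat > "$repo/oneiric/mycharm/metadata.yaml" <<EOF
name: mycharm
summary: demo charm
description: repository layout demo
EOF
find "$repo" -mindepth 1 | sed "s|$repo/||" | sort
# afterwards: juju deploy --repository "$repo" local:mycharm
```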
<gary_poster> +1 on a more direct error message in the future :-)
<gary_poster> well, more...friendly?
<SpamapS> I think at this point, the message will be "DEPLOYING x from store.juju.ubuntu.com" .. as in, that service is almost ready
<benji> I suggest: "Oh no! DNS lookup failed, man!"
<SpamapS> "DNS's not here man"
<gary_poster> heh
<benji> http://www.youtube.com/watch?v=MZX1Mvk84Y0
<_mup_> juju/hook-alias-expansion-redux r469 committed by kapil.thangavelu@canonical.com
<_mup_> hooks that die from unhandled signals are considered to have error'd out
<_mup_> juju/hook-alias-expansion-redux r470 committed by kapil.thangavelu@canonical.com
<_mup_> joined event hookalias expansion won't execute a changed if the joined hook fails
<Jake> HELP PLEASE
<philipballew_> Jake, I'm not sure what your problem might be, but if you say what it is and wait around, someone might help out. Or at least someone will be able to tell you whether they know what to do.
<jcastro> SpamapS: hate to be a bother, but Nathan's been waiting like 2 days for his second round of reviews.
<jcastro> I believe I sorted the launchpad status?
<SpamapS> jcastro: Its on my short list for today.. had to wrap up some MIR response bug fixes.
<jcastro> thanks
<jcastro> bummer to miss m_3 and nijaba this week, thanks for the extra effort
<SpamapS> jcastro: charmers has 25 members..
<SpamapS> jcastro: I'd like to see more review from !(us)
<SpamapS> us being, m_3, nijaba, and me. ;)
<jcastro> indeed.
<jcastro> ok I'll just send a mail to the list
<SpamapS> jcastro: our traffic is low enough, its not red alert or anything.. but people should be checking for new charms occasionally
<jcastro> yeah I just want to be more vigilant during the contest
<jcastro> hey look, negronjl is a charmer!
<negronjl> jcastro: look ... what's up
<jcastro> got time to do a charm review for a community contributor?
<SpamapS> BONG BONG , "lodgeit" promulgated.
<negronjl> jcastro: sure ... got a bug number ?
<jcastro> oh sneaky Clint
<jcastro> https://bugs.launchpad.net/charms/+bug/940677
<_mup_> Bug #940677: Charm needed: Stack Mobile <new-charm> <Juju Charms Collection:Confirmed for george-edison55> < https://launchpad.net/bugs/940677 >
<SpamapS> err
<SpamapS> I got that one
<jcastro> negronjl: ok never mind I guess. :)
<negronjl> jcastro, SpamapS: no worries ...
<SpamapS> BONG BONG , "stackmobile" promulgated.
<jcastro> SpamapS: you only promulgate on Fridays, have you noticed that?
<SpamapS> yes
<jcastro> never during the week, when I can write out a nice glorious blog post about how amazing the contribution is
<jcastro> always, nearly always right after I EOD.
<jcastro> I didn't notice at first
<jcastro> touché, sir ...
<SpamapS> http://bit.ly/zTp84S
<negronjl> SpamapS: Since you appear to be in the "promulgate" mood ... you mind reviewing https://bugs.launchpad.net/charms/+bug/944989
<_mup_> Bug #944989: charm needed: distcc <Juju Charms Collection:Fix Committed by negronjl> < https://launchpad.net/bugs/944989 >
<SpamapS> negronjl: OOOO thats a good one!
<jcastro> oh man
<jcastro> that is
<jcastro> negronjl: did you see our discussion with alienth earlier today?
<negronjl> jcastro: no .. where ?
<jcastro> go back a few hours in the log
<negronjl> jcastro: just read it ...
<negronjl> jcastro: to which part are you ref. to, puppet or haproxy ?
<jcastro> I just think the whole thing is interesting
<jcastro> they use a bunch of stuff I hadn't expected, things we already have charms for
<jcastro> SpamapS: mail sent to the list, I hope I don't come across as too condescending
#juju 2012-03-03
<_mup_> juju/hook-alias-expansion-redux r471 committed by kapil.thangavelu@canonical.com
<_mup_> report signal that caused hook to terminate
<negronjl> jcastro:  How do you guys monitor the queue of charms that need to be reviewed ?
<negronjl> jcastro:  If you add me to whatever notification method is in place, I can be more responsive.  Right now, I just look at the Charm needed bugs and go from there but, there is also what almost happened today .. I almost started reviewing a charm that SpamapS was working on ...
<negronjl> jcastro:  I guess a bit more structure on the process would help ( at least me ) working through this in a more efficient manner without stepping on any "toes" :)
<SpamapS> negronjl: I just look at all the bugs tagged 'new-charm' periodically
<SpamapS> negronjl: I tend to only really look hard at the New/Confirmed/Triaged ones
<SpamapS> tho from time to time I will click through to the Incomplete/In Progress ones and see if somebody forgot to change the status
<negronjl> SpamapS: Do you assign the bug to yourself when you are reviewing ?  How do I know that you are not already working on it ?  Do you comment on the bug ?
<SpamapS> negronjl: I don't, I think its fine if 2 people review the same charm.
<SpamapS> negronjl: but to that end, its probably a good idea. :)
<negronjl> SpamapS: ok .. I think we could either comment on the bug or assign it to ourselves while reviewing ( probably both ) so we know that there may be someone already looking at it and we could ask/coordinate better.
<SpamapS> negronjl: thus far I haven't seen much duplication, but yes, that sounds like a good plan
<SpamapS> negronjl: ahh, your distcc was "Fix Committed" .. I should add that to the bug statuses that I dig further into
<negronjl> SpamapS: What's the norm for when you get a charm done .... I thought fix committed would be appropriate ...
<SpamapS> negronjl: I actually think you're right.
<SpamapS> negronjl: I just was focusing on Confirmed/Triaged/New
<SpamapS> negronjl: distcc reviewed
<negronjl> SpamapS: cool .... I can promulgate it then
<negronjl> SpamapS: spoke too soon ... I'll read the review and check things out
<SpamapS> negronjl: sorry, I had a redbull right before I reviewed it ;)
<negronjl> SpamapS: no worries ... I'll work on it
<SpamapS> negronjl: like I said, the only thing you actually really need to fix is the hostname -f's
<SpamapS> negronjl: the other stuff is just me nit picking you to death
<negronjl> SpamapS: nah .. I'll fix it all ... do it right or don't do it ;)
<hazmat> m_3, new charm runner uploaded w/ fix for snapshot
<basil_kurian_> Hi
<basil_kurian_> How can I run the juju hooks within the instances ?
<basil_kurian_> root@root-local-drupal6-0:/var/lib/juju/units/drupal6-0/charm/hooks# ./db-relation-changed
<basil_kurian_> usage: unit-get [-h] [-o OUTPUT] [-s SOCKET] [--client-id CLIENT_ID]
<basil_kurian_>                 [--format FORMAT] [--log-file FILE]
<basil_kurian_>                 [--log-level CRITICAL|DEBUG|INFO|ERROR|WARNING]
<basil_kurian_>                 setting_name
<basil_kurian_> No JUJU_AGENT_SOCKET/-s option found
<SpamapS> basil_kurian_: thats under discussion right now actually. :)
<SpamapS> basil_kurian_: because of the way hooks work, you need to run them in the context of joining/changing relations right now.. but that will change.
<basil_kurian_> Is there any other way by which i can manually run such scripts to debug it ?
<basil_kurian_> How can I get logs of such scripts , when we issue an add-relation command ?
<basil_kurian_> SpamapS: Is there way to see  logs of such scripts , when we issue an add-relation command ?
<SpamapS> basil_kurian_: if you want to just test developing on them, you can use 'debug-hooks'
<SpamapS> basil_kurian_: debug-hooks will ssh to the node, and pop up a window in screen whenever a hook is supposed to be executed
<SpamapS> basil_kurian_: so you can then run the hook over and over in the right context, and edit it all you want. :)
<basil_kurian_> a byobu window , right ?
<SpamapS> right
<basil_kurian_> how can i run the scripts ? goto that directory and then ./<script> ??
<SpamapS> I have used it to write an entire charm before.. :)
<basil_kurian_> oh
<SpamapS> you'll be in the charm root already
<basil_kurian_> ./<scriptname> will work ?
<SpamapS> you should stay there, some charms expect CWD == $CHARM_DIR
<SpamapS> hooks/scriptname
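The debug-hooks workflow SpamapS describes can be sketched as a runnable script. The juju commands themselves are shown as comments (the unit and hook names are hypothetical), and the runnable part only simulates the last step, invoking a hook from the charm root, since some charms expect CWD == $CHARM_DIR:

```shell
#!/bin/bash
# From your workstation, attach a debug session to the unit. Each pending
# hook opens a new byobu/tmux window on the machine, with the hook
# environment (JUJU_AGENT_SOCKET etc.) already set:
#   juju debug-hooks drupal6/0
# Inside such a hook window, run the hook from the charm root:
#   hooks/db-relation-changed

# Offline simulation of "run the hook from the charm root":
charm_dir=$(mktemp -d)          # scratch stand-in for a deployed charm
mkdir "$charm_dir/hooks"
cat > "$charm_dir/hooks/db-relation-changed" <<'EOF'
#!/bin/bash
echo "hook executed, CWD=$PWD"
EOF
chmod +x "$charm_dir/hooks/db-relation-changed"

cd "$charm_dir"                 # some charms assume CWD == $CHARM_DIR
hooks/db-relation-changed
```

Outside a real hook window the hook tools (unit-get, relation-get, ...) will still fail with "No JUJU_AGENT_SOCKET", which is exactly the error basil_kurian_ hit when running the script directly over plain ssh.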
<basil_kurian_> oh ,  let  me try
<basil_kurian_> It is not working :(
<basil_kurian_> please see this http://pastebin.ubuntu.com/866367/
<SpamapS> basil_kurian_: that looks like two separate things
<SpamapS> basil_kurian_: when you ran 'debug-hooks', did it start a byobu session?
<basil_kurian_> yes , it did
<basil_kurian_> see the change in hostname
<basil_kurian_> Sorry I got disconnected
<SpamapS> basil_kurian_: odd
<basil_kurian_> I tried several times ;(
<SpamapS> basil_kurian_: I'd expect JUJU_AGENT_SOCKET to be set
<SpamapS> basil_kurian_: can you pastebin 'env | grep JUJU' ?
<basil_kurian_> on virtual instance , right ?
<SpamapS> yes
<basil_kurian_> root@root-local-drupal6-0:~# env | grep JUJU
<basil_kurian_> root@root-local-drupal6-0:~#
<basil_kurian_> no output
<SpamapS> basil_kurian_: weird!
<basil_kurian_> any file to source ?
<SpamapS> basil_kurian_: you're inside the byobu window, and no JUJU_ ..
<SpamapS> basil_kurian_: no it should be set
<SpamapS> basil_kurian_: what version of juju ?
<basil_kurian_> also, please see the output of pwd
<SpamapS> basil_kurian_: dpkg -l juju would show you
<basil_kurian_> root@root-local-drupal6-0:~# dpkg -l | grep juju
<basil_kurian_> ii  juju                            0.5+bzr467-1juju2~oneiric1              next generation service orchestration system
<basil_kurian_> ran that on virtual instance
<SpamapS> basil_kurian_: ok, so this sounds like a bug. I have not tried debug-hooks on the local provider, so its possible its broken somehow.
<basil_kurian_> I 'm running it on LXC containers
<SpamapS> basil_kurian_: well past my bed time.. so I'll leave you with that. :-/ for a log, btw, you can run 'juju debug-log' to get all the logs except for the 'install' hook.
<basil_kurian_> ok
<basil_kurian_> bye , thanks for the help
<mightbereptar> o/
<mightbereptar> I hope I can learn to create a charm and enter the contest
<_mup_> Bug #945505 was filed: Use ipAddress instead of dnsName now that txaws supports it <juju:New> < https://launchpad.net/bugs/945505 >
<SpamapS> mightbereptar: WELCOME!
<SpamapS> mightbereptar: we hope you can learn as well. :)
<hazmat> SpamapS, there's a default tmux session there thats not associated to hooks where basil was
<hazmat> the hook sessions pop as a new windows in the session
<hazmat> but it sounds like he was in the default window which is not executing a hook
<_mup_> Bug #945862 was filed: Support for AWS "spot" instances <juju:New> < https://launchpad.net/bugs/945862 >
<koolhead17> jcastro: so the 2nd and 3rd place winners at the charm contest both equally get a $100 gift voucher?
#juju 2012-03-04
<SpamapS> hazmat: yeah that makes sense.. that will teach me to try and support users 3 minutes before sleep. ;)
<_mup_> juju/force-upgrade r457 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
#juju 2014-02-24
<bitgandtter> good night everyone
<bitgandtter> is there any document about how to deploy juju on Rackspace infrastructure?
<JoseeAntonioR> marcoceppi: ping, I think it's done, I'll be around for a couple more hours :)
<JoseeAntonioR> marcoceppi: hey, you coming downstairs?
<cargill> trying to run juju test, I get ' does not exist in ~/.juju/environments.yaml'; when specifying the current environment using -e, it says it is already bootstrapped. So I need an environment defined but not bootstrapped?
<cjohnston> hazmat: is this any better: http://paste.ubuntu.com/6986642/ http://paste.ubuntu.com/6986643/
<hazmat> cjohnston, yes.. thanks..
<cjohnston> hazmat: do you need anything else from it before I trash it?
<hazmat> cjohnston, btw.. just needed deployer run output and machine-0.log not all-machines.
<cjohnston> ack
<hazmat> cjohnston, let me check on #juju-dev one moment
<cjohnston> http://paste.ubuntu.com/6986666/
<hazmat> cjohnston, i'm wondering if you picked/sized a larger state server if this issue would go away..
<cjohnston> a 1g server isn't good enough? :-(
<hazmat> depends on how over subscribed things really are on canonistack
<hazmat> it was pretty bad last i looked, not sure what the current state is.
<cjohnston> horrible would be the correct word I believe :-)
<hazmat> cjohnston, have you run mpstat or top to look at your stolen cpu percentage?
<cjohnston> hazmat: no.. the load avg yesterday when I watched it got up to ~3
<hazmat> cjohnston,  on a single vcpu system.. that seems high
<cjohnston> hazmat: I'll give it a try with a larger bootstrap today
<lazyPower> Are we upgrading the image stream information again? I'm getting missing image stream data errors from juju this morning on AWS US East AZ
<lazyPower> ah looks like i missed an update: https://bugs.launchpad.net/juju-core/+bug/1283246
<_mup_> Bug #1283246: cannot deploy on 1.17.2 on trusty: error on image-stream <juju-core:Incomplete> <https://launchpad.net/bugs/1283246>
<rick_h> lazyPower: yea, get .3
<lazyPower> It threw me for a loop. I've seen that error on HPCloud before, but not aws - google sorted me though.
<jamespage> lazyPower, I just got that as well - its 1.17.2 client against a 1.17.3 environmnet
<lazyPower> jamespage: Yep - the bug sorted me out. I forgot we got a version bump last week that i was dragging feet on upgrading.
<lazyPower> Thanks for the follow up though!
<jamespage> lazyPower, np
<cargill> trying to run juju test, I get ' does not exist in ~/.juju/environments.yaml'; when specifying the current environment using -e, it says it is already bootstrapped. So I need an environment defined but not bootstrapped?
<vds> is there a way to juju set a new key, a key that is not in config.yaml already?
<marcoceppi_> cargill: correct, juju test will do bootstrapping for each test
<marcoceppi_> vds: you can't create new configuration values for charms on the fly, it has to already exist in config.yaml
<mattyw> is it possible to run the juju run command from inside a unit?
<nessita> hello! I'm running juju devel (1.17.3-0ubuntu1~ubuntu13.10.1~juju1) and I'm using juju to bootstrap some services for the click-package-index project. With juju we start a solr and an elasticsearch unit. When bootstrapping those I never see the units progressing from the "pending" state. Output of juju status, and unit logs are at https://pastebin.canonical.com/105456/
<rick_h> mattyw: in the dev releases there's a new 'juju run' command that can do that
<mattyw> rick_h, yeah - can that command be run from inside a unit?
<rick_h> vds: no, the key has to exist and the charm has to know what to do/change on that key as there's not an explicit connection between the config.yaml and what the charm does
<rick_h> mattyw: oh, inside the unit? well it runs the command in the unit
<rick_h> like ps or such
<rick_h> an example is juju run uptime
<cargill> marcoceppi_: and how do I create one? I tried adding "test: type:local admin-secret: ..." but still says that it's already bootstrapped
<marcoceppi_> cargill: well, is that environment bootstrapped?
<cargill> no, I just added it to the environments.yaml file
<marcoceppi_> cargill: do you have /any/ local environment bootstrapped?
<cargill> yes, its name is local
<marcoceppi_> cargill: you can't bootstrap two local environments at the same time without some tweaking
<nessita> re: my issue with unit stuck in pending state, one of the errors show "2014-02-24 15:43:22 ERROR juju runner.go:220 worker: exited "upgrader": cannot read tools metadata in tools directory: open /home/nessita/.juju/local/tools/1.17.3.1-precise-amd64/downloaded-tools.txt: no such file or directory" and when browsing the content of /home/nessita/.juju/local/tools/ there is no folder named 1.17.3.1-precise-amd64, just 1.17.3.1-saucy-
<cargill> marcoceppi_: so how can i test stuff then? :)
<nessita> is that a known issue? shall I downgrade to 1.17.2?
<marcoceppi_> cargill: well, you'll need to either destroy the existing local environment, or make a few tweaks to have two simultaneously bootstrapped local environments
<marcoceppi_> nessita: that's something weird, like cloud-init failing, can you pastebin the cloudinit logs?
<nessita> yes
<marcoceppi_> cargill: also, try running ssh elasticsearch/0 and running sudo apt-get update
<marcoceppi_> err, nessita ^^ sorry, cargill
<marcoceppi_> nessita: verify those boxes have networking
<nessita> marcoceppi_: how can I do that?
<marcoceppi_> nessita:  try running ssh elasticsearch/0 and running sudo apt-get update
<nessita> ack
 * nessita1 internet hiccup
<nessita1> marcoceppi_: thanks for the pointer. So, I ssh'd into elasticsearch and confirmed the unit has network connection
<marcoceppi_> nessita1: cool, cloud-init logs will help illuminate why git wasn't installed
<nessita1> pasting
<nessita1> marcoceppi_: http://pastebin.ubuntu.com/6987662/
<nessita> marcoceppi_: the issue I see is the mismatch between the tools folder
<nessita> /home/nessita/.juju/local/tools/1.17.3.1-saucy-amd64 vs /home/nessita/.juju/local/tools/1.17.3.1-precise-amd64
<nessita> the unit expects the latter
<nessita> and cloud-init uses the former?
<marcoceppi_> the unit and cloud-init should both be using the precise one
<marcoceppi_> nessita: this cloud-init from bootstrap node
<nessita> marcoceppi_: perhaps it's a bug in the juju devel tools 1.17.3?
<marcoceppi_> nessita: juju ssh elasticsearch/0
<nessita> I installed the update this morning
<marcoceppi_> and give me the cloud-init log from there
<nessita> marcoceppi_: on it
<nessita> marcoceppi_: where shall I find the cloud init inside the unit?
<marcoceppi_> nessita: somewhere in /var/log
<nessita> marcoceppi_: http://pastebin.ubuntu.com/6987690/
<nessita> perhaps you need the full content not just the last 10 lines, no?
<nessita> let me fix that, sorry
<marcoceppi_> nessita: yeah, full content of the ones in /var/log, don't need /var/log/juju
<nessita> marcoceppi_: http://pastebin.ubuntu.com/6987710/
<nessita> so as you predicted there seems to have been some network failure, but when I ssh'd into the unit, apt-get update fetched all the repos
<marcoceppi_> nessita: yeah, seems like for some reason it wasn't able to resolve dns at first
<marcoceppi_> nessita: so, juju destroy-environment, try bootstrapping/deploying again
<nessita> marcoceppi_: thanks, will do
<nessita> marcoceppi_: so, I destroyed the env and re-bootstrapped, both agents (solr and elasticsearch) show as started, thanks a lot for your help. One more question though: unit-solr-jetty-0.log and unit-elasticsearch-0.log keep showing:
<nessita> 2014-02-24 17:16:46 ERROR juju runner.go:220 worker: exited "upgrader": cannot read tools metadata in tools directory: open /home/nessita/.juju/local/tools/1.17.3.1-precise-amd64/downloaded-tools.txt: no such file or directory
<marcoceppi_> nessita: is there a /home/nessita/.juju/local/tools/1.17.3.1-precise-amd64 directory?
<nessita> marcoceppi_: no, there is not http://pastebin.ubuntu.com/6988138/
<nessita> just 1.17.3.1-saucy-amd64
<marcoceppi_> nessita: interesting, wonder why that didn't get created. What's in 1.17.3.1-saucy-amd64
<nessita> marcoceppi_: http://pastebin.ubuntu.com/6988149/
<marcoceppi_> nessita: huh, maybe make a symlink to that directory named 1.17.3.1-precise-amd64
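A sketch of the symlink workaround marcoceppi suggests, assuming the saucy tools are actually compatible with what the precise-targeted agents expect; it is demonstrated here in a scratch directory rather than the real ~/.juju tree:

```shell
#!/bin/bash
# The upgrader looks for .../tools/1.17.3.1-precise-amd64/downloaded-tools.txt
# but only the saucy directory exists; a relative symlink bridges the gap.
tools=$(mktemp -d)              # scratch stand-in for ~/.juju/local/tools
mkdir "$tools/1.17.3.1-saucy-amd64"
echo "tools metadata" > "$tools/1.17.3.1-saucy-amd64/downloaded-tools.txt"

# Create the directory name the agents are asking for:
ln -s 1.17.3.1-saucy-amd64 "$tools/1.17.3.1-precise-amd64"

# The metadata is now readable through the link:
cat "$tools/1.17.3.1-precise-amd64/downloaded-tools.txt"
```

On the real tree this would be `ln -s 1.17.3.1-saucy-amd64 ~/.juju/local/tools/1.17.3.1-precise-amd64`, which matches nessita's later report that the errors stopped once the symlink was added.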
<nessita> marcoceppi_: can this be "caused" by latest 1.17.3 juju-local?
<marcoceppi_> nessita: hard to say, I haven't seen anyone else have problems yet
<nessita> marcoceppi_: ok, symlink added -- shall the units pick up the change by themselves?
<nessita> or should this "work" on future bootstraps?
<nessita> marcoceppi_: I was getting the ERROR every 3 seconds, now they seem to have stopped
<marcoceppi_> nessita: the agents should keep trying
<nessita> right
<nessita> marcoceppi_: thanks a lot for your help!
<webbrandon> hello @ all of you!
<marcoceppi_> webbrandon: o/
<lazyPower> Need an additional +1 on this MP if anyone has time: https://code.launchpad.net/~stub/charms/precise/postgresql/syslog/+merge/206176  it's going to run pretty quick, solid tests unless someone sees something glaring
<dpb1> HI -- how do I get revision numbers from the store?  I know juju spits them out in different areas, but I was wanting a more api-ish way?
<rick_h_> dpb1: if you query the store you can get the latest version number. e.g. https://store.juju.ubuntu.com/charm-info?charms=cs:precise/juju-gui
<dpb1> rick_h_: ah, excellent, that will be fine.
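The charm-info endpoint rick_h_ points at returns JSON per charm; a sketch of pulling the latest revision out of it follows. The exact response shape (a "revision" field per charm key) is an assumption inferred from the endpoint, and curl is replaced by a hypothetical stand-in so the sketch runs offline:

```shell
#!/bin/bash
# Hypothetical stand-in for:
#   curl -s "https://store.juju.ubuntu.com/charm-info?charms=cs:precise/juju-gui"
# Response shape is assumed; the real store may include more fields.
curl() { echo '{"cs:precise/juju-gui": {"revision": 81}}'; }

info=$(curl -s "https://store.juju.ubuntu.com/charm-info?charms=cs:precise/juju-gui")

# Extract the revision number from the JSON with sed:
revision=$(printf '%s' "$info" | sed -n 's/.*"revision": *\([0-9][0-9]*\).*/\1/p')
echo "latest revision: $revision"
```

This gives the "more api-ish" answer dpb1 was after: one HTTP GET per lookup instead of scraping juju's CLI output.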
<webbrandon> is there a bug with the unit logger?  My install script is running but there is no history in my unit log of the installation.
<marcoceppi_> webbrandon: be more specific, what version of juju, how are you trying to log, etc
#juju 2014-02-25
<manjiri> Hello. I am running into bug https://bugs.launchpad.net/charms/+source/postgresql/+bug/1239681 which appears to have been fixed by sinzui. How can I get that fix?
<_mup_> Bug #1239681: relation-get failing with 'permission denied' <regression> <juju-core:Triaged> <postgresql (Juju Charms Collection):Triaged> <https://launchpad.net/bugs/1239681>
<marcoceppi_> manjiri: what version of juju are you using?
<manjiri> marcoceppi_: ii  juju-core                                    1.16.6-0ubuntu1~ubuntu12.04.1~juju1              Juju is devops distilled - client
<marcoceppi_> manjiri: also, it doesn't say it's been fixed, merely triaged for 1.18
<manjiri> I think I recently upgraded juju-core. Do you recommend that I down-grade? To which?
<hazmat> manjiri, can you reliably reproduce that?
<hazmat> manjiri, are you running into that  with the postgresql charm?
<manjiri> hazmat: I am not running the postgresql charm. I am developing my own (set of) charms for deploying Contrail software. I think the symptoms are the same.
<hazmat> manjiri, namely that relation-get tosses an error?
<manjiri> hazmat: yes. that is correct.
<manjiri> hazmat: as to whether I can reproduce the problem - I am seeing it now. I was thinking of rebooting the machine in the hope that the problem would go away.
<hazmat> manjiri, could you send the log for the unit in /var/log/juju/
<hazmat> i'll try reproducing w/ the postgres charm | unit test
<hazmat> i'm on 1.17/trunk though
<manjiri> hazmat: I am new to IRC. When you say "send" the log - what do you mean?
<hazmat> manjiri, typically pastebin.ubuntu.com if its not sensitive
<sarnold> (the pastebinit tool is awesome for this)
<hazmat> manjiri, welcome to irc btw.. also there's a command line client
<hazmat> sudo apt-get install pastebinit
<hazmat> you can point it to a file as a param or pipe data into the pastebinit cli
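The two pastebinit invocation styles hazmat mentions can be sketched like this. pastebinit needs network access, so a hypothetical stand-in is defined here that just reports what it would upload; the real commands are shown in the comments:

```shell
#!/bin/bash
# Real usage, once `sudo apt-get install pastebinit` is done:
#   pastebinit /var/log/juju/unit-mysql-0.log   # file as a parameter
#   tail -n 100 /var/log/syslog | pastebinit    # pipe data into the cli
# Hypothetical offline stand-in for the real pastebinit:
pastebinit() {
  if [ $# -ge 1 ]; then
    echo "would paste file: $1"
  else
    echo "would paste $(wc -l) piped lines"
  fi
}

log=$(mktemp)
printf 'line1\nline2\n' > "$log"
pastebinit "$log"           # file as a parameter
cat "$log" | pastebinit     # or pipe data into it
```

Either form prints the paste URL in real use, which is what you then share in channel, as manjiri does a few lines later.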
<marcoceppi> hazmat: --wait in deployer just subscribes to the event stream from the API, right? It doesn't actually do proper "wait for 'idle' environment", does it?
<hazmat> marcoceppi, the latter doesn't exist
<hazmat> atm
<hazmat> marcoceppi, yes.. its event stream wait and configurable timeout
<marcoceppi> hazmat: didn't think so, that's why I got kind of excited when I saw the option
<hazmat> marcoceppi, and stop/shortcircuit on error
<hazmat> rephrasing its wait for x seconds without error post action
<manjiri> hazmat: Please check out http://pastebin.ubuntu.com/6991890/
<hazmat> manjiri, are you changing users in your charm?
<hazmat> manjiri, the juju cli commands use a  unix socket owned by root. changing user would be the most likely cause.
<manjiri> hazmat: no change in user
<manjiri> hazmat: to which cli commands are you referring?
<hazmat> manjiri, ok
<hazmat> manjiri, all the commands used to interact with juju from a hook (relation-ids, relation-list, relation-get, relation-set, config-get, open-port, close-port, etc)
<manjiri> hazmat: got it. to reconfirm, my charm does everything as 'root'
<manjiri> hazmat: do you have a recommendation? Downgrade juju-core?
<hazmat> manjiri, not atm
<hazmat> still trying to understand root cause, but past eod, so intermixing other things
<hazmat> manjiri, 1.14 is very old.. not recommended
<manjiri> hazmat: I have 1.16.6-0ubuntu1~ubuntu12.04.1~juju1
<hazmat> understood
<hazmat> manjiri, is your charm opensource?
<hazmat> if so where can i find it..
<manjiri> hazmat: not yet
<hazmat> currently trying to run the postgresql charm unit tests per the bug report
<manjiri> hazmat: I can provide code snippet if you think that it will help
<hazmat> manjiri, that would be helpful
<manjiri> hazmat: Is it OK if I email it to you?
<hazmat> manjiri, sure
<manjiri> hazmat: Give me a few minutes
<hazmat> manjiri, np
<hazmat> stub, ping.. how do you run these postgresql charm tests
<manjiri> hazmat: Sorry that took so long, but I have sent you the information in an email. I will reboot the machine to see if I can continue to make progress without being hit by this bug
<hazmat> manjiri, got it.. i'll have to look at in the morning its late here.. and i'm done for the night
<stub> hazmat: Bootstrap an environment, then 'make test'. tests/00_setup lists packages that might be needed
<stub> hazmat: it has been a while since I tried anything but the local provider
<hazmat> stub, thanks
<hazmat> stub, i use a manual-provider variation on the local provider with lxc btrfs snapshots.. should be about the same.
<hazmat> stub, you still have that failing test per the bug report .. http://pad.lv/1239681
<hazmat> ?
<danob> i installed an LXC container on my ubuntu desktop; when i start or restart my desktop, why do i need to run a2ensite default every time inside the apache2/0 unit???
<marcoceppi> danob: you shouldn't. Are you setting a vhost template?
<chris38home> hi, I'm trying the local provider with a fresh checkout and juju add-machine lxc:0 stays in pending mode, is there any prerequisite to install ?
<marcoceppi> chris38home: uh, I don't think you can LXC on LXC
<sarnold> you can do nested lxc: https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/
<chris38home> ok then juju add-machine should create a new lxc container ?
<sarnold> no idea how well juju will handle it though :)
<chris38home> non nested is also pending
<marcoceppi> chris38home: well, non-nested needs to be started before nested can work
<marcoceppi> chris38home: can you show your juju status in a pastebin?
<chris38home> marcoceppi,  http://pastebin.com/AB8HDfZS
<chris38home> maybe I have a problem with routing : cannot load index "http://10.0.3.1:8040/tools/streams/v1/index.sjson"
<jcastro_> is this bucket thing still a thing? http://askubuntu.com/questions/425745/maas-juju-bootstrap-404
<jcastro_> I haven't seen that one in a while
<Lord_Set> Hello everyone
<marcoceppi> o/ Lord_Set
<Lord_Set> How is the world of Juju going?
<marcoceppi> Lord_Set: really well, imo
<Lord_Set> Glad to hear it. I am really enjoying using it as an app deployment platform. I am doing a pretty good sized test today with it.
<Lord_Set> Going to be using MAAS and Juju to deploy a rack of 35 servers with Openstack and then a second rack of 20 servers to build a Hadoop Cluster. Also going to do some multi-cluster controller tests between different sites. The locations are connected via VPN tunnels.
<Lord_Set> We are upgrading to 100mbps burstable to 1gbps fiber soon between locations.
<marcoceppi> Lord_Set: Sweet! Let us know if we can help in any way
<Lord_Set> I will for sure. I got an email from Dustin Kirkland this morning saying he wanted to discuss what my team is doing with MAAS, Juju and Ubuntu. Was after a good and long conversation with Julian Edwards of the MAAS team.
<jcastro_> Lord_Set, this is awesome, let us know how it goes for you
<jcastro_> marcoceppi, I hate to sound like a raving nut, but putting the docs on github makes things so easy now with the inline editing.
<marcoceppi> jcastro_: you sound like a raving nut ;)
<Lord_Set> I will
<jcastro_> Apache Syncope charm incoming to the queue!
<Lord_Set> I am hoping with this tech startup I am working with to form an awesome partnership with the MAAS and Juju teams. We have a lot of resources and big plans for deployment with 2 different tech startup companies we are moving forward with.
<Lord_Set> Currently 4 geographical sites with a total of 12 racks currently. 2 of which are at Switch SuperNAP in Vegas
<jcastro_> negronjl, I kind of need bug 1267222 fixed, could you possibly update it with what needs to be done and I can ask someone else to fix it?
<_mup_> Bug #1267222: Race condition when deploying a simple replica set <audit> <mongodb (Juju Charms Collection):New for negronjl> <https://launchpad.net/bugs/1267222>
<negronjl> jcastro_, sorry about that ... I'll update today
<marcoceppi> jcastro_: didn't lazyPower fix that ?
<jcastro_> I dunno, did he?
 * marcoceppi waits for lazyPower
<lazyPower> Nope, i fixed a race condition in mediawiki, and pointed it out back in the day
<lazyPower> but have not applied any patches to mongodb charm
<jcastro_> lazyPower, marcoceppi, mbruzek: hey when you guys get time can you go through and vote in the juju tag?
<lazyPower> negronjl: I do however have tests in amulet to assert that it works - i think you already patched it.
<jcastro_> dimiter is answering a ton but not really getting any upvotes
<lazyPower> negronjl: latest bzr branch is lp:~lazypower/charms/precise/mongodb/ci-fix
<jcastro_> mbruzek, marcoceppi: heya, looks like wildfly is ready to go
<jcastro_> what do you say about some promulgation?
<marcoceppi> jcastro_: needs a final review, but otherwise, sure
<mbruzek> Yeah it needs to be reviewed by a charmer.
<mbruzek> Marco let me know if you have any questions or find anything that I did not.
#juju 2014-02-26
<danob> marcoceppi: sorry for the late response. I just deployed an apache2 charm; what should i do to see the "it works" default page if i hit the public address of the apache2 unit?
<jcastro_> you need to expose the service first
<jcastro_> juju expose apache2
<marcoceppi> danob: you need to provide a vhost template
<danob> jcastro_: i exposed it :)
<jcastro_> oh nm, don't listen to me
<danob> marcoceppi: how do i do that, i dont know :)
<danob> marcoceppi: can you point me a doc, how to?
<marcoceppi> danob: check the charm readme?
<jcastro_> http://manage.jujucharms.com/charms/precise/apache2
<jcastro_> scroll down ^^^
<iamveen> Has anyone successfully bootstrapped juju on a vagrant/virtualbox guest? It never works for me. There always seems to be problems looking up the architecture or ubuntu release.
<iamveen> I want to like juju, but it's so incredibly frustrating at this point.
<marcoceppi> iamveen: What do you mean vagrant/virtualbox? How are you trying to bootstrap these?
<iamveen> I bring up an ubuntu host in vagrant, then use the "null" provider, then run juju bootstrap
<sarnold> is there still a null provider? I thought it was renamed 'ssh' ages ago...
<iamveen> oh really... I could swear the docs still refer to null
<sarnold> can you contact your other hosts from within the vangrant guest?
<iamveen> let me check
<axw> there is, it's called "manual" now. iamveen, which version of juju are you using?
<axw> iamveen: it's still called null in the released version
<iamveen> 1.16.6 on OSX
<sarnold> aww man, I thought 'null' was such a bad name it only lived for a month or so. darn. :)
<sarnold> bugger the name made it to release..
<axw> iamveen: the 1.16.x version of the null provider is pretty alpha, I would highly recommend trying 1.17.3
<axw> there have been many bug fixes
<iamveen> This page still says to use "null" - https://juju.ubuntu.com/docs/config-manual.html
<axw> yep - it will be changed when 1.18 is released
<sarnold> iamveen: sorry, it's my mistake. I thought the name 'null' was killed before release..
<iamveen> ah
<iamveen> is that not available via brew?
<axw> iamveen: sorry, I don't know about juju packaging on OS X. You may have to build from source...
<iamveen> k
<iamveen> thanks guys
<iamveen> I'll give that a shot
<iamveen> Is there a preferred ubuntu release for hosts?
<axw> precise still has the most charms, AFAIK
<sarnold> iamveen: precise is probably your best bet; most charms, most support
<iamveen> cool, thanks
<iamveen> okay, 1.17.3 works much better
<sarnold> cool!
<danob> marcoceppi: its working now ;)
<danob> jcastro_: thanks :)
<noodles775> Anyone able to do a charm-helpers review? https://code.launchpad.net/~michael.nelson/charm-helpers/include-relations-for-type/+merge/205087
<noodles775> marcoceppi: ^^ Do you know who is around to review charm-helper branches these days?
 * noodles775 checks the revision history.
<JoshStrobl> hey marcoceppi, when you get the time, mind reviewing the following pull request and accepting it upstream if it LGTY? https://github.com/juju/plugins/pull/7
<JoshStrobl> also, marcoceppi, Q regarding my charm. I modified the config-changed file so it properly uninstalls / installs apache2 and/or nginx (depending on the config-get engine). nginx doesn't have a service file, so it returns as an unrecognized service when you call service nginx start or /etc/init.d/nginx start. Any recommendations?
<marcoceppi> JoshStrobl: uh, nginx does/should have an init file
<marcoceppi> noodles775: I, and a few others, are
<JoshStrobl> marcoceppi: hmm
<noodles775> marcoceppi: ok. Let me know if you won't have time to look at that branch and I'll try to persuade someone else :)
<marcoceppi> noodles775: it's in the review-queue so I have to look at it sometime ;)
<noodles775> marcoceppi: sure, but I've only just pushed an update, it's been in the review queue for a few weeks (not your issue - just that I feel I need to find someone rather than just leave it there).
<marcoceppi> noodles775: ack, turns out it wasn't in the review-queue after all; I'm going to make sure ~charmers get assigned to reviews for these so it will show up in the meantime
<noodles775> Ah - the default ~charm-helpers isn't correct?
<marcoceppi> noodles775: yeah, I thought it was ~charmers, which would put it in our review queue
<marcoceppi> noodles775: why change the dict keys to _?
<noodles775> marcoceppi: because in a jinja2 template (which ansible and saltstack use) you can't do mydict.my-key.value, only mydict.my_key.value
<marcoceppi> noodles775: ah, okay - I assumed as much but wanted to make sure
<JoshStrobl> and no longer afk...in a sense
<JoshStrobl> marcoceppi: Regarding nginx: Still doesn't explain why service nginx start claims the service doesn't exist.
<marcoceppi> JoshStrobl: http://paste.ubuntu.com/6999673/ really not sure
<JoshStrobl> marcoceppi: I'll go ahead and look into it further on my end
<JoshStrobl> Wouldn't surprise me if it is a bug in my charm :D
<JoshStrobl> config-get x would be the appropriate call during something like an install script to get the setting value, correct?
<marcoceppi> JoshStrobl: it could be done during the install hook, but make sure you implement logic in config-changed that handles that config value as well - or just have all the config managment stuff in config-changed and the install hook be very light
<JoshStrobl> marcoceppi: So would the appropriate line for getting the value be: x='config-get y', x=${'config-get y'}?
<JoshStrobl> example, for my charm: preferredEngine='config-get engine'
<marcoceppi> well, you'll want to use `` not ''
<marcoceppi> either $(config-get <key>) or `config-get <key>`
<JoshStrobl> alrighty
 * JoshStrobl is still newish to bash scripting
<marcoceppi> np
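The substitution syntax marcoceppi gives can be sketched as follows. config-get is a juju hook tool only available inside a hook context, so a hypothetical stand-in is defined here to make the sketch runnable:

```shell
#!/bin/bash
# Hypothetical stand-in for the real config-get hook tool:
config-get() { echo "nginx"; }

# Single quotes do NOT run the command -- this stores the literal string:
wrong='config-get engine'

# Command substitution runs the command and captures its output:
preferredEngine=$(config-get engine)    # preferred modern form
legacyEngine=`config-get engine`        # backtick form, equivalent

echo "wrong=$wrong preferred=$preferredEngine"
```

`$(...)` is generally preferred over backticks because it nests cleanly, but as marcoceppi says, either works.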
<Ming_> I want to try Juju GUI 1.0. It requires Juju 1.17.3. How do I install it? Is the repository ppa:juju/devel?
<Kauko> Hi, I just heard about juju and I have to say it looks amazing! Saw that there's an IRC channel and thought I'd join and ask some questions
<Kauko> How does juju work with AWS Elastic Beanstalk? Not sure if this is a stupid question since I'm not totally sure if I completely understand how juju works yet :P
<Kauko> I found some examples of using juju on normal ec2 instances, but I'd really like to use elastic beanstalk since it has some great features
<Kauko> is there any difference between deploying on elastic beanstalk vs ec2?
<marcoceppi> Kauko: currently juju doesn't support beanstalk to my knowledge
<Kauko> ah, thats a damn shame
<marcoceppi> Kauko: currently, it generalizes clouds to give you the ability to drive different cloud environments from the same tool
<marcoceppi> Juju is like beanstalk, but for more than just Amazon
<jamespage> marcoceppi, got a charm branch for mysql on trusty I can steal? looking to deprecate the use of my hacked one in our lab :-)
<marcoceppi> as a very crude comparison
<marcoceppi> jamespage: not yet, but I should have one by Friday
<jamespage> \o/
<marcoceppi> just finishing the tests for mysql
<jamespage> all nice and in python?
<Kauko> marcoceppi: Yeah I think I understand what you mean.
<marcoceppi> jamespage: not entirely rewritten yet, there's still a few outstanding merges I want to get in before starting down that road
<Kauko> It's a shame it doesn't support beanstalk though, I kind of need the versioning, deployment etc features beanstalk can give me
<marcoceppi> Kauko: could you elaborate a bit? Juju might have some comparable features
<Kauko> kk, just give me a moment, I'll need to look through the beanstalk features first! ;)
<Kauko> actually off the top of my head I can say that we need to be able to deploy different versions of our app (so that we can have beta, stable, prev_stable)
<JoshStrobl> hmm, mkdir -p /var/www/ fails on the install script with the Vagrant precise vagrant box, yet worked during testing on my old Ubuntu Server VM.
<marcoceppi> Kauko: cool, you can do that in juju
<JoshStrobl> *Vagrant precise box
<marcoceppi> JoshStrobl: is that the local provider on Vagrant, or just using the manual provider?
<JoshStrobl> Even tried with sudo and it failed.
<JoshStrobl> local provider
<JoshStrobl> juju deploy --repository ~/charms local:metis
<Kauko> marcoceppi: oh! Also beanstalk scales automatically, and it should be pretty easy to deploy using scripts (we want CI)
<Kauko> marcoceppi: can juju do this too?
<marcoceppi> Kauko: so, we don't have "autoscaling" yet (on the roadmap) but you can scale, and juju has a full websocket API so you can have tools do scaling for you automatically. Otherwise, using CI to deploy and drive juju environments is pretty straightforward, either via the CLI or by connecting to the bootstrap node directly with the API
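The websocket API marcoceppi mentions speaks JSON-RPC; a rough Python sketch of the kind of request a CI scaling tool might construct. The "Type"/"Request"/"Params" field names follow the juju-core 1.x convention, but treat them as illustrative assumptions rather than a verified wire format:

```python
import json

def add_units_request(service, num_units, request_id=1):
    """Build a juju-core-1.x-style JSON-RPC request for adding units.

    Field names ("Type", "Request", "Params", ...) are assumptions
    based on the juju-core API of this era, not authoritative.
    """
    return json.dumps({
        "RequestId": request_id,
        "Type": "Client",
        "Request": "AddServiceUnits",
        "Params": {"ServiceName": service, "NumUnits": num_units},
    })

# A CI job would send something like this over the wss:// connection
# to the bootstrap node.
print(add_units_request("mysql", 2))
```

The same effect is available from the CLI (`juju add-unit`), which is usually the simpler path for CI scripts.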
<JoshStrobl> marcoceppi: To be more exact, https://juju.ubuntu.com/docs/config-vagrant.html > precise-server-cloudimg-amd64-juju-vagrant-disk1.box
<Kauko> marcoceppi: hmm ok, cool. Any idea when we could expect to have autoscaling? 1 month, 1 year, 5 years? :) It's a feature that would be really good for me since I'm new with these things, and would prefer to spend my time thinking about other stuff  :P
<marcoceppi> Kauko: I'm not sure, but we'd probably see traction towards the fall of this year
<JoshStrobl> marcoceppi: Line 184 & 185 of http://paste.ubuntu.com/6999928/.
<marcoceppi> JoshStrobl: it seems to work there?
<JoshStrobl> Well, sudo mkdir -p and mkdir -p don't output any errors (otherwise the script would fail), unzip extracts properly, yet when you try to cd /var/www the directory doesn't exist, nor any contents.
<JoshStrobl> compared it to the same code in the old install script, no changes that'd cause it.
<JoshStrobl> I'll pastebinit the install script so you can see
<marcoceppi> JoshStrobl: line 208 seems to show that /var/www exists and zip extracts to it
<JoshStrobl> marcoceppi: Indeed, hence my confusion.
<JoshStrobl> http://paste.ubuntu.com/6999943/
<JoshStrobl> Ignore the use of sudo, I added those in to see if it'd make a difference and nope
<marcoceppi> JoshStrobl: are you looking on the vagrant box or on the LXC container in the vagrant box?
<JoshStrobl> it is on machine 1: instance-id: vagrant-local-machine-1...is its local FS stored elsewhere?
<JoshStrobl> If so, then please facepalm me :D
<JoshStrobl> series: precise, agent-state: started on machine 1.
<marcoceppi> JoshStrobl: right, you're running `juju ssh metis/0` (in the Vagrant box) then looking in /var/www
<JoshStrobl> marcoceppi: I'm ssh'd into the Vagrant via `vagrant ssh`. Any difference?
<marcoceppi> JoshStrobl: yeah, you need to go /one/ layer deeper
<JoshStrobl> lemme guess, need to ssh into 1?
<JoshStrobl> ah
<marcoceppi> after vagrant ssh, run juju ssh metis/0
<marcoceppi> JoshStrobl: yeah, the Vagrant box acts as a new machine which has the local provider on it
<JoshStrobl> you are completely right...
<JoshStrobl> I ssh'd in, went to /var/www/ and it's there.
<marcoceppi> so you can use it as a clean/pristine box to spin up LXC containers, destroy, recreate, etc
<marcoceppi> also useful if you don't have an Ubuntu/Linux machine and want to use the local provider
<JoshStrobl> Yea. On my VM I always deployed it locally, guessing that particular machine was the host (machine 0), so whenever I looked in /var/www I expected it there, didn't even think about ssh'ing into the machine.
<JoshStrobl> that'd also explain the service issues
<JoshStrobl> since it was on a different machine
<JoshStrobl> yep
<JoshStrobl> did service nginx status and voila, it is there
<JoshStrobl> marcoceppi, I don't know what I'd do without ya man :D
<marcoceppi> happy to help!
<JoshStrobl> Just let me know when I start driving you insane.
<Kauko> marcoceppi: thanks for your help!
<cargill> hi, I'm still trying to get tests running on vagrant (local provider), but the tests seem to fail (log at http://pastebin.ubuntu.com/7000435/)
<lazyPower> cargill: what are you using to run the tests? A vagrant provisioner statement?
<cargill> the 'juju test' command inside the vagrant box
<lazyPower> is this in the juju quickstart vagrant box?
<lazyPower> Ah, ok - i see the issue. 1.16.5 does not support sudoless bootstrapping and has an issue running tests against the local provider.
<cargill> ah, are the latest vagrant boxes set up with 1.17 or still 1.16?
<marcoceppi> cargill: probably 1.17, not entirely sure
 * marcoceppi isn't sure how often they get refreshed
<JoshStrobl> Not sure about later versions, but the latest precise build has 1.16.6; guessing 1.17 hasn't been backported
<marcoceppi> JoshStrobl: we don't backport to precise, I think that the vagrant boxes only have ppa:juju/stable enabled. You need to also enable ppa:juju/devel to get 1.17 releases
<JoshStrobl> Meh, it's not a priority to me :P I'm content with ppa:juju/stable.
<JoshStrobl> What other "features" besides sudoless bootstrapping does 1.17.x bring?
<lazyPower> a ton of fixes to the local provider experience
<lazyPower> https://launchpad.net/juju-core/+milestone/1.17.3
<lazyPower> release notes are here ^
<cargill> ok, so tests do not work on stable with local? Just changing the repo to /devel and updating will work on precise?
<lazyPower> cargill: correct. That should be all you need to do to get tests running with the local provider.
<cargill> complete absence of tests has been the only thing preventing me from raising a merge proposal for a while...
<JoshStrobl> lazyPower: Oh...erm...yea maybe I should start using 1.17.3 :D
<lazyPower> JoshStrobl: i started working on a community maintained vagrantfile, its kind of a precursor for some functionality marcoceppi and I are working on to enhance the cross platform charmer story
<lazyPower> if you're interested in what i've done - and note, it's got a serious drawback of working on top of an 801 MB (yikes!) boxfile
<lazyPower> https://github.com/chuckbutler/juju-vagrant -- this is specific to tests. If you pull the cloud image and update to juju 1.17.3 it should cut down the boxfile size, and give you similar results
<lazyPower> ymmv however, as i haven't tested it.
<JoshStrobl> lazyPower: I'll take a look at it :)
<lazyPower> cool! :) Issues/pull requests welcome
<lazyPower> if you get a functional upgrade from the cloud image, feel free to submit your work. I'd love to see what you did
<cargill> hmm, and there we go again, I've wiped the vagrant box, started over and I get "The box 'JujuQuickstart' could not be found"
<lazyPower> cargill: vagrant box list
<lazyPower> can you pastebin that for me?
<cargill> vagrant box list $ JujuQuickStart (virtualbox)
<lazyPower> Casing on the S
<lazyPower> it appears we have an issue in the docs. I'll get a patch in a pull request. TY for calling that out cargill
<cargill> oh yes, so the name I set must match the one in the image?
<JoshStrobl> Yes, it is case sensitive.
<lazyPower> Correct, whatever you import the boxname as must match your vagrant init
<cargill> ah, hmm, that explains a couple of things, lazyPower, thank you very much!
<lazyPower> marcoceppi https://github.com/juju/docs/pull/6
<JoshStrobl> I also recommend saving the .box file locally. It makes doing a vagrant box add much faster when it fetches the box locally rather than via http
<cargill> JoshStrobl: that's what I do :)
<JoshStrobl> Yea after re-adding / destroying Vagrant boxes a few times, felt it would be smart to save the box file locally :D
<lazyPower> You do know that once it's cached you don't need to remove it unless you're upgrading, right?
<cargill> a note about the online docs, could they perhaps include the section name in the <title>?
<dpb1> marcoceppi: I'm hitting another case where I need an environment variable passed through for juju-test.  Any chance we could reconsider allowing all environment to be passed through?
<lazyPower> if it appears in vagrant box list, it will use that copy of the box for each subsequent spin up.
<lazyPower> cargill: What do you mean?
<lazyPower> oh you mean the title tag
<marcoceppi> dpb1: which one
<cargill> the title of https://juju.ubuntu.com/docs/config-LXC.html is Uju documentation
<cargill> yes
<cargill> s/Uju/Juju/
<dpb1> marcoceppi: test specific.  we have an env var used to skip certain tests.
<Ursinha> hi all, is it possible to set machine constraints on the yaml file? instead of using juju set-constraints --service foobar ?
<dpb1> marcoceppi: do you know why it was a design decision to zero out the environment?  I don't see other test tools doing that.
<marcoceppi> dpb1: because it runs in a lightweight environment, so it can't rely on env variables that are set on one person's system and theirs only
<dpb1> marcoceppi: could I introduce an argument to allow the environment to be preserved?
<lazyPower> Ursinha: yep. heres an example: http://paste.ubuntu.com/7000694/
<Ursinha> lazyPower: awesome, thank you very much. I've read the docs but couldn't find it, maybe I was searching in the wrong place.
<lazyPower> Ursinha: I fired up a juju-gui and modeled it, then exported my bundle.
<lazyPower> Thats how i've been writing my deployment YAML's for the last few weeks.
<Ursinha> interesting.
<lazyPower> Ursinha: https://juju.ubuntu.com/docs/charms-bundles.html
<lazyPower> Juju bundles are a first class citizen in the juju ecosystem - it makes deploying complex configurations trivial
<Ursinha> lazyPower: this is great! thanks for the pointers :)
<lazyPower> Happy to help :)
<Ursinha> I was wondering if I could set the constraint in the environment configuration file, do you know if that's possible? to ensure all instances created on that env would follow the constraints
<Ursinha> let's say I have env01, and want all machines created on that env to have mem=4000
<lazyPower> Ursinha: i'm not aware of any env based configuration like that. Doesn't mean it doesn't exist, i just dont know about it.
<Ursinha> lazyPower: thanks anyway, you provided great information :)
<dpb1> Ursinha: no, you can't currently set constraints in the environments.yaml file.  Once you bootstrap, you can make a "default" like you are wanting though.  See: https://juju.ubuntu.com/docs/charms-constraints.html for more info.
<dpb1> Ursinha: btw, the "bundles" that you have been discussing provide options for setting constraints on each service.
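As dpb1 notes, bundles can carry per-service constraints; a hypothetical fragment in the bundle format of this era (the deployment, service, and charm names are invented for illustration):

```yaml
# Hypothetical bundle fragment -- names are examples, not a real deployment.
my-stack:
  services:
    foobar:
      charm: "cs:precise/mysql"
      num_units: 1
      constraints: "mem=4000M cpu-cores=2"
```

This covers the per-service case; a true environment-wide default is what bug #1228311 (linked below in the conversation) asks for.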
<Ursinha> dpb1: I read these docs but couldn't find a way to do that in the conf file though, but thanks :)
<Ursinha> dpb1: yes, I saw that. was wondering if I could set environment constraints, hence my last question :)
<dpb1> Ursinha: ya, not possible.  I was checking for a bug/enhancement request, but can't find it.
<Ursinha> I have two environments to deploy, one is nice, the other not so nice, so one of them might require tighter constraints than the other, so configuring the service wouldn't be ideal
<Ursinha> dpb1: I can file a bug, if that helps
<marcoceppi> dpb1: maybe, I was considering a "passthrough" option to supply which env variables you want to pass through
<marcoceppi> dpb1: what's the environment variable though?
<dpb1> Ursinha: https://bugs.launchpad.net/juju-core/+bug/1228311
<_mup_> Bug #1228311: Specify default constraints <constraints> <improvement> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1228311>
<dpb1> Ursinha: knew it was out there somewhere. :)
<Ursinha> dpb1: awesome! thanks :)
<dpb1> marcoceppi: in this case... "SKIP_SLOW_TESTS", very specific to this charm, of course.
<marcoceppi> dpb1: yeah, so that wouldn't work in our charm automated tests, but I can see how it would be helpful to you though
<dpb1> marcoceppi: well, I can throw up an MP if you would like, won't take me too much time.  If you would rather it append to the whitelist rather than just allow everything through, let me know.
<ahasenack> marcoceppi: do you really take down the environment between tests in the tests/ directory?
<ahasenack> marcoceppi: I mean, I know juju-test does it, but do you really want it to? Do you use it that way?
<marcoceppi> dpb1: something like --preserve-envs "comma,separated,list,of,envs,to,add,to,whitelist" might work
<marcoceppi> ahasenack: yes
<ahasenack> marcoceppi: that's super expensive
<dpb1> marcoceppi: ok.. I'll throw something up (and a bug to start)
<marcoceppi> ahasenack: right, there was/is an option (not sure if it's released yet) that will do only one bootstrap for all tests
<ahasenack> marcoceppi: ok, but why would you (a generic "you", not you you) prefer to destroy the environment between tests against the same charm?
<marcoceppi> ahasenack: well, not all tests would use the same base? What if a previous test fails - you wouldn't want to preserve that environment for the next test run, etc
<marcoceppi> if you consider each test a userstory, having a potentially "dirty" environment from the last test run could influence that test run
<ahasenack> yeah, bad tests do that
<ahasenack> but re-deploying is expensive most of the time, not the exception
<ahasenack> anyway, that just makes me put everything in one file under tests/
<ahasenack> since the deployment takes between 10-15min
<lazyPower> ahasenack: i've got a complex mongodb deployment amulet test that takes approx 12 minutes to run.
<marcoceppi> ahasenack: however, some people write tests that can safely cascade (ie do cleanup) so there's a flag that can be used to only setup bootstrap once
<marcoceppi> ahasenack: that way, if your test files know how to clean up the deployment, etc, you can use multiple test files with the same bootstrap
<marcoceppi> I try to include as many options as possible for people to write tests that work for them
<ahasenack> marcoceppi: is that in the package yet?
<marcoceppi> ahasenack: let me check
<lazyPower> it's not as bad as you would think. I feel it's the cost of knowing that a clean environment is going to perform as expected, and any post-deployment op checking should be done within the given configuration.
<lazyPower> at least for now until the post-deployment command patch lands
<marcoceppi> ahasenack: not yet, it will be the "--one-bootstrap" flag though in the next release of charm-tools
<ahasenack> marcoceppi: ok, thanks
<lazyPower> I just ran into a new issue i haven't seen before
<lazyPower> bootstrapping on HPCloud
<lazyPower> ERROR bootstrap failed: rc: 1
<lazyPower> seems to have been temporary
<dpb1> marcoceppi: https://code.launchpad.net/~davidpbritton/charm-tools/preserve-environment/+merge/208439
<cargill> hmm, upgrading to 1.17 does not really help (from a fresh precise vagrant box, installed amulet, charm-tools, destroyed env local, switched to /devel, upgraded juju, ran juju test -e local, and it still thinks the env is already bootstrapped...)
<mgz> cargill: what have you got in ~/.juju/environments/ ?
<cargill> I haven't touched that
<mgz> cargill: the point is you might need to :)
<cargill> local.jenv
<cargill> should I remove it?
<mgz> okay, so try moving that out of the way, and rego
<cargill> moved local.jenv elsewhere, still the same: ERROR environment is already bootstrapped
<mgz> can you run with --debug and pastebin?
<cargill> --debug on juju test does not really do anything
<lazyPower> cargill: give me a moment to catch up and I'll give it a go
<cargill> hmm: "verbose is deprecated with the current meaning, use show-log"
<cargill> juju show-log: "ERROR unrecognized command: juju show-log"
<mgz> --show-log
<cargill> ah
<mgz> but you really just want `juju --debug COMMAND ....`
<cargill> well, there is no difference in what "sudo juju --debug test -e local" and "sudo juju test  -e local" output
<cargill> ah, when not using sudo, I get one message more: "error: flag provided but not defined: -y"
<mgz> cargill: just proposed https://codereview.appspot.com/68180045 which should make that flag hint clearer
<mgz> cargill: on the actual issue, is it related to the test being run at all? if you just have an empty tests/ dir does it still complain?
<cargill> with tests/ empty it refuses to run before anything else
<mgz> how about with a file in there that's basically a noop?
<mgz> (I'm not actually sure where the test command is implemented or what it does exactly)
<cargill> as soon as there is a file with +x: "ERROR environment is already bootstrapped"
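For reference, the no-op test mgz suggests is just any executable file under tests/ that exits zero. A sketch, written to a scratch directory here so it can actually run (the 00-noop name is an arbitrary example):

```shell
# Create a minimal placeholder charm test and run it.
dir=$(mktemp -d)
mkdir -p "$dir/tests"
cat > "$dir/tests/00-noop" <<'EOF'
#!/bin/sh
# juju-test executes every +x file under tests/; exit status 0 = pass.
echo "noop test: ok"
EOF
chmod +x "$dir/tests/00-noop"
"$dir/tests/00-noop"
```

In cargill's case even this minimal file triggers the bootstrap error, which points at the environment handling rather than the test contents.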
<cargill> but the environment does not exist anymore and juju destroy-environment local cannot really do anything: "ERROR state/api: websocket.Dial wss://10.0.3.1:17070/: dial tcp 10.0.3.1:17070: connection refused"
<cargill> ok, the environment actually might partly be with destroying environments as well? If I set up another environment, then try to use it, I get the following http://pastebin.ubuntu.com/7001148/
<cargill> s/environment/problem/
<marcoceppi> dpb1: lgtm, I have another release of charm-tools landing before friday, this will be in that release
<cargill> I'll need to go soon, should I report this under juju-core or elsewhere?
<lazyPower> marcoceppi: cargill is getting strange output from juju test, sounds like a charm-tools bug more so than juju right?
 * marcoceppi reads scroll back
<lazyPower> cargill: i got a bit preoccupied. I'll continue working with vagrant and the current quickstart vagrant box. ping me when you're around next and i'll follow up
<marcoceppi> lazyPower: cargill it looks like the version of juju you're using doesn't support the -y flag for destroy-environment
<marcoceppi> butttt, you have 1.17.3 so....
<marcoceppi> not sure what is going on there.
<marcoceppi> cargill: when you run juju test, use this combination instead `juju test -v -e another`
<marcoceppi> and then post the output
<cargill> marcoceppi: when I run "juju destroy-environment -y local" I get a failure and this "ERROR no CA certificate in environment configuration"
 * marcoceppi makes sure there's no weird regression in 1.17.3
<marcoceppi> cargill: interesting
<cargill> with "juju destroy-environment -y --force another" two lines of "ERROR exit status 1"
<cargill> after that "juju --debug test  -e another" gives "ERROR cannot use 37017 as state port, already in use"
<marcoceppi> cargill: can you file a bug against charm-tools about destroy-environment needs to continue even if the destroy command fails?
<marcoceppi> cargill: oh
<marcoceppi> try sudo juju destroy-environment
<cargill> just one line of the same
<cargill> and no change in juju test result/output
<cargill> I can start over but that's what I've just done twice, so either I'm doing something wrong with the upgrade or somewhere else
<cargill> and this is a vagrant box I've downloaded today as well to make sure nothing else is funky
<marcoceppi> cargill: well, the upgrade goes from needing sudo to not needing sudo
<marcoceppi> which is likely causing all sorts of permission issues
<dpb1> marcoceppi: excellent
<cargill> well, what I've done: get the vagrant box running, install amulet, charm-tools, sudo destroy env local, switch repo to /devel, upgrade juju, run juju test -e local, and only then started with all sorts of stuff
<cargill> to try and get it running based on the error messages and suggestions here
<marcoceppi> cargill: I think we may just need to use a fresh box that has 1.17 installed
<marcoceppi> lazyPower: did you poke at that saucy->trusty cloudimg thing? just curious
<lazyPower> I'm actually working that now
<lazyPower> i have a new branch brewing on the vagrant repository for it
<cargill> is there one for vagrant? I'm on debian and just debootstrapping ubuntu does not seem juju compatible, looks like it needs some upstart magic
<cargill> and my init is not upstart
<cargill> or maybe something with cgroups
<lazyPower> cargill: there's a juju quickstart vagrant image. I have a working unit for test execution that is homebrewed with veewee, the box size again is 801mb though
<lazyPower> so it's a hefty size hit for test execution. right now i'm working through getting our current quickstart image patched to work as a testing appliance so you don't need to leave your native OS for running quick tests during your devel cycle.
<cargill> I don't mind the box size as long as it's happy with 1GiB of RAM, have only 4 available and just firefox is already hogging 1.5 of that :)
<lazyPower> https://github.com/chuckbutler/juju-vagrant
<lazyPower> i welcome all bugs, patches, complaints, etc.
<lazyPower> its an active WIP
<cargill> "STOP If you haven't installed charm-tools on your development machine": does that mean it needs an ubuntu host?
<lazyPower> only if you want to use the generators
<lazyPower> its for charm add tests
<lazyPower> but not strictly required
<cargill> so running this in $CHARM_DIR will start testing that charm, and if a debugger/shell/... kicks in during the test, I will be able to control that as well?
<cargill> lazyPower: ok, I need to run now, the image is still downloading, so I'll see where it is in the morning, should be here until ~5 GMT tomorrow, is that still early morning for you?
<lazyPower> It is.
<lazyPower> however, my bnc catches all messages. so feel free to ping me here if you need anything.
<lazyPower> and the github issue tracker works equally as well :)
<cargill> ok, thanks for your help
<lazyPower> anytime. Let me know how you get along with the box. it hasn't been tested on debian yet.
<lazyPower> also, quick note, can you tell me what version of vagrant you will be using?
<cargill> I just need to get going on the charm tests, which have been nothing but pain so far :) if this gets them going, I'll be glad to help getting it work
<cargill> lazyPower: Vagrant 1.4.3
<lazyPower> cargill: i'm using it for my charm testing, which seems to be going well. So i'm happy to help you get them going.
<lazyPower> awesome, thats recent
<lazyPower> +1 should be g2g after the box is downloaded
<cargill> +1?
<lazyPower> it gets my stamp of approval
<cargill> ah, great, see you
<lazyPower> take care
<lazyPower> marcoceppi: after running do-release-upgrade (which took 22 minutes from start to finish in the box w/ 1gb of memory) - it seems to have solved the shared folder issue, however exporting the box yields a 1.9GB vmdk
<marcoceppi> lazyPower: bahhhhh
<lazyPower> so its not an ideal solution
<dpb1> Hi, in 1.17+, how can I increase the verbosity of juju logging?  It seems to leave out DEBUG now by default
<lazyPower> seems like we are riding the right path with the build. I'm going to keep poking around until I find a way to reduce the overall size of that box.
<lazyPower> i've found some scripts that will help trim it down and optimize it, but I don't think I'll get one as small as the cloud images unless someone wants to share the secret sauce
<lazyPower> and we won't be waiting that long until the new cloud images get pressed for trusty, so it may just be beneficial to wait
<marcoceppi> lazyPower: we should gang up on utlemming and find out when there will be trusty cloud images
<lazyPower> rabble rabble rabble!
<danob_> i am getting this error "ERROR cannot get latest charm revision: charm not found in "/carh/path": local:precise/charmname" - my charm does not have any revision file in it
<danob_> i am getting this with charm proof "I: relation website has no hooks"
<danob_> i do not need any relation
<danob_> can i make a charm with out any relation hook?
<danob_> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1129319 will this bug resolve if i use a hard link temporarily
<_mup_> Bug #1129319: Local charm deployment not working if symlinks are used <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/1129319>
<danob_> _mup_: will hard link work?
<danob_> _mup_: hard link has the same problem (bug)
<marcoceppi> danob: I think hardlinks should work
<danob> marcoceppi: I just tried its not working :(
<danob> marcoceppi: will this bug fix within next update?
<marcoceppi> it's fix released, as of 1.13.0
<marcoceppi> so it should be working, if there's a regression a new bug should be opened
<danob> marcoceppi: i am getting this "I: relation website has no hooks" with charm proof, is this the reason?
<marcoceppi> danob: no, the I is informational, saying you might need to have a hook, but it's not a blocker and can be ignored
<danob> marcoceppi: hmm :) my juju-core version is 1.16.6
<danob> marcoceppi: then why i am getting this error "ERROR cannot get latest charm revision: charm not found in "/carh/path": local:precise/charmname" ? need help
<danob> marcoceppi: this is my debug output https://gist.github.com/anonymous/47255921f3d123f646e0
<marcoceppi> danob: does /home/danob/juju/localcharms/xxxx exist? where xxx is your charm?
<danob> marcoceppi: yes my charm :)
<danob> marcoceppi: it exists
<danob> marcoceppi: i mean i am hiding my charm name by xxxx
<danob> marcoceppi: though it just test charm
<marcoceppi> danob: right, so is there a revision file in that charm?
<danob> no
<marcoceppi> OH
<marcoceppi> danob: in localcharms, create a precise directory
<marcoceppi> danob: then put the charm in that
<marcoceppi> then run the command again
<marcoceppi> that should fix it
<danob> marcoceppi: ok
<danob> marcoceppi: no, getting the same ERROR
<marcoceppi> danob: create a revision file in the charm and put a 0 in it
<danob> marcoceppi: my command is $juju deploy --repository=. local:oxrp
<danob> ok
<marcoceppi> and where is "." ?
<marcoceppi> like . should be localcharms directory
<danob> marcoceppi: yes it is in localcharm
<danob> marcoceppi: revision file created in ixrp charm dir
<danob> marcoceppi: inside it i place 0
<danob> marcoceppi: oh man it worked
<danob> marcoceppi: thank you very much :)
<danob> marcoceppi: thanks man :)
<marcoceppi> danob: np, I'll file a bug that if there's not a revision file, juju should create one on first deploy
<danob> marcoceppi: my first ixrp charm is deploying :)
<danob> marcoceppi: yes report it, will be awesome
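The layout marcoceppi walks danob through boils down to a series directory inside the repository plus a revision file in the charm. Sketched in a scratch directory (the charm name mycharm and its metadata are made up for illustration):

```shell
# juju 1.16-era local repositories expect <repo>/<series>/<charm>/...
repo=$(mktemp -d)
mkdir -p "$repo/precise/mycharm/hooks"
cat > "$repo/precise/mycharm/metadata.yaml" <<'EOF'
name: mycharm
summary: placeholder charm
description: placeholder
EOF
# The missing piece in this conversation: a revision file containing 0.
echo 0 > "$repo/precise/mycharm/revision"

# Deploy would then be run from inside the repository:
#   cd "$repo" && juju deploy --repository=. local:precise/mycharm
ls "$repo/precise"
```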
#juju 2014-02-27
<designated> can someone point me in the right direction for information on how to manage multiple NICs for an openstack deployment using juju?  What needs to be modified in the charm in order to designate NICs to different networks? By default it seems to use a single NIC, is this incorrect?
<themonk> i am trying to remove a unit from lxc using the $juju remove-unit command
<themonk> juju status showing its dying
<marcoceppi_> themonk: is the service in an errored state?
<marcoceppi_> themonk: ie, the unit that machine is attached to?
<themonk> but it is taking like forever
<themonk> marcoceppi_, yes it is in error state
<themonk> marcoceppi_, the install script failed
<marcoceppi_> themonk: that's why it's not going away. Juju blocks future events from occurring when a unit is in an error state. You can either force remove the unit (not always recommended), or just run juju resolved <unit/#> to mark it as resolved and have it finish its events, the last event queued being the "remove unit" command
<themonk> marcoceppi_, i am back
<marcoceppi_> themonk: that's why it's not going away. Juju blocks future events from occurring when a unit is in an error state. You can either force remove the unit (not always recommended), or just run juju resolved <unit/#> to mark it as resolved and have it finish its events, the last event queued being the "remove unit" command
<themonk> marcoceppi_, how to force remove it?
<themonk> marcoceppi_, though i will try juju resolved first
<marcoceppi_> juju terminate-machine --force #; where # is the machine ID
<themonk> marcoceppi_, thanks :)
<themonk> marcoceppi, is there any way to restart a charm on previously error state unit? without removing it by force
<themonk> is there any way to restart a charm on previously error state unit? without removing it by force
<themonk> found it juju resolve --retry=true unit-name
<marcoceppi> themonk: yeah, just juju resolved --retry unit will work too, no need to explicitly state =true
<themonk> marcoceppi, i updated the py code in the local repository; then $juju resolved --retry unit-name - will this code change take effect automatically?
<marcoceppi> themonk: no, you have to run upgrade charm
<themonk> marcoceppi, after upgrading the charm code it got a new revision number, but after resolved --retry the new code is not inside the charm unit; i checked in /var/lib/juju/agents/unit-oxrp-0/charm/hooks/hooks.py
<themonk> marcoceppi, is it a bug or i am doing something wrong
<bloodearnest> arosales: hangouts wouldn't let me into the cross-team meeting, btw, was full! :)
<jcastro> lazyPower, wanna document local/vagrant this afternoon?
<lazyPower> jcastro: I've already got a screencast recorded
<jcastro> \o/
<lazyPower> i need to do the voiceover, the slides, and cut
<lazyPower> but i'm about 20% there
<lazyPower> i'm leaving in about an hour and a half to head downtown to a meetup with some of the pittsburgh cloud peoples
<lazyPower> so this afternoon is not good for me. can we postpone until tomorrow?
<jcastro> works for me
<jcastro> hey evilnickveitch
<jcastro> adding bundle review criteria to this page:
<jcastro> https://juju.ubuntu.com/docs/authors-charm-policy.html
<jcastro> might make it too big
<jcastro> should I split it into a new page?
<jcastro> I was thinking under Reference instead
<jcastro> something like "Review Criteria"
<jcastro> sorry wrong URL, the review stuff I want to split from is here: https://juju.ubuntu.com/docs/authors-charm-store.html#submitting
<jcastro> holy smokes review queue!
<noodles775> mgz: I still see the same issue - http://paste.ubuntu.com/7005758/ (added paste to bug)
<jcastro> marcoceppi, figured out the git thing: https://help.github.com/articles/syncing-a-fork
<rick_h_> jcastro: need a https://github.com/juju/juju-gui/blob/develop/HACKING.rst#typical-github-workflow :)
<jcastro> ohhh, I am stealing this
<lazyPower> #TIL there is an autosquash flag
<lazyPower> I've been doing that manually for the longest time
<jcastro> yeah we are stealing this
<lazyPower> also i just moved the datastax opscenter charm to incomplete. since i'm reworking the cassandra charm - it doesn't need to be reviewed right now. If you attempt to deploy at scale its going to turn into chunky salsa.
<lazyPower> thats 1 off the rev queue once it re-syncs
<jcastro> hah
<jcastro> marcoceppi, ok I landed the doc workflow in the README, lmk if it makes sense to you
<lazyPower> Question for anyone that's listening. When generating self-signed certificates - and using the gui - would you rather see a config box for each option in the cert? (country, state, company name, etc.) or a single line with options in key/value pairing, separated by pipes?
<lazyPower> C=US|ST=PA|O=FooCorp
<marcoceppi> jcastro: you mean you landed without a review
<marcoceppi> lazyPower: both?
<jcastro> that was an accident
<jcastro> marcoceppi, but I figured it out now
<marcoceppi> ;)
<lazyPower> marcoceppi: i meant a box per option or a single box
<marcoceppi> lazyPower: yeah, why not both? Advanced mode or easy mode
<marcoceppi> otherwise, go with what makes sense for the average user
<jcastro> marcoceppi, OK, NOW I think I did it right
<jcastro> marcoceppi, I was missing the workflow bits rick_h_ showed me, so now we can do feature branches in our own space and then gate trunk
<jcastro> that should make evilnickveitch happy
<jcastro> let's not mention my accidental cowboy commits though. :p
<marcoceppi> jcastro: yup, as it should be
<jcastro> fixed my email address on github too, so now I look more respectable: https://github.com/juju/docs/graphs/contributors
<marcoceppi> there ya go
<marcoceppi> much more reflective
<marcoceppi> such contribute
<jcastro> much cowboy
<evilnickveitch> jcastro, few things are making me happy today, so well done
<jcastro> marcoceppi, so I just dogfooded what I wrote and it all works, the only areas I am grey on are the aliases at the bottom
<marcoceppi> aliases?
<jcastro> marcoceppi, rick's team has some convenience aliases I kept, they look useful
<marcoceppi> where?
<jcastro> the bottom of the README
<marcoceppi> jcastro: ah, I'd suggest making juju-docs-upstream https://github.com/juju/docs.git instead
<marcoceppi> since it'll prompt you for user/pass when you try to push which is a good indicator you're doing something wrong
<jcastro> ok I'll polish that later
<jcastro> right now I only needed to know how to do this to finish my card, heh
<jcastro> evilnickveitch, so wrt a reviewing section under References, is that ok?
<marcoceppi> TheMue: I don't think we should have $ in the docs, we made a conscious decision that anything wrapped in <code> is to be considered a command, and anything else would use syntax highlighting to suggest otherwise
<marcoceppi> $ makes copy and pasting more difficult, and if we really wanted the $ I'd rather use CSS to stylize command prompt sections with a special class to prepend the $ than to have it straight in the docs
<marcoceppi> and, I'll just put this in the bug report
<jcastro> marcoceppi, ok before I start over on a new feature branch I need to make sure I pull from master again correct?
<marcoceppi> jcastro: yeah, so basically: git checkout master, git fetch upstream, git merge upstream/master; git checkout -b <feature>
<marcoceppi> though, you could skip the fetch/merge and just do git pull upstream master
<jcastro> that just wants to merge it into my feature branch though
<marcoceppi> jcastro: git checkout master
<marcoceppi> the first command puts you back on master
<jcastro> aha!, got it
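The sequence marcoceppi spells out can be sketched end-to-end. The scratch repos below just stand in for a clone of your GitHub fork with an `upstream` remote (branch and path names are hypothetical, only there to make the sketch runnable):

```shell
# Scratch setup: "upstream" stands in for the shared trunk repo
# (e.g. https://github.com/juju/docs.git), "work" for your fork's clone.
set -e
tmp=$(mktemp -d)
git init -q -b master "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m 'initial commit'
git clone -q "$tmp/upstream" "$tmp/work"
cd "$tmp/work"
git remote add upstream "$tmp/upstream"

# The workflow itself: get back on master, refresh it from trunk,
# then branch off it for the next feature.
git checkout -q master          # leave the old feature branch
git fetch -q upstream           # pull down new upstream commits
git merge -q upstream/master    # fast-forward local master
git checkout -q -b my-feature   # start the next feature branch
git branch --show-current
```

As marcoceppi notes, `git pull upstream master` collapses the fetch/merge pair into one step, provided you have checked out master first.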
<TheMue> marcoceppi: ok, have to change it back then. but found other docs with $, so I harmonized it
<marcoceppi> TheMue: which? those should be un$signed?
<fginther> anyone see this when bootstrapping to hp cloud? ERROR bootstrap failed: cannot start bootstrap instance: index file has no data for cloud {region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<marcoceppi> fginther: can you try to bootstrap again with the --debug and --show-log command options?
<fginther> marcoceppi, running
<TheMue> marcoceppi: https://juju.ubuntu.com/docs/authors-hook-environment.html, first code of relation-get
<TheMue> marcoceppi: and it made sense to me for a better separation of what the command is and what the output
<fginther> marcoceppi, are there any secrets in that data? (I don't see any with a simple scan)
<marcoceppi> TheMue: I can create a special HTML class which will place the $ without affecting copy and paste where it's needed to help distingush command from output
<marcoceppi> fginther: the very very first line might provide some private data, everything else should be safe (the giant json blob)
<TheMue> marcoceppi: but it would place the $ before every line, wouldn't it?
<fginther> marcoceppi, http://paste.ubuntu.com/7006105/
<marcoceppi> TheMue: not necessarily
<marcoceppi> TheMue: it would be a <span> class you could use inside the <code> block
<TheMue> marcoceppi: ah, yes, cool idea. ugly in writing, but the output is important
<marcoceppi> TheMue: right
<marcoceppi> lmk if that interests you and I'll make a card for it
<TheMue> o/ <= see my hand waving
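The span-inside-code idea marcoceppi describes could look something like this (class name, styling, and the sample command/output are all hypothetical). Because CSS-generated text is rendered but is not part of the document text, the `$` stays visible while copy-and-paste grabs only the command:

```html
<style>
  /* Prepend a prompt marker that selection/copy does not pick up. */
  code .command::before { content: "$ "; }
</style>
<pre><code><span class="command">relation-get private-address</span>
10.0.3.42</code></pre>
```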
<marcoceppi> fginther: have you ever successfully bootstrapped on HP before?
<fginther> marcoceppi, nope
<fginther> marcoceppi, I'm using 1.17.3-0ubuntu1~ubuntu13.10.1~juju1 on saucy
<fginther> marcoceppi, and 1.17.3-0ubuntu1 on trusty
<marcoceppi> fginther: do you have an image-metadata key defined in your environments.yaml?
<fginther> marcoceppi, no, I only have admin-secret, control-bucket, auth-url, default-series, use-default-secgroup and firewall-mode
<fginther> marcoceppi, OS_TENANT_ID, OS_TENANT_NAME, OS_REGION_NAME, OS_USERNAME and OS_PASSWORD are in the env
<fginther> oh and OS_AUTH_URL
<marcoceppi> fginther: run juju destroy-environment hp-cloud, rm -f ~/.juju/environments/hp-cloud.jenv, change the control-bucket name to something else, then try to bootstrap again
<bbcmicrocomputer> so I noticed in Juju 1.17, .juju/ssh/id_rsa.* exists... is Juju using its own generated keys now?
<marcoceppi> bbcmicrocomputer: I guess so?
<fginther> marcoceppi, looks like the same error http://paste.ubuntu.com/7006173/. I need to break for lunch, bbiab if you can still assist
<fginther> marcoceppi, and thanks for the help
<marcoceppi> fginther: sounds good
<jhobbs> when I run sync-tools, I'm only getting tools for amd64 and i386; how can I include armhf tools?
<mgz> jhobbs: good question. I don't see any obvious references to arch in the code
<mgz> but streams.canonical.com does have armhf for the latest versions
<arosales> utlemming: fyi joyent provider is hitting https://bugs.launchpad.net/juju-core/+bug/1285803
<_mup_> Bug #1285803: [Joyent Provider] metadata mismatch when testing again Joyent Public Cloud <joyent-provider> <test-failure> <juju-core:Triaged> <https://launchpad.net/bugs/1285803>
<arosales> utlemming: could you confirm this is not image or stream related
<jhobbs> mgz: i found the bug
<jhobbs> https://bugs.launchpad.net/juju-core/+bug/1285410
<_mup_> Bug #1285410: juju names arm arch 'arm' internally, but 'armhf' in tools <armhf> <constraints> <juju-core:Triaged> <https://launchpad.net/bugs/1285410>
<jhobbs> mgz: juju's default filter for tools is amd64/i386/arm
<jhobbs> mgz: so armhf doesn't match
<jhobbs> mgz: hardcoded too so there is no way to override it from sync-tools..
<mgz> jhobbs: well, isn't that fun.
<mgz> you can hack a local copy of juju to have the right name in the arch list, run sync-tools with that, will that unstick you?
<jhobbs> mgz: yeah i think so - that's what i'm doing now; thanks
<fginther> marcoceppi, I was able to find the answer to my problem. I needed to add the image-metadata-url and tools-metadata-url for our project specific data
<marcoceppi> fginther: ah, so you're using something other than what we provide in our streams?
<fginther> marcoceppi, this appears to be the issue: https://bugs.launchpad.net/juju-core/+bug/1275280
<_mup_> Bug #1275280: HP Cloud needs image metadata for newer accounts <juju-core:Invalid> <https://launchpad.net/bugs/1275280>
#juju 2014-02-28
<bloodearnest> rick_h_: heya, did you see my gunicorn branch?
<rick_h_> bloodearnest: I saw you mention it. I've not peeked at anything though
<rick_h_> bloodearnest: were you looking for a review or more an fyi?
<bloodearnest> rick_h_: an fyi, since you're doing the gunicorn charm review, right?
<bloodearnest> rick_h_: I bring the gift of tests
<rick_h_> bloodearnest: very cool
<bloodearnest> rick_h_: the diff is ungainly cos I vendored charmhelpers, but it's actually a lot less charm code
<rick_h_> coolio
<xp1990_> \exit
<lazyPower> bloodearnest: we <3 tests. therefore by the transitive property we <3 you
<lazyPower> and i may only be speaking for myself...
<lazyPower> if you dont mind waiting a bit, i'll be addressing the charmq later today
<lazyPower> and I'll get you a full write up review
<lazyPower> jcastro: ping me when you've got time to do the local writeup if thats still on the plate for today
<jcastro> lazyPower, yeah, I just wanna bust out null->manual, bundles, and quickstart in one go this morning.
<bloodearnest> lazyPower: sweet
<lazyPower> jcastro: ack. just lmk
<jcastro> lazyPower, I should be done with that before the weekly call
<lazyPower> ok i'm going to continue burning through SSL Everywhere on nagios until that time
<bloodearnest> lazyPower: I'll have a few more once that's landed, like adding syslog support, and maybe some monitoring
<lazyPower> bloodearnest: excellent!
<bloodearnest> lazyPower: and py3 support :)
<lazyPower> bloodearnest: offtopic - i just ordered the parts for my steambox build. We should hook up some steamage in the near future
<bloodearnest> lazyPower: I'm game (pun fully intended :)
<lazyPower> http://goo.gl/KvXXeB
<bloodearnest> heh
<mattyw> does juju deployer support deploying charms from a local rep?
<jcastro> hey evilnickveitch
<jcastro> got some time to discuss getting-started.html?
<evilnickveitch> jcastro, going in to the qa weekly, but probably have a bit of time after that
<jcastro> ok
<jcastro> lazyPower, you. me. Vagrant boxes.
<lazyPower> oontz
<lazyPower> lets do this
<noodles775> How do I start whatever enables juju run to work on a unit? (to work around bug 1286213)
<_mup_> Bug #1286213: juju run not available after reboot - run.socket: connection refused <juju-core:New> <https://launchpad.net/bugs/1286213>
<evilnickveitch> jcastro, you free now?
<jcastro> evilnickveitch, yeah, so basically
<jcastro> how do you feel about me replacing the manual deployment steps on the getting started with a bundle?
<evilnickveitch> jcastro, I think bundles are good. i think quickstart is good too.
<evilnickveitch> However:
<evilnickveitch> I don't want to lose the manual process. I think it is very useful for users' understanding to see the steps involved
<evilnickveitch> so maybe if we could replicate that too
<jcastro> ok, for the existing PRs I added the quickstart at the top
<jcastro> for each of the providers
<jcastro> but I was wondering if we still wanted the manual stuff for Getting Started specifically
<evilnickveitch> there would be no harm in doing a bundle, but I think it would be great to show that process in manual format afterwards maybe?
<evilnickveitch> i think my concern is, it's great that we get users started quickly, but then they have no idea of what has actually happened or how juju actually works. Maybe that isn't an issue
<jcastro> OMG.
<jcastro> marcoceppi, sit down
<jcastro> if you do "bundle:foo" in the gui
<jcastro> it only returns bundles
<dannf`> so.. what are the odds something like this could be merged: https://pastebin.canonical.com/105726/ (arm64 simulator running on ec2) - or is there a way to force this support w/o hacking source?
<bloodearnest> with 1.17.3 can I have 2 local envs bootstrapped at once? I get an error about port 37017 in use
<bloodearnest> I have set alternate storage ports in the config
<rick_h_> bloodearnest: yea, I got notes from tim on doing that
<rick_h_> bloodearnest: actually they're in that doc I shared with you setting up charmworld
<rick_h_> bloodearnest: check that out for notes and config
<jamespage> sinzui, I already uploaded....
 * sinzui palmface
<sinzui> jamespage, We might not be in a bad position because I am removing the 1.17.4 tools
 * sinzui will test in 5 minutes
<bloodearnest> rick_h_: thanks!
<bloodearnest> rick_h_: I was going to do that tonight anyway, talks on Tues and confirmed no internet!
<rick_h_> bloodearnest: ouch
<rick_h_> ccccccbtujivhntrgiigcjjthgdjgubhdvgffbefnurg
<bloodearnest> rick_h_: that too :)
<rick_h_> bloodearnest: heh, working on setting up new yubikeys
<rick_h_> focus fail
<bloodearnest> rick_h_: I would really really like someone to write something that uses my builtin webcam to do "focus follows eyes"
<bloodearnest> it really should be doable
<rick_h_> hah, creepy
<rick_h_> jcastro: do you guys not believe in newlines in the docs? wow that's hard to read/review/write
<jcastro> donde?
<rick_h_> looking at https://github.com/juju/docs/pull/18/files
<jcastro> it's html, I'm pretty much doomed no matter what
<rick_h_> it's ok to newline in between tags so it fits on the screen
<rick_h_> html crushes multiple spaces
<jcastro> yeah that's my fault from my text editor actually
<rick_h_> it auto wraps for you?
<sinzui> jamespage, The issue appears to be the 1.17.4 client. I think I need to release a 1.17.5 with a fix or an outright revert to 1.17.3. I will let you know what I know.
<bloodearnest> jcastro: if I was gonna pick a slide deck from the juju ones on github, to give a talk to a local devops group, would you recommend any in particular?
<jcastro> any of the scale or ODS ones
<jcastro> https://github.com/juju/presentations/tree/master/deck.js/scale-2014-juju
<jcastro> is my latest one
<jcastro> all of them are kind of iterations of each other.
<bloodearnest> jcastro: thanks
<jcastro> bloodearnest, but with quickstart ...
<jcastro> you could easily just start off with juju quickstart -i
<jcastro> select local or show the ec2 part, etc.
<jcastro> then start your intro
<bloodearnest> jcastro: I was thinking along those lines :)
<jcastro> then come back to the GUI, and begin
<jcastro> people won't believe you anyway
<jcastro> so you have to expose and show the service
<bloodearnest> jcastro: yeah, rick_h_ has given me instructions on how to set up charmworld locally
<jcastro> yeah just don't do wordpress
<jcastro> that's a solved problem, do something like say, mongo or something
<bloodearnest> jcastro: what about the hadoop thing in the slides?
<jcastro> that's part of the intro
<bloodearnest> right
<jcastro> that's the part where you are explaining how complex things are
<jcastro> LOL XML files, and so on
<bloodearnest> XMLOL
<bloodearnest> this is gonna be fun :)
<lifeless> jcastro: how do you view the presentation?
<lifeless> jcastro: (without cloning it..)
<bloodearnest> lifeless: I couldn't figure that out, so I cloned
<lazyPower> So, I feel like it's time for me to admit i've been an nginx fan for a bit too long and i've forgotten some of the apache XML goodness. Anyone have a moment to assist in ironing out some apache2 config magic with me? (literally it will take about 10 minutes)
<bloodearnest> lazyPower: apache config is most certainly not XML ;) but am happy to help if I can
<lazyPower> hah, ok :)
<lazyPower> https://gist.github.com/chuckbutler/9280172 - this is a jinja2 based template of the nginx config that ships with nagios
<lazyPower> *nagios3
<lazyPower> Wherever I've placed the SSL binding, the apache config checker hates it. So I feel like at this point I need to build a vhost config. But that seems like extra effort - do I just define inline vhosts in this file and separate the *:80 and *:443 host definitions, and plug the ssl require on a directory tag in them?
<jcastro> lifeless, I only dumped them up there this past weekend, I haven't set it up to view somewhere yet, sorry.
<lifeless> jcastro: np
<bloodearnest> lazyPower: yeah, you can't set SSL config on a port 80 host, at least to my knowledge, you have to split them
<jcastro> lifeless, I saw the slides you guys did with clint @ his presentation though, nice work.
<lazyPower> ok, as the template shows, it doesn't even define a port 80 host. its just some aliases for the apache conf, and it appears to be mounting that on any vhost i define.
 * lazyPower snaps
<lazyPower> hey, i bet i can do this in default-ssl and it'll "just work"
<lazyPower> ty bloodearnest
<lazyPower> you're the best
<lifeless> jcastro: which presentation ? [there's been many... :)]
<bloodearnest> lazyPower: sounds right. If you want port 80 to redirect to 443 when enableSSL is set, you may need to define a redirecting vhost to use, not sure
<bloodearnest> idk if apache ships a redirect-to-443 config you can just enable in that case, maybe
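A minimal sketch of the split being discussed, not the charm's actual template (server name, certificate paths, and document root are all hypothetical): SSL directives only make sense on the `*:443` host, and the `*:80` host can simply redirect when SSL is forced:

```apache
# Plain-HTTP vhost: no SSL directives allowed here; when SSL is
# forced, just bounce everything to the https host.
<VirtualHost *:80>
    ServerName nagios.example.com
    Redirect permanent / https://nagios.example.com/
</VirtualHost>

# TLS vhost: SSLEngine and certificate paths live only on *:443.
<VirtualHost *:443>
    ServerName nagios.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/nagios.pem
    SSLCertificateKeyFile /etc/ssl/private/nagios.key
    DocumentRoot /usr/share/nagios3/htdocs
</VirtualHost>
```

On Debian/Ubuntu this also assumes mod_ssl is enabled (`a2enmod ssl`), which is effectively what lazyPower's "do this in default-ssl" workaround relies on.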
<bloodearnest> lazyPower: this is for a nagios3 master charm?
<lazyPower> correct
<lazyPower> i'm adding SSL-Everywhere support to the nagios charm
<bloodearnest> ah yes, I saw that we needed to do that. "fun". :)
<bloodearnest> heh, we should have a bootstrap option
<bloodearnest> juju bootstrap --ssl-everywhere
<bloodearnest> that forces it on all charm configs :)
<lazyPower> bloodearnest: that worked btw
<bloodearnest> lazyPower: good good
<hazmat> marcoceppi, ping
<hazmat> marcoceppi, looking at the wordpress charm.. and having all kinds of strange issues.. it looks like the charm is trying to remove $CHARM_DIR/wordpress and it keeps failing and that directory keeps populating itself in the background.. and there's some sort of nfs thing going on as well.. sound familiar?
<marcoceppi> hazmat: not at all :\
<hazmat> marcoceppi, from debug hooks http://pastebin.ubuntu.com/7013135/
<hazmat> just very odd behavior.
<hazmat> just straight wordpress <-> mysql
<hazmat> hmmm.. this could be aufs related..
<hazmat> hmmm.. nfs kernel mods
#juju 2014-03-01
<cory_fu> If I'm making a charm for an (http) application that has a setup step that needs to run once mongo is available but before it can be started, I assume that mongodb-relation-joined is the appropriate place to do that?
<arosales> cory_fu: I think that would be appropriate. You would need to ensure it is idempotent though
<arosales> ie,
<cory_fu> Not sure if I missed your reply after "ie,"
<arosales> cory_fu, python-django may have some good example of a relation-joined for mongo
<hazmat> cory_fu, yes.. that's the place... joined is a bit early if you depend on getting any settings from mongodb
<cory_fu> How do you mean?
<arosales> cory_fu, I was going to ref in python-django but I realize that is a "sym-link" charm
<arosales> "sym-link" == all the hook files point to a central file
<cory_fu> Yeah, I figured that out.  mongodb_relation_joined_changed was still helpful.  :-)
 * arosales was just looking to see if https://bazaar.launchpad.net/~charmers/charms/precise/python-django/trunk/view/head:/hooks/hooks.py#L659
<arosales> was helpful
<arosales> cory_fu, I think what hazmat is saying is that you can expect mongoDB on relation-joined to _start_ getting the DB hooked up. So if you need an action to happen before mongo is set up then this would be a good place
<arosales> As long as you don't need mongo up and running at that time.
<arosales> cory_fu, helpful?
<cory_fu> But it would have the host and port settings, at least, right?
<cory_fu> What hook should I use if I need mongo up and running?
<arosales> depending on how much it needs to be up and running, you may be able to insert toward the end of relation-joined; if not you could leverage relation-changed
 * arosales looking at node.js as an example
<cory_fu> Also, Allura needs four mongo databases: task, activity, pyforge, and project-data.  Is it ok to use those hard-coded names?  The databases will need to be shared across allura instances, so I assume hard-coding them is ok, though they could conflict with other charms (especially task)
<arosales> http://bazaar.launchpad.net/~charmers/charms/precise/node-app/trunk/view/head:/hooks/mongodb-relation-changed
<arosales> ya, I don't see immediately why a user would need to name those DBs differently
<cory_fu> So, if relation-get private-address works, then mongo should be up and ready?
<cory_fu> Also, I notice that python-django uses host while the node-app one uses private-address; but I don't see either of those in the mongodb hook's relation_set
<arosales> cory_fu, should be
<arosales> re mongo being up
 * arosales looking at those code bases
<cory_fu> I was looking at this: http://bazaar.launchpad.net/~charmers/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py#L1057
<arosales> cory_fu, I am looking around http://bazaar.launchpad.net/~charmers/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py#L136  and django and node both seem to be getting the host
<cory_fu> Hrm.  If my charm depends on mongodb, can it be assumed that the mongo command-line client is available?  I'm guessing not, and I'd be better off using python
<cory_fu> Alternatively, is it reasonable to create some sort of flag in the installed location to track whether a needed-once setup step has been done?
<arosales> cory_fu, you can specify in the install hook which packages you want to ensure are specifically available
<cory_fu> Ah, yeah, of course
<arosales> cory_fu, sorry I didn't parse your last question
<cory_fu> :-)  Instead of connecting to mongo and checking for the database's existence to know if the setup-app step has been run, I could do something like touch /var/local/allura/setup-done, but that seems hinky
<arosales> cory_fu, fyi python-django uses https://bazaar.launchpad.net/~charmers/charms/precise/python-django/trunk/view/head:/hooks/hooks.py#L660 for the  install
<arosales> cory_fu, you could call the mongodb-relation-changed if you want to take some action on the mongodb post a certain setup call
<arosales> cory_fu, hooks should be able to be called more than once.  this is what the node-app does
<arosales> cory_fu, I am not sure if that information is helpful though
<arosales> https://bazaar.launchpad.net/~charmers/charms/precise/node-app/trunk/view/head:/hooks/mongodb-relation-changed#L8
<hazmat> cory_fu, sorry went afk.. but i was referencing .. relations being bi-directional, if mongo needs to send something like an ssl cert.. joined is early to see any settings from the remote side.. lots of charms just do changed hooks for relations, and check to see if the values they need are present.
<cory_fu> Makes sense
<arosales> cory_fu, does allura always need this setup-app to be done before other services are connected?
<cory_fu> Yes.  The setup-app creates the databases, collections, and indexes that the app needs
<cory_fu> So it needs to be done once before the app is started, when mongo is available
<arosales> ah, once mongo is available
<arosales> cory_fu, seems you would need to do that in the mongodb-relation-joined hook and check to see if setup-app has run, and if not . . .
<arosales> this would put a dep on the charm to mongodb. Specifically the charm is not usable until the mongodb relation is created
<arosales> that would need to be documented in the readme
<arosales> hazmat, sound reasonable? ^
<cory_fu> If you're interested in seeing what I have so far, I just put it up here: https://sourceforge.net/u/masterbunnyfu/allura-charm/ci/master/tree/
<arosales> seems the install hook is too early as mongo may not be available
<hazmat> arosales, sounds good
 * hazmat peeks
<hazmat> cory_fu, allura is python afaicr.. i'd probably go with an upstart job for the wsgi api.. and s/paster/gunicorn
<hazmat> oh.. allura made it to an apache project.. cool
<cory_fu> Incubating, but we're hoping to get it graduated soon
<hazmat> cory_fu, re install i'd probably do pip --use-mirrors
<arosales> looks like you're following the node example and skipping the mongodb-relation-joined and checking for the db in the mongodb-relation-changed
<cory_fu> Yeah
<cory_fu> hazmat: I've not tried to run Allura under gunicorn.  Is it more or less drop-in?
<hazmat> cory_fu, some charms capture their deps locally.. probably overkill but with pip you can do offline installs if you $ pip install -d dist -r requirements.txt to download all the eggs and then install in the charm with $ pip install --no-index --find-links=file://tools/dist -r requirements.txt
<hazmat> cory_fu, yeah.. more or less.. there's a gunicorn_paster command that behaves like a paster analogue for serve  http://gunicorn-docs.readthedocs.org/en/latest/run.html
<hazmat> er.. http://gunicorn-docs.readthedocs.org/en/latest/run.html#gunicorn-paster
<hazmat> cory_fu, i'd go ahead and get it running with whatever you're comfortable with first.. there's always room for optimization later
<cory_fu> Yeah
<arosales> cory_fu, if you have issues with waiting for the db in mongodb-relation-changed you may want to specifically move the setup-app into the -joined hook
 * arosales still looking to see what mims is doing to skip the -joined and block until the -changed hook is fired
<cory_fu> Now I'm confused.  Is -changed before or after -joined?
<arosales> cory_fu, sorry for the confusion
<arosales> -joined is before -changed
<hazmat> joined is always called before the first changed, and always accompanied by a changed.
<arosales> <name>-relation-joined is run once only, when that remote unit is first observed by the unit.
<arosales> ref = https://juju.ubuntu.com/docs/authors-charm-hooks.html
<hazmat> once for each unit of the remote service
<hazmat> so if you have a replica set.. you'll get joined multiple times.
<hazmat> and each of those joins will be immediately followed by a changed hook firing for the same unit.
<arosales> hazmat, is -joined too early to call setup-app here?
<arosales> or just the right place
<arosales> cory_fu, needs setup-app to be called once only after mongodb is ready
<arosales> ready = mongodb can start creating tables
<hazmat> arosales, joined feels a bit early for db initialization.. ie. a mongodb client connection might need a replicaset name that's set on the connection, and port is a config option for mongo which is conveyed along relations (although defaults to std 27017)... neither would be set in -joined
<arosales> cory_fu, if all you need in setup-app is the private address and the DB names, which you know beforehand, -joined should be ok, but you can also check if it is a first run in -changed
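A bash sketch of the pattern being settled on here: do the work in `-changed`, bail out until the remote settings have arrived, and guard the one-time `setup-app` with a sentinel file (the sentinel path echoes cory_fu's earlier suggestion; the setup command itself is elided, and the hook body is wrapped in a function only so the sketch is self-contained):

```shell
#!/bin/bash
# Hypothetical hooks/mongodb-relation-changed. -joined fires first,
# but the remote side may not have set its values yet, so -changed
# rechecks on every invocation and simply waits when they're absent.
mongodb_relation_changed() {
    local host port
    host=$(relation-get private-address)
    port=$(relation-get port)
    if [ -z "$host" ] || [ -z "$port" ]; then
        echo "mongodb not ready; waiting for a later -changed"
        return 0
    fi
    # One-time initialization, kept idempotent with a sentinel file
    # since this hook can fire many times (once per remote unit).
    if [ ! -f /var/local/allura/setup-done ]; then
        echo "running setup-app against $host:$port"
        # paster setup-app ... && touch /var/local/allura/setup-done
    fi
    echo "configured for $host:$port"
}

# Only run for real when juju's hook tools are on PATH.
if command -v relation-get >/dev/null 2>&1; then
    mongodb_relation_changed
fi
```

Because `-joined` is always followed by a `-changed` for the same unit, nothing is lost by deferring all the work to `-changed` like this.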
<arosales> hazmat, good point on the replica set
<arosales> hazmat, thanks for the info
<arosales> cory_fu, hopefully that info is helpful.
<arosales> cory_fu, I am going to grab some dinner but feel free to leave a ping here if you run into any other questions
<cory_fu> thanks.  I'm probably going to call it a night soon (on east coast time), but I'll be on tomorrow working on this.
<hazmat> cory_fu, np.. folks will be around
<cory_fu> Thanks
 * hazmat included
<hazmat> cory_fu, so you've got local provider in vagrant you want to access from cli on osx host
<hazmat> ?
<cory_fu> Well, I followed this setup: https://juju.ubuntu.com/docs/config-vagrant.html
<hazmat> hmm.. so there are two ports you would need forwarded from host to container
<hazmat> not sure if the vagrant box does that
<cory_fu> So, yeah, wondering how to use that from the cli now.  (Or how to add and debug my charm to it)
<hazmat> cory_fu, you can use the cli from within the vagrant box
<cory_fu> Ah
<hazmat> cory_fu, vagrant ssh
<cory_fu> I did do the sshuttle step, so it would be nice to take advantage of that
<hazmat> cory_fu, that will be helpful for accessing services you deploy in the local provider (lxc containers in the vagrant box)
<cory_fu> Ah, so it doesn't really help with using the cli locally against the vagrant box?
<hazmat> to use the cli on osx, you'd need to copy ~/.juju/environments/$env_name.jenv within the box to the host in the same place.. you'll also need to forward some additional ports..
<hazmat> cory_fu, yeah.. it doesn't seem like it does..
<hazmat> cory_fu, not sure which version of the gui it's using, but you can drag and drop local charm folders onto the gui in the latest version to deploy them
<hazmat> but not helpful for debugging
<cory_fu> Oh, nice
<hazmat> cory_fu, i'd recommend just doing cli from within the vagrant box
<cory_fu> Just the local folder into the GUI?
<cory_fu> Ok
<cory_fu> vagrant@vagrant-ubuntu-precise-64:~/allura-charm$ juju debug-log
<cory_fu> Permission denied (publickey,password).
<cory_fu> The authorized-keys in ~/.juju/environments/local.jenv matches the id_rsa.pub.  o_O
<hazmat> marcoceppi, you know anything about that vagrant box setup? ^
 * hazmat notes he's not here
<hazmat> cory_fu, switch to the ubuntu user
<cory_fu> Ah
<hazmat> cory_fu, i'm guessing.. i haven't used that vagrant box before
<cory_fu> Can't seem to su; none of the mentioned passwords work
<cory_fu> Ok, su -> su ubuntu worked
 * hazmat installs virtualbox
<cory_fu> Nope.  No .juju directory for that user
<hazmat> cory_fu, and no private keys in ~/.ssh?
<cory_fu> It does seem like it's set up to use the vagrant user.
<hazmat> for vagrant user
<hazmat> yeah
<hazmat> it does
<cory_fu> Not for ubuntu, no
<hazmat> but it should have a private key as well is the odd part
<cory_fu> The vagrant user does have one, and it matches what's in the local.jenv file
<cory_fu> But I still get the error
<hazmat> sounds like a different issue then
<hazmat> cory_fu, i'm in process of downloading the jujubox img.
<hazmat> cory_fu, works out of the box for me
<hazmat> cory_fu, interesting i take that back
<hazmat> cory_fu, aha
<hazmat> juju debug-log doesn't.. juju ssh unit does
<hazmat> cory_fu, you can juju ssh allura/0   and view log at /var/log/juju/unit-allura-0.log
<hazmat> cory_fu, debug-log doesn't work for local provider in 1.16.6 version of juju
<cory_fu> Ah
<cory_fu> This is after doing juju deploy allura from within the vagrant image?
<cory_fu> vagrant@vagrant-ubuntu-precise-64:~$ juju ssh allura/0
<cory_fu> ERROR unit "allura/0" has no public address
<hazmat> cory_fu, juju status allura
<cory_fu> Ah, I see.  It wasn't up yet
<cory_fu> I don't see a juju destroy command; how do I re-deploy if it failed?
<cory_fu> Oh, nevermind.  juju help commands
<cory_fu> :-p
<cory_fu> How long should I expect destroy-service to take?
<cory_fu> Hrm, yeah.  The status says it's in "life: dying" but it doesn't seem to be doing anything
<hazmat> cory_fu, shouldn't take that long
<cory_fu> Is there another log besides the unit-allura-0.log?
<cory_fu> that I should check?
<hazmat> cory_fu, for local provider on the vagrant box there should be some logs in ~/.juju
<hazmat> the interesting one is the host machine-0.log
<cory_fu> Yeah, it doesn't seem to do anything when I issue a destroy-service allura
<cory_fu> Can I more forcibly remove it with destroy-machine?
<hazmat> cory_fu, yes.. juju terminate-machine --force machine_id_with_allura
<cory_fu> Ok, that worked
<cory_fu> Wonder why it didn't work with destroy-service
<hazmat> not sure, i've been using the dev version 1.17 series and hitting it pretty hard.. haven't seen that
<cory_fu> Blargh.  It didn't use my update to the install script
<cory_fu> *hook
<cory_fu> I assume that once I referenced it once with --repository=/path and local:precise/allura, it's somehow cached and I need to do something to clear that?
<hazmat> cory_fu, you have to deploy with -u
<cory_fu> Ah
<hazmat> cory_fu, you can also juju upgrade-charm --force to update in place
<hazmat> --force is needed to upgrade if its currently in an error state
<cory_fu> Ok
<hazmat> cory_fu, if you hit a hook error, you can juju resolved --retry to have it re-execute
<hazmat> juju debug-hooks drops you in a tmux session where you can interactively explore the state of the hook (the hooks pop up in new tmux windows) can be used in combination with resolved --retry..
<hazmat> caveat there is debug-hooks always returns success for the hook being executed
<cory_fu> Though I can't retry the hook, since I need the copy of the hook to be updated.  I assume it won't re-pull the hook from the source I gave to the original deploy command?
<hazmat> cory_fu, you can juju upgrade-charm --force to put the new source in place
<hazmat> cory_fu, or deploy -u  which will increment the revision file automatically in the charm
#juju 2014-03-02
<lazyPower> hazmat: ping
<hazmat> lazyPower, pong..
<lazyPower> hazmat: i see that you've been hacking on charmhelpers. has the general workflow with charmhelpers changed in syntactic sugar?
<hazmat> lazyPower, state of the art is  a yaml file for defining parts, and then using charm-sync-helper
<hazmat> lazyPower, which will copy those bits into the charm
<hazmat> lazyPower, most of the openstack charms use that technique, and several others as well
<lazyPower> I did do that following along with the video tutorial. My hang-up is the module importing and then referencing. I'm long-tailing my definition and I don't understand why I have to do that.
<hazmat> lazyPower, did not parse that last sentence
<lazyPower> yeah i dont know how to explain this - can i gist what i'm doing and ask for feedback?
<lazyPower> https://gist.github.com/chuckbutler/9315090
<hazmat> lazyPower, sure
<hazmat> lazyPower, oh.. i try not to do that.. just mapping helpers to hooks/charmhelpers avoids that need
<hazmat> re the sys.path manipulation
<lazyPower> ok so move the lib dir to "charmhelpers" in hooks?
<hazmat> lazyPower,  no.. move lib/charmhelpers to hooks/charmhelpers
<lazyPower> ah ok
<hazmat> lazyPower, then py hooks can just do import charmhelpers
<lazyPower> hazmat: along that vein, my yaml would change to be:  destination: hooks
<hazmat> lazyPower, typically python syntax to get access to a module
<hazmat> lazyPower,  http://paste.ubuntu.com/7024543/
<lazyPower> right on. Thank you!
<hazmat> lazyPower, would be from charmhelpers.core import hookenv
<lazyPower> this has hung me up for the last half hour, i was starting to get discouraged
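The layout hazmat recommends boils down to pointing the sync tool's config at `hooks/charmhelpers` so python hooks can import the module directly, with no `sys.path` manipulation. A sketch of such a `charm-helpers-sync.yaml` (the include list here is hypothetical, not lazyPower's actual config):

```yaml
# Sync charmhelpers straight into hooks/charmhelpers so hooks can
# simply do: from charmhelpers.core import hookenv
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
    - core
```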
#juju 2015-02-23
<Murali> Hi Jamespage
<jamespage> Murali, good morning
<Murali> Good Morning Jamespage
<Murali> finally we are able to successfully launch the openstack
<Murali> we are able to see the openstack-dashboard
<jamespage> Murali, \o/
<jamespage> congrats!
<Murali> Now we have issue in logging to instances we created in openstack
<jamespage> Murali, oh yes
 * jamespage braces for mtu confusion
<jamespage> Murali, you've done the normal stuff like enabled SSH and ping access to the instances?
<Murali> its says "console is currently unavailable. please try again later" from openstack dash-board
<jamespage> Murali, oh the console access is disabled by default
<Murali> ok
<jamespage> Murali, it does not scale that well and is not fully security supported in Ubuntu, so we can't enable it by default
<jamespage> Murali, we try to stick to fully supported options by default
<Murali> how to make to enable?
<Murali> or how to access the instance we created in openstack
<jamespage> Murali, the best way IMHO is with SSH
<jamespage> Murali, the bundle on jujucharms.com has some details on how to configure the firewall to allow access
<jamespage> Murali, you can inject an SSH key as part of booting an instance as well
<jamespage> Murali, the dashboard is OK for a quick drive through, but for any serious use you'll want to get to know the command line tooling :-)
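jamespage's SSH route, sketched with the nova CLI of that era; the key name, flavor, image, and instance name are placeholders, not taken from the conversation.

```shell
# Hedged sketch of the steps jamespage outlines; all names are placeholders.

# Allow SSH and ping into instances in the default security group:
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# Register a public key and inject it when booting an instance:
# nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey
# nova boot --flavor m1.small --image trusty --key-name mykey my-instance

# Then log in (Ubuntu cloud images use the 'ubuntu' user):
# ssh ubuntu@<instance-ip>
```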
<Murali> on cli also we tried Jamespage
<jamespage> Murali, what problem did you hit?
<Murali> we are now able to login via CLI Jamespage
<jamespage> Murali, good
<Murali> Thanks a lot for kind response
<jamespage> Murali, no problem
<jamespage> Murali, I was thinking about why you ended up with no relations to start with - I suspect it was due to the error on the neutron-gateway during deployment
<jamespage> Murali, I *think* that any errors like that will stop the second phase of adding relations from being executed
<jamespage> Murali, I also believe that the juju-gui is switching to not work like that - but rick_h_ would need to confirm on that - are we moving to a deploy and related,
<jamespage> ?
<jamespage> not waiting for services to deploy before relating
<Murali> we first ran the jujuquickstart
<Murali> it successfully deployed all the services
<Murali> we waited for a long time but couldn't see the relations added
<Murali> and juju-gui showed some error saying "error occured but information"
<Murali> then we applied the juju-deployer bundle.yaml as per your suggestion
<Murali> But we saw that neutron-gateway deployment was success
<rick_h_> jamespage: Murali yes, depending on the gui version it would stop the deployment if it hit an error. There's plans to update that to not be the case, but it's part of a bigger chunk of how bundles are handled and not ready yet.
<Murali> ok
<Murali> Hi Jamespage
<Murali> sorry to trouble you with questions
<Murali> we would like add the local jujucharm which we developed to bundle.yaml
<Murali> is it possible?
<jamespage> Murali, it is - you can just add it to the deployed environment and the export the bundle from the GUI
<Murali> is there any changes required in bundle.yaml Jamespage?
<Murali> how to add the repository path in bundle.yaml file?
<jamespage> Murali, oh I see
<jamespage> tricky that one
<rick_h_> Murali: the GUI doesn't support bundles with local charms because they're not reusable then. I don't recall if the deployer had a way to do that. I think it tends to use vcs locations in order to support a charm not in the main charm store
<Murali> ohh ok rick_h_
<Murali> is there any other way to deploy the local charms with out juju-gui
<Murali> and using jujudeployer command
<Murali> is it possible via CLI  juju deployer
<bloodearnest> is it possible to loosen the permissions on log files for the local provider? Currently 0500, syslog:syslog
<bloodearnest> 0600 rather
<bloodearnest> I can edit the /etc/rsyslog/conf.d/ juju file, but was wondering
<bloodearnest> if it would get rewritten on bootstrap, or if it was worth loosening by default? Given the local provider is explicitly for dev  only?
<lazyPower> rick_h_: you can use local charms with deployer if you have JUJU_REPOSITORY env set
<rick_h_> Murali: ^
<lazyPower> we make extensive use of this in our testing of bundles that don't yet exist or have a place in the charm store - since ingesting into a personal namespace introduces latency between changes.
<lazyPower> Murali: if you need an example of a bundle leveraging local charms - see:  https://github.com/whitmo/bundle-kubernetes/blob/master/specs/local.yaml
<Murali> Thanks lazyPower
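lazyPower's JUJU_REPOSITORY approach, sketched below; the repository path, charm, and bundle names are illustrative.

```shell
# Local charms live in a repository laid out as <repo>/<series>/<charm>;
# pointing JUJU_REPOSITORY at it lets deployer resolve "local:" charm URLs.
mkdir -p charms/trusty/mycharm
export JUJU_REPOSITORY=$PWD/charms

# A minimal deployer bundle referencing the local charm:
cat > local-bundle.yaml <<'EOF'
mybundle:
  services:
    mycharm:
      charm: local:trusty/mycharm
      num_units: 1
EOF

# juju-deployer -c local-bundle.yaml mybundle
```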
<ctlaugh> I'm looking for some help on the correct way to configure the charm(s) to deploy Nova correctly to use either Neutron or FlatDHCPManager (whatever is easier) when deployed by Juju on a MAAS node.  The node only has 1 network interface that is brought up (eth0) and it is already bridged to juju-br0.  I'm running trusty/icehouse.
<stub> bloodearnest: I opened a bug on that, about changing ownership of the logs, but it is won't fix as the team are trying to make the local provider behave more like the rest of the providers rather than being full of special cases.
<bloodearnest> stub, ack, thanks, good to know
<bloodearnest> approve of that goal
<stub> yeah, agreed
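For anyone wanting bloodearnest's local tweak anyway: the modes come from rsyslog's file-creation settings, which are standard directives; the drop-in filename below is illustrative, and as discussed it may be rewritten on bootstrap.

```shell
# Standard rsyslog directives controlling ownership and mode of files it
# creates; the juju drop-in filename under /etc/rsyslog.d/ varies per
# environment, and bootstrap may rewrite the file.
cat > 25-juju-local.conf <<'EOF'
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
EOF
# sudo cp 25-juju-local.conf /etc/rsyslog.d/ && sudo service rsyslog restart
```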
<johnny_shieh_> Hi, I've recently installed a juju master (charm development) and when I run a "charm proof" command on my charm, I get this error message
<johnny_shieh_> I: Includes template icon.svg file.
<johnny_shieh_> W: README includes line 6 of boilerplate README.ex
<johnny_shieh_> I: relation website has no hooks
<johnny_shieh_> I: missing recommended hook config-changed
<johnny_shieh_> ERROR subprocess encountered error code 100
<johnny_shieh_> I also ran this on a colleague's charm and got different warning messages, but also the same error code 100.
<johnny_shieh_> Is there a FAQ for error codes?  I tried searching on google first, but no "100" hits in the first pages
<ayr-ton> guys, the HACKING.txt from charms-tools is outdated. Could I submit a merge request with updated instructions based on the latest Makefile?
<thumper> ayr-ton: sounds good to me
<ayr-ton> thumper: okay :)
<MOZGIII> hello, I have armhf board with debian on it, and ubuntu trusty inside an lxc on it, and I run juju on that ubuntu. and when I try to bootstrap with local env (meaning that would create nested lxc containers) - that doesn't work (lxc starts but agent fails to start)
<MOZGIII> how do I fix this and make agent start up properly?
<MOZGIII> I'm trying to deploy juju-gui
<MOZGIII> http://code.re/7oB
<MOZGIII> http://code.re/7oC
<MOZGIII> http://code.re/7oD
<MOZGIII> http://code.re/7oE
#juju 2015-02-24
<AskUbuntu> cann't bootstap in lxc environments | http://askubuntu.com/q/589150
<AskUbuntu> Where does juju store its cache | http://askubuntu.com/q/589242
<jamespage> dosaboy, gnuoy: if you have time - https://code.launchpad.net/~openstack-charmers/charm-helpers/0mq/+merge/238855
<jamespage> adds redis support for zeromq
<gnuoy> looking
<jamespage> gnuoy, its fairly isolated
<gnuoy> jamespage, approved
<jamespage> gnuoy, ta
<jamespage> I'll merge it
<hazmat> cool
<stub> marcoceppi: I just got an email about failed test results pointing to http://reports.vapour.ws/charm-tests/charm-bundle-test-11056-results, but the page is empty of information.
<stub> http://reports.vapour.ws/charm-test-details/charm-bundle-test-11056-results looks like the correct URL
<rick_h_> hazmat: got time to chat this version key in the bundle file? That seems to be the only sticking thing on that branch left correct?
<hazmat> rick_h_: sure
<rick_h_> hazmat: hangout or just chat?
<hazmat> rick_h_: so nutshell is that if we start sniffing, we'll keep sniffing each change till its a mess
<hazmat> rick_h_: can't hangout atm
<rick_h_> hazmat: understand but the only sniff is just the removing of multiple bundles per file. it's a one time 'raising' of the data up one level
<rick_h_> the rest of backward compatible/etc
<rick_h_> hazmat: I guess I'd agree but for two things. 1) these things are hand written so we're not going to be able to rely on a version key long term and 2) the changes planned and such are small and blindingly sniffable
<hazmat> rick_h_: but isn't the next incoming change, machines as a top level key
<rick_h_> so it just seems overkill and unreliable to go the version string route atm
<hazmat> hmm
<rick_h_> hazmat: right, but I'm missing how that's a version increase. If it was an API I'd just call it a new API and not increment the version number of that api
<hazmat> rick_h_: the thought is also there that things might diverge down more lines
<rick_h_> that's totally true, but honestly the talk now is a whole new format
<rick_h_> and so I don't want to go too far into that future to be honest
<hazmat> ie. not linear version.. but machines added to both formats, the extant format is majority usage .. with inheritance
<rick_h_> that's more with core involved, specs, and a lot of total changes from the sounds of things
<rick_h_> hmm, I suppose on the inheritance front...
<hazmat> rick_h_: i'm okay with pushing forward with the change (w/ test)..
<hazmat> its not ideal.. but its small and easy to fix if we go a different way in the future
<rick_h_> hazmat: cool, I know he was adding that yesterday per standup so that'd be great. I think we'll keep an eye out though as you're right that if the delta gets big explicit > implicit
<rick_h_> right
<rick_h_> Makyo: ^
<Makyo> hazmat, rick_h_ that should be pushed
<rick_h_> Makyo: cool then. hazmat can you TAL then please?
<Makyo> Oh, I'd left the internal services check.  Do you think there's a chance of someone naming their bundle 'services'?  I can take that out if not.
<hazmat> rick_h_: Makyo i have to wait till this evening, don't have network access for it atm
<rick_h_> hazmat: rgr ty much
<hazmat> Makyo: let's take that out.
<Makyo> ack
<hazmat> Makyo: i think they deserve what they get if they do that ;-)
<rick_h_> lol
<Makyo> Haha, fair :)
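The "one time 'raising' of the data up one level" rick_h_ mentions can be sketched like this; the charm and bundle names are placeholders.

```shell
# Old deployer format: everything nested under a bundle name, allowing
# multiple bundles per file.
cat > old-format.yaml <<'EOF'
my-bundle:
  services:
    wordpress:
      charm: cs:trusty/wordpress
      num_units: 1
EOF

# New format: one bundle per file, with services (and later machines)
# raised to the top level.
cat > new-format.yaml <<'EOF'
services:
  wordpress:
    charm: cs:trusty/wordpress
    num_units: 1
machines:
  "0":
    series: trusty
EOF
```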
<marcoceppi> stub: looks like ci has changed the details page, I'll update reviewq
<jshieh> hey, got a question about variables set in config.yaml and if they can be overridden with the "juju set" command
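They can: options declared in a charm's config.yaml can be overridden at runtime. A sketch with placeholder service and option names:

```shell
# Override a config.yaml option at runtime (triggers config-changed):
# juju set myservice my-option=new-value
# Read the current settings back:
# juju get myservice
# Revert an option to its config.yaml default:
# juju unset myservice my-option
```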
<ctlaugh> I need some help with configuring and deploying nova-compute with FlatDHCPManager.  I can not get it deployed correctly where I can ssh into a VM.  I would like eth1 to be the public interface.
<ctlaugh> I have tried various combinations using both eth0 and eth1 for flat-interface, but so far it isn't working.  I can look at the VM boot log and it can't reach the metadata server.  Any help, please?
<ctlaugh> jamespage ^^?
<AskUbuntu> juju deploy ceph 'hook failed: "mon-relation-changed"' | http://askubuntu.com/q/589506
<lazyPower> ctlaugh: jamespage is UK time, response may be latent, its pretty late over there.
<ctlaugh> lazyPower: I'll take help from anyone -- just trying to start pinging ones I thought might be able to help :)  I was about to post on AskUbuntu and hopefully get a response.
<lazyPower> ctlaugh: that would be a good place to put it, or the mailing list.
<AskUbuntu> How to correctly configure nova-compute to use FlatDHCPManager | http://askubuntu.com/q/589516
<blahdeblah> Methinks Azure tests are broken: http://reports.vapour.ws/charm-tests-by-charm
<blahdeblah> The 3rd one in the list is mine, and it shouldn't be a change that makes *anything* fail.
<marcoceppi> blahdeblah: if you look at the test results, http://reports.vapour.ws/charm-test-details/charm-bundle-test-11054-results, you can see that the test threw a SKIP (100)
<marcoceppi> looks like a timeout
<blahdeblah> marcoceppi: Is that fairly common?
<marcoceppi> blahdeblah: I've seen azure deployments timing out at 15mins. Tests by default use a 15 min timeout, you could try to increase that for slower clouds
<blahdeblah> marcoceppi: What's our obligation as MP submitters in terms of getting clean testing results?  I only looked at it because it emailed me about a failure, and it seems pretty clear to me that it's the cloud's fault, not the charm's.
<marcoceppi> blahdeblah: charmers will always review the test output regardless of pass or fail. With the new charm testing stuff we're actually going to change a bit on how the CI bot responds to merge requests.
<marcoceppi> So as a reviewer, I would look at the change, the test results, and if need be, run the tests myself for clouds missing
<blahdeblah> OK - thanks
<marcoceppi> instead of saying "TESTS (PASSED|FAILED)" it's just going to say testing was completed results available at URL
<marcoceppi> blahdeblah: increasing the timeout to something higher than 15 mins wouldn't hurt
<marcoceppi> HPCloud hit 10 mins for deployment
<beisner> hi ctlaugh, see corresponding nova-compute flat networking options @ http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/nova-compute/trunk/view/head:/config.yaml#L89  (ymmv with flat - but let us know)
<marcoceppi> moving it to 30 mins, for example, wouldn't hurt blahdeblah
<blahdeblah> marcoceppi: Is that something that is controlled in the charm?
<marcoceppi> blahdeblah: it's in the test file typically
<blahdeblah> marcoceppi: Pardon my ignorance - the test file in the charm repo?
<marcoceppi> http://bazaar.launchpad.net/~mbruzek/charms/precise/nrpe-external-master/tests/view/head:/tests/99-autogen#L21
<mbruzek> hello
<marcoceppi> mbruzek: unintentional ping
<marcoceppi> blahdeblah: http://bazaar.launchpad.net/~charmers/charms/precise/nrpe-external-master/trunk/view/head:/tests/99-autogen#L21
<blahdeblah> marcoceppi: So would a change to that take effect on the first test run afterwards?
<marcoceppi> blahdeblah: if you change that and reupload, on the next test run it'll use the updated timeout value for waiting for the deployment
<blahdeblah> cool - thanks
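The change under discussion, sketched after the linked 99-autogen test; the charm name follows that example, and 1800 seconds is the suggested 30-minute timeout.

```shell
# Write an amulet test with a raised deployment timeout; modeled on the
# linked 99-autogen test (charm name follows that example).
cat > 99-autogen <<'EOF'
#!/usr/bin/env python
import amulet

d = amulet.Deployment(series='precise')
d.add('nrpe-external-master')
try:
    # 30-minute deploy timeout instead of the 15-minute default
    d.setup(timeout=1800)
except amulet.helpers.TimeoutError:
    # slow clouds surface as SKIP (exit 100) rather than FAIL
    amulet.raise_status(amulet.SKIP, msg="Environment wasn't stood up in time")
EOF
```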
<ctlaugh> beisner: Thank you, I've seen those (both looking in the charm and on the charm store) -- what I can't get working right is the correct combination of settings needed to make it work on my config.  For example, what do I need to set flat-interface, bridge-interface (and anything else?) so that it uses eth1 for all VM public network traffic and uses eth0 for everything else?
<ctlaugh> ^^ I posted this question (http://askubuntu.com/q/589516) that has more details on what I have done and what I am seeing.
<beisner> ctlaugh, tbh, i don't have a definitive answer.  but i just made a best guess for my enviro and kicked off a flat deploy.  may not get back to that till tomorrow though.
<ctlaugh> beisner: Sorry, I see that you already saw the question and responded :)
<ctlaugh> beisner: ok, thank you.  I appreciate it.
<beisner> ctlaugh, so here's how I interpret it:   bridge-ip  is something static in your LAN, and of course bridge-netmask is the matching subnet.
<beisner> ctlaugh, bridge-interface's default br100 shouldn't matter;  flat-interface for me is eth1.   so, the charm should take all instance traffic across eth1, and create bridges on eth1.
<ctlaugh> So, flat-interface is for instance traffic?  I thought it was for internal traffic.
<beisner> ctlaugh, fwiw, secgroups could also be tripping you up.
<beisner> ctlaugh, aiui, yes.
<ctlaugh> And, for the bridge-ip (and netmask), what purpose does that serve?  Does it need to be an address in the same subnet I would assign for floating-ips, or something different?
<beisner> ctlaugh, i don't think you can use floating ips with the flat network option.
<beisner> ctlaugh, what i'm doing is this:   eth0 and eth1 are wired to the same LAN.  i've told the charms nothing about eth0, and expect it to use it for api traffic.  i've told the charms about eth1, giving it some static IP info compatible with my LAN.
<beisner> ctlaugh, but keep in mind, i've not personally used the charm flat network options, so i'm just going by what i know about the default behaviors, the configs I see, and a little bit of filling in the gaps.
<ctlaugh> beisner: understood
<ctlaugh> beisner: I've got to head out for a few hours, but will be back online sometime later tonight.  Thank you for the tips -- I'm going to kick another install off tonight to try.
<beisner> ctlaugh, hope that helps, best to you!
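One hedged reading of the above as deploy-time config: the option names are the ones discussed (and in the linked config.yaml), while the addresses are placeholders for ctlaugh's LAN.

```shell
# Illustrative nova-compute flat-network settings per the discussion above:
# instance traffic bridged over eth1, bridge-ip/netmask static in the LAN.
cat > nova-compute.yaml <<'EOF'
nova-compute:
  flat-interface: eth1
  bridge-interface: br100
  bridge-ip: 192.168.1.10
  bridge-netmask: 255.255.255.0
EOF
# juju deploy --config nova-compute.yaml nova-compute
```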
#juju 2015-02-25
<AskUbuntu> Installing Landscape and launch the OpenStack Autopilot | http://askubuntu.com/q/589618
<jamespage> dosaboy, do you have capacity to look at bug 1423153 ?
<mup> Bug #1423153: /var/lib/mysql/mysql.passwd no longer exist <mysql (Juju Charms Collection):New> <https://launchpad.net/bugs/1423153>
<dosaboy> jamespage: sure
<Murali> Hi Jamespage
<jamespage> hey Murali
<Murali> when we are deploying juju charm install hook failed
<Murali> we don't add code to exit or return on failure and the install keeps retrying
<Murali> http://paste.ubuntu.com/10405985/
<Murali> for that charm we removed the unit and service
<Murali> after when we try deploy new charm we getting the above error
<jamespage> Murali, that's just a log message related to the serial execution of hooks within a container
<Murali> its not allowing to install new charms
<jamespage> hmm
<Murali> is there a way release that
<jamespage> it would indicate that a hook is still running - check the process listing for the server
<jamespage> something might be blocking
<Murali> we destroyed that machine it self
<jamespage> Murali, midonet-cassandra/0 specifically is still running its install hook
<jamespage> Murali, juju enforces serial hook execution within a container, so if something is running a hook, nothing else can
<jamespage> which would block any further service deployments
<jamespage> Murali, see if you can reproduce and check to see if the midonet-cassandra charm is blocking in its install hook
<Murali> how to get out of it Jamespage
<jamespage> Murali, workout what's wrong with the hook
<jamespage> Murali, its probably still doing something
<Murali> that service is not shown in juju status
<Murali> on the previously existing logs for the unit, it had an error with dpkg lock and was retrying every 10 secs..
<Murali> i did a resolved unit, followed by remove unit and remove service ... everything went on clean and relevant cass service was removed from juju status ..now when i try to deploy some other charm I get this error
<jamespage> Murali, recycle the machine
<jamespage> Murali, juju terminate-machine
<jamespage> and then redeploy
<Murali> Is there any hack to remove the lock .. i have some other services running on that node :( ... if there is no way, i shall do the terminate machine
<jamespage> not that I know of
<Murali> Ok, Thanks James.. I will do the terminate option ..
<Murali> @Jamespage : Is there a difference between terminate-machine and destroy-machine ??
<jamespage> I don't think so
<Murali> Ok
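The recovery path jamespage outlines, as commands; the unit and machine numbers are placeholders. (In juju 1.x, terminate-machine and destroy-machine are indeed aliases for the same command.)

```shell
# Clear the failed hook, remove the service, recycle the stuck machine,
# then redeploy; unit/machine identifiers are placeholders.
# juju resolved midonet-cassandra/0
# juju destroy-service midonet-cassandra
# juju terminate-machine 3
# juju deploy <charm>
```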
<AskUbuntu> Container lxc has two IP: One of them is bridge's. When connect with juju, connects to bridge's | http://askubuntu.com/q/589819
<marcoceppi> o/ apuimedo
<apuimedo> marcoceppi: ;-)
<jshieh> hey folks, I have a Ubuntu 14.04 installed on a Power system (console-only) want to check out how my charm might look via the gui
<jshieh> Any recommendations on desktop/vnc packages that are easy to install?
<lazyPower> jshieh: are you on a posix host that you can run sshuttle?
<lazyPower> jshieh: it may be easier to sshuttle into the machine (duct-tape vpn style) and pull up the GUI that way
<jshieh> ummm, unsure about that.  I typically use vncserver / vncviewer to get to systems.
<jshieh> my "viewer" client is a mac
<apuimedo> jshieh: you can run sshuttle on the mac
<jshieh> and my "host" is Ubuntu LE on Power 8 vm
<apuimedo> iirc
<lazyPower> jshieh: are you > mavericks?
<jshieh> yes
<lazyPower> i know it works fine on lion, i *think* it works on mavericks, but you're completely abandoned on yosemite
<lazyPower> it dumps core
<jshieh> yup, staying on mavericks for a while.
<lazyPower> ok, might want to give that a go then
<lazyPower> if it fails, its a simple brew uninstall
<jshieh> Did some googling and pulled down lots of stuff, but maybe too much stuff
<lazyPower> aisrael: does sshuttle work on mavericks? or is my memory of this completly kapoot?
<jshieh> I've pulled down and installed x11vnc and tightvncserver
<lazyPower> jshieh: well you can give that a go - if you want to install X on the host it's a reasonable path forward
<apuimedo> isn't it possible to start the mac's x server and then use X forwarding?
<jshieh> hmmm, a thought.
<jshieh> I already spent the 15 minutes pulling down the gnome desktop!
<jshieh> ha-ha
<jrwren> jshieh: you have the juju-gui deployed? can you curl it and see content? If you can, I'd ssh -L someport:ip:portthatyoucurled thatserver  to use ssh port forwarding.
<aisrael> lazyPower: It works on Mavericks, but not on Yosemite going forward
<lazyPower> ah, thats what i thought. thanks for confirming
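jrwren's port-forwarding alternative spelled out; IPs, ports, and usernames are placeholders.

```shell
# Confirm the GUI is serving content from the Power host:
# curl -k https://<gui-unit-ip>:443/
# Forward a local port to it over SSH (run from the mac):
# ssh -L 8443:<gui-unit-ip>:443 ubuntu@<power-host>
# ...then browse to https://localhost:8443/ locally - no X or VNC needed.
```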
<sebas5384> jcastro: ping
<jcastro> sebas5384, pong!
<sebas5384> hey jcastro !
<sebas5384> just wanted to say that the event was awesome and juju got a lot of attention
<sebas5384> but i'm gonna reply your email to better explaining of how was it :)
<jcastro> yeah
<jcastro> good to know it went well!
<sebas5384> yes! a lot of open mouthes
<sebas5384> *mouthed
<jcastro> did the hackfest on the charm itself bear fruit?
<sebas5384> jcastro: not much
<sebas5384> because of the internet
<sebas5384> there were like 60 people in a big place under a hotel
 * jcastro nods
<sebas5384> so we tried to download everything we need
<sebas5384> but the process of testing and developing in the charm was too slow
<sebas5384> so the guys wanted to know more about how juju actually works and the charm of course
<sebas5384> we got some stars (at github) and some promises from people that want to contribute
<jcastro> it would be awesome to know which parts were slow
<sebas5384> jcastro: well, everyone use vagrant
<jcastro> you don't have to tell me now
<jcastro> but like in the email
<sebas5384> jcastro: sure :)
<jcastro> so we can fix the easy ones.
<lazyPower> sebas5384: o/
<sebas5384> ping lazyPower !
<sebas5384> hey!
<sebas5384> I briefly saw the video of your last talk about juju
<ahasenack> hazmat: ping, around?
<sebas5384> lazyPower: really cool man! congrats :)
<lazyPower> nice :D glad you enjoyed it
<lazyPower> sent you some info in a PM, more details to come about that soon
<hazmat> ahasenack: yes
<hazmat> ahasenack: what's up?
<ahasenack> hazmat: you bumped python-jujuclient's version in setup.py to 0.5.0,
<ahasenack> hazmat: but that number is lower than what was there before
<ahasenack> 0.18.5 or even 0.19.0
<hazmat> ahasenack: doh... ouch.. math is rough
<ahasenack> :)
<hazmat> ahasenack: i'll fix that, thanks
<ahasenack> hazmat: cool
<dpb1> hazmat: btw, I committed a fix in lp:python-jujuclient.  wasn't sure how the syncing was going with the github branch.
<dpb1> hazmat: hi btw. :)
<hazmat> dpb1: greetings.. saw the review comments, thanks
<hazmat> dpb1: functional tests are flakey for you? if you could file a bug with errors that would be useful.. there's generally so little logic there, ie return self._rpc that i'm loathe to mock
<dpb1> hazmat: haha, as I speak I'm bootstrapping an ec2 to test.  I was against maas before.
<hazmat> dpb1: re git sync.. its a bit of a dance to get a working copy.. you have to stitch them together ie. two checkouts, move .bzr or .git to the other, and then commit and push to both.
<dpb1> hazmat: gross
<hazmat> no worries about it though
<hazmat> dpb1: yeah.. if you know a better way i'm all ears
<hazmat> dpb1: hope was then killing the bzr and setting up lp mirror
<dpb1> hazmat: so, next time, I'll do that and submit a MP/pull request
<hazmat> like marcoceppi did with amulet
<hazmat> dpb1: sounds good
<lazyPower> hazmat: http://github.com/chuckbutler/git-vendor/
<lazyPower> hazmat: works for me, but i'm tracking history in 1 repository as opposed to 2
<aisrael> Anyone had luck running the tests for charm-tools? It's throwing a `ln: failed to create hard link 'bin/test' => 'scripts/test': Operation not permitted` on me
<lazyPower> aisrael: i haven't run them recently, are you doing this in a vm or on your osx host?
<aisrael> lazyPower: in a vm
<lazyPower> hmm, interesting error response
<kwmonroe> hey hazmat, once upon a time, you mentioned a hosting provider that did wonky architectures.. not site-ox.  do you remember their name?  one-up or up-yours or something like that?
<whit> to answer my earlier question, python jujuclient does not seem to run on python3
<whit> hazmat, does HEAD support 3?
 * whit saw six installed, but gets syntax errors on 3.4
<hazmat> kwmonroe: there's a few . one for power one for arm
<hazmat> the power one was  a subsidiary of ovh
<hazmat> whit it should
<hazmat> whit the release version that supports is fubar'd due to bad version math, i'll update with a newer release so it pulls down the right one (ie. 0.5  < 0.18 oops)
#juju 2015-02-26
<Roconda> Hey there! I'm trying to deploy juju on a public openstack provider(cloudvps) but when bootstrapping I get the following message: http://privatepaste.com/afbed44133
<Roconda> any clue?
<Guest21093> Hi! I have added a unit to a service but it's stuck in pending. There is no /var/log/juju/unit-*.log on the server but the output of /var/log/juju/machine-*.log shows an error: http://privatepaste.com/0a4646aa53
<Guest21093> Does anyone know what is the problem?
<rick_h_> Roconda: some debug info to look into with a quick google search of your error: http://askubuntu.com/questions/475348/juju-bootstrap-no-data-for-cloud https://bugs.launchpad.net/juju-core/+bug/1242476
<mup> Bug #1242476: Bootstrap fails with --upload-tools on private openstack cloud <bootstrap> <canonical-is> <canonical-webops> <openstack-provider> <upload-tools> <juju-core:Triaged> <https://launchpad.net/bugs/1242476>
<rick_h_> Roconda: basically checking the images available and the url for the list of images/etc
<xetqL> hello everone
<xetqL> *everyone
<xetqL> Someone here knows a bit about LXC issues with Juju?
<Roconda> rick_h_: thanks! Got it working. Had to add an image manually and add --meta-source and --upload-tools to the bootstrap command
<rick_h_> Roconda: awesome
<xetqL> Nice
<marcoceppi> xetqL: what's your question?
<xetqL> hi marcoceppi, i'm trying to deploy some services with juju and i wish i could create lxc containers on remote machines (with maas), but my lxc containers are stuck in pending ...
<xetqL> I think it's a dns problem because i cannot fetch some archives, but the remote machine itself is connected to the internet and can ping google - it's just the lxc
<xetqL> I can deploy services on the root container without any problem
<marcoceppi> xetqL: logs for that machine would be very helpful, /var/log/juju/machine-#.log for example on the host where you're trying to spin up LXC machines
<Muntaner> hello
<xetqL> a lot of error /dev/mapper/control: open failed: Operation not permitted
<xetqL> Failure to communicate with kernel device-mapper driver.
<xetqL> Check that device-mapper is available in the kernel.
<xetqL> Command failed
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty/main bridge-utils amd64 1.5-6ubuntu2
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty/main msr-tools amd64 1.3-2
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty/main cpu-checker amd64 0.7-0ubuntu4
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty/main liberror-perl all 0.17-1.1
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main git-man all 1:1.9.1-1ubuntu0.1
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://archive.ubuntu.com/ubuntu/ trusty-updates/universe rsyslog-gnutls amd64 7.4.4-1ubuntu2.5
<xetqL>   Temporary failure resolving 'archive.ubuntu.com'
<xetqL> Err http://security.ubuntu.com/ubuntu/ trusty-security/main git-man all 1:1.9.1-1ubuntu0.1
<xetqL>   Temporary failure resolving 'security.ubuntu.com'
<xetqL> <4>init: plymouth-upstart-bridge main process (888) terminated with status 1
<xetqL> Stopping landscape-client daemon
<xetqL>    ...fail!
<xetqL>  * Stopping rsync daemon rsync
<xetqL>    ...done.
<xetqL>  * Stopping open-vm guest daemon vmtoolsd
<xetqL>    ...done.
<xetqL>  * Asking all remaining processes to terminate...
<xetqL>    ...done.
<xetqL>  * All processes ended within 1 seconds...
<xetqL>    ...done.
<xetqL>  * Deactivating swap...
<xetqL>    ...fail!
<drbidwell> I am running ubuntu-14.04.2 and juju 1.21.3 building and openstackHA.  I attempted to start 3 sessions of rabbitmq in lxc. Instances 0 and 2 started just fine, instance 1 failes to start rabbitmq.  /var/log/rabbitmq/startup_log contains:Error description:
<drbidwell>    {could_not_start,rabbit,
<drbidwell>        {bad_return,
<drbidwell>            {{rabbit,start,[normal,[]]},
<drbidwell>             {'EXIT',
<drbidwell>                 {rabbit,failure_during_boot,
<drbidwell>                     {error,
<drbidwell>                         {timeout_waiting_for_tables,
<drbidwell>                             [rabbit_user,rabbit_user_permission,rabbit_vhost,
<drbidwell>                              rabbit_durable_route,rabbit_durable_exchange,
<drbidwell>                              rabbit_runtime_parameters,
<drbidwell>                              rabbit_durable_queue]}}}}}}}; What is it doing wrong and how do I correct it?  I would like to understand what went wrong so I can avoid it on the retry and successive builds.
<hazmat> drbidwell: please use pastebins for multiline output and post link in irc channels.. paste.ubuntu.com
<hazmat> xetqL: ^
<lazyPower> aisrael: when you reviewed this mongodb MP did you get a collision during the merge?
<lazyPower> https://code.launchpad.net/~james-page/charms/trusty/mongodb/kilo-support/+merge/247123 <- that guy
<aisrael> lazyPower: you know what? I did, and forgot to mention it. Another merge must have updated charm helpers as well (which is all this one is doing, too)
<lazyPower> aisrael: that MP was declined by you :)
<lazyPower> aisrael: no rush - but when you get that sorted ping me and i'll approve the final outcome.
<aisrael> lazyPower: another another one :) One already approved
<lazyPower> ah ok
<lazyPower> so one got merged then?
<lazyPower> i'm multi-tasking and admittedly didn't look @ the mongodb charm tree in the store to see what was up there.
<aisrael> an update of charm-helpers, but it may not be current enough for kilo support, which is why I approved james page's merge
<lazyPower> I can follow up on that MP and ask directly
<lazyPower> seems like the correct path forward
<aisrael> Thanks, lazyPower.
<lazyPower> cheers
<Roconda> So today i've been playing around with juju and openstack. Found some issues when trying to bootstrap: it could not assign a floating ip and could not connect to the spawned juju machine. I've tried setting
<Roconda> 'authorized-keys-path: /home/vagrant/.ssh/' but it doesn't seem to work.
<Roconda> anyone has a clue?
<sarnold> Roconda: are you sure that setting takes a directory rather than a filename?
<Roconda> sarnold: just tried it, not working, time out.
<sarnold> Roconda: does ssh to that host work?
<Roconda> sarnold: yes, it just prompts a passwd auth instead of logging in
<Roconda> ERROR juju.cmd supercommand.go:323 failed to bootstrap environment: waited for 2m0s without being able to connect: Permission denied (publickey,password).
<lazyPower> Roconda: are you on an OSX host running the juju vagrant image to interface with your openstack env?
<Roconda> lazyPower: nope, elementary OS
<lazyPower> Roconda: ok, makes sense i was trying to figure out where vagrant fit into the overall thing
<lazyPower> allright - so your Openstack Env is stood up and you're trying to bootstrap into that OpenStack env correct?
<Roconda> lazyPower: yep
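Picking up sarnold's point: authorized-keys-path expects a public key file, not a directory. A sketch of the environments.yaml fragment (the environment name and path are illustrative):

```shell
# environments.yaml fragment pointing at a key *file* rather than the
# ~/.ssh/ directory; environment name and path are illustrative.
cat > env-snippet.yaml <<'EOF'
openstack:
  type: openstack
  authorized-keys-path: /home/vagrant/.ssh/id_rsa.pub
EOF
```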
#juju 2015-02-27
<halcyon> Hey.. does anyone have an idea why I cannot ssh into the vm? I already installed juju, set up the vm, and added services and relations
<halcyon> yesterday I was able to view the juju gui admin via web browser
<halcyon> however, I had to shut down the host and today I cannot ssh into vm 1 as it says no route to host...
<halcyon> anyone could help me
<halcyon> hello
<halcyon> could anyone help me why I cannot ssh to machine 1 and as a result the juju-gui admin cannot be view
<halcyon> I do check the status but all services mentioned is in the state of down
<thumper> halcyon: which provider?
<thumper> halcyon: have you checked that the VMs are actually started?
<halcyon> local provider
<halcyon> already started yesterday
<halcyon> thumper: local provider
<thumper> halcyon: what do you see with 'sudo lxc-ls --fancy' ?
<halcyon> thumper: nothing , is it something with lxc??
<thumper> halcyon: unless you told it to use kvm
<thumper> halcyon: you said it was working yesterday?
<halcyon> thumper: yes , working yesterday
<thumper> which version of juju?
<halcyon> thumper: I managed to view gui admin already
<halcyon> thumper: i'm not sure, how do I check the version??
<thumper> juju version
<halcyon> thumper: 1.21.1-trusty-amd64
<thumper> are you sure you used the 'sudo' part?
<thumper> otherwise you are asking about user space lxc containers
<thumper> sudo lxc-ls --fancy
<halcyon> yes I used sudo
<halcyon> anyway I can ssh into machine 2
<halcyon> thumper: anyway I can connect to machine 2
<thumper> if lxc-ls is not showing the containers, then you are probably not using the local provider
<thumper> type 'juju env'
<halcyon> thumper: in my environments.yaml file I mentioned kvm as the container
<thumper> ah
<thumper> then you are using kvm containers, not lxc
<thumper> which would explain why you can't see lxc containers
<thumper> I'm not familiar enough with the kvm tools to help just now, but likely that the first machine didn't restart properly
<thumper> not sure why
<halcyon> ok I checked juju env, it says local
<halcyon> thumper: how can I restart machine 1
<halcyon> machine 2 I can connect to as well
<thumper> what does juju status say? (pastebin plz)
<halcyon> here is machine 1 from juju status:
```
"1":
  agent-state: down
  agent-state-info: (started)
  agent-version: 1.21.1.1
  dns-name: 192.168.122.109
  instance-id: halcyon-local-machine-1
  series: precise
  containers:
    1/lxc/1:
      agent-state: down
      agent-state-info: (started)
      agent-version: 1.21.1.1
      dns-name: 192.168.122.134
      instance-id: halcyon-local-machine-1-lxc-1
      series: precise
      hardware: arch=amd64
```
<halcyon> thumper: this is machine 1
<thumper> you can try this:  'ssh ubuntu@192.168.122.134' to see if the vm is up
<thumper> hang on
<thumper> that is the lxc container inside
<thumper> ssh ubuntu@192.168.122.109
<halcyon> thumper: yes lxc container is there
<halcyon> thumper: ssh ubuntu@192.168.122.109 still gets the same error, no route to host
<halcyon> thumper: do you have any idea why?
<thumper> yes, the VM didn't start
<thumper> I don't know why the VM didn't start
<thumper> I'd probably start by looking for the kvm logs locally
<halcyon> thumper: ok , how do I check kvm logs locally??
 * thumper shrugs
<thumper> not sure
<thumper> I'd guess something like: /var/logs/kvm
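Since the log path above is only a guess, here is a hedged sketch: the local provider's KVM guests are managed through libvirt, which keeps per-guest logs under /var/log/libvirt/qemu/ and can list and restart guests with virsh. The domain name is taken from halcyon's juju status instance-id; the runnable part just parses sample virsh output.

```shell
# Hedged sketch, assuming libvirt-managed KVM guests:
#   virsh -c qemu:///system list --all            # list guests, incl. stopped
#   virsh -c qemu:///system start halcyon-local-machine-1
#   less /var/log/libvirt/qemu/halcyon-local-machine-1.log   # per-guest log
# Runnable illustration: pick out "shut off" domains from sample virsh output.
virsh_output=' Id    Name                           State
----------------------------------------------------
 2     halcyon-local-machine-2        running
 -     halcyon-local-machine-1        shut off'
printf '%s\n' "$virsh_output" | awk '$3 == "shut" { print $2 }'
```

The same awk filter works against real `virsh list --all` output to find which juju machines failed to come back after a host reboot.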
<thumper> if you are not familiar with kvm, why use it with the local provider?
<halcyon> thumper: previously I deployed services using this command
<thumper> you obviously had to choose it
<halcyon> thumper: i'm just a beginner, I'm only following the instructions
<thumper> ok, in which case, lxc is much better...
<thumper> which instructions said to use kvm?
<halcyon> like this my command : juju deploy juju-gui --to lxc:1
<halcyon> hmm... using a dell whitepaper
<halcyon> from the above command I understand that I deploy a service to an lxc container on machine 1
<halcyon> thumper: what should I do now to start my VM back up
<halcyon> thumper: I tried editing the environments.yaml file and replacing it with container: lxc
<halcyon> thumper: ??
<thumper> if you are just playing around, destroy the environment and try again with  the basic local provider
<thumper> and just use 'juju deploy juju-gui' with no --to
<thumper> or alternatively, find the docs on 'juju quickstart'
<thumper> which also helps here
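The reset thumper suggests can be sketched as follows (juju 1.x CLI; the environment name "local" and the config path are assumptions based on the conversation):

```shell
# Tear down and start over with the default LXC-backed local provider:
#   juju destroy-environment local
#   (remove the 'container: kvm' line from ~/.juju/environments.yaml)
#   juju bootstrap
#   juju deploy juju-gui      # no --to placement needed
#   juju expose juju-gui
# Runnable illustration: a minimal local stanza with no 'container' override,
# which makes the local provider default to LXC containers.
cat > /tmp/environments.yaml <<'EOF'
default: local
environments:
  local:
    type: local
EOF
grep -q 'container: kvm' /tmp/environments.yaml && echo "kvm override present" || echo "no kvm override: defaults to lxc"
```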
<lazyPower> thumper: quickstart isn't in the official doc site
<thumper> lazyPower: oh? why?
<lazyPower> we've got a pinned TODO to get that done - it only exists in the UX blog at present
<lazyPower> should see that land early next week actually
<rick_h_> thumper: because quickstart doesn't support windows and so it's not the official way to go
<thumper> rick_h_: ok, cheers
<rick_h_> thumper: and we've got a todo to help with a quickstart docs section as a sub-section then to help it
<thumper> lazyPower: could you point halcyon in the direction of some good new starter documentation?
<rick_h_> but <3 you think of us :) quickstart ftw
<thumper> rick_h_: you guys are awesome
<halcyon> thumper: can I use the command line for quickstart
<thumper> rick_h_: remember that I love you guys when you do my 360 review :-)
<thumper> rick_h_: if you were asked that is
<lazyPower> thumper: as of today - https://jujucharms.com/docs/getting-started
<rick_h_> halcyon: yes, quickstart is a cli tool that helps bootstrap and get a gui running quickly
 * thumper chuckles
<rick_h_> thumper: yea, on my todo list
<rick_h_> thumper: damn 360s :P
<thumper> I've done mine...
<halcyon> thumper: if it means basic local, does that mean w/o kvm??
<thumper> correct
<rick_h_> man, I had 8 reviews to do already just with myself/team. Though I like the nice short/sweet format of the 360's this go round
<halcyon> thumper: which manual did you use? could you share it with me
<lazyPower> halcyon: https://jujucharms.com/docs/config-LXC
<thumper> halcyon: lazyPower recommends  https://jujucharms.com/docs/getting-started.  I didn't use a manual, I work on the Juju project
<halcyon> thumper: I have already played around several times, and this will be the 4th time I need to destroy the environment
 * thumper destroys environments regularly :)
<lazyPower> halcyon: i'm sorry you've had a rough go, it's hard for us to QA any dell documents around juju or even know how up-to-date they are.
<lazyPower> however, that's the point of the local provider: it's intended to be a staging environment for development and evaluation of charms - it's not a full provider intended for production use
<lazyPower> that would be left to a public/private cloud (and if they aren't officially supported - the manual provider lets you orchestrate them too!)
<lazyPower> i invite you to look at our docs, and if you have any issues with them, file a bug - we'd be more than happy to help you work through the problems and fix anything you feel is a weak point in our official docs
<halcyon> lazypower: I see.. btw thank you.. I'm a bit stressed because I don't know why I suddenly can't ssh to my VM ...:D
<lazyPower> halcyon: so you say VM - you're using a KVM provider
<lazyPower> are you using a front end to that like virt-manager?
<halcyon> lazyposer: yes, I do have virt-manager installed as well
<lazyPower> if so, why not fire it up using virt-manager and give it a look? see what the IP address is. i imagine what has happened is your KVM guest is set to obtain an IP from your DHCP server, and when it came back online the IP address changed
<halcyon> *lazyPower
<lazyPower> which means we'll need to edit some config files
<lazyPower> halcyon: its super simple - i've outlined the process here - http://blog.dasroot.net/reconnecting-juju-connectivity.html
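Alongside the blog post, one hedged way to find a guest's new DHCP address is the libvirt dnsmasq lease file (default path /var/lib/libvirt/dnsmasq/default.leases; the MACs and IPs below are invented for illustration):

```shell
# On a real host you would read /var/lib/libvirt/dnsmasq/default.leases;
# here we parse a fabricated copy. Lease format: expiry MAC IP hostname id.
cat > /tmp/default.leases <<'EOF'
1425000000 52:54:00:aa:bb:cc 192.168.122.140 halcyon-local-machine-1 *
1425000100 52:54:00:dd:ee:ff 192.168.122.134 halcyon-local-machine-2 *
EOF
awk '$4 == "halcyon-local-machine-1" { print $3 }' /tmp/default.leases
```

If the printed address differs from what juju status reports, that mismatch is exactly the reconnection problem lazyPower describes.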
<rick_h_> lazyPower: did you get your demo stuff all good?
<rick_h_> lazyPower: e.g. keys and isos end up helping?
<lazyPower> rick_h_: i've got the iso's but i've been having a real bummer of a time getting the win2k12 image to boot properly
<rick_h_> :(
<halcyon> lazyPower: ok let me clear my mind, I'll destroy the environment again and try to bootstrap a new environment, still local... and I'll not follow the dell whitepaper anymore
<lazyPower> yeah 11'th hour of this project
<lazyPower> which is why you see me this late :)
<lazyPower> halcyon: i'll be around to help if you need anything
<halcyon> it confuses me anyway to refer to many manuals at once
<lazyPower> halcyon: i would recommend giving the LXC packages a try - which was linked above.
<rick_h_> lazyPower: ugh, best of luck man
<lazyPower> rick_h_: thanks :)
<halcyon> lazyPower: ok noted
<halcyon> lazyPower: thnkx :)
<halcyon> lazyPower: I already followed the manual using the local provider
<halcyon> lazyPower: but when I exposed the service I cannot view it in a web browser, it said bad gateway... is it something to do with my firewall??
<lazyPower> hard to tell - what service?
<halcyon> halcyon: wordpress
<halcyon> lazyPower: wordpress
<lazyPower> halcyon: ok, i'm assuming you deployed a mysql service as well, set data-size to 20% (it's bugged on the local provider, that's called out in the readme) and added a relation between wordpress and mysql?
<halcyon> lazyPower: do I need to do something with the bridging part? I did some bridge connectivity before and it was able to display
<lazyPower> *data-set-size
<halcyon> lazyPower: I forget to set constraint yet on the machine I created yet
<halcyon> lazyPower: just try to exposed the service first
<lazyPower> halcyon: let me get some more info about your deployment environment, this is all running on your local machine right?
<halcyon> lazyPower: yes local machine
<lazyPower> ok, good - the 502 bad gateway typically is one of 2 things
<lazyPower> you either attempted to load the service before it was done configuring, or something went awry during the service configuration
<lazyPower> the 502 bad gateway is coming back from apache trying to communicate with the php-fpm daemon
<lazyPower> run this and see if there is any log information being emitted from the wordpress unit
<lazyPower> juju debug-log -x machine-0
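A few juju 1.x debug-log variants, sketched with example entity tags; the same include/exclude idea can be applied to a captured log with grep, which is what the runnable part below demonstrates on fabricated log lines:

```shell
# juju 1.x debug-log filters (entity tags here are examples):
#   juju debug-log --lines 100
#   juju debug-log --include unit-wordpress-0   # only the wordpress unit
#   juju debug-log --exclude machine-0          # everything but machine 0
# Equivalent grep filter over a captured aggregated log; sample lines:
cat > /tmp/all-machines.log <<'EOF'
machine-0: 2015-02-27 10:00:01 INFO juju.worker all workers started
unit-wordpress-0: 2015-02-27 10:00:02 ERROR juju.worker.uniter hook failed: "config-changed"
unit-mysql-0: 2015-02-27 10:00:03 INFO juju.worker.uniter db-relation-joined
EOF
grep '^unit-wordpress-0:' /tmp/all-machines.log
```

A hook failure line like the sample one is the usual smoking gun behind a 502 from a half-configured service.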
<halcyon> lazyPower: I already exposed it and the command executed correctly; then when I ran juju status it displayed the public ip-address
<halcyon> lazyPower: I'm trying once again
<halcyon> lazyPower: and will let u know hows
<lazyPower> ok
<halcyon> lazyPower:  i'm able to display and view the wordpress
<lazyPower> good news!
<halcyon> using local machine
<halcyon> lazyPower: tq :)
<halcyon> lazyPower: hey! let's say I would like to view the service using a web browser from another machine on my home network - it won't allow that, right?? so how would I configure all that? it's something to do with networking, am I right?
<lazyPower> halcyon: it can be tricky - i suggest reading this thoroughly before you follow it blindly
<lazyPower> http://blog.dasroot.net/making-juju-visible-on-your-lan.html
<lazyPower> halcyon: what you can do is use sshuttle to create a vpn style connection between two pcs and you wont have to make any changes in your networking
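The sshuttle approach lazyPower mentions might look like this, run on the second machine and pointed at the juju host. The host address is made up, and 10.0.3.0/24 is assumed to be the LXC bridge subnet on the juju box; adjust both to your network:

```shell
# On the second machine (the one wanting to browse the services):
#   sshuttle -r ubuntu@192.168.1.50 10.0.3.0/24
# While that runs, the second machine can browse straight to a unit's
# private address, e.g. http://10.0.3.25/, with no routing changes.
SUBNET="10.0.3.0/24"
echo "would forward $SUBNET via: sshuttle -r ubuntu@192.168.1.50 $SUBNET"
```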
<halcyon> lazyPower: will try read the article first
<halcyon> lazyPower: do you have experience with how the host can supply IPs to the services or VMs created under juju?
<halcyon> halcyon: is it possible to do so??
<halcyon> lazyPower: is it possible to do so?
<Muntaner> hello guys o/
<marcoceppi> hey Muntaner
<Muntaner> hi marcoceppi
<Muntaner> did you read my question in juju-dev? :)
<marcoceppi> so, the way WordPress does this, you don't want to install MySQL on the machine itself because then you lose scale
<marcoceppi> Muntaner: yup!
<jrwren> Muntaner: do you mean the web application is installed via apt and it is asking you debconf settings?
<Muntaner> marcoceppi, exactly: I would like to use the mysql existing charm
<marcoceppi> Muntaner: what software are you installing? out of curiosity
<Muntaner> the web application (prestashop) asks, in installation phase, where the mysql server is
<Muntaner> yep marcoceppi, it's prestashop
<jcastro> does it write that information to a config file?
<Muntaner> marcoceppi, https://www.howtoforge.com/prestashop-ubuntu-14.04 <- I'm trying to follow that guide
<frenda> Hi
<Muntaner> jcastro, dunno exactly - I started working on it this morning
<marcoceppi> Muntaner: Yeah, one thing you can do is just bypass the whole web interface and write the configuration file directly when you get the values
<frenda> I've written an app, it's a accountancy software. I want to force people to use a hosted solution via my domain.
<frenda> I need a hosting service and I need a software to manage auto-installation
<frenda> Users should be able to update themselves!
<Muntaner> marcoceppi, so, does prestashop use this configuration file? not sure about that
<frenda> I want something like this: https://community.nodebb.org/topic/2552, what is you advice?
<frenda> Can juju help me?
<marcoceppi> Muntaner: another thing you can do is just use curl to hit the webpage when the database relation connects and seed the information you would normally fill out to have it completed there
<marcoceppi> Muntaner: let me take a looksy
<Muntaner> marcoceppi, another curiosity: how do I tell to the mysql charm (or my personal charm?) the user and password to use on the DBMS?
<Muntaner> I know I look noobish, but first time I try this stuff :)
<marcoceppi> Muntaner: you don't tell it, when you connect MySQL to your charm MySQL will create a schema name, username and password when you connect and it'll tell that information to your charm
<frenda> This is an example: http://vanillaforums.com/plans
<marcoceppi> Muntaner: It looks like you can put the configuration values in config/settings.inc.php
<frenda> I want to have something like that, can juju do it?
<marcoceppi> Muntaner: I recommend checking out the WordPress charm and seeing how it manages the database information, etc. You'll need to do something similar when writing prestashop
<marcoceppi> Muntaner: https://jujucharms.com/wordpress/
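The pattern marcoceppi describes, boiled down to a minimal hypothetical bash hook for prestashop. Treat it as a sketch: the settings path and `_DB_*` defines follow PrestaShop's settings.inc.php convention from the howtoforge guide, and user/password/database are relation keys the mysql charm publishes (the host key is an assumption; private-address is always available). The runnable part only writes the hook file:

```shell
# Sketch of a db-relation-changed hook that bypasses the web installer.
mkdir -p /tmp/prestashop-charm/hooks
cat > /tmp/prestashop-charm/hooks/db-relation-changed <<'EOF'
#!/bin/bash
set -e
# relation-get is a hook tool; it only works while juju runs the hook.
DB_USER=$(relation-get user)
DB_PASS=$(relation-get password)
DB_HOST=$(relation-get host)      # or: relation-get private-address
DB_NAME=$(relation-get database)
# Write the values straight into prestashop's config file.
cat > /var/www/prestashop/config/settings.inc.php <<PHP
<?php
define('_DB_SERVER_', '$DB_HOST');
define('_DB_NAME_', '$DB_NAME');
define('_DB_USER_', '$DB_USER');
define('_DB_PASSWD_', '$DB_PASS');
PHP
EOF
chmod +x /tmp/prestashop-charm/hooks/db-relation-changed
```

The WordPress charm linked above does the same dance with more polish; it is the better reference for production use.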
<Muntaner> marcoceppi, that's a really good idea
<Muntaner> marcoceppi, so... does wordpress have an apache2 "inside" itself?
<marcoceppi> Muntaner: in this case yes, the charm even lets you choose between nginx or apache2
<jcastro> frenda, sure, you can use juju to deploy whatever a customer wants for example
<Muntaner> marcoceppi, great!
<Muntaner> it looks clearer now
<frenda> So, Is juju a SaaS?
<marcoceppi> frenda: well, no
<marcoceppi> frenda: you can create a SaaS around juju, but it will require you to codify all the SaaS bits
<jcastro> not really, you can use juju to deploy a saas
<marcoceppi> Juju is just a deployment/service orchestration tool
<frenda> It's not important for to be/not be a SaaS; I just need something to automate installation of my software for customers
<jcastro> yeah, that's what it can do
<frenda> for me*
<frenda> So Juju can do it, yes?
<jcastro> yes
<frenda> Do you have any hosted juju or I should find a hosting service?
<jcastro> I don't know of any hosted juju solutions atm
<mwak> helo
<frenda> What does 'jujucharms.com' offer currently? Isn't it installed Juju?!
<frenda> Is Juju translate-able?
<frenda> Is it also RTLable?
<marcoceppi> frenda: no, jujucharms.com is a sandbox to play with juju. Juju is written in golang; not sure if the client is easily translatable or not, and not sure what RTLable is
<frenda> RTL: Right-to-Left for RTL language
<marcoceppi> well, juju is a command line tool, I'm guessing you're talking about the Juju GUI
<frenda> yes, I'm talking about demo.jujucharms.com (GUI)
<marcoceppi> frenda: that's something separate from juju but works with juju. It's actually itself a charm
<marcoceppi> frenda: https://github.com/juju/juju-gui
<frenda> well; thanks
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: Hi guys... just letting you know that today is my last day for canonical, and there are over 30 pending branches (and some that couldn't ask for merge because of pending bugs): https://code.launchpad.net/~nicopace/+activereviews https://bugs.launchpad.net/~nicopace/+reportedbugs
<jcastro> ack
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: all of the merge requests are pretty small... i'm sure you will be able to review them pretty fast (if you have time)
<whit> hey nicopace dunno how much time you have left, but combining tests against the same charm into a single MP would greatly speed up reviewing
<nicopace> whit: i don't get how this could increase the speed, as each of the tests builds its own environment
<whit> nicopace, not the speed of the tests running, the amount of work required for a reviewer
<nicopace> whit: btw, i left you some comments on my position about integrating all tests in one setup-teardown step
<whit> nicopace, I saw those
<whit> nicopace, I believe I replied
<nicopace> i think not :(
 * whit will check for new one
<nicopace> whit ^
<whit> ah ok, maybe I did not hit send
<whit> nicopace, in summary, there are many times when in the course of reviewing charms, reviewers need to run all the tests themselves
<nicopace> whit, and is better if you have them all together?
<nicopace> whit, i can merge them for you if it lightens the job
<whit> nicopace, wrt merge proposals, a single MP for a group of tests is preferable
<arosales> nicopace: we'll hammer on them. How much longer are you available today?
<nicopace> whit: ok, i'll send you a list of links via email then
<nicopace> arosales: 4 more hours
<whit> nicopace, wrt the separation of tests and fixtures, don't worry about that, the way you have done it is idiomatic for how tests are written now
 * whit would just like to improve that
<nicopace> whit: i've already done that
<jshieh> Hey, is there a maintained list or a web site where I can look at all the available charms that work on Power?  Obviously 14.04, but I'm not to sure that ALL trusty charms work on Power.  Clarification appreciated
<nicopace> whit: i disagree in merging multiple tests into one big setup-multiple tests-teardown block
<nicopace> as one failure (or side effect) could affect the others
<nicopace> whit ^
<whit> nicopace, it depends on the test and what you are testing, of course, but for the examples I was looking at with logstash forwarder, iirc most of those tests were not creating actions that should have side effects
<whit> and considering the time for spin up and teardown, if even one of those tests did not need isolation, combining would be a win
<whit> nicopace, understand this though, what you did was perfectly correct for the corpus of examples we have
<whit> I just think we are generally doing it kinda wrong
<whit> nicopace, please understand per my message to the list, this is a criticism of how we do charm testing, not of your work
<mbruzek> jshieh: There used to be a list only a few of the charms did not work on Power (ones that depended on specific architecture).  Which charms are you specifically interested in?
<jshieh> Actually, my goal is to "debug" those that don't work on Power. i.e. find out what the issue is and help it along, where possible
<jshieh> so I'm trying to find the delta
<jshieh> And, also trying to determine if the list has widened since the last list was generated and debug that too, if possible
<nicopace> whit: i understand
<nicopace> whit: i'll merge the different branches, and send you an email
<whit> nicopace, awesome! thanks
<mbruzek> jshieh: https://bugs.launchpad.net/charms/+source/hive/+bug/1356086
<mup> Bug #1356086: Charm fails on PPA  <audit> <ppc64el> <hive (Juju Charms Collection):New> <https://launchpad.net/bugs/1356086>
<mbruzek> jshieh: A charm could fail on power for many reasons, in this case it is using a ppa which does not build a power version
<mbruzek> jshieh: https://bugs.launchpad.net/charms/+source/phpmyadmin/+bug/1350023
<mup> Bug #1350023: Charm fails on charm helper PPA <audit> <ppc64el> <phpmyadmin (Juju Charms Collection):New> <https://launchpad.net/bugs/1350023>
<jshieh> right.  appreciate this.  Also, I have:  https://bugs.launchpad.net/charms/+bugs?field.tag=ppc64el
<mbruzek> jshieh: Yes that is the list
<mbruzek> that we know about
<jshieh> okay - thanks.  I wanted to confirm.  Building a list, etc...thanks for pointers.
<mbruzek> jshieh: happy to help, if you have more questions ping me directly I worked on this last year.
<jshieh> ahhh, great.  will be in touch soon!
<mbruzek> jshieh ... oh he left
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: i've sent you the first batch of merged tests
<nicopace> after lunch i'll send you the rest
<nicopace> regards
<arosales> nicopace: thanks
<Muntaner> marcoceppi,
<marcoceppi> o/
<Muntaner> I wrote my charm code. When I deploy it, it runs in juju (I can see it in juju-gui), but no VM is instantiated and it stays in pending state
<Muntaner> marcoceppi, I just wrote in the install and the start hook some basic things (apt-get, etc) and nothing more
<marcoceppi> Muntaner: what provider are you using?
<Muntaner> OpenStack, a local all-in-one installation
<Muntaner> other charms (mysql, wordpress, etc.) deploy perfectly
<Muntaner> also, I did some juju-log... where exactly are the logs?
<Muntaner> marcoceppi: where do I tell the charm what kind of VM it should launch?
<mbruzek> Muntaner that is constraints
<mbruzek> Muntaner: https://juju-docs.readthedocs.org/en/latest/constraints.html
<jcastro> https://jujucharms.com/docs/charms-constraints
<jcastro> is the proper link
<jcastro> https://jujucharms.com/docs/reference-constraints
<jcastro> also
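Constraints are how you tell juju what size of machine to provision, independent of the charm. A sketch of the juju 1.x syntax (the charm name and values are examples), plus a runnable illustration of the space-separated key=value format the docs describe:

```shell
# juju 1.x constraint usage (values are examples):
#   juju deploy mycharm --constraints "mem=2G cpu-cores=2"   # per-service
#   juju set-constraints mem=4G                              # environment-wide
# Runnable illustration: constraints are space-separated key=value pairs.
CONSTRAINTS="mem=2G cpu-cores=2 root-disk=8G"
for c in $CONSTRAINTS; do
    printf '%s -> %s\n' "${c%%=*}" "${c#*=}"
done
```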
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: i've sent you the second (and last) batch of merged tests via email
<marcoceppi> nicopace: thank you!
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: hope you can look over them... i'm going offline now... but i'll check any review you do later today
<arosales> nicopace: thanks for the work on those tests!
<sinzui> rbasak, bug 1417771 is fixed. there is nothing for you or I to do
<mup> Bug #1417771: juju-core vivid ppc64el fails to build <packaging> <ppc64el> <vivid> <juju-release-tools:Invalid by sinzui> <gcc-4.9 (Ubuntu):Invalid> <gccgo-5 (Ubuntu):Fix Released> <gccgo-go (Ubuntu):Fix Released by canonical-server> <gcc-4.9 (Ubuntu Vivid):Invalid> <gccgo-5 (Ubuntu Vivid):Fix
<mup> Released> <gccgo-go (Ubuntu Vivid):Fix Released by canonical-server> <https://launchpad.net/bugs/1417771>
<kwmonroe> hey nicopace, i'm disapproving your individual merge proposals in favor of all-tests proposals where applicable.. don't take it personally though -- the work is much appreciated!  i just want to make sure future reviewers see that we're consolidating comments into the all-tests merge proposals.
<kwmonroe> and thanks again, btw, for merging those together.  it really does help speed up reviews.
<nicopace> kwmonroe: thanks... i forgot to remove them
<kwmonroe> np - thank you nicopace!  wanted to make sure i wasn't coming off like a jerk with a bunch of naks ;)  the re-merged proposals are much appreciated.
<nicopace> whit, marcoceppi, asanjar, jcastro, arosales: i've replied to your comments, and removed all the unneeded individual merge requests. Also, you can look over the merge proposals here: https://code.launchpad.net/~nicopace/+activereviews
<lp|away> nicopace: Thanks for that cleanup. o/
<lp|away> and all the effort on those tests to boot
#juju 2015-02-28
<jose> marcoceppi: hey, is there any way to manually clear an item from the revq?
<marcoceppi> jose: let me know the item and I'll do it
#juju 2015-03-01
<jose> marcoceppi: most of the ~nicopace merges are 404'in
<marcoceppi> jose: yeah, I'll clean those out in a hot min
<jose> no prob, just wanted to give you a heads up :)
<jose> thanks!
#juju 2016-02-29
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/mysql/forward-compat-xenial/+merge/287311
<jamespage> if you have a sec
<beisner> jamespage, +1 @ https://code.launchpad.net/~james-page/charms/trusty/mysql/forward-compat-xenial
<beisner> jamespage, fwiw, that should unblock our gate on several charm amulet tests that started failing with the python 2 default change.
<jamespage> icey, for https://code.launchpad.net/~chris.macnaughton/charms/trusty/ceph-osd/add-encryption/+merge/287483
<jamespage> what happens with regards to keys when I turn on that feature?
<icey> they get placed in /etc/ceph/dmcrypt-keys
<icey> jamespage:
<icey> (that's ceph's default key location)
<jamespage> generated?
<icey> yes
<icey> ceph generates keys and drops them in that location if you don't specify otherwise
<jamespage> so this deals with the "someone steals the disks" problem, but not the "someone steals the server" problem
<jamespage> ?
<icey> yes
<jamespage> I'd really love to see an amulet test for this as well
<icey> server problem is more complex and will have a subordinate charm that abuses this a bit :)
<jamespage> not for every combination but for one would be nice
<icey> jamespage: agreed, I'd like to do an amulet that adds a disk and checks that the directory is created methinks
<jamespage> +1
<jamespage> the change generally looks OK to me
<jamespage> I'd be tempted to drop that version check - firefly is now the oldest supported release of ceph we have
<jamespage> so only someone doing something really backward would hit an error there....
<icey> then we could remove all of the version checks :)
<jamespage> yes I know
<jamespage> that would be a nice bit of maintenance todo
<icey> how about a separate MP for that :)
<jamespage> yes please..
<jamespage> but don't add a new one if its not required...
<icey> +1 will get the test written, then start working on removing unneeded version checks
<icey> what's the cost of an additional (minimal) MP?
<icey> do we have any example openstack charms that test differing config? It looks like the config is setup once for all tests, and that the tests all verify with the same config currently in the ceph-osd charm
<icey> jamespage: ^
<Prabakaran> Hello Team, we are facing an issue while trying to upload product installer files larger than 700 MB to https://files.support.canonical.com/ - the file upload progress gets interrupted mid-transfer. We have tried both the CLI and UI modes of upload and still face this issue. Could someone please advise?
<beisner> icey, ^ re: cost of diff configs ... is that re: amulet tests or unit tests?
<beisner> icey, fyi - rule of thumb for amulet tests:  if you need to exercise a topology or a configuration which is significantly different than what we exercise there, then that is a candidate for a mojo spec test.
<beisner> icey, otherwise it is actually quite resource-expensive to add an alternate config/topology to exercise against all currently-supported ubuntu+openstack release targets.
<kwmonroe> hey cory_fu jcastro:  Prabakaran was asking about videos related to layered charm development.  do we have any recordings (maybe from Ghent) about the layered approach?
<kwmonroe> Prabakaran: i found this on our youtube channel -- it may be a bit basic for you, but marcoceppi gives a good overview of layers: https://www.youtube.com/watch?v=UzgW1NW1M6A
<sparkieg`> there's https://www.youtube.com/watch?v=aRQcERLnbIQ too
<jcastro> https://www.youtube.com/watch?v=UzgW1NW1M6A
<jcastro> kwmonroe: ^
<kwmonroe> Prabakaran: there's a couple for ya ^^  (thx sparkieg` jcastro)
<Prabakaran> Thank you all ..
<Prabakaran> I will refer those videos...
<icey> beisner: I think there may be some confusion in the communication; I'm adding a single new test to verify config stuff
<beisner> icey, yep that's all good.
<icey> and talking about a separate merge proposal for removing the unnecessary version checks
<bdx> charmers: can we get some review here? --> https://code.launchpad.net/~stub/charms/trusty/postgresql/built/+merge/282588
<jose> bdx: THERE YOU ARE!
<jose> are you busy atm?
<bdx> jose: ehh, define busy .. :-)
<bdx> jose: i have a few mins to spare for you, whats up?
<jose> I wanted to help you out with your charm, that's it
<lazyPower> kwmonroe - ping
<kwmonroe> pong lazyPower
<lazyPower> kwmonroe - i've heard a lot about this "layer:java" from jcastro - but i think what he means is "interface:java" - https://github.com/juju-solutions/interface-java - and this is a pattern I should be adopting when charming up java services, right?
<kwmonroe> right lazyPower, there are interface providers (openjdk, zulu8, and ibm-jdk), but currently only 1 consumer, layer-ubuntu-devenv
<lazyPower> this will be the second consumer I suppose :)
<kwmonroe> w00t
<lazyPower> ok cool so i just implement the interface, and adjust my metadata accordingly
<lazyPower> get some spiffy reactive magic like you have in the interface readme, and build the open-jdk subordinate?
<lazyPower> actually, are those java subordinates already listed in the store?
<kwmonroe> yeah lazyPower, not promulgated, but openjdk is in my namespace
<lazyPower> https://jujucharms.com/u/kwmonroe/openjdk/trusty/6
<lazyPower> look at that spiffy little java dude
<lazyPower> thanks man, i'll let ya know how i get along with this shortly
<kwmonroe> heh, 2 hours on the charm, 2 days on the icon ;)
<lazyPower> amir would be proud ;)
<kwmonroe> lazyPower: the current (simple) java consumer is here: https://github.com/juju-solutions/layer-ubuntu-devenv -- note the provides: java there because we expect javas to be a subordinate.
<lazyPower> ah right, just as iw as putting it under requires
<lazyPower> derp
<kwmonroe> :)
<cory_fu> kwmonroe: We should probably get that openjdk charm promulgated, no?
<cory_fu> Otherwise, the ibm-java charm is gonna beat you.  ;)
<magicaltrout> hold on, one consumer, just cause I've not bothered publishing my charm yet, doesn't mean there is just one consumer! ;)
<kwmonroe> yeah cory_fu, can't have that.  but i think i wanted to revisit the interface: java bits first.  i seem to recall you objecting to the readme..
<kwmonroe> and by "readme", i mean "core logic in the interface"
<kwmonroe> same same
<cory_fu> kwmonroe: What you mean?
<lazyPower> six of one, half dozen of the other ;)
<cory_fu> magicaltrout: Ha, too right!
<firl> lazyPower any ideas when the networking layer might be working with the kubernetes bundles? I am not really able to use it until that gets resolved for me
<lazyPower> firl - we're working on logging infrastructure now
<cory_fu> kwmonroe: I don't actually remember what you're referring to, and I didn't open an issue
<magicaltrout> the manic Xenial LXD hacking has been in part so I can finish off PDI in a sensible manner
<magicaltrout> should have something testable over the next couple of days
<firl> kk, if itâs on the roadmap and you want me to test it, let me know.
<lazyPower> firl - that's not currently in our backlog; it's in the long-term roadmap solvables, but we're not scheduling time for it atm. I'll bring it up with marco to see where it sits on our delivery timeline and if we can squeeze it in directly after cycle close
<lazyPower> firl - that being said, if you want to get some dev cycles in on it, i'm game to help and we'd <3 the contribution
<firl> yeah I would love to, just don't have the dev cycles either :(
<kwmonroe> cory_fu: in the java interface readme, we have example usage for people that are providing the interface (eg, JREs), but we don't document how to consume it.   i think we also wanted to put JAVA_OPTS bits in there for consumers, though i'm fuzzy on that.
<cory_fu> Oh, so it really is just README changes?  Yeah, it needs to be improved
<kwmonroe> also, this was around the time we were talking connected vs related.  but i think we settled on connected.
<cory_fu> But the README on the interface doesn't block the openjdk charm in any way
<cory_fu> kwmonroe: Well, now that seems to have gone to "joined" in the big data charms.  We can't ever decide on anything, can we?  :p
<kwmonroe> yeah cory_fu, so we should decide pretty quick on that... before magicaltrout and lazyPower get to work.  do you have current objections to .connected?
<cory_fu> I don't
<lazyPower> too late on "before lazypower"
<lazyPower> but i'm sniffing on "java.installed"
<kwmonroe> :)  motion carries.  we're connected.
<lazyPower> so if thats wrong, speak now :P
<magicaltrout> don't make me sad :P
<cory_fu> magicaltrout: You ok with "java.connected" as the state name?
<kwmonroe> lazyPower: java.installed is set within the providers (like openjdk).  you want to sniff on java.ready.
<magicaltrout> i'm okay with whatever you're okay with :P
<lazyPower> ah solid
<magicaltrout> whats the difference between java.connected and java.installed?
<magicaltrout> like, i get installed :P but how would it be installed but not connected?
<lazyPower> the step in the dance of communication where the units are aware of one another but may not have all the data required to configure the java service
<magicaltrout> cool
<magicaltrout> whatever, when it doesn't work i'll just defer to you guys anyway :P
<kwmonroe> so... firstly, stop saying java.installed ;)  that doesn't matter for consumers of the java interface.  that's just a thing that the JREs will do to know if they need to install java.  what you guys want (magicaltrout and lazyPower) is to watch for java.ready.  once you see that, you know java has been installed and things like "java_home" and "java_version" are available on the java relation.
 * magicaltrout greps his code
<magicaltrout> oh yeah
<magicaltrout> you're not lying
<kwmonroe> :)
<magicaltrout> okay, whats the difference between ready and connected :P
<magicaltrout> ie: I don't really care, but which do I listen for?
<magicaltrout> java.ready i assume from your comment
<magicaltrout> in which case I shall ignore you all for ever more
<kwmonroe> magicaltrout: you could have your charm report "blocked: waiting on java relation", then @when java.connected @when_not java.ready report "waiting for java to become ready", then @when java.ready report "java in da house"
 * lazyPower raises the java roof
<magicaltrout> heh
<kwmonroe> magicaltrout: so it's just available for you to inform your users when a java relationship is present, but may or may not be ready (apt-get install openjdk can take a few minutes, so people might like to know that the relation is there, but we're waiting on java to ready itself)
<magicaltrout> k
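[editor's note: kwmonroe's three-state status ladder above can be sketched as a runnable fragment. A real charm would hang these branches off charms.reactive @when/@when_not handlers as quoted; here a plain function models the same decision, with the state and message names taken from the conversation.]

```python
# Self-contained sketch of the status ladder for a charm consuming the java
# interface. The charm reports "blocked" with no relation, "waiting" while
# related but not yet ready, and "active" once java.ready is set.

def java_status(active_states):
    """Map the set of active reactive states to a (status, message) pair."""
    if 'java.ready' in active_states:
        # java_home / java_version are now available on the relation
        return ('active', 'java in da house')
    if 'java.connected' in active_states:
        # related, but apt-get install of the JRE may still be running
        return ('waiting', 'waiting for java to become ready')
    return ('blocked', 'waiting on java relation')

print(java_status(set()))
print(java_status({'java.connected'}))
print(java_status({'java.connected', 'java.ready'}))
```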
<lazyPower> kwmonroe - well this is weird. provides:  java: interface: java -  is clearly in my metadata, doesn't look like builder likes it though :(   http://paste.ubuntu.com/15246844/
<magicaltrout> https://github.com/OSBI/layer-pdi here's my attempt lazyPower
<kwmonroe> lazyPower: i'm not sure what you mean by "provides: java: interface: java", but if you do it like magicaltrout's metadata and layer.yaml, you should be 2legit2quit.
<cory_fu> kwmonroe, lazyPower, magicaltrout: https://github.com/juju-solutions/interface-java/pull/2
<kwmonroe> look at the fingers on cory_fu!  i was gonna draft more messages about how the readme is lacking, but you just straight up fixed the problem.  nice.
<cory_fu> :)
<cory_fu> I was waiting for my spark deployment to fail anyway.  ;)
<kwmonroe> ha!
<magicaltrout> acceptable fast fingers, acceptable
<magicaltrout> now if only kwmonroe had written that readme before the meetup, i'd not have been stuck.... :P
<magicaltrout> aww but what would I have done in the bar for 5 hours....
<kwmonroe> saved a bunch of money, for one.
#juju 2016-03-01
<mux> hmm
<mux> I'm having a hard time understanding the difference between 'juju storage <whatever>' and the juju 'storage' charm
<mux> is the 'storage' charm still a thing?
<thumper> mux: the storage charm was a temporary fix until juju got the right primitives in place
<thumper> the juju storage commands should supersede the storage charm
<mux> ahhhh
<stokachu> wallyworld: to elaborate on my question, when I do a juju list-controllers, is there an exposed api call that I can use to get the same data?
<Makyo> Trying to bootstrap an lxd env, but I keep getting "ERROR cannot find network interface "lxcbr0": route ip+net: no such network interface" - what would that mean?
<wallyworld> stokachu:  list-controllers is a client side operation working with the locally stored controllers.yaml file
<wallyworld> there's code to read that file
<wallyworld> but it's not a remote api
<stokachu> wallyworld: ok so my next question is should i rely on the controllers.yaml file?
<wallyworld> for your locally bootstrapped controllers yes
<stokachu> there are several files in ~/.local/share/juju some created by hand (clouds.yaml,credentials)
<mux> man. seriously. the documentation on juju storage is really cryptic and difficult
<mux> days later I'm still trying to figure out how to use juju to deploy something (anything) on AWS, while creating an EBS volume and attaching it, and using that volume for storage
<mux> like, say MySQL or Postgresql or something
<mux> the only one that I can find is the "demo" version of Postgres that supports the storage system, and it hasn't been touched in over a year
<alexisb> mux, this doc helped me recently with making the required charm updates to use storage: https://jujucharms.com/docs/devel/developer-storage
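[editor's note: the juju-2.0-era storage CLI mux is after looks roughly like the following. This is a hedged sketch, not a verified recipe: the store name "pgdata" is illustrative and must match a name declared in the charm's metadata.yaml "storage" section, and the postgresql charm of the time may not have declared one.]

```shell
# deploy on AWS, asking juju to create and attach an EBS volume for the
# charm's declared store (syntax: --storage <store-name>=<pool>,<size>)
juju deploy postgresql --storage pgdata=ebs,10G

# inspect the volumes and attachments juju created
juju list-storage
```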
<jamespage> gnuoy, https://review.openstack.org/#/c/286451/
<jamespage> anyone space a cycle for https://code.launchpad.net/~james-page/charm-helpers/fix-ssl-certs/+merge/287639 ?
<jamespage> tightens up sec for the apache https frontends for openstack
<magicaltrout> kwmonroe: is there a simple layered leader election snippet somewhere I can leach?
<magicaltrout> s/layered/reactive
<marcoceppi> magicaltrout: there's a leadership layer that stub wrote
<magicaltrout> excellent
 * magicaltrout goes off in search of the grail
<marcoceppi> magicaltrout: I think it's used in cassandra xor postgresql
<marcoceppi> magicaltrout: https://git.launchpad.net/layer-leadership/tree/README.md
<marcoceppi> magicaltrout: stub is great at documenting his layers
<magicaltrout> aye, i've bookmarked interfaces.juju.solutions now :P
<cory_fu> magicaltrout: I don't know of a specific example, but it should be as easy as @when('leadership.is_leader')
<magicaltrout> yeah
<magicaltrout> sadly that time has come where I feel compelled to waste hours of my life figuring it out for the first time in search of a dynamically registering pdi cluster....
<marcoceppi> \o/
<magicaltrout> indeed, it shouldn't really be too hard
<magicaltrout> brain fade, whats the templating stuff if I need to dump a variable driven XML file into a node?
<lazyPower> magicaltrout - charmhelpers.core.templating import render
<lazyPower> and use jinja2 templates i suppose
<magicaltrout> ah that jinja2 stuff
<magicaltrout> thanks lazyPower
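[editor's note: the real charm would call charmhelpers.core.templating.render('slave.xml.j2', target, context) with a jinja2 template, as lazyPower says. This stdlib-only sketch shows the same idea of a variable-driven XML file; the element names are illustrative, not PDI's actual config.]

```python
# Render a small XML config from a template plus a context dict, mimicking
# what charmhelpers' render() does with jinja2 (stand-in: string.Template).
from string import Template

template = Template(
    '<slaveserver>\n'
    '  <hostname>$hostname</hostname>\n'
    '  <port>$port</port>\n'
    '</slaveserver>\n')

# in a charm, the context would come from config-get / relation data
xml = template.substitute(hostname='10.0.0.5', port='8081')
print(xml)
```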
<magicaltrout> weird. PDI can have multiple masters
 * magicaltrout ignores that fact
<marcoceppi> magicaltrout: that's actually okay with leadership
<marcoceppi> magicaltrout: you don't have to have a 1-to-1 match of leader to master
<lazyPower> hey marco
<marcoceppi> juju just says "hey, you're a leader, you can help coordinate", so if that means they, the code, says "me and my pal here unit 2 are masters" that's all cool
<marcoceppi> hey lazyPower
<lazyPower> if i'm iterating through a scope:unit relation, and i want to poll back all the cached data from the relations, i return the conversation scope, not the unit, right?
<lazyPower> i only need the conv.scope if i'm setting data, to read just return the conversation object?
<marcoceppi> lazyPower: the unit is the conversation scope
<lazyPower> for conv in self.conversations(): yield conv
<marcoceppi> conv is the scope, which is the unit name if you're scope: unit
<magicaltrout> hmm interesting. I'll park that for now and get the basics working. I'm not sure when I'd need multiple masters, there must be a usecase I've not figured out. I'll ask the guys at pentaho when I'm really bored
<lazyPower> well
<lazyPower> when i return conv() i can get at the data i'm looking for
<lazyPower> but i'm doing get_remote() in my reactive method, not in my interface-layer
<lazyPower> so it seems like i'm still breaking encapsulation
<marcoceppi> lazyPower: so you're not going to want to do that
<marcoceppi> get_remote should be done in the interface
<marcoceppi> you can yield get_remote return data instead of the conversation, if you wanted
<lazyPower> yeah, i guess that works
 * lazyPower updates 
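[editor's note: marcoceppi's encapsulation advice, sketched as runnable code. Conversation here is a minimal stand-in for charms.reactive's real class, and ClientRequires is a hypothetical interface layer; the point is that get_remote() stays inside the interface, which yields plain data to the charm layer.]

```python
# Keep get_remote() in the interface layer; hand the reactive layer data,
# not Conversation objects, so encapsulation isn't broken.

class Conversation:
    def __init__(self, scope, data):
        self.scope = scope          # for scope:unit, this is the unit name
        self._data = data

    def get_remote(self, key):
        return self._data.get(key)

class ClientRequires:
    """Illustrative interface layer wrapping the conversations."""
    def __init__(self, conversations):
        self._conversations = conversations

    def hosts(self):
        # yield dicts of remote data; the charm layer never calls get_remote()
        for conv in self._conversations:
            yield {'unit': conv.scope,
                   'host': conv.get_remote('private-address'),
                   'port': conv.get_remote('port')}

convs = [Conversation('logstash/0', {'private-address': '10.0.0.7', 'port': '5044'})]
print(list(ClientRequires(convs).hosts()))
```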
<magicaltrout> this is probably a bad pattern but i'll ask anyway. the @when stuff: if I wanted 2 different launch commands, one for when its a leader and 1 for when its not, i could annotate the trigger method with @when('leadership.is_leader'), but is there a way to just check the state in the code rather than having 2 full methods and the additional stuff to go with it to launch the leader and non leader services?
<magicaltrout> like a simple if(leader): start_leader() else: start_non_leader()
<magicaltrout> ooh
<lazyPower> if is_state('leader.is_leader')
<magicaltrout> hehe
<lazyPower> or whatever the label is :)
<magicaltrout> https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.is_leader
<magicaltrout> I found that just as you posted that
<lazyPower> yeah, thats charmhelpers leadership helpers
<magicaltrout> better than extra methods for the sake of it
<lazyPower> bit different than the layers, but gets the job done
<lazyPower> brb
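[editor's note: the single-handler pattern magicaltrout asked about, as a runnable sketch. A charm would get the flag from charmhelpers.core.hookenv.is_leader() or charms.reactive's is_state('leadership.is_leader'); the carte.sh command and config file names are hypothetical PDI-ish placeholders.]

```python
# Branch on leadership inside one start function instead of writing two
# separately decorated handler methods.

def start_service(is_leader):
    """Return the launch command for this unit's role (illustrative names)."""
    if is_leader:
        return ['carte.sh', 'master.xml']   # hypothetical leader config
    return ['carte.sh', 'slave.xml']        # hypothetical follower config

print(start_service(True))
print(start_service(False))
```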
<magicaltrout> I think I saw this on the mailing list
<magicaltrout> you can't chain annotations currently can you?
<magicaltrout> like "when leader.changed.hostname or when leader.changed.port"
<magicaltrout> oh actually that should just work
<magicaltrout> meh
<magicaltrout> or should it?
<magicaltrout> cory_fu: enlighten me
<cory_fu> Yes, you can chain decorators.  They all just get ANDed together.  The only restriction is that you can't use @hook with anything else, but you shouldn't ever use @hook anyway
<cory_fu> Oh, but you can't OR things together currently
<magicaltrout> yeah but if I wanted to monitor a bunch of leader params, its or not and isn't it
<magicaltrout> so I need a method for each param I want to track
<magicaltrout> no probs
<cory_fu> There's been some discussion of having a @when_any which would be super useful but has difficulties if you use it with relation states
<cory_fu> But as often as it comes up for non-relation states, I think I'm just going to make it, and have it never pass params and just document it as not being recommended for relation states
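[editor's note: cory_fu's decorator semantics, modeled as a self-contained fragment: chained @when guards are ANDed together, while the proposed @when_any ORs its states. The when_all/when_any functions below are stand-ins for the framework's dispatch, not the charms.reactive API.]

```python
# Chained @when decorators fire only when ALL states are active; the
# proposed @when_any fires when ANY of them is.

def when_all(states, *required):
    return all(s in states for s in required)

def when_any(states, *required):
    return any(s in states for s in required)

active = {'leader.changed.hostname'}
# chained decorators: both states required -> does not fire
print(when_all(active, 'leader.changed.hostname', 'leader.changed.port'))
# @when_any: either state suffices -> fires
print(when_any(active, 'leader.changed.hostname', 'leader.changed.port'))
```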
<larrymi> marcoceppi: hi, is there any plan for juju-deployer to support juju 2.0?
<marcoceppi> larrymi: we're talking about that right now actually. We do plan to support it, but at a very narrow scope
<marcoceppi> larrymi: what do you need deployer for that 2.0 juju deploy doesn't have?
<rick_h__> larrymi: fginther was looking at updates to make deployer and mojo updated yesterday.
<fginther> larrymi, yes, the work is in progress now
<fginther> well, for python-jujuclient and juju-deployer, not for mojo
<marcoceppi> hold on
<marcoceppi> how many people are trying to update jujuclient and deployer right now?
<fginther> uh, hopefully 1 :/
<marcoceppi> are there like 4 teams racing to be the first?
<larrymi> marcoceppi: our bundles are currently using branches .. IIUC that won't be supported in 2.0
<marcoceppi> larrymi: cool, just curious the missing feature
<fginther> marcoceppi, are you also working on this?
<marcoceppi> fginther: I found out that dpb1 had someone working on this and we're about to work to completely deprecate jujuclient and use another library instead of deployer
<marcoceppi> sounds like I'll need to send an email
<fginther> marcoceppi, I'm that dpb1 someone
<marcoceppi> oh, whew
<larrymi> fginther: rick_h__: ah cool
<marcoceppi> I thought someone else from dpb1's team was doing this
<larrymi> marcoceppi: in our case, we also use private branches
<marcoceppi> larrymi: ack
<marcoceppi> larrymi: but why not just upload to the charmstore and set-perms to keep them private?
<marcoceppi> so you don't have to use deployer?
<fginther> marcoceppi, what are the consumers you're targeting for this "another library"?
<marcoceppi> fginther: basically, libjuju
<marcoceppi> a native python juju library
<larrymi> marcoceppi: hmm that's something we can look into if we can have the same permissions on the charm store.
<larrymi> marcoceppi: does juju 2.0 support placement directives in the bundle?
<magicaltrout> ooh look at that, my leader works
<magicaltrout> that was painless
<magicaltrout> what can go wrong now
<cory_fu> :)
<magicaltrout> now leadership people
<magicaltrout> I assume there is a case where PDI tries to start, but leadership hasn't been sorted out?
<magicaltrout> does that sound like a plausible scenario?
<magicaltrout> or are leaders sorted early on
<lazyPower> cory_fu - i call schenanigans
<lazyPower> cory_fu - @when_file_changed('path/to/file') -  its coming up as not a valid decorator :(
<cory_fu> lazyPower: Are you importing it?
<lazyPower> nevermind
<cory_fu> magicaltrout: My understanding is that leadership is sorted out before any hooks are called
<magicaltrout> okay thats good, so i don't need to trap it
 * magicaltrout spins up a slave to find out what goes on
<cory_fu> That said, I haven't used it extensively, and admcleod- was running into an issue where is_leader seemed to be false on all units and I'm not sure what came of that
<magicaltrout> bloody love LXD, you lot make me so happy
<magicaltrout> ah yeah i saw that
<lazyPower> cory_fu - i've encountered that when i force destroy the leader on pre 2.0 builds
<lazyPower> cory_fu as in juju destroy-machine --force 5
<lazyPower> if 5 were the leader
<cory_fu> Hrm.  It's possible he did that
<cory_fu> Entirely possible
<lazyPower> yeah that'll break a leaderized unit
<magicaltrout> thats what happens when you let a new zealander with a british passport, who doesn't want to be associated with britain and lives in spain, loose on a development platform
<cory_fu> lol
<magicaltrout> this is weird
<magicaltrout> this happened twice today
<magicaltrout> I spin up a node
<magicaltrout> and it claims to be up and running
<magicaltrout> but there is absolutely nothing on the box
<cory_fu> o_O
<magicaltrout> i'm great at finding crazy shit
<magicaltrout> oops
<magicaltrout> sorry lazyPower
<magicaltrout> http://ibin.co/2YlapnApEXm7 look cory_fu
<magicaltrout> there isn't even a unit log file
<magicaltrout> yet juju status tells me its ready
<larrymi> marcoceppi: looking into try the charm store option, can you please point to doc for how to set the permissions?
<magicaltrout> ooh
<magicaltrout> that ip address is localhost
<magicaltrout> so thats weird
<cory_fu> I assume your charm should be installing stuff under /opt?
<magicaltrout> well it should be running for a start. This is probably some LXD weirdness. juju ssh pdi/2 drops me into an LXD instance
<cory_fu> That's very strange considering your charm apparently set a status, so it seems like the hook code all  ran
<magicaltrout> but the juju status returns 127 as an ip address
<magicaltrout> it reports ready but its not installed in the container or the host
<cory_fu> Very strange
<magicaltrout> jam: you wanted some LXD feedback ----^ there it is
<magicaltrout> there isn't even a charm installed in that container cory_fu
<lazyPower> magicaltrout - RAAA :)
<magicaltrout> can you not attach multiple files to a bug report in launchpad?
<magicaltrout> jam: I filed it: https://bugs.launchpad.net/juju-core/+bug/1551842
<mup> Bug #1551842: Juju 2.0 Trunk launches disconnected nodes <juju-core:New> <https://launchpad.net/bugs/1551842>
<magicaltrout> sorry I can't offer much more repro help
<magicaltrout> I'll see what I can do
<magicaltrout> oh add-unit you make me so sad
<magicaltrout> maybe the repro path is easy :)
<magicaltrout> https://launchpadlibrarian.net/244640212/Untitled.png
<magicaltrout> thats a good one as well
<jamespage> beisner, next -> openstack-charmers-next is working ok as well
<lazyPower> mbruzek / anybody listening - i could use a quick review on this one - super simple + supporting test run to validate - https://github.com/juju-solutions/charms.docker/pull/12
<mbruzek> on it
<jamespage> beisner, hmm - no stable branch for pxc
<jamespage> I'll look into that
<beisner> jamespage, ack thanks
<cmars> anyone seen python exceptions when trying to `charm build` on xenial, and there's empty lines in yaml files?
<lazyPower> cmars - have a stacktrace for me?
<cmars> lazyPower: yep, hang on
<cmars> lazyPower: https://paste.ubuntu.com/15260600/
<lazyPower> cmars - most def bug it
<lazyPower> seems like the yaml parser hates the space you have in the yaml file
<lazyPower> cmars - can you also gist up or tag the repos in question so we can repro with what you have that triggered it?
<cmars> lazyPower: https://github.com/juju/charm-tools ?
<lazyPower> yessir
<_Sponge> lazyPower: Are you manx ?
<ChrisHolcombe> i need some brainstorming help :).  I have a way to detect hard drives that are dying with libatasmart.  I don't know of a good way for a program to call juju based on an event ( drive dying ).  I also don't know of a good way to have juju poll for an event and take action.  Any thoughts?
<lazyPower> ChrisHolcombe you have update-status which runs every 5 minutes give/take
<lazyPower> thats one way to poll
<ChrisHolcombe> lazyPower, ok that's not bad.  5 mins is often enough that i can catch a drive dying :)
<lazyPower> like i have a lot of awesome ideas about this
<lazyPower> but its predicated by talking to the juju api
<lazyPower> aisrael has some experience there
<ChrisHolcombe> lazyPower, juju api?
<ChrisHolcombe> lazyPower, this? https://github.com/juju/juju/blob/master/doc/api.txt
<lazyPower> well the thought was that sounds like you would have an action on the charm to do the work right?
<lazyPower> so an admin can do it manually
<lazyPower> having the update-status invoke that action FOR the admin
<lazyPower> and potentially trigger any nagios/ext monitoring alarms
<lazyPower> which is kind of what aisrael did for the VPE-Router that was on demo at MWC
<lazyPower> the charm invoked actions on itself to configure the router
<lazyPower> it was quite slick
<ChrisHolcombe> lazyPower, yeah pretty much.  i don't want the admin to have to manually do anything
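[editor's note: lazyPower's suggestion, sketched: hang a health check off the update-status hook (which runs roughly every 5 minutes) and act on failing drives without admin involvement. The SMART check is faked with a plain dict here; a real charm would shell out to a libatasmart tool like skdump, and the action-invoking step is left as a comment.]

```python
# Poll drive health from update-status and collect drives needing action.

def check_drives(drive_health):
    """Return the list of drives that should trigger remedial action.

    drive_health maps device path -> bool (True = healthy), standing in
    for a real libatasmart/SMART query.
    """
    return [dev for dev, healthy in drive_health.items() if not healthy]

# update-status would call this, then e.g. invoke a charm action (and any
# nagios/external alarms) for each failing drive
failing = check_drives({'/dev/sda': True, '/dev/sdb': False})
print(failing)
```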
<lazyPower> _Sponge not sure what you're asking me, and sorry i missed that
<beisner> jamespage, we have to adjust ACLs for the bot to be able to do anything other than comment.  we do that here.  and it looks like we could grant whatever powers we wish.  http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/charm.config
<beisner> jamespage, for now, i can add the bot user to charms-core.  but perhaps we should add an intermediate group, such as what cinder has done @ http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/cinder.config
<cory_fu> By popular demand: @when_any (and @when_not_all) -- https://github.com/juju-solutions/charms.reactive/pull/56
<rick_h__> cory_fu: not going to be satisfied unti lI can do @tobe_or_not_tobe :P
<lazyPower> rick_h__ - why do charms need to be concerned with outerwear? http://www.tobeouterwear.com/
 * lazyPower ducks
 * rick_h__ is afraid to click links supplied by lazyPower 
 * ChrisHolcombe agree with rick_h__ haha
<jrwren> i read it as tobe the brand at first too. ;]
<cory_fu> rick_h__: https://i.imgur.com/d1grx5m.jpg
<jamespage> beisner, nah - lets get the acl updated correctly to include charms-ci
<jamespage> beisner, we can read the comments for now
<beisner> jamespage, already added to the group, confirming vote success.  can remove the bot from that group when done, or leave there until we have the acl nudged.   just lmk.
<beisner> jamespage, shall i take a stab at the acl change?
<beisner> jamespage, not sure where we request new gerrit groups though.
<bbaqar> The LXD charm has a block-device config setting that defaults to /dev/sdb. Anyone got any idea how to create this block-device?
<lazyPower> o/ bbaqar
<jamespage> beisner, ok leave as is - maybe a group for one user is overkill
<jamespage> beisner, you get then by adding them to charms.config
<jamespage> that's all I did
<beisner> jamespage, ok cool.   so even adding to the charms-core group doesn't give verified permission, so we'd still have to add a line to the .config.   but i can have the bot do +/- 1 code review as we sit today.
<jamespage> beisner, how about you have a go at raising the change against project-config
<beisner> jamespage, ok will do
<ney1> hi? =)
<ney1> just testing
<beisner> jamespage, ok, here's what it looks like atm with the bot in code-review voting mode:  https://review.openstack.org/#/c/286653/1
<jamespage> beisner, ok for now - lets get it doing verified instead :-)
<beisner> jamespage, yep, on it
<jamespage> beisner, awesome-o
<marcoceppi> magicaltrout: you around?
<marcoceppi> lazyPower: ping
<lazyPower> marcoceppi pong
<marcoceppi> lazyPower: want to look over a blog post I wrote before it goes live?
<lazyPower> marcoceppi give me 5 and you betcha
<marcoceppi> magicaltrout lazyPower https://gist.github.com/marcoceppi/e182065c837a3fffc918 feedback welcome
<marcoceppi> I just read the first paragraph and realize I have a lot of things to fix
<magicaltrout> checking
<magicaltrout> didn't know go was in apt :P
<magicaltrout> marcoceppi: make sure they have git and bzr installed
<marcoceppi> magicaltrout: good call, I'll add that
<marcoceppi> here's an updated version https://gist.github.com/marcoceppi/e182065c837a3fffc918
<magicaltrout> "got get" isn't a command ;)
<marcoceppi> UGH, that bites me like 40% of the time, to the point that I aliased got => go
<magicaltrout> or if it is I've been running the wrong build stuff for this past week
<magicaltrout> hehe
<marcoceppi> holy cow, every command
<magicaltrout> i've not used update alternatives, so I'll just assume you know what you're writing there
<magicaltrout> the rest looks fine
<marcoceppi> magicaltrout: I just tested it out, it's pretty sweet
<marcoceppi> I get tired of misspelling $GOTPATH ;)
<magicaltrout> hehe
<magicaltrout> everything I have currently runs on some version of 2.0 so my requirement for 1.2 went away a couple of weeks ago
<magicaltrout> although i'll cry if anyone updates the api and locks me out again.... ;)
<marcoceppi> magicaltrout: I don't think that will happen *shifty eyes*
<magicaltrout> hehe, until v3 at least
<marcoceppi> though we're pretty much like "2.0 is our chance to 'fix' these things"
<marcoceppi> yeah, juju 3.0 ;)
<marcoceppi> but by beta3/beta4 those will be done
<marcoceppi> we should have a beta2 out in the next few days IIRC
<magicaltrout> i've not had any real issues on trunk amazingly, apart from this LXD stuff, which is understandable
<magicaltrout> there are a few niggles, but for the novice like me it all looks pretty good
<magicaltrout> although i'm sure the dev guys are like "its lies, we're holding bits of it together with string"
<skay> lazyPower: hi, I see this is marked at fix released. which revno should I use?
<skay> lazyPower: derp. https://bugs.launchpad.net/charms/+source/block-storage-broker/+bug/1487636
<mup> Bug #1487636: b-s-b charm easily confused by multiple volumes partially matching its expected volume name  <block-storage-broker (Juju Charms Collection):Fix Released> <https://launchpad.net/bugs/1487636>
<magicaltrout> marcoceppi: i told jcastro earlier, I got 3 talks accepted into ApacheCon big data and 2 will feature juju in some form or other
<magicaltrout> you guys should definately come up
<magicaltrout> it'll be a blast
<magicaltrout> (not my talks, the event)
<lazyPower> skay - that may have been a mistake, looking at the latest trusty code tree, i dont see a related revision that I merged.
<lazyPower> https://launchpad.net/~charmers/charms/trusty/block-storage-broker/trunk - i last touched this on may of 2014, its been quite a while
<kwmonroe> hey bcsaller, i'm working on a zookeeper layer and notice we have 2 interfaces for it:  interface-zookeeper for the provides/requires of a zookeeper convo, and interface-zookeeper-quorum for zookeeper peers.  since provides, requires, and peers are already 3 separate classes, is there any reason why i should not merge zk-quorum into zk and just have a single interface-zookeeper handling provides/requires/peers?
<marcoceppi> magicaltrout: yeah, we're going to chat about that in an hour or so, it sounds awesome
<skay> lazyPower: it looks like there may be a branch related to it (going on the name of one of thedac 's branches)
<kwmonroe> bcsaller: the metadata.yaml for layer-zookeeper would then look like this: http://paste.ubuntu.com/15261951/, so provides and peers are the same interface, but with different names.
<magicaltrout> cool!
<cory_fu> bcsaller: From my discussion with kwmonroe, it seems like it should work from a technical standpoint, but the question remains as to whether there's a decent argument against doing that.  You save a repo and interface layer, but on the other hand you're conflating two different interface protocols into the same repo.  But on the gripping hand, the code is distinct in that repo, so how much confusion is there, really?
<lazyPower> skay i'll change this issue back to something not fix-released unless we have definitive evidence of a MP that fixes it
<cory_fu> It generally seems a bit wrong to me, but I can't come up with a well-reasoned argument against it.
<skay> lazyPower: ack
<magicaltrout> random question of the day, reactive builds: if i want to do a build for xenial or whatever, do I just checkout a xenial image and do a charm build in it?
<lazyPower> magicaltrout - even easier
<lazyPower> charm build -s xenial
<magicaltrout> banging
<bcsaller> kwmonroe: I don't know that there is a hard and fast rule about doing this one way or the other. The guidelines I would think about are: do the conversations differ significantly, is there an ability to replace client impls of one kind but not the other, does it make the implementation more complex if only some clients have some permissions/capabilities to the conversation. ZK has some native support for peer models so it can do the right
<bcsaller> thing with the same connection information as a client but this isn't the common case.
<bcsaller> kwmonroe: as we look forward though and see a world where interfaces represent richer conversations I expect we'll find things like this come up more frequently and we'll begin looking at what I've called faceted interfaces in the past. In that case they can negotiate with the remote end which of a set of capabilities they share over a given relation
<marcoceppi> magicaltrout: you can even make it easier, just add the series key to the metadata.yaml with a YAML list of series supported, then charm build ;)
<magicaltrout> oooh
<magicaltrout> that does make life easy
<lazyPower> that only works on juju 2.0+
<lazyPower> it'll break on 1.25 in a weird way with a metadata key error
 * magicaltrout only works in juju 2.0+ :P
<magicaltrout> okay well i'm not making use of 2.0 stuff yet, i'll stick with lazyPower's way and resolve it later
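[editor's note: the juju-2.0-only shortcut marcoceppi mentions is a series list in metadata.yaml; a hedged fragment (charm name and series chosen for illustration, and per lazyPower this key breaks 1.25 with a metadata error):]

```yaml
# metadata.yaml (juju 2.0+ only)
name: pdi
summary: illustrative multi-series charm
series:
  - xenial
  - trusty
```

With this in place, a plain `charm build` produces a multi-series charm instead of needing `charm build -s xenial`.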
<cory_fu> bcsaller: Note, with what kwmonroe is asking about, the peer relation protocol is entirely separate from the provides / requires, just hosted in the same repo and sharing the same interface name.  Because of how the framework loads the handlers, for the peer relation, only the code in peers.py will matter for the peer relation
<cory_fu> Because peer relations can never be connected to anything other than another peer, they're really their own class of interface protocol already
<lazyPower> ^
<lazyPower> that x 1000
<lazyPower> writing peer relationships with the notion of both provides/requires in the same context gets confusing, so yeah. calling them their own class of interface protocol is decidedly choice verbiage.
<bcsaller> cory_fu: but they share an interface name in the metadata. In that case I'd say they are not the same name. If we want a way to host that concept in a single repo we can get there with metadata, but if the conversations are different they need different interface names in metadata
<cory_fu> lazyPower: I think that may be an argument against what kwmonroe is wanting to do.  By co-locating the peer relation implementation with the provides/requires and having them share the name, it seems like it would have more potential for confusion
<kwmonroe> yeah bcsaller cory_fu, i can conceive of a time when a ZK replacement is born, but doesn't do peering the same as ZK.  then you'd have this new thing using 2/3 of the interface-zookeeper with a separate interface for its peering.  that would be confusing.
<bcsaller> I'm happy to teach charm build about some more metadata for interface repos, but we can't combine them into the same in the model. They could live in the same repo though
<cory_fu> kwmonroe: Also, from bcsaller's last comment, when you look at the metadata.yaml and see "zookeeper" in there twice, it makes you think that they speak the same language, so to speak
<bcsaller> cory_fu: right, having to talk about the peer and non-peer version of an interface by name is wrong
<kwmonroe> yeah, ok, i'm over it.  back to bcsaller's original reply "do the conversations differ significantly", and the answer is yes.
<kwmonroe> let me warm up my ctrl-z fingers
<bcsaller> code sharing is a good goal and is different from this though, think about a layout for the repo, or a change to interfaces.yaml that can describe what you want to charm build
<jose> hey everyone. I'm writing some tests with bdx and I'm setting it to grab the unit number/name from ['servicename'][0]. is that the correct way? Because it's throwing an error, and I'm having to hardcode the unit number for the tests to pass
<kwmonroe> thx for the input bcsaller cory_fu lazyPower!
<marcoceppi> lazyPower magicaltrout it'll only break on local deploys of the charm
<marcoceppi> lazyPower magicaltrout not in the store, the store will just parse and deliver the charm as expected AIUI
<magicaltrout> k
<marcoceppi> but I haven't tested that assertion
<lazyPower> marcoceppi - i'm still having problems getting charm2 uploaded charms down in a 1.25 environment
<marcoceppi> lazyPower: :\
<lazyPower> all of the bundles i published for firl to scope out the k8s work, didnt work on 1.25
<lazyPower> so this is a pretty big caveat we need to make ppl aware of if they are participating in the beta
<magicaltrout> alrighty then, dinner, wine, and ec2 pdi slaves.....
<magicaltrout> work my pretties.....
<magicaltrout> booo its done sweet fa
<marcoceppi> http://marcoceppi.com/2016/03/testing-juju-without-wrecking-juju/
<magicaltrout> ooh blimey some slave stuff is actually working
<lazyPower> marcoceppi - +1 LGTM on your blogpost, sorry i didnt get wrapped up on that sooner
<marcoceppi> lazyPower: no worries
<lazyPower> or you know, run juju in a container ;)
<lazyPower> marcoceppi - there is one thing i did notice however, you didnt explicitly call out that $JUJU_HOME is not the same between 1.25 and 2.0
<marcoceppi> lazyPower: production or notin' ;)
<lazyPower> that'll cause some grief on users
<marcoceppi> lazyPower: not really
<lazyPower> k
<marcoceppi> $JUJU_HOME is ~/.juju in 1.X and ~/.local/share/juju in 2.X
<lazyPower> ah thats right, they changed it in beta1
<lazyPower> or alpha2
<lazyPower> whichever
<magicaltrout> anyway on more serious matters.... I hope you all got out and voted for Trump today!
<lazyPower> magicaltrout - obvious troll is obvious
<magicaltrout> lol
<thumper> marcoceppi: hey...
<thumper> marcoceppi: at the end of your blog post you talk about updating juju
<thumper> if you do "make install" instead of "make godeps" you don't need the last "go get -v github.com/juju/juju/..."
<marcoceppi> thumper: really?
<marcoceppi> nice
<thumper> also, you can condense "git fetch origin && git merge --ff-only origin/master" to "git pull upstream master"
<jamespage> beisner, hey - checking out for the night - ping me with details on the project-config change and I'll try get it nudge in tomorrow
<thumper> marcoceppi: yeah...
<thumper> 'make check' runs all the tests too
<beisner> jamespage, ack, will do.  back at it now.
<beisner> jamespage, o/  thanks!
<thumper> marcoceppi: good post though
<magicaltrout> lazyPower: what charm helper do i use to get the private ip address?
<marcoceppi> magicaltrout: I think there's a private_address() in charmtools.core.hookenv
 * marcoceppi checks
<marcoceppi> magicaltrout: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html#charmhelpers.core.hookenv.unit_private_ip
<marcoceppi> it's unit_private_address()
<magicaltrout> bah, the search box sucks :P
<magicaltrout> thanks
<lazyPower> or unit_get('private-address')
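The helper linked above is charmhelpers.core.hookenv.unit_private_ip, which is just a thin wrapper around unit_get('private-address'). The real versions shell out to Juju's `unit-get` hook tool and only work inside a hook environment, so this sketch mimics them with an invented JUJU_UNIT_PRIVATE_ADDRESS environment variable purely for illustration:

```python
import os

def unit_get(attribute):
    """Stand-in for hookenv.unit_get (the real one calls the unit-get hook tool)."""
    if attribute == 'private-address':
        # Invented env var standing in for the hook tool's answer.
        return os.environ.get('JUJU_UNIT_PRIVATE_ADDRESS', '')
    return ''

def unit_private_ip():
    """Mirrors hookenv.unit_private_ip: just unit_get('private-address')."""
    return unit_get('private-address')

# Simulate the hook environment:
os.environ['JUJU_UNIT_PRIVATE_ADDRESS'] = '10.0.0.7'
print(unit_private_ip())  # -> 10.0.0.7
```

Either spelling works in a charm; unit_get('private-address') is just the more explicit form of the same lookup.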
<lazyPower> rate my readme - https://github.com/juju-solutions/interface-logstash-client
 * lazyPower drops mic and walks away
<magicaltrout> +1
<magicaltrout> if i can understand it
<magicaltrout> anyone can understand it
<cory_fu> lazyPower: Is that relation for a subordinate?
<lazyPower> could be
<lazyPower> but not in this context no
<cory_fu> Ok, good
<cory_fu> Oh the hoops one must jump through to explain how subordinates work with provides / requires: https://github.com/juju-solutions/interface-java#charms-needing-a-java-runtime
<lazyPower> oooo cory_fu  your readme's bringin the heat!
<cory_fu> lazyPower: It's ridiculous that I have to go through that much effort to explain when you should use provides vs requires.
<lazyPower> oh i hear ya man
<lazyPower> i haven't documented dockerhost yet for that reason
<cory_fu> Though maybe that's just an artifact of the types of subordinate charms I write, but it seems way more common to me than the other way around
<magicaltrout> that java stuff made me very sad for at least 30 minutes
<cory_fu> magicaltrout: The provides / requires?
<magicaltrout> yup
<magicaltrout> i always get it the wrong way around
<cory_fu> lazyPower: Feel free to use the java README as a template for the dockerhost one
<magicaltrout> i should just accept that and when i think its right, flip it
<cory_fu> magicaltrout: Me too.  Even now
#juju 2016-03-02
<metsuke> Is there a way to get juju to create containers directly on openstack compute nodes rather than creating nova instances on top?
<blahdeblah> metsuke: you can either deploy directly to MAAS-deployed nodes, or into LXC containers on those nodes
<metsuke> blahdeblah: the lxc method is for evaluation though so it is not recommended?  We are trying to cut out the additional layer that Nova creates.
<bradm> metsuke: do you mean using lxc instead of kvm for nova instances?
<blahdeblah> I'm not sure whether the LXC method is officially supported, but it definitely works, and there is an LXD version coming in 16.04.
 * blahdeblah defers to bradm on this :-)
<metsuke> bradm: It's more like not using nova instances at all
<metsuke> maybe this isn't really a problem
<bradm> metsuke: there's a nova-compute-lxd coming
<metsuke> but it just seemed overbearing for juju to create 50 separate vms for separate containers
<hloeung> change OpenStack to use the LXC driver (nova.virt.lxc.LXCDriver)? I haven't tried it myself, but https://wiki.ubuntu.com/OpenStack/LXC might help
<blahdeblah> metsuke: Why do you want to use openstack compute nodes without nova?  The point of nova is to make it easy to create your compute instances, whether they're containers or VMs.
<metsuke> blahdeblah: well, we wanted everything from Openstack except Nova, but it is installed by fuel and juju leverages it automatically
<blahdeblah> I think you will find the upcoming LXD version of nova that bradm mentioned pretty good.
<metsuke> ok.  Upcoming like mitaka? =)
<metsuke> in terms of scaling charms, my company has always used haproxy to scale horizontally for web apps.  However, just looking at the Wordpress charm, that methodology is very different now.  Should all new, non-inherently scalable, http charms use the same methodology as Wordprerss (apache load balancer?) or is there a Good way to use haproxy?
<blahdeblah> metsuke: upcoming in xenial (16.04)
<blahdeblah> metsuke: If we're talking about the old public Wordpress charm, it's not a great example.
<blahdeblah> The haproxy charm has relations that you can hook up to anything supporting the http interface.
<metsuke> blahdeblah: would you recommend that for http scaling as opposed to doing it in-charm?
<blahdeblah> That's how we usually do it
<metsuke> cool, it would definitely be easier
<blahdeblah> It may work well for you that way, or it may be better to do it in-charm, depending on your application.
<blahdeblah> But the general idea is that charms are composable pieces that can be hooked together into application suites
<metsuke> Can ironic be in place of MAAS for cutting nova vm's out of the equation?
<jamespage> gnuoy, if you are around - https://code.launchpad.net/~james-page/charm-helpers/fix-ssl-certs/+merge/287639 could do with a review
<gnuoy> jamespage, +1
<jamespage> gnuoy, thankyou
<gnuoy> np
<magicaltrout> this looks rubbish, but is actually pretty cool: http://ibin.co/2Yr5fKlDWBeD the first Juju controlled PDI ETL cluster
<jamespage> gnuoy, wanna do a +2 and landing? https://review.openstack.org/#/c/287087/
<jamespage> ready to go imho
<gnuoy> jamespage, done
<jamespage> gnuoy, you have to hit the "workflow +1 as well"
<jamespage> to move things along...
<gnuoy> What am I +1'ing when I do that?
<jamespage> gnuoy, http://docs.openstack.org/infra/manual/developers.html#project-gating
<jamespage> gnuoy, the +2 is a review; the +1 is actually 'approve'
<jamespage> workflow +1 that is
<jamespage> gnuoy, and job done..
<jamespage> thanks!
<jamespage> dosaboy, https://review.openstack.org/#/q/project:%22%255Eopenstack/charm.*%22+status:open is a useful view btw
<dosaboy> yep i got that thanks
<jamespage> gnuoy, beisner: pushing up a charmhelpers resync for review/test
<jamespage> https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
<admcleod1> magicaltrout: hey! ...i still havent thought of a comeback.
<admcleod1> lazyPower: some of the times the leadership broke, a force destroy of a machine was involved, other times a service destroy / resolved and no force
<magicaltrout> hehe
<admcleod1> magicaltrout: i was introduced to 2 expats here as 'british but doesnt want to be british'. they were not impressed.
<magicaltrout> are they the expats I see on facebook who don't want refugees coming into the UK whilst they live on the costa del sol? ;)
<admcleod1> magicaltrout: no.. one was this english artist / yoga teacher who told one of our spanish friends that 'english people are much more interesting' though.. so less beer-drinking-lobster tan but similar amount of strange++
<magicaltrout> ah yes
<magicaltrout> those interesting english-folk
<magicaltrout> of course
<admcleod1> magicaltrout: i met another english girl here who told me her parents had a place somewhere in the south of mallorca. i asked her: "are there many foreigners there?" and she replied "no, its mostly english"
<magicaltrout> oooh
<magicaltrout> thats bad
<magicaltrout> did she looks that stupid or just act it?
<admcleod1> ...yes :}
<admcleod> magicaltrout: nice blog post btw
<magicaltrout> random ramblings on a monday afternoon
<magicaltrout> for the people I work with in the "real world" I'm hopefully doing a PDI ETL cluster demo this afternoon if Amazon speeds the f**k up
<admcleod> hah
<magicaltrout> now my charm has leader election, config detecion stuff in, it should be ready for people to test. I'll get it into the listings and worry about unit tests later ;)
<admcleod> magicaltrout: what ver juju are you using?
<magicaltrout> once juju<->lxd stops messing up my ip addressing, I reckon most of my development workload will switch to some form of charmed up development process
<magicaltrout> Trunk! :)
<jamespage> beisner, getting a lot of "ImportError: No module named pika"
<jamespage> for amulet test executions....
<marcoceppi> magicaltrout: Having had to do Pentaho ETL stuff many years ago, I'm curious what you've whipped up
<magicaltrout> having never used PDI clustering before.... so am i
<marcoceppi> as a point of reference, the project I was working on basically piloted pentaho, but ended up making us write our own ETL tool
<magicaltrout> well my background for the last decade has been mostly Open Source BI around Pentaho and stuff
<magicaltrout> I've done some Talend projects, but I absolutely hate the platform
<magicaltrout> so we mostly end up using PDI
<magicaltrout> anyway so my plan is thus..... get this deployed, which will allow users to either upload jobs & transformations and run them via actions or cron-tab them for scheduled execution, or use PDI Carte as a webservice so people can create stuff in Spoon and either run it as a slave server or, if they want to scale up, juju add-unit pdi and run as a PDI cluster
<magicaltrout> thats phase 1
<magicaltrout> and will give people scale out capabilities and remote execution capabilities with little effort on my/our part
<magicaltrout> this pretty much works, i'm just putting some finishing touches to it, like informing the user of the leaders public ip etc
<magicaltrout> phase 2 will work as a subordinate to the hadoop and yarn charms, because PDI these days has Big Data scaleout stuff that the Hadoop folk use, I've never touched it myself but now I have an excuse to
<magicaltrout> and you can write MapReduce transformations within PDI and run them in a Hadoop cluster
<magicaltrout> so my plan is to create something that allows you to hook it up to Hadoop, at which point it will embed itself like their docs explain, and then allow users to run PDI jobs in the Hadoop/Yarn environment
<magicaltrout> http://ibin.co/2Yr5fKlDWBeD as you'll sorta see here marcoceppi on the left, pdi doing some stuff on the right, carte these days doesn't suck too much and you get some basic webservices with a picture of the transformation you select, the run log, etc
<magicaltrout> so there might be some scope to write a nice carte UI as well to wrap it all up, but thats somewhere down the road
<marcoceppi> sounds awesome
<jamespage> marcoceppi, can you take a peek at https://github.com/marcoceppi/leankit.hostmar.co/pull/3 ?
<magicaltrout> well if no one else uses it I certainly will. But in reality it should fit in quite well with the stuff kwmonroe and cory_fu do, so I'm sure once the hadoop stuff is up and running there will be some fun stuff to demo around Hadoop ETL work
<marcoceppi> jamespage: i just opened it up, looking now!
<marcoceppi> magicaltrout: It's a great solution space. Lots of users and little operational knowledge. So this fits well with the write once use many idea of Juju
<magicaltrout> yup
<marcoceppi> jamespage: deployed, cheers!
<magicaltrout> aww marcoceppi
<magicaltrout> get amazon to up your quote allowance :P
<magicaltrout> -e +a
<jamespage> marcoceppi, working nicely - thanks!
<marcoceppi> magicaltrout: eu-west-1 is full of a lot of other people
<marcoceppi> magicaltrout: you might want to try eu-central-1, no one is in it atm. We may need to spin up another AWS account to support all the CDP members
<magicaltrout> k
<marcoceppi> magicaltrout: actually, I'm the problem in eu-west-1
<marcoceppi> magicaltrout: let me tear down an old demo
<magicaltrout> hehe, get back the other side of the pond!
<marcoceppi> magicaltrout: when at MWC, it was much more convenient ;)
<admcleod> magicaltrout: + kostas + I ;}
<magicaltrout> show off
<marcoceppi> magicaltrout: I just cleared out 6 instances
<marcoceppi> will work with amazon to get better resources
<admcleod> magicaltrout: even chocolate blue cheese icecream is all about the showmanship
<magicaltrout> don't you worry marcoceppi once LXD stuff is done, I'm all over LXD based development :P
<magicaltrout> admcleod: that I can understand
<marcoceppi> magicaltrout: well you're not the first to hit limits, I've started a dialog with aws
<beisner> jamespage, yep that's the amulet smoke not doing the 00-setup bit.   fixed and will recheck those.
<beisner> jamespage, ps.  the amulet smoke is live! ;-)
<beisner> jamespage, rechecking just this one to confirm, then will rerun the others  https://review.openstack.org/#/c/286451/2
<lazyPower> ..chocolate..bluecheese...icecream?
<lazyPower> but...why
<magicaltrout> yeah
<magicaltrout> weirdly, everyone else on the table also decided to try it
<magicaltrout> i was wise
<magicaltrout> and refused :P
<jamespage> beisner, i saw and did some low risk landings on that basis
<admcleod> actually.. i had some 'white chocolate blue cheese icecream' again in valencia and it was quite nice...
<magicaltrout> you disgust me
<admcleod> thats why im here
<magicaltrout> that said
<beisner> admcleod, hmmm i'll have to take your word on that
<magicaltrout> it would certainly be an improvement on white chocolate and american cheese....
<lazyPower> i dont know where you're getting this crazy amount of chocolate and cheese, but they really aren't complementary flavors in my mind
<magicaltrout> correct
<admcleod> tell that to philadelphia
<admcleod> or like, cheese cakes
<magicaltrout> thats not a winning argument
<magicaltrout> for starters they aren't blue cheese cakes
<magicaltrout> thats not to say you can't get such a thing, i'm sure you can
<magicaltrout> doesn't make it right though
<jamespage> beisner, guess we can turn off the +1 code review from uosci now right?
 * beisner looks
<beisner> oh sweet!  yep that change landed.
<jamespage> beisner, it did - I asked this morning and got perms on the charms-ci group and twiddled things around
<jamespage> verification is working fine for uosci
<beisner> ok cool i'll make it so
<admcleod> magicaltrout: well.. the second time was good.
<beisner> jamespage, fyi, anything that's currently running will still crvw vote, but new jobs won't
<jamespage> beisner, ok
<jamespage> beisner, I also have a juju-deployer branch that deals with using anything other than the master branch with git repo's
<beisner> jamespage, ah nice.  -b ftw?
<jamespage> beisner, yah
<jamespage> beisner, https://github.com/openstack/charm-percona-cluster;stable format
<jamespage> beisner, also noticed that recheck-charm also gets picked up by the standard check processes as well
<beisner> jamespage, yeah it's all regex based.  so, they must be   ^recheck.*
<beisner> jamespage, i noticed that too in the sandbox.  but not everyone retriggered on recheck-charm.    i'm going to add charm-recheck and see if that limits the scope.
<beisner> jamespage, ok, charm-recheck is hot, and i just did that on:  https://review.openstack.org/#/c/286669/1
<beisner> if that confirms a-ok, i'll update the comment template.  but recheck-charm will continue to work as well.
<jamespage> beisner, I guess that we don't have vast capacity for concurrency atm right?
<jamespage> check is taking >1.5 hrs atm
<jamespage> beisner, good - I think not consuming vast amount of openstack resources when we don't need to is optimal...
<beisner> jamespage, each check is about 45 min.  i just bumped the concurrency up from 4 to 6.
<beisner> jamespage, depending on which charm some are as low as 30 min for lint, unit, charm-single, and amulet-smoke.   that's from the moment the start running of course.
<jamespage> beisner, hmm  - test_charm_single: FAILURE
<beisner> jamespage, 00:06:10.118 W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/trusty-security/universe/i18n/Translation-en  Hash Sum mismatch
<beisner> jamespage, looks like a launchpad ppa blip got a handful of those jobs at about the same time.
<magicaltrout> okay leadership reactive knowitalls
<jamespage> beisner, \o/
<magicaltrout> i'm clearly not doing something right
<jamespage> oh crappers
<magicaltrout> marcoceppi: I want to set a bunch of variables when the leader is elected and also reset those variables if a new leader is elected
<magicaltrout> but
<magicaltrout> I don't want to set those variables if the leader isn't changed
<magicaltrout> https://gist.github.com/buggtb/328f36919bd637811a61 that was my latest attempt
<magicaltrout> but it seems leadership.changed is only called when you update one of the variables
<lazyPower> magicaltrout leader-elected
<lazyPower> and all followers should just consume the data via leader-get when leader-settings-changed
<lazyPower> leader-elected will be run on *any* leader, the first cycle that they get elected. so control cluster state there. I'm doing something very similar in layer-etcd
<magicaltrout> hmm
<lazyPower> exchange data over peer-relation-joined on first run, then all subsequent cluster status is formed based on data being received from leader-get, as registration is 2 step w/ a http post (Self registration) and then reading cluster data broadcast from the leader.
<magicaltrout> according to the leadership layer it means I don't have to write leader-elected hooks :P
<lazyPower> @when('leader.elected')  i guess? i'm not sure, i haven't used stubs leadership layer yet
<marcoceppi> magicaltrout: right leadership.changed is for the settings, not the leader itself
<marcoceppi> magicaltrout lazyPower there is no leader.elected https://git.launchpad.net/layer-leadership/tree/README.md
<marcoceppi> stub:  ^^ opinions?
<magicaltrout> nope
<marcoceppi> magicaltrout: actually
<marcoceppi> magicaltrout: wouldn't this just work? https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
<lazyPower> marcoceppi - seems like an oversight,  the leader being elected is totally an event we care about
<marcoceppi> magicaltrout: or do you only want to have the settings sent once?
<marcoceppi> lazyPower: it's not, you care about the state
<lazyPower> shenanigans
<marcoceppi> either you are the leader or you're not
<lazyPower> what if the leader changed, and i have stale cluster data?
<lazyPower> that leader-elected state gives me an opportunity to cleanup and reconcile before i declare the current cluster state again, from my leader perspective
<marcoceppi> lazyPower magicaltrout how aobut this? https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
<magicaltrout> marcoceppi: thats sort of what I had before
<marcoceppi> magicaltrout: right, so I added one more state
<magicaltrout> but i seem to get dumped into an endless restart cycle because the leader_set appears to trigger a change even if it remains the same
<marcoceppi> if the leader changes, that unit becomes the leader and sees it hasn't been configured yet for leadership
<marcoceppi> err
<marcoceppi> crap, I rushed that
<lazyPower> mmhmm
<magicaltrout> i was playing around with something along your lines marcoceppi with my init=True stuff
<marcoceppi> magicaltrout: https://gist.github.com/marcoceppi/d91d6b2cc5072f8204d2
<magicaltrout> seems to me it would be good to have a fire and forget call that runs on leader elected
<magicaltrout> maybe I'm just lazy ;)
<marcoceppi> magicaltrout: just set and manage the state instead
<marcoceppi> that way it'll only be run once if you're the leader and you haven't run through that method yet
<magicaltrout> hmm didn't think about that
<magicaltrout> still need to use states better
<magicaltrout> thanks marcoceppi I'll give it a go
<lazyPower> magicaltrout - or, do the dreaded antipattern of using @hook('leader-settings-changed') and defend the users right to choice :P
<marcoceppi> magicaltrout: replace "self" with the layer name and you're on your way
<lazyPower> rather @hook('leader-elected')
<marcoceppi> lazyPower: no! hooks are for fools
<magicaltrout> lol
<marcoceppi> lazyPower: and you can't combine with states
<lazyPower> only if you care about stacked decorators
<lazyPower> if i have a single method, decorated by @hook('leader-elected') that runs everytime a leader is elected, i've solved this with a one liner
 * marcoceppi beats lazyPower with a pillow full of quarters for using @hook
 * lazyPower points at the field where his figs are grown
<magicaltrout> i heard if I mix hooks and states cory_fu will force feed me admcleod's ice cream
<lazyPower> behold my field of figs, notice they are barren
<marcoceppi> lazyPower: however, if you want to update the leadership settings, you can just remove the state
<marcoceppi> lazyPower: don't have to wait for a hook now
<beisner> jamespage, will be retriggering pxc for amulet smoke after resolving an env var that the test expects:  Please set the vip in local.yaml or env var AMULET_OS_VIP to run this test suite
 * magicaltrout wonders if he can get around lxd deploying to localhost by blocking up localhost....
<magicaltrout> booo
<magicaltrout> bloody thing
<cory_fu> Hey, sorry I'm late to the leadership party.  Don't use @hook
<magicaltrout> told you....
<cory_fu> magicaltrout: I think you're over thinking it.  You don't need to care specifically about when leadership changes, you just need to react to @when('leadership.is_leader') and @when_not('leadership.is_leader')
<magicaltrout> hmm
<magicaltrout> well i need the leader to broadcast new variables and I need the slaves to pick them up
<magicaltrout> I also need to trigger a carte restart
<cory_fu> In the @when('leadership.is_leader') handler, write the leadership settings.  If they're different, all the peers will get notified that they changed
<cory_fu> And they can then react to leadership.changed
<cory_fu> (all the peers, including the leader)
<magicaltrout> okay i'll just put something together and you can tell me if you think it works
<magicaltrout> then we'll have a bet :)
<magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L113 bit of that cory_fu ?
<cory_fu> magicaltrout: I was thinking like this: https://gist.github.com/johnsca/8cf0f25cce06c13bdecb
<cory_fu> Also, no need to leader_set something that's already in config
<magicaltrout> good point
<jamespage> beisner, ack
<cory_fu> magicaltrout: Note the first line.  It's important to use that version of leader_set to ensure the leader gets notified of the settings changes internally
<cory_fu> (It might get notified by another hook call, but I'm not certain, and it would take quite a bit longer even if it works)
<magicaltrout> yeah i'm using that one
<beisner> jamespage, looks like charm-recheck is the ticket.
<jamespage> beisner, awesome
<kjackal> Hi everyone, what is your setup to deploy charms in xenial? Do you use the local provider while in trusty?
<stub> marcoceppi, magicaltrout: @when('leadership.is_leader') ?
<marcoceppi> stub: yeah, we moved on a bit from that
<stub> cory_fu: I was tempted to monkeypatch charmhelpers.core.hookenv, but thankfully thought better of that.
<cory_fu> I am also interested in kjackal's question about deploying xenial units using local provider on trusty
<cory_fu> stub: :)
<magicaltrout> right then
<magicaltrout> cory_fu: this is why i asked the question in the first place
<magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L120
<magicaltrout> post bootstrap it all sorted itself out
<magicaltrout> but then update_master_config is run
<magicaltrout> which triggers a restart
<magicaltrout> https://gist.github.com/buggtb/1cab57c0f1d1d4a7fa99
<cory_fu> magicaltrout: It should only trigger that if the leadership settings had changed
<magicaltrout> well i only have 1 node
<cory_fu> Which would imply that it needed to restart
<lazyPower> leader-settings-changed seems to run anytime you leader-set, it doesn't retain the same behavior as relation-set
<lazyPower> unless i'm completely mistaken
<lazyPower> but thats what i've seen the last week or so in 2.x land
<magicaltrout> yeah that was my thinking
<magicaltrout> its not like the other properties
<lazyPower> so i'm 98% certain thats the case
<magicaltrout> this stuff is re-executed verbatim
<lazyPower> and this entire conversation about not caring when leader changes is fud
<lazyPower> well not fud
<lazyPower> but not right
<cory_fu> magicaltrout: In a meeting now
<magicaltrout> oh any excuse when you don't know the answer :P
<cory_fu> :)
<cory_fu> I haven't tested it, but what you just said doesn't sound right at all, per my understanding.  Maybe stub could help
<magicaltrout> sure. But I'm siding with lazyPower, leader_set seems to force leadership.changed to trigger
<magicaltrout> which is why it sucks a bit :)
<lazyPower> yep
<lazyPower> doesn't matter if the data changed or not
<lazyPower> it has different behavior characteristics
<magicaltrout> sooooo hello stub
<magicaltrout> do you have an opinion? :)
<lazyPower> magicaltrout what you could do..
<lazyPower> there's a @leader.setting_changed context, you can stuff the data into a cache, and do something like the data_changed bits that cory exposed
<lazyPower> so it only does a restart if the leader-data differed from the last hearbeat it got from leader-set
<lazyPower> s/hearbeat/heartbeat-update-thing-whatever-we-call-this
<stub> charms.leadership.leader_set should set leadership.changed, and that state will remain set for the remainder of the hook
<lazyPower> @when('leadership.changed.admin_password')
<beisner> jamespage, so are you thinking we could just serialize the next-sync with the existing gh2lp sync job?
<lazyPower> magicaltrout ^ is what i was looking for
<lazyPower> or does that trigger regardless of the actual value changing?
<magicaltrout> yeah stub so if I wanted to set some variables on leader election but not reset them later how do you go about catching it?
<magicaltrout> lazyPower: dunno, i was trying to avoid just annotating all the config stuff
<magicaltrout> give it a whirl though i guess
<magicaltrout> oh that way i need a method per config as well I think
<magicaltrout> which is annoying
<stub> magicaltrout: http://pastebin.ubuntu.com/15267478/ ?
<magicaltrout> ah right you do a state set as well
<magicaltrout> okay stub thanks
<stub> Or just use @hook('leader-elected') :)
<magicaltrout> lol
 * magicaltrout ducks the projectiles coming from cory_fu 
<cory_fu> magicaltrout, stub: I really think that thinking about it in terms of "I need to do something when leadership has changed" is the wrong way to think about it
<cory_fu> 1 min, I'll explain more
<magicaltrout> well, I can't ignore a leadership change event :P
<stub> magicaltrout: Mixing leadership.is_leader and leadership.changed could be dangerous, as you can't tell if the leadership settings were changed in the current hook by the current leader, or by the previous leader that has been deposed.
<stub> cory_fu: Normally, yeah. But if you want to ensure the credentials are cycled every time the leadership has changed, you want to do something when leadership has changed.
<stub> I just set the credentials once and leave them no matter who the leader is
<stub> But I guess if you were setting a public key, and the private key never left the leader, there might be some sense in regenerating the creds rather than unnecessarily share the secret.
<deanman> A fresh install of juju on windows 7 gives "ERROR there was an issue examining the environment: dial tcp 127.0.0.1:37017: ConnectEx tcp: No connection could be made because the target machine actively refused it." when running bootstrap. Any hints?
<stub> (although my example is also simple without using @hook either)
<magicaltrout> stub: so if running leadership.changed and leadership.is_leader is dangerous, how do you know if you're running on the leader or slave?
<stub> magicaltrout: leadership.is_leader is only ever set on the leader. leadership.changed is set at the start of a hook if the leadership settings are different to the last hook.
<stub> So if a leader did leader_set, then died, a new leader will get its leader-elected hook run but find leadership.changed already set.
<cory_fu> stub: My point is that if you just set the leader settings whenever you are leader, then they will naturally update when it changes
<stub> cory_fu: Yup.
<cory_fu> And if you also gate your restart on leader settings changed, then, even if you are the only unit, you will still be elected leader and the config write and restart will happen at the right place
<magicaltrout> this is turning into a much larger exercise than it should be. How hard can it be for the leader to propagate a few variables, and then the leader set 1 config, the slaves set another config and they all restart, but only when a leader changes or a config variable is *actually* updated? :P
<cory_fu> magicaltrout: That means, don't do a separate config write + (re)start other than the leadership.changed; that can be your main config write + (re)start handler
<stub> cory_fu: Where was your example?
<cory_fu> magicaltrout: My point is that it's easier than you're making it out to be.  If you just work under the assumption of leadership and don't worry about specifically reacting to changes in leadership status (only to changes in leadership values) then it becomes much simpler
<magicaltrout> stub: https://gist.github.com/johnsca/8cf0f25cce06c13bdecb
<cory_fu> Yeah, ^
<cory_fu> With that, if you don't have any other calls to render_*_config and restart(), then you will only ever get restarted when it
<magicaltrout> how can i work under the assumption of leadership when I have a separate master and slave template to fill?
<cory_fu> *it's necessary
<cory_fu> magicaltrout: I mean, under the assumption that you will always either be leader or not, like in my example
<cory_fu> i.e., don't have a separate set of handlers for "no leadership" or "single unit" case
<stub> So the crazy edge case that would be next to impossible to engineer is update_master_config being called before change_leader
<cory_fu> That's just naturally handled because Juju will always pick a leader, even for a single unit
<cory_fu> stub: How could that happen if change_leader is what updates the leader settings?
<magicaltrout> I don't assume there will be no leadership. Do i?
<jamespage> beisner, that might work ok
<stub> cory_fu: Another unit is leader and changes settings, and loses the leadership lease. This unit is elected leader, and runs leader-elected. The leadership settings have changed, so that state is true when the hook starts.
<stub> cory_fu: So both handlers are valid, and it is undefined which is called first.
<cory_fu> Hrm
<stub> but pretty much impossible due to the 30s lease period on leadership. It really means I've thought about this too much :)
<cory_fu> ^__^
<stub> LET ME SPECIFY PRIORITY OF MY HANDLERS DAMMIT
<cory_fu> lol
<magicaltrout> i'm still lost
<magicaltrout> how on earth do I set the config without it restarting? :P
<cory_fu> magicaltrout: If you set the config and it's different, that implies that you *need* to restart.
<stub> magicaltrout: What is the actual problem you are trying to solve btw?
<stub> magicaltrout: The coordinator layer might be a better fit.
<magicaltrout> cory_fu: I set the config because  is_leader is called
<magicaltrout> nothing in that configuration actually changes
<magicaltrout> but leader_set is called so leadership.changed is fired
 * magicaltrout feels like we've been here before....
<cory_fu> magicaltrout: If nothing in the settings changes, then it shouldn't trigger that changed state.
<magicaltrout> lazyPower: can you have a word with cory_fu please :P
<cory_fu> magicaltrout: I could see it triggering on the first run, since the leader settings aren't set before
<stub> Yeah, just checked the code. If it sets leadership.changed even if it's a noop, then it's a bug.
<cory_fu> stub: Actually, is it the case that on the first run, leadership.changed will always be set?
<stub> Yes
<cory_fu> Well, that means your "nearly impossible" edge case is guaranteed to happen on the first run.  :(  And that explains magicaltrout's gist
<magicaltrout> https://gist.github.com/buggtb/e6faa5b897bcf7d5d282
<magicaltrout> thats happened whilst we've been mulling this over
<stub> (because in Juju, deleting a key is just setting its value to None)
<stub> Well, there's your problem. Your charm is full of Java.
<magicaltrout> lol
<magicaltrout> well that I have no control over, but at least I know how Java works :P
<mbruzek> why so much Java hate?
<mbruzek> It is *just* a language people!
<magicaltrout> the java appeared because after a bunch of restarts that failed its given up and run out of memory :)
<magicaltrout> yeah i got called out on an ASF mailing list last week for mocking PHP, I'm keeping quiet now ;)
<stub> mbruzek: For me, because of the damn click through license and the effort involved in working around it :-P
<cory_fu> magicaltrout: Hrm.  That is a problem.  I really don't understand why update_master_config would be running during update_status
<cory_fu> It really shouldn't be
<cory_fu> magicaltrout: Can you point me to the code that is currently running?
<magicaltrout> er http://paste.ubuntu.com/15267724/
<magicaltrout> I think its that
<magicaltrout> but we've been talking and I've been messing around with ideas
<stub> cory_fu: leadership.changed will always be set the first time leader_set is called, not always set the first run at the start.
<cory_fu> stub: Sure, but in that case it's guaranteed that you'll end up with both leadership.changed and leadership.is_leader at the start of the hook at the same time
<cory_fu> So at least once, it could potentially run in the wrong order
<cory_fu> And since changing the values would then not update leadership.changed (since it would already be set), the @when('leadership.changed') handler would not run again after the leader settings were updated
<stub> cory_fu: I don't follow. The first handler runs, guarded by @when('leadership.is_leader'). It sets stuff, turning on leadership.changed. Then the other hook runs.
<cory_fu> stub: Oh, I misunderstood
<magicaltrout> okay so can we try and agree on what everyone thinks will work
<magicaltrout> then we can test it in reality
<magicaltrout> https://gist.github.com/buggtb/a96f3ea9d1044760b642#file-gistfile1-txt-L111
<magicaltrout> if I'm following, leader_set should run
<magicaltrout> then on the leader I should get leadership.changed and it update and restart?
<cory_fu> stub: However, I think what I was saying was still true, because for the very first hook run, https://git.launchpad.net/layer-leadership/tree/reactive/leadership.py#n41 is going to return an empty dict
<cory_fu> magicaltrout: Yes.  I'm concerned that for the very first hook run, you could have leadership.changed set from the start of the hook, which could cause the order of change_leader and update_master_config to be undetermined
<magicaltrout> okay i'm gonna refresh and see how knackered it is
<cory_fu> It would be correct for the following hook runs, but that first run seems like it will hit the edge case
<stub> cory_fu: Right, but there are no keys in an empty dict so we have nothing to iterate over and the flag remains unset
<stub> (previous is an empty dict, current is an empty dict)
<cory_fu> stub: Oh yeah!  Great
<cory_fu> Ok, so I'm back to having no idea why update_master_config is running before change_leader in update_status for magicaltrout
<magicaltrout> well i'll run this clean one with code I know what it looks like and we can argue about it some more
<jamespage> beisner, https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync status atm
<jamespage> I submitted a few rechecks...
<beisner> jamespage, ok cool.  the queue is pretty clear atm.
<HollyRain> hi! I've started to use LXD. I would want to know whether Juju is integrated or is going to be integrated with LXD?
<magicaltrout> LXD!!!! woop
<magicaltrout> HollyRain: from an unofficial POV, 2.0 will include LXD support
<HollyRain> magicaltrout, how to install that beta?
<magicaltrout> you need to wait a few days or get Xenial installed and compile Juju lxd-container-type branch
<cory_fu> stub: Did you see https://github.com/juju-solutions/charms.reactive/pull/56
<stub> cory_fu: yes. might simplify some things here.
<stub> Are java subordinates rather than java layers the future ?
<lazyPower> It seems the easiest way to abstract that without maintaining many copies of a charm that vary only by layers. We used it in layer-logstash and it works really well
<lazyPower> granted we have only tested with kwmonroe's openjdk subordinate
<stub> cool. I guess even I need to allow people to select between OpenJDK and Oracle, unless a layer supports both.
<beisner> jamespage, on your charm-odl-controller review, the single charm deploy fails with http://pastebin.ubuntu.com/15268096/   (Failed to connect to nexus.opendaylight.org port 443: Connection timed out).   We'll need to file an RT to open up access to that.
<jamespage> beisner, oh wait - thers is an amulet override for that
<lazyPower> interface:java makes that pretty intuitive out of the box. you gate on java.ready
<stub> A layer may still be useful, that sticks in OpenJDK until a subordinate is added to override it.
<lazyPower> true statement
<beisner> jamespage, but that's just:  juju deploy XYZ
<jamespage> beisner, ah right I see
<jamespage> hmm
<stub> Someone please write an Oracle java layer so I don't have to :-P
<stub> c/layer/subordinate
<magicaltrout> you get to make use of the fancy new license acceptance stuff :P
<kwmonroe> stub, we've got layers for openjdk, ibmjdk, and zulu8 (all providing subordinates).  none are promulgated yet, but they're all close.
<stub> Not me. Last time I tried that the lawyers told me to get knotted. Someone else's turn.
<magicaltrout> lol
<kwmonroe> stub, i wouldn't write a layer that just sticks in openjdk.  i'd use a layer option instead.. if/when a subordinate comes along, it should do the update-alternatives thing and switch you over.
<kwmonroe> layer option meaning this.. just stick in openjdk or whatever in the package section: https://github.com/juju-solutions/layer-ubuntu-devenv/blob/master/layer.yaml
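kwmonroe's approach, roughly: the consuming charm's layer.yaml hands a package list to the apt layer, so the default JDK rides along without a dedicated java layer. A sketch modeled on the layer-ubuntu-devenv example he links (the package choice is illustrative):

```yaml
# layer.yaml -- sketch following the layer-ubuntu-devenv example above
includes:
  - layer:basic
  - layer:apt
options:
  apt:
    packages:
      - openjdk-8-jdk
```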
<stub> was thinking of an option to specify the default, but can't think of anything that could be a default except OpenJDK (I don't know zulu or ibm)
<stub> oh, right.
<stub> 'cept I need openjdk 8 under trusty, which means a PPA
<beisner> jamespage, i shall raise that rt now
<kwmonroe> yeah stub, i'll update layer-openjdk to add ppa:openjdk-r/ppa if someone wants java8 installed on trusty.
<magicaltrout> awww F**$"£$%(IG race conditions
 * magicaltrout goes in search of some food and alcohol to get over the annoyance
<stub> kwmonroe: No rush - I'll probably have Xenial released before I get to refactor Cassandra into reactive
<kwmonroe> heh, right on.
<stub> kwmonroe: And now I think of it, the openjdk8 ppa needs to be a config option so it can be overridden at the sites without network egress.
<kwmonroe> yeah stub, probably need a generic ppa config opt regardless of the version.. blank by default, but document setting it for restricted network envs or for getting 8 on trusty.
<stub> I'm using the charmhelpers/apt layer install_sources in Cassandra, since it is 'standard'
<HollyRain> I've a container with Xenial, where is the info. to compile Juju lxd-container-type branch?
<HollyRain> magicaltrout, https://github.com/juju/juju do I compile the last release?
<HollyRain> there is juju-2.0-beta1 https://github.com/juju/juju/releases
<cory_fu> HollyRain: http://marcoceppi.com/2016/03/testing-juju-without-wrecking-juju/ has some useful info
<cory_fu> HollyRain: There is also a docker container that lets you run the latest juju w/o building it yourself, if you're ok with running inside docker
<HollyRain> cory_fu, thanks!
<roryschramm> hi, I'm having a problem deploying openstack to power8 compute nodes. I'm using pre-release juju 1.25.4 due to bug 1532167. A fix was released for it but I can't find it in the ppa's and it's preventing me from deploying
<mup> Bug #1532167: maas bridge script handles VLAN NICs incorrectly <addressability> <maas-provider> <network> <juju-core:Fix Released by frobware> <juju-core 1.25:Fix Committed by frobware> <https://launchpad.net/bugs/1532167>
<roryschramm> juju cant find the ppc64el packages to push to the power nodes after juju add-machine
<kwmonroe> roryschramm: is deployment complaining about a specific package not being available for ppc64le, or is it that the ppc64le OS image isn't available in your maas setup?  if the latter, you might have more luck asking in #maas.
<roryschramm> its complaining about the package being unavailable
<roryschramm> i hav the deb files for doing a manual install but i have no idea how to get juju to use those packages for power
<kwmonroe> roryschramm: what charm are you trying to deploy?
<roryschramm> i was just trying to do juju add-machine to get the host into juju
<roryschramm> plan is to put nova-compute on the power nodes
<roryschramm> deploying power via maas gui is working fine. just not through juju
<Gil> trying to deploy nova-compute to openstack liberty, get in syslog:  "AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds"
<kwmonroe> sorry roryschramm, but i'm not sure what's causing add-machine to fail.  beisner, i see you answered an openstack/p8/juju question here: http://askubuntu.com/questions/677199/can-bundle-https-jujucharms-com-openstack-base-39-be-used-for-ppc64el-environm -- any pointers for roryschramm?
<roryschramm> well i think the problem is that I'm using 1.25.4 juju-core. However that code is not available in a ppa currently, so juju is failing when deploying to power. for x86 it's just pushing the local x86 installed juju code.
<jrwren> roryschramm: is the controller on power as well? did you deploy that with --upload-tools then? i'd think add machine would use the tools from controller, but I do not know.
<roryschramm> the controller is on x86. I can deploy to x86 nodes just fine. Yes I deployed with upload tools.
<roryschramm> is it possible to juju bootstrap --upload-tools for multiple architectures? ie upload 1.25.4 juju tools for x86 and ppc64el?
<magicaltrout> some days I hate my clients "Views are getting dropped randomly in the production server."
<magicaltrout> now i'm not a redshift administration expert, but I severely doubt redshift is randomly dropping views without user action
<beisner> kwmonroe, i've just regained access to ppc64el hw in our openstack charm testing lab in maas, but haven't exercised 1.25.[3-4] against it yet.  we plan to late this week, into next week.
<mux> can anyone tell me if there's a list of public charms that have been updated to use the new 'storage' subsystem?
<mux> as far as I can tell the only one out there is cs:~axwalk/postgresql
<mux> I would love to be wrong
<aisrael> mux: I have an example charm that demonstrates some of the storage functionality (though it's not full featured): https://github.com/juju-solutions/storage-demo
<mux> aisrael: nice, checking it out now
<bbaqar> Anyone got a working lxd bundle for trusty?
<marcoceppi> bbaqar: what does that mean?
<marcoceppi> cory_fu arosales https://jujucharms.com/big-data
<cory_fu> Awesome!
<cory_fu> kwmonroe: ^
<rick_h__> cory_fu: kwmonroe ty for the patience on that.
<magicaltrout> ooh that looks snazzy
<cory_fu> We still need our elephant hats.  :p
<rick_h__> good luck :P
<arosales> marcoceppi: cory_fu  \o/
<cory_fu> ha
<arosales> thanks urulama !
<bbaqar> marcoceppi: So the bundle for openstack-lxd available on the charms store (https://code.launchpad.net/~openstack-charmers-next/charms/bundles/openstack-lxd/bundle) pulls the wily charms
<cory_fu> rick_h__, marcoceppi: So cards are production now?  Is there a UI for generating them?
<bbaqar> i need to run it with the trusty charms
<marcoceppi> bbaqar: ah, you can't; lxd is only supported in wily for nova
<urulama> arosales: you're welcome :)
<marcoceppi> cory_fu: no idea
<rick_h__> cory_fu: cory_fu https://jujucharms.com/community/cards
<rick_h__> oops
<marcoceppi> rick_h__ cory_fu so we should start killing off cards.juju.solutions - I'll start by transparently sending requests to this service now
<rick_h__> marcoceppi: awesome
<marcoceppi> rick_h__: mind if I mail the list about https://jujucharms.com/community/cards ?
<rick_h__> marcoceppi: I'm working with urulama and team to do an overall email on the release
<marcoceppi> kk
<rick_h__> marcoceppi: as the multi-series charms should work now as well
<rick_h__> marcoceppi: e.g. upload multi-series, deploy on juju 1.25
<magicaltrout> cards?! blimey you guys have so much stuff hidden
<rick_h__> marcoceppi: so there's other stuff to note people on
<marcoceppi> nice
<magicaltrout> +away
<marcoceppi> magicaltrout: we just released it!
<magicaltrout> that explains that then.... :P
<bbaqar> marcoceppi: Are you sure because the nova-compute charm allows lxd as the virt-type and https://jujucharms.com/u/openstack-charmers-next/lxd/trusty/ exists
<magicaltrout> thats great when I get Saiku stabilised, dump that card into the wiki, website & splash screen
<rick_h__> marcoceppi: exactly
<rick_h__> oops, magicaltrout exactly
<marcoceppi> bbaqar: I've just been told it's wily, you might want to check with zuul coreycb or beisner who are still online
<marcoceppi> bbaqar: or email the juju list: juju@lists.ubuntu.com
<bbaqar> Thanks
<coreycb> rockstar, I don't see zul in here but maybe you know the answer for bbaqar
<zul> hi
<rockstar> bbaqar: afaik we don't support lxd on trusty just yet.
<zul> no we dont yet
<rockstar> "trusty" in that charm name is merely a placeholder.
<beisner> hi bbaqar, keep in mind that the "-next" charms track the current development versions of the charms.
<beisner> bbaqar, if you've not seen this yet, pasting for reference:  https://jujucharms.com/u/openstack-charmers-next/openstack-lxd/bundle
<bbaqar> Okay thanks guys. I appreciate your help.
<HollyRain> with juju, is it necessary to work directly with systemd, or is it handled automatically?
<HollyRain> I want my service to be started/stopped from juju, and restarted automatically if it fails, if that's possible
<magicaltrout> HollyRain: i did something outside of systemd for that with the help of the charmers
<magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L76 start gets called everytime and it just checks to see if its running
<magicaltrout> so every 5 minutes if its down it'll get restarted
<magicaltrout> if you have systemd kicking around, you could probably do something similar where you check the status
<magicaltrout> and if its down, kick it
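The pattern magicaltrout describes boils down to a check-and-kick helper driven by the periodic update-status hook. The reactive decorators are shown as comments and the names are illustrative; only the plain-Python logic is the point:

```python
# Sketch of the "check every update-status, kick it if it's down" pattern
# from layer-pdi. Juju fires update-status on every unit periodically
# (every ~5 minutes by default), so a down service gets restarted soon.

# @hook('update-status')       # fires periodically on every unit
# @when('pdi.installed')       # hypothetical gate: only after install
def ensure_running(is_running, start):
    """Start the service only if it is down; return True if we started it."""
    if is_running():
        return False   # healthy: this update-status run is a no-op
    start()            # down: kick it, as magicaltrout describes
    return True
```

If systemd is available, is_running/start can simply shell out to `systemctl is-active` and `systemctl start` instead.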
<HollyRain> thanks
<urulama_> marcoceppi: one more thing ... you can publish multiseries charms now and old clients will work with them :) there's been a regression if it's gated - so make it public, but we'll fix that with the new release
<marcoceppi> urulama_: YESSS
<urulama_> marcoceppi: https://jujucharms.com/u/uros-jovanovic/wpnew and https://api.jujucharms.com/charmstore/v5/~uros-jovanovic/wpnew/archive/metadata.yaml
<magicaltrout> this seems to work cory_fu https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L129 I think I was getting a race condition which was causing the rolling restart; now I schedule a restart
<magicaltrout> that way it waits for PDI to be installed
<cory_fu> magicaltrout: A race condition with what?  Any chance I could see the reactive log lines from your run that works?
<cory_fu> Oooh
<cory_fu> magicaltrout: Yeah, your update_master_config really ought to have @when('pdi.installed') on it.  Do you ever want to update the config before it's installed?
<magicaltrout> well thats part of the problem cory_fu
<magicaltrout> from what I could tell in the logs
<magicaltrout> with @when('leadership.is_leader', 'leadership.changed')
<magicaltrout> its run before the PDI package is installed
<magicaltrout> and if you dump a @when('pdi.installed')
<magicaltrout> it doesn't execute but then leadership.changed isn't recalled
<magicaltrout> leader elected -> leadership changed -> pdi and java installed
<magicaltrout> if i defer the restart though of course by then the package exists
<cory_fu> magicaltrout: Ok, what I would recommend is putting the @when('pdi.installed') on change_leader.  That way, it will only run after PDI is installed, and would then immediately trigger  the update_master_config.
<cory_fu> Well, actually, the solution you came up with is ok, too
<cory_fu> Except that you're potentially rendering the config before PDI is installed.  I trust the install won't overwrite the config?
<magicaltrout> na it writes else where, although now you mention it i can see yours working without the deferal
<magicaltrout> woop, even carte is up and running which has failed miserably all afternoon
 * magicaltrout chucks a job at it before testing the slave logic
<magicaltrout> http://52.29.99.174:9999/kettle/status/
<magicaltrout> cluster/cluster
<magicaltrout> look at that cory_fu
<magicaltrout> working and everything
<cory_fu> Nice
<cory_fu> :)
<aisrael> tvansteenburgh: Are you still here?
<tvansteenburgh> yup
<aisrael> tvansteenburgh: Could you kick off a test for me? https://code.launchpad.net/~brad-marshall/charms/trusty/memcached/add-monitors-relation/+merge/276958
<tvansteenburgh> aisrael: you can do it from here if you're logged in http://review.juju.solutions/review/2354
<aisrael> tvansteenburgh: Oh sweet, I missed that. Thanks!
<tvansteenburgh> aisrael: np :)
<tvansteenburgh> aisrael: fyi joyent queue is empty
<magicaltrout> woop self registering slaves
<magicaltrout> cory_fu: on https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L54
<magicaltrout> why would I get
<magicaltrout> TypeError: check_running() missing 1 required positional argument: 'java'
<magicaltrout> haven't i included an argument?
<cory_fu> So that error is perhaps misleading.  It means that, at runtime, the function was not passed the argument (java) that it expects.  However, it looks like the java.ready state ought to provide that param, so I don't know why you hit that
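cory_fu's explanation, that a handler gated on a relation state like java.ready is supposed to receive the relation instance as a positional argument, can be modeled with a toy dispatcher. This is a sketch of the behaviour, not the real charms.reactive internals:

```python
# Toy model of how charms.reactive feeds relation instances to handlers:
# states set by an interface layer carry the relation object, and the
# dispatcher passes one positional argument per such state. If a state is
# somehow present without its relation instance, the handler is invoked
# with too few arguments, producing exactly the
# "missing 1 required positional argument" TypeError seen above.
_state_values = {}  # state name -> relation instance (or None)


def set_state(name, value=None):
    _state_values[name] = value


def dispatch(handler, *state_names):
    # Supply the stored relation instance for every gating state that has one.
    args = [_state_values[s] for s in state_names
            if _state_values.get(s) is not None]
    return handler(*args)
```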
<magicaltrout> i only hit it when I added the when_not
<magicaltrout> but I don't want a restart and check_run to run at the same time
<cory_fu> magicaltrout: Can you give me the output of `charms.reactive --format=yaml get_states` from that unit?
<magicaltrout> {}
<cory_fu> You have to run it from the charm directory
<magicaltrout> boo
<cory_fu> juju run --unit pdi/0 'charms.reactive --format=yaml get_states'
<cory_fu> That would work, but using `juju ssh` to do the same would not
<magicaltrout> although helpfully you've answered a question I was going to ask earlier :P
<magicaltrout> {leadership.set.hostname: null, leadership.set.public_ip: null, leadership.set.username: null, pdi.installed: null}
<cory_fu> (Unless you first cd to the charm dir)
<cory_fu> Hrm.  That seems like the java state isn't set at all, so no idea how that handler ran
<magicaltrout> well
<magicaltrout> funny you say that
<magicaltrout> juju status says
<magicaltrout> hook failed: "java-relation-changed" for java:java
<cory_fu> Oh, well, that makes sense.  It failed during that hook, so the state wasn't flushed
<cory_fu> But, again, I don't see how that state was set without a value
<cory_fu> Also, btw, the way you're using those gating states is going to cause your check_running handler to run more times than you expect (possibly)
<magicaltrout> which is probably whats happening
<cory_fu> Because it could run once before restart() and then again after restart finishes since the pdi.restarting state changed
<magicaltrout> well if check_running runs >1 thats not a problem
<magicaltrout> if its up it does nothing
<magicaltrout> if its down it kicks it
<cory_fu> I think you would be better served by changing check_running and restart to both use the pdi.restart_scheduled state
<magicaltrout> that shouldn't cause an issue
<cory_fu> Fair enough
<magicaltrout> aww wtf, I took that out because you told me to use @when('pdi.installed') :P
<cory_fu> Oh, well, I still saw it in there
<cory_fu> But yeah, if you're going to use that instead then... um
<magicaltrout> well its still in the code, its not called
<magicaltrout> well I don't care, I'll revert to using restart_scheduled
<magicaltrout> if it works it works, I just need to trap 2 methods both trying to do the same, but slightly different thing
<cory_fu> magicaltrout: Where is the java.updated state coming from?
<magicaltrout> dunno that was something we did in ghent
<cory_fu> I don't think that state exists any more.  I don't see it in the current java interface
<magicaltrout> k
<cory_fu> https://github.com/juju-solutions/interface-java/blob/master/provides.py
<magicaltrout> oh actually
<magicaltrout> I think it was a kwmonroe "todo"
<magicaltrout> so we boiler plated it
<magicaltrout> cory_fu: if I do this: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L52 its just going to give me the same error isn't it?
<magicaltrout> as that was the cause and effect from before
<magicaltrout> so can I check the state from within the method?
<cory_fu> magicaltrout: I don't know.  I honestly have no idea how you got that error
<cory_fu> But the problem isn't with the @when_not, it's with that java.ready state.  That's the one that should be giving a relation instance and it's not
<magicaltrout> hmm
<cory_fu> magicaltrout: My recommendation would be to put a breakpoint in one of your other handlers that (tends to) get triggered before that one and do some get_state checking in that
<cory_fu> Unfortunately, my wife is saying I have to call it a day now.  :)
<magicaltrout> aye
<magicaltrout> hehe
<magicaltrout> well someone needs to have a life!
<cory_fu> Have a good evening!
<magicaltrout> you too
#juju 2016-03-03
<jamespage> gnuoy, could you take a run through the verified of theses: https://review.openstack.org/#/q/status:open+topic:charmhelpers-resync
<gnuoy> sure
<jamespage> gnuoy, n-gateway and n-openvswitch have relations to n-api right?
<gnuoy> right
<gnuoy> jamespage, I'm supposed to +1 the workflows on those as well, right?
<jamespage> gnuoy, yeah so +2 review and +1 workflow
<gnuoy> ack
<jamespage> gnuoy, beisner: I've just raised a project-config change that should fix bug tracking for the charms as well - its currently not functional
<jamespage> gnuoy, lets book some time to discuss v3 and get it swinging
<jamespage> I have some ideas of what might be up...
<gnuoy> jamespage, ta
<jamespage> gnuoy, just looking at these - https://review.openstack.org/#/q/status:open+topic:bug/1545886
<mup> Bug #1545886: need to support vni_range option for ml2 plugin <openstack> <sts> <neutron-api (Juju Charms Collection):In Progress by xtrusia> <neutron-gateway
<mup> (Juju Charms Collection):In Progress by xtrusia> <neutron-openvswitch (Juju Charms Collection):In Progress by xtrusia> <https://launchpad.net/bugs/1545886>
<jamespage> I think that in fact we only need to set those in the plugin for neutron-api, not all of the agent locations but I'm not 100% sure
<gnuoy> jamespage, I would expect to only have to set then for neutron-api
<jamespage> gnuoy, yeah - I think thats true
<jamespage> gnuoy, let me dig a little
<gnuoy> jamespage, Do i still need to mark bugs fix committed or do I sit back and let gerrit do it?
<jamespage> gnuoy, for now please do manually
<jamespage> gnuoy, hopefully my proposed change will fix that
<gnuoy> ack
<jamespage> gnuoy, beisner: http://stackalytics.com/?metric=commits&project_type=all&company=canonical&release=all
<gnuoy> nice
<jamespage> gnuoy, and - http://stackalytics.com/?metric=commits&project_type=all&release=all
<jamespage> commits not reviews but meh - we don't get metrics on those
<jamespage> gnuoy, lets do it
<jamespage> gnuoy, http://paste.ubuntu.com/15272811/
<tinwood> gnuoy, jamespage: is there an easy way to determine the github repo for a OS charm?  e.g. in my case I'm looking for keystone.
<tinwood> I'm guessing it's this one: https://github.com/openstack-charmers/charm-keystone
<sparkieg`> anyone else seeing invisible text in the Search field on https://jujucharms.com/docs/ ?
<evilnickveitch> sparkieg`,  yes, it has been an issue for a while now - I believe it is being worked on
<sparkiegeek> evilnickveitch: ok
<evilnickveitch> it is annoying! there are too many pages of docs not to have working search!
<sparkiegeek> evilnickveitch: +1
<sparkiegeek> evilnickveitch: the search is ... non intuitive too
<Muntaner> hello guys! I'm having a problem. My environment was going well, but all of a sudden when I try to deploy a new service, it stays in the state "Waiting for agent initialization to finish" forever. How can I diagnose?
<Muntaner> (I use Juju on an openstack all-in-one installation)
<jamespage> gnuoy, http://paste.ubuntu.com/15272863/
<jamespage> tinwood, yes it is
<tinwood> thanks jamespage
<jamespage> Muntaner, take a look in /var/log/juju on machine 0
<Muntaner> jamespage, too late! I destroyed the environment, bootstrapped it again and it works. Weird
<jamespage> gnuoy, hey - when you did the last release of charms to stable branches, I think you did some 'at the point of release' type changes
<Muntaner> jamespage, I think it is related to the security groups of openstack... when I delete and create a VM, the sec group created for that VM remains and does not get deleted in openstack
<gnuoy> jamespage, update charmhelper location, that sort of thing?
<jamespage> gnuoy, yah
<gnuoy> jamespage, that + update amulet branches to point at stable branches
<jamespage> tvansteenburgh, dpb1_: I know we're not really focussing on deployer any longer but https://code.launchpad.net/~james-page/juju-deployer/git-branch-support/+merge/287929 would help us out in the short term...
<beisner> jamespage, woah, the stackalytics thing!
<marcoceppi> jamespage: love it
<tvansteenburgh> jamespage: i can get that deployer merge released today
<jamespage> tvansteenburgh, that would be great - are you OK with the syntax?
<jamespage> I think that longer term we'll just be deploying from the charm-store but this works us through the interim...
<tvansteenburgh> jamespage: i just merged this yesterday https://code.launchpad.net/~niedbalski/juju-deployer/add-refspecs/+merge/287829
<jamespage> beisner, I have a few reviews that I'd like the full amulet suite run against - is that a manual kickoff atm?
<tvansteenburgh> jamespage: but it looks like your change can coexist with that
<jamespage> tvansteenburgh, it would appear so
<tvansteenburgh> jamespage: so yeah, lgtm. will let you know when it's released
<beisner> jamespage, it is.  i'll be focusing on the gate job integration and the artifact public linking now that we're in a good place.  will send you info shortly.
<jamespage> tvansteenburgh, awesome - thankyou
<jamespage> beisner, I fixed up the single deploy for swift-storage - https://review.openstack.org/#/c/287156/
<jamespage> if you'd like to review, that will unblock other reviews inflight I think
<beisner> jamespage, ah good.  i suspect there may be a few of those.
<jamespage> beisner, yes...
<jamespage> my resync flushed out a few...
<beisner> jamespage, was it the setup_storage bit that was tripping?
<jamespage> beisner, basically with no explicit config, there are no block devices found, so the perms changes at the end of setup_storage error out as /srv/node never gets created...
<beisner> yah ok
<beisner> jamespage, so this is valuable to eek out.  i think each charm should deploy to a blocked state instead of an error state, when deployed with its defaults.
<jamespage> beisner, and that would be a valid test ?
<jamespage> beisner, ok that does make sense...
<beisner> jamespage, so it tests the user experience a bit, in that if i've just added a charm, and it has deployed before i add configs, i don't have to jump through resolved/retry manual steps.
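beisner's suggestion, sketched: rather than letting a hook error out when required config is missing, the charm reports a blocked workload status. status_set stands in for charmhelpers.core.hookenv.status_set, and the logic is illustrative, not the actual swift-storage fix:

```python
# Sketch: with default config there may be nothing to act on (e.g.
# swift-storage finding no block devices), and the charm should surface
# a 'blocked' status instead of erroring so the user never needs a
# resolved/retry dance after setting the config.
def assess_status(block_devices, status_set):
    """Report workload status; return True when the unit is usable."""
    if not block_devices:
        status_set('blocked', 'no block devices found; set block-device config')
        return False
    status_set('active', 'Unit is ready')
    return True
```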
<tvansteenburgh> frankban: when you have time can you add juju-deployer-0.6.4 (on pypi) to the juju/stable ppa?
<frankban> tvansteenburgh: oh sorry yes
<sparkiegeek> tvansteenburgh: what makes it stable? how much testing has it undergone?
<tvansteenburgh> sparkiegeek: aside from my own, and the testing of the patch submitters, none
<sparkiegeek> tvansteenburgh: can I suggest proposed is a better place for it, at least for now?
<tvansteenburgh> sparkiegeek: sure. frankban, juju/proposed instead please
<tvansteenburgh> which i guess is actually juju/devel?
<dpb1_> tvansteenburgh: is lp:python-jujuclient not getting any updates?  ahasenack has had ppas building from there forever.
<dpb1_> tvansteenburgh: lp:juju-deployer rather.
<tvansteenburgh> dpb1_: yeah that's where updates are happening, which ppas are getting updates?
 * dpb1_ gets links
<dpb1_> https://code.launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient and https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
<tvansteenburgh> dpb1_: ok, that's good to know. so if ppl want latest tell them to install from there?
<dpb1_> yup
<dpb1_> tvansteenburgh: it's where I pull from on all my machines
<tvansteenburgh> dpb1_: do those builds ever graduate to juju/stable?
<frankban> tvansteenburgh, dpb1_: sounds great, can we just copy those when a deployer needs to be updated in stable ppa?
<sparkiegeek> note the failures in https://launchpadlibrarian.net/245134994/buildlog_ubuntu-vivid-amd64.juju-deployer_0.6.1~bzr165~48~ubuntu15.04.1_BUILDING.txt.gz
<dpb1_> frankban: tvansteenburgh: there is no workflow: it's always been just ping frank or marco to copy to /stable
<sparkiegeek> the test is trying to reach out to reviews.openstack.org and failing
<marcoceppi> well, I never copy binaries to stable, so that's mostly a frank thing
<marcoceppi> I prefer rebuilding
<tvansteenburgh> dpb1_: yes, it's always been a "by request" thing
<frankban> marcoceppi: if we already have a working deb that has been tested, why not just copy it over?
<beisner> fwiw, with openstack package flow, we build for staging, then rebuild for proposed.  then it's a binary copy to release it.  that way your release binary is identical to the thing that was tested in proposed.
<beisner> also, woot! for that deployer daily build ppa.  :-)
<sparkiegeek> beisner: note the FTBFS errors though
<tvansteenburgh> what is FTBFS
<beisner> fail to build from source
<tvansteenburgh> ack, i will address that. i didn't realize the build host didn't have egress https access
<tvansteenburgh> i did run all the tests before releasing
<beisner> tvansteenburgh, ah yes i see jorge's added test is the thing there.
<tvansteenburgh> beisner: yeah, i'm gonna ask him to fix
<beisner> tvansteenburgh, yah, not sure that test should actually reach out.
<tvansteenburgh> agree, asked for a mock
<tvansteenburgh> or just a straight unit test
<beisner> right.  anyway, cool stuff.   we're all about the refspec now.  thanks for the bits.
<magicaltrout> http://www.meteoriteconsulting.com/spinning-up-pentaho-data-integrations-quickly-with-juju/ <- those card things fsck-ing rock
<lazyPower> haha!
<lazyPower> yeah they do!
<magicaltrout> better fix the icon
<beisner> jamespage, should we take that swift-storage fixup through the full deal before +2?
<jamespage> beisner, meh
<jamespage> beisner, its a fairly innocuious change
<beisner> jamespage, ack, i'm good with it
<magicaltrout> there's a bug in the cards lazyPower, who's the boss?
<lazyPower> magicaltrout - https://github.com/CanonicalLtd/jujucharms.com/issues/ <- that board is
<magicaltrout> cool
<beisner> icey_, your test update expects one or more ceph-osd processes, whereas we were previously expecting exactly 2.  is that intentional?
<icey_> beisner: yes, although it's not ideal; to make the tests repeatably passable in a given deployment, it has to either do that or be able to expect 2 OR 3 OSD processes to be running
<beisner> we could plumb that helper to take a list type, then specify [2, 3]
<beisner> bahh new keyboard still improving my experience here ;-)
<icey> beisner: same, I think my fingers are *nearly* used to this keyboard :); I like the idea of that helper being able to take a list
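The helper change they're sketching could look like this; count_ok is a hypothetical name, and the real helper lives in the charmhelpers amulet test utilities:

```python
# Sketch of the amulet process-count check accepting either an exact
# expected count or a list of acceptable counts, so a deployment with
# 2 OR 3 ceph-osd processes can pass the same test.
def count_ok(actual, expected):
    """True if the observed process count matches the expectation."""
    if isinstance(expected, (list, tuple)):
        return actual in expected
    return actual == expected
```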
<icey> beisner: what's the timeline to get a openstack contrib charmhelper change landed?
<lazyPower> if its anything like charmhelpers.core.host mods - > a month
<beisner> icey, lazyPower - yah the core c-h stuff is likely to be more sensitive than a test helper.  we can usually land test helpers quickly.
<lazyPower> i'm just salty that i'm still asking for help with the same merge and received nothing in response, thats all :)
<lazyPower> https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers/+merge/285044 <-- really needs some love if anyone has the spare cycles to go +1 that for me
<beisner> lazyPower, ha!  where be it?
<jacekn> is https://bugs.launchpad.net/charms/+bug/1538573 on somebody's radar? It should be trivial review now. Do I need to do anything for it to appear in the review queue?
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1538573>
<rick_h__> cory_fu: do you recall the name of the dev from neo4j at the charmer's summit for merlijn's email to the list?
<magicaltrout> just google, loud neo4j dev who is canadian but lives in sweeden
<magicaltrout> -e
<rick_h__> lol
<rick_h__> first result: "Ask HN: Who is hiring? (January 2016)"
<lazyPower> steven
<lazyPower> steven fisher i think?
<lazyPower> i follow him on twitter
<magicaltrout> steven.baker@neotechnology.com,
<lazyPower> thats it
<lazyPower> srbaker
<rick_h__> <3 ty
<magicaltrout> thanks to jcastro for not bcc'ing the emails in his mailshot;)
<rick_h__> <3
<rick_h__> found it and the charm
 * marcoceppi sighs
 * magicaltrout cracks out the ear plugs and hides the alcohol
<jcastro> the sign up sheet was public, technically, that's like a bcc. /me runs.
<lazyPower> magicaltrout that takes all the fun out of thursday
<magicaltrout> hehe
<jcastro> not like a bcc I should say
<icey> anybody have issues on xenial running `charm build`? I just got `TypeError: write() argument 1 must be unicode, not str` the moment it got to the first interface to process
<rick_h__> jcastro: what's the topic for the next charm hangout?
<rick_h__> jcastro: can/should I join?
<marcoceppi> rick_h__: it's an open office hours, all are welcome
<jcastro> rick_h__: it's almost always open ended
<jcastro> if you have something to show, show
<rick_h__> jcastro: can you toss me an invite to the calendar and I'll show up please?
<lazyPower> icey - errr
<lazyPower> icey - are you using charm from marco's ppa?
<icey> probably not, let me dig up that email; just installed ubuntu on this machine on Monday so it's pretty fresh :)
<lazyPower> ok, i'm not sure that will fix anything
<lazyPower> but its worth a shot to get started
<lazyPower> it has all the road-to-2.0 fixes in there
<marcoceppi> lazyPower: road-to-2.0 is now master
<marcoceppi> as an fyi
<jacekn> kjackal: hey. Thanks for feedback on collectd layer. I think it should be ready to go now: https://bugs.launchpad.net/charms/+bug/1538573
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:In Progress> <https://launchpad.net/bugs/1538573>
<jcastro> rick_h__: done
<lazyPower> well
<lazyPower> its built from FYI-Master apparently
<lazyPower> :P
<icey> lazy I'm on 1.11.1, do you have a link to marco's PPA?
<rick_h__> jcastro: <3 ty much
<kjackal> jacekn thank you
<icey> nevermind lazyPower: ppa:marcoceppi/charm-tools-2.0
<lazyPower> https://launchpad.net/~marcoceppi/+archive/ubuntu/charm-tools-2.0
<lazyPower> ye
<icey> BAM nope
<icey> same exact error
<icey> given that xenial is targeting python3, shouldn't charm tools see about updating :)
<marcoceppi> icey: i got you
<marcoceppi> https://github.com/juju/charm-tools/pull/119
<marcoceppi> icey: alpha1 build is being released as I type
<icey> three cheers for marcoceppi!
<lazyPower> icey - its almost like we were expecting you :)
<jamespage> beisner, can you workflow +1 https://review.openstack.org/#/c/287156/ as well please
<lazyPower> we are devx, expect us
<lazyPower> jamespage o/ heyo
<jamespage> lazyPower, ola
<lazyPower> jamespage - friendly ping to get this back on your roadmap - https://code.launchpad.net/~dbuliga/charm-helpers/charm-helpers
<jamespage> lazyPower, yup
<jamespage> on my list still
<lazyPower> same bat time same bat channel next week?
<nagyz> hey guys a quick question
<nagyz> if I specify nodes in my juju-deployer config like this:
<nagyz> '0': constraints: name=xxx blahblah
<nagyz> should it always pick that node from MAAS?
<nagyz> because currently it looks like it ignores the name actually and just picks based on the tag
<lazyPower> nagyz - you're looking for tags
<lazyPower> --constraints="tags=bootstrap"   for example
<nagyz> yes, I know them, but in my juju-deployer config I'd like to pin the charms to nodes
<lazyPower> i'm not sure you can use names as a constraint
<lazyPower> afaik its only ever supported tags
<nagyz> it doesn't complain that it would be invalid... :)
<rick_h__> lazyPower: nagyz no, i think you need to add maas tags and use the tags as constraints
<nagyz> ok let me rewrite it and test it
<nagyz> cheers
<lazyPower> cheers nagyz
<nagyz> wait, one more thing: will the machine constraints I specify in the config override the constraints I specified when bootstrapping?
<lazyPower> yeah, nice thing for you to expect in 2.0
<lazyPower> you get bootstrap constraints, and you get model constraints
<nagyz> ok so I'm on the latest stable atm :)
<nagyz> so they get applied after the bootstrap constraints in 1.25?
<lazyPower> yeah if you bootstrap with --constraints=tags=bootstrap
<nagyz> so if I have tags=a and then in the juju-deployer config I have tags=b, will it pick a node that has a and b?
<lazyPower> all machines launched after the fact will try to launch with --tags=bootstrap
<lazyPower> you can clear it, or change it, its not catastrophic
<nagyz> yes, that's the behaviour we're seeing
<lazyPower> just a minor annoyance
<nagyz> so how could I pin different services to different tags? would that even work now then?
<nagyz> let's say I clear the global constraints after bootstrapping
<lazyPower> if you clear globals and have --constraints=tags=abc   per unit, they will apply those units constraints when the bundle gets deployed
<lazyPower> sorry, I mean per charm
<nagyz> I currently list the constraints per machine listed in the yaml - and then for each charm I explicitly bind it to a machine id like --to '5'
<nagyz> is that not a good thing?
<lazyPower> thats fine, it makes the bundle a bit brittle
<nagyz> (based on wolsen's tokyo demo yaml :P)
<lazyPower> but with how we're modeling networking, storage, etc. i'm sure we'll see more bundles like that start to crop up
<nagyz> ok let me give it a try
<nagyz> if I have globals set, they will ALWAYS override local constraints, right?
<nagyz> or are they getting an "AND" inbetween?
<magicaltrout> aahh cory_fu I knew there was a reason for me piping the hookenv stuff through leader_set..... whilst it will get set, if someone changes it, it wont trigger a restart
<cory_fu> magicaltrout: Ah, that's true
<cory_fu> @when_any('leadership.changed', 'config.changed.foo') would be useful there
<cory_fu> I should cut a release of charms.reactive so that's available
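The `@when_any` semantics cory_fu mentions (fire a handler if any one of the named flags is set) can be illustrated with a minimal stand-in for charms.reactive's dispatch; this is a sketch of the behavior, not the real library:

```python
# Minimal stand-in for the charms.reactive behavior discussed above:
# a handler decorated with @when_any runs when *any* named flag is set.
active_flags = set()

def when_any(*flags):
    def wrapper(fn):
        fn._when_any = flags  # record which flags gate this handler
        return fn
    return wrapper

def should_run(handler):
    return any(f in active_flags for f in getattr(handler, '_when_any', ()))

@when_any('leadership.changed', 'config.changed.foo')
def restart_service():
    return 'restarted'

assert not should_run(restart_service)
active_flags.add('config.changed.foo')   # a config change alone is enough
assert should_run(restart_service)
```

In the real library the flags are managed by the framework (leadership and config layers set them for you); the sketch only shows why one decorator covers both the leader-settings and config-change cases magicaltrout cares about.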
<aisrael> marcoceppi: tvansteenburgh: This item shouldn't be in the queue (it got merged yesterday): http://review.juju.solutions/review/2429
<nagyz> lazyPower, do you have a second?
<nagyz> lazyPower, it doesn't work as expected... :-)
<nagyz> this is the yaml that we're running with juju-deployer: http://pastebin.com/4BVd32L1
<nagyz> but for example haproxy gets put on machine "2", while the config says it should go in 3,4,5
<nagyz> what did we screw up? :-)
<lazyPower> thats...what?
<lazyPower> it clearly says -to '2'
<lazyPower> i mean - '3'
<nagyz> haproxy: (the very last) lists -3, -4, -5
<nagyz> yep
<nagyz> juju status shows (while the deployer is running): machine: "2"
<nagyz> plus it requests a bunch of machines from maas
<lazyPower> yeah something went awry here
<nagyz> not just the ones we've listed in the yaml
<tinwood> thedac, gnuoy: quick question: if I want to run an action inside a unit (juju ssh unit/n ; then "run an action") is there anything I need to set up re: ENV to get it to work?  I need to see the stacktrace that's blowing up on my keystone charm!
<lazyPower> tinwood - you can juju debug-hooks on that unit, then execute the action
<lazyPower> debug-hooks will trap the action tinwood
<tinwood> even for an action?
<lazyPower> nagyz - i'm thinking through this and its not obvious to me why this is the case :(
<tinwood> ok. will try.  ta!
<lazyPower> tinwood - yep, actions run in an anonymous hook context
 * thedac learned something too
<nagyz> lazyPower, I'm happy to try any suggestion.
<nagyz> the debug tells me this:
<nagyz> 2016-03-03 16:33:03 [INFO] deployer.import:  Deploying service haproxy using cs:trusty/haproxy
<nagyz> 2016-03-03 16:33:03 [ERROR] deployer.deploy: Service haproxy to be deployed with non-existent service 3
<nagyz> maybe that's why?
<lazyPower> its trying to co-locate with a service huh?
<lazyPower> remove the quotes around the numbers declaring the machines? (not even sure this will have any effect)
<nagyz> the problem is for every retry I need to destroy the env and then rebootstrap...
<nagyz> takes a while.
<lazyPower> i understand
<nagyz> what does it mean by non-existent service 3? I mean the machines are not "services", are they?
<lazyPower> you may have some success dropping that bundle on the store gui and inspecting machine view
<lazyPower> i think the 'string' is getting interpolated as a service name
<lazyPower> deploy supports the following syntax
<lazyPower> to: - 'wordpress/0'
<lazyPower> i think its getting confused and trying to colo with a service named 3, so it spins up a machine expecting a parent service to be deployed there
<nagyz> ok let me remove the quotes.. :)
<lazyPower> s/parent/principal
<jacekn> kjackal: so is setting charm review bug status to "New" something I should have done after my reply? Or you prefer for charmers to do it themselves?
<lazyPower> jcastro - move it to fix-committed
<lazyPower> jacekn ^
<kjackal> Thank you lazyPower
<lazyPower> we don't move status to anything other than to get them out of the queue after we've replied :)
<jacekn> alright, good to know thanks
<tvansteenburgh> lazyPower, nagyz: i don't think machine placement works for v3 bundles
<lazyPower> ah that would explain it too
<rick_h__> tvansteenburgh: lazyPower right, it was added in v4
<lazyPower> nagyz - might want to hold off on that re-test
<lazyPower> sounds like we've found the blocker
<rick_h__> lazyPower: how is it a v3 bundle? We don't do those in the store for a while any more?
<rick_h__> lazyPower: are we sure it's the old format?
<lazyPower> well this is a labeled bundle
<lazyPower> thats by definition the v3 format right?
<lazyPower> rick_h__ http://pastebin.com/4BVd32L1
<rick_h__> lazyPower: k, just checking
<rick_h__> lazyPower: right, remove that, dedent one level and try again?
<lazyPower> nagyz - the good news is the format change is super simple to make: delete that openstack: key at the top and fix the indentation now that we've removed the parent key, and you're in a v4 bundle
<lazyPower> nagyz - give a v4 bundle a go, and lets see if you get better results
<nagyz> oh that's how I started out
<nagyz> but I had to add the openstack: at the beginning to actually be able to make juju-deployer parse it...
<rick_h__> nagyz: juju-deployer should take a v4 bundle. The team updated it to use it
<nagyz> it was complaining about not getting any deployment names
<nagyz> that's why we added openstack: at the top
<rick_h__> nagyz: if you have a problem without it let us know and we can look into what's not happy. https://github.com/juju/charmstore/blob/v4/docs/bundles.md
<rick_h__> nagyz: ^ has the docs/description of the v4 vs v3 and what changed
<nagyz> looking at it.
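The v3-to-v4 change rick_h__ links to boils down to dropping the top-level deployment label and adding a machines section, which is what enables machine placement. A sketch of the two shapes, with the YAML shown as Python dicts and illustrative service names:

```python
# Sketch of the bundle format change discussed above (service names and
# constraints are illustrative). In v3 everything nests under a deployment
# label ("openstack:"); v4 drops that label and adds a "machines" section.
v3_bundle = {
    'openstack': {                       # deployment label: v3 only
        'services': {
            'haproxy': {'charm': 'cs:trusty/haproxy', 'num_units': 3},
        },
    },
}

v4_bundle = {                            # no wrapping label in v4
    'machines': {
        '1': {'constraints': 'tags=storage'},
        '2': {'constraints': 'tags=controller'},
    },
    'services': {
        'haproxy': {
            'charm': 'cs:trusty/haproxy',
            'num_units': 2,
            'to': ['1', '2'],            # placement refers to machine keys
        },
    },
}

assert 'machines' not in v3_bundle['openstack']      # v3 has no machines section
assert v4_bundle['services']['haproxy']['to'] == ['1', '2']
```

Note that, as tvansteenburgh explains further down, the machine keys in the bundle are labels within the bundle and are not guaranteed to match the machine numbers juju status reports.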
<jamespage> beisner, does uosci understand which pxc test to run yet?
<Gil> Can etcd and flannel be installed on the same physical node?  I want to put etcd & flannel on physical node 2, then add a unit of flannel to phys node 3.  Will it work ok?
<lazyPower> Gil - sure can
<lazyPower> Gil - which flannel charm are you using?
<tvansteenburgh> nagyz: what version of deployer are you using?
<beisner> jamespage, checking..
<lazyPower> brb
<beisner> jamespage, yes:  "Automatically selected test:  ./tests/10-deploy_test.py"
<jamespage> beisner, \o/
<jamespage> oh wait - I think it was an amulet test failure nm
<nagyz> tvansteenburgh, I've got it via pip install
<nagyz> as the stable ppa doesn't have the 0.4 package
<nagyz> (based on readthedocs there is 0.4, right?)
<tvansteenburgh> nagyz: 0.6.4 is latest on pypi
<tvansteenburgh> nagyz: if you're on 0.4.x then you probably don't have the v4 support
<nagyz> well I just did pip install
<nagyz> that I assume gets the latest :)
<nagyz> there is no -v or --version
<nagyz> ok I'm on 0.6.3
<nagyz> according to pip list :)
<tvansteenburgh> k, should be fine then
<nagyz> ok, changed the bundle to v4, running now
<nagyz> so what I specify as machine '0' in my bundle actually becomes '1', right?
<nagyz> I should just skip declaring '0'?
<nagyz> right now I have 0,1,2 declared with a storage tag, and according to juju status 0 is the juju bootstrap node, 1 became storage and 2 became a controller, which I only list for 3,4,5...
<tvansteenburgh> nagyz: the machine keys that you define in the machines section of your bundle do not necessarily map to actual machine numbers
<tvansteenburgh> nagyz: for that reason, yes, it is best to not use '0' in your machines definition
<tvansteenburgh> to avoid confusion
<nagyz> cheers
<jamespage> beisner, https://review.openstack.org/#/c/287082/ is looking better now as well
<beisner> jamespage, ready to land that?  lgtm
<jamespage> beisner, +1
<jamespage> can you do the honours?
<beisner> jamespage, yep
<jamespage> beisner, a little tidy here as well - https://review.openstack.org/#/c/287848/
<jamespage> just template rollups and removal
<jamespage> amazing how much old stuff we lug around....
<beisner> yah, that should be a safe landing too, jamespage
<jamespage> lovely
<lkraider> Can juju be deployed to third party account with external_id access?
<lkraider> (in AWS)
<aisrael> How long should it take for updated charms to be ingested into the store?
<lazyPower> longest case: an hour and 20 minutes
<lazyPower> lkraider - not sure what you mean
<lazyPower> external_id access?
<lkraider> @lazyPower - I have an IAM user that was granted access to another AWS account through a Role (assumeRole permission). To use that in aws_cli I need to set external_id in ~/.aws/config
<lazyPower> ah, i dont know that we support that. i would most def. poke the mailing list about it though
<lkraider> https://docs.aws.amazon.com/cli/latest/userguide/cli-roles.html#cli-roles-xaccount
<lazyPower> someone who's more versed in that specific case may be able to chime in and prove me wrong
<lkraider> @lazyPower another question: does juju support user that has MFA enabled?
<lazyPower> not that i'm aware of. the IAM credentials you would give juju assumes that i has all the required permissions, and there's no interactive way for it to prompt you for the MFA credentials
<lazyPower> s/that i has/that it has/
<lkraider> thanks
<lazyPower> np lkraider - sorry i wasn't full of good news for your questions :/
<ChrisHolcombe> i'm a bit confused about something with magic mock
<ChrisHolcombe> i'm trying to mock hookenv.config() and while config('source') works properly config.previous('source') does not.
<lazyPower> ChrisHolcombe - it creates another mock
<lazyPower> and now that i've said that, i realize how nebulous the question and answer both were
<lazyPower> ChrisHolcombe - give me another go at what you're seeing vs what you're trying to do
<ChrisHolcombe> lazyPower, yeah i think i get that but i've also done this: self.config.previous.return_value = "blah"
<ChrisHolcombe> lazyPower, so i'm wondering.  how do i mock config.previous
<icey> beisner: do you think you can weigh in on https://review.openstack.org/#/c/287446/ one more time? jamespage has given it the +1 but wants your +2 for the tests
<lazyPower> ah, common misconception.   self.config.return_value.previous.return_value="blah"
<lazyPower> i think that'll get you sorted ChrisHolcombe ^
<ChrisHolcombe> lazyPower, omg .. ugh
<lazyPower> i could be wrong, but i recall having to do something like that with file pointers
<lazyPower> i forget the exact formula for when that works but give it a go, if it works yay
<lazyPower> if not, i call shenanigans
<beisner> icey, see comment.  i didn't -1 it, but one lil misspelling.  if you want to bump that one more time, i'll watch for it.
<icey> beisner: I'll make that one right quick
<ChrisHolcombe> lazyPower, you're correct in that it's making another mock but i can't figure out how to get it to return the right thing :)
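lazyPower's suggestion works because the patched `hookenv.config` is itself callable: the mock for its call result hangs off `.return_value`. A minimal demonstration with `unittest.mock`:

```python
from unittest.mock import MagicMock

# `config` here stands in for the patched hookenv.config. Setting an attribute
# directly on `config.previous` configures the wrong mock: charm code calls
# config() first, and the result of that call lives under config.return_value.
config = MagicMock()
config.previous.return_value = 'wrong'               # configures config.previous(...)
config.return_value.previous.return_value = 'blah'   # configures config().previous(...)

conf = config()                       # what charm code holds after hookenv.config()
assert conf.previous('source') == 'blah'
assert config.previous('source') == 'wrong'          # the mock that was set by mistake
```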
<icey> beisner: pushed, pending jenkins
<beisner> icey, thx sir
<icey> for a change to c-h, should it be on LP?
<lazyPower> icey yep
<lazyPower> charm-helpers head is still up on launchpad in /charm-helpers
<nagyz> lazyPower tvansteenburgh so I retested it, changed my bundle to v4 format, now it looks much better, except that somehow 6 becomes a compute, 7 becomes a controller yet my deployment descriptor clearly specifies it differently
<nagyz> let me repastebin both
<nagyz> lazyPower tvansteenburgh my v4 deployer bundle: http://pastebin.com/CY2kDma2, and the status output (clipped, but visible): http://pastebin.com/rGq7BUDH
<nagyz> 6 gets compute 7 gets controller and I don't get why
<magicaltrout> i see Mark is keynoting ApacheCon US this time
<magicaltrout> he's been promoted to the big time ;)
<rick_h__> cargonza: ping
<cargonza> hi
<rick_h__> cargonza: got a sec please?
<cargonza> sure
<rick_h__> https://plus.google.com/hangouts/_/canonical.com/rick?authuser=1 cargonza
 * magicaltrout messed up his ApacheCon submission and instead of 3 presentations, has 1 presentation and two 2-hour tutorials.....
<magicaltrout> arse
<tvansteenburgh> nagyz: same reason as before, there's no guarantee that the labels for the machines in your bundle will match up to the actual machine numbers
<rick_h__> nagyz: bundles are meant to be 'self contained' so they can be reused by others. So it's machine 0, within that bundle.
<tvansteenburgh> nagyz: deployer asks juju for the correct number of machines with the specs you want, but it can't tell juju "make it this number"
<icey> beisner: build failed because of a timeout :( http://logs.openstack.org/46/287446/5/check/gate-charm-ceph-osd-python27/326d4fb/console.html
<marcoceppi> magicaltrout: sounds like fun! ;)
<beisner> icey, ha!  upstream test timed out.   comment back on your review with:   recheck
<magicaltrout> well marcoceppi at least with 2 hours, I'll get to demo a crap load more Juju stuff to pad it out :P
<icey> also beisner, I'm about to push a change for c-h that adds test coverage for the pid checking method, as well as support for lists :-D
<magicaltrout> "hello gang, we're doing data management, so to get started and fake distributed-ness we're going to install juju and spin up some nodes"
<beisner> icey, coolio
<magicaltrout> also my presentation is at 5:10pm, i'm normally in the bar by then... how rude
<magicaltrout> maybe i'll just do the presentation in the bar....
<marcoceppi> magicaltrout: we'll have to bring the bar to the presentation
<magicaltrout> aye, sounds like a plan
<icey> bam beisner: https://code.launchpad.net/~chris.macnaughton/charm-helpers/pids-can-be-a-list/+merge/288014 ; once again, I make a 5 line change with 33 lines of tests to support it :)
<beisner> icey, imho, as it should be ;-)
<icey> I'm happy with it :)
<icey> there were no tests covering that function at all, now there's a test covering all of the accepted types :-D
<marcoceppi> I love how declarative the branch is, it's so assertive and sassy like "pids can be a list too, DAD"
<beisner> icey, but where are the tests for your tests that test the test you updated as a helper to another test to test the feature that you really just want to land?
<icey> :)
<beisner> fwiw i'm still laughing over here, marcoceppi
<icey> I laughed out loud as well marcoceppi
<icey> beisner: I just can't catch a break today: http://10.245.162.36:8080/job/test_charm_amulet_smoke/78/console
<icey> Failed to fetch http://nova.clouds.archive.ubuntu.com/ubuntu/dists/trusty-updates/main/i18n/Translation-en  Hash Sum mismatch
<beisner> icey, apt repo glitch
<icey> yeah, build retrying (again)
<magicaltrout> duuuh
<magicaltrout> there should be a big fat warning sign if you're in a repository charm directory building your charm instead of the development directory :P
<magicaltrout> or maybe i should just build in the right place
<lazyPower> actually
<lazyPower> thats really good feedback magicaltrout - i've been bit by that
<lazyPower> and the manifest being present in cwd is a pretty large identifier that you're doing something wrong....
<magicaltrout> hehe i think i did 4 updates before realising what i was doing :)
<nagyz> tvansteenburgh, of course it could, it just doesn't do it.
<nagyz> tvansteenburgh, but more importantly, will the mappings still work? eg if I map haproxy to 4, but the controller becomes machine 6, will it install haproxy to 6?
<tvansteenburgh> nagyz: yes, it will install services to the correct machines
<nagyz> ok, thanks. I'll let it run through then we'll see :-)
<nagyz> but I think it's confusing that if I map a machine to 3 and it gets to 6...
<nagyz> while juju could very well deploy the machines in order
<tvansteenburgh> nagyz: actually, i see another possible cause for what's happening
<nagyz> happy to experiment :)
<nagyz> and to rewrite the bundle to be better
<tvansteenburgh> nagyz: deployer iterates over the machines defs in the bundle, requesting juju to create them one at a time. but...dictionaries aren't ordered
<tvansteenburgh> nagyz: so it's possible this could be fixed by sorting the machine keys first
<tvansteenburgh> nagyz: it'd be a change to deployer, not the bundle
<nagyz> depending on what people use as keys. can the dict keys only be numbers in this case (for machines)?
<tvansteenburgh> nagyz: it's not required they be ints, but that's the convention
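tvansteenburgh's proposed deployer fix, iterating machine definitions in a stable order rather than raw dict order, could be sketched like this (the lexical fallback for non-numeric labels is an assumption, since the convention is int-like keys):

```python
# Sketch of the deployer-side fix discussed above: request machines from juju
# in a deterministic order. Bundle machine keys are strings by convention, so
# sort numerically when possible.
machines = {'3': 'controller', '1': 'storage', '2': 'storage'}

def machine_order(keys):
    try:
        return sorted(keys, key=int)   # '10' sorts after '2', as intended
    except ValueError:                 # non-numeric labels: plain lexical sort
        return sorted(keys)

assert machine_order(machines) == ['1', '2', '3']
assert machine_order(['10', '2']) == ['2', '10']
assert machine_order(['b', 'a']) == ['a', 'b']
```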
<nagyz> so the bundle should just work as-is then, right?
<nagyz> regardless of the numbers juju status tells me.
<tvansteenburgh> nagyz: yes
<nagyz> alrighty, let me run it through
<tvansteenburgh> run it through with a broadsword
<nagyz> maybe the wrong expression ;-)
<tvansteenburgh> lol
<icey> wolsen: took care of your comments on ceph-osd
<wolsen> icey: awesome thanks
<wolsen> icey: doh, still typo - search for notr in config.yaml
<icey> haha not anymore!
<wolsen> haha
<nagyz> all my lxc containers are stuck in agent-state: pending... is there a docu on how to debug it? :)
<nagyz> after ssh I do see them starting up, running dhclient, but as I don't have DHCP on the network that will fail. is it possible to manually assign IPs to them via the deployment config?
<nagyz> ok, it deployed \o/
<nagyz> I'll do the more complex stuff tomorrow, thanks for all the help guys!
#juju 2016-03-04
<stub> lazyPower: I was wondering if charm-build should be VCS aware, and able to build directly to a branch (potentially the same repo the source branch is in). But I don't know enough yet to know how much effort that would involve.
<lazyPower> stub - i think i missed a message somewhere and have lost context...
<stub> oh, building in the wrong dir from 7 hours ago
<lazyPower> Interesting thing is, it is *somewhat* vcs aware, insofar that it uses VCS to grab the resources listed in the directory
<stub> just mumbling ideas I doubt I'll ever get a chance to investigate :)
<lazyPower> i think sniffing for a manifest would be sufficient to quell the build-error gotcha of sitting in the charm artifact dir trying to charm build
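The guard lazyPower describes could be as simple as checking for a build manifest in the working directory (the `.build.manifest` filename follows charm-tools convention, but treat it as an assumption here):

```python
import os
import sys

# Sketch of the "am I in a built artifact?" check discussed above. charm-build
# output contains a manifest file, so its presence in the cwd strongly suggests
# you're about to re-build a built charm instead of the source layer.
def warn_if_built_artifact(path='.'):
    manifest = os.path.join(path, '.build.manifest')  # assumed filename
    if os.path.exists(manifest):
        print('warning: %s exists; this looks like charm-build output, '
              'not a source layer' % manifest, file=sys.stderr)
        return True
    return False
```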
<stub> I end up with crap in my built branches due to test runs and stuff, so getting the stuff 'officially' added and/or committed would mean I don't need to go through later and add/commit.
<stub> And I imagine uploading to the charm store would have similar issues, unless that is smart enough to only upload what is in the manifest.
<lazyPower> yeah, thats true. then again adding all that to a clean target, and running make clean before you do anything wrt pushing your charm, should be g2g
<lazyPower> but thats an interesting idea, only including the artifacts in the manifest...
<stub> Yeah, I do that so it isn't anything important. I do need to maintain the rules, so a little bit of maintenance.
<lazyPower> that would be a pretty good way to trim the fat, unless you're binpacking in last minute dependencies to a "fat charm"
<lazyPower> that could be problematic
<stub> A command to add files to the manifest post-build might be better for that, since then it is explicit and you get a nice checksum embedded in the manifest.
<lazyPower> ohhhh, i like that too
<stub> heck, might even be some resources workflow involved here
<lazyPower> ^
<lazyPower> that sounds like the winner
<lazyPower> manifest for charm code, everything else is treated as a resource
<stub> I don't do fat charms so can't really comment ;)
<lazyPower> I try not to, but i'm also not limited by an angry egress firewall
<blahdeblah> On that note, if my charm is not getting the updated basic layer with "config.changed*" support, where should I be looking to troubleshoot it?  Is $JUJU_REPOSITORY the only place it should look for the layer?
<lazyPower> make sure you dont have a local copy of layer:basic in $LAYER_PATH
<blahdeblah> lazyPower: All of our egress firewalls are perfectly calm and rational. :-P
<lazyPower> blahdeblah - shenanigans!
<stub> blahdeblah: Its a LAYER_??? variable
<lazyPower> there is a LAYER_PATH env var, yes
<lazyPower> oh, nvm
<blahdeblah> So, hypothetically, if $LAYER_PATH isn't set, where would my layer be coming from?
<lazyPower> from the API, so it reaches out and clones it in the deps dir in $JUJU_REPOSITORY
<stub> Oh... and charms.reactive is embedded by pip, so if you have a pip cache
<blahdeblah> aaargh
<stub> Oh, its def in base layer so pip doesn't matter
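The lookup order lazyPower and stub describe, a local LAYER_PATH first and otherwise a fetch into a deps directory under JUJU_REPOSITORY, might be sketched like this (the `deps/layer` layout and the exact precedence are assumptions, not documented charm-build behavior):

```python
import os

# Assumed, simplified sketch of how charm-build resolves a layer: a local copy
# under $LAYER_PATH wins (which is why a stale local layer:basic can mask the
# updated one), otherwise it is fetched into a deps dir under $JUJU_REPOSITORY.
def resolve_layer(name, environ=os.environ):
    layer_path = environ.get('LAYER_PATH')
    if layer_path:
        candidate = os.path.join(layer_path, name)
        if os.path.isdir(candidate):
            return candidate                      # local copy takes precedence
    repo = environ.get('JUJU_REPOSITORY', '.')
    return os.path.join(repo, 'deps', 'layer', name)  # assumed fetch location
```

This is why the first debugging step is checking for a stale copy in $LAYER_PATH before suspecting the fetched layer.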
<jacekn> kjackal: hey, I think setting my collectd bug to "Fix committed" did not work as expected. I can see my review is still showing as failing tests, they were not rerun. Also I think it probably belongs in the "Incoming Charms" section, not "Charm Reviews"
<kjackal> Hey jacekn , let me check
<jacekn> kjackal: thanks. If my charm stays in "Charm Reviews" I think it can take a long time before somebody will pick it up, it's way down the list and some charms at the top have been waiting for well over a month for review
<jacekn> and my review should be just a few lines
<kjackal> wait up, there should not be any new round of review for your charm
<kjackal> We are talking about this one: https://bugs.launchpad.net/charms/+bug/1538573
<mup> Bug #1538573: New collectd subordinate charm <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1538573>
<kjackal> Yes, cool
<kjackal> so here is where we are with this charm
<kjackal> yesterday i did the review
<kjackal> It was the third review iteration
<kjackal> I did not find any issue so you got the green light from me
<jacekn> so what happens next? I am looking at http://review.juju.solutions/ and it's way down the list there
<kjackal> however, I am rather new in the team, so a more senior member will read my report and proceed with the process of promulgating the charm
<jacekn> kjackal: I see so there is basically another process outside of review.juju.solutions that my charm is going through?
<kjackal> your charm is part of a batch that was reviewed yesterday. The batch is almost finalised. We will send an update on the list with our progress
<kjackal> So, you will not have to do anything, I will bring it up on our daily sync but it is a normal internal process
<jacekn> kjackal: cool, thanks for explanation!
<jacekn> kjackal: just a suggestion - maybe add something to the review queue to indicate that the charm review is "in progress" or something similar
<kjackal> The list where we will send our update is: juju <juju@lists.ubuntu.com>
<kjackal> Are you registered there?
<jacekn> yes I am
<jacekn> cool thanks again for explaining this
<kjackal> Great! Regarding the review queue, there will be a number of changes. We are working on improving the process, especially since now there is the need to review layers (in addition to "old-style" charms)
<jamespage> beisner, gnuoy: I've switched over the official charm branches to the reverse-imported ones from github
<jamespage> not that we no longer have different branches for trusty/precise
<jamespage> figured out that magic as well...
<gnuoy> s/not/note/ ?
<jamespage> note
<jamespage> indeed
<jamespage> lol
<gnuoy> ack
<jamespage> gnuoy, https://code.launchpad.net/~openstack-charmers/+branches?field.category=OWNED&field.category-empty-marker=1&field.lifecycle=MATURE&field.lifecycle-empty-marker=1&field.sort_by=most+recently+changed+first&field.sort_by-empty-marker=1
<jamespage> gnuoy, they show as precise charms as that's the current default series in charms distro
<magicaltrout> stub: ping
<stub> magicaltrout: pong
<magicaltrout> ooh hi, quick leadership q if you have 2 mins
<stub> np
<magicaltrout> okay so here: https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L90
<magicaltrout> if I do a juju set-config carte_password=1234 or something it will get picked up on the next is_leader execution
<magicaltrout> correct?
<magicaltrout> that runs every 5 mins or something
<stub> You are thinking of update-status, which I think runs every 5 mins with Juju 1.25
<stub> At the moment, you seem to be resetting the credentials every hook.
<stub> Which is fine, as none of them are random
<magicaltrout> actually I lied, the leadership.changed was going to be my question, it is being called :P
<magicaltrout> I have no question :)
<stub> You probably should rely on update-status here. The config-changed hook is what gets invoked after a 'juju set'
<stub> should not
<magicaltrout> yes I set the credentials every time, but thats because its something that can be changed by set-config and needs to be propagated to the leader and the slaves because they need to know how to login
<magicaltrout> hmm actually you have a point, as the nodes all share the same password, I guess on config change I could set it and trigger a restart
<magicaltrout> actually check_running already does that for the password so it might be a bit of a noop anyway
<stub> So at the moment, you run 'juju set'. The config-changed hook runs on all units. On the leader, change_leader is called. On the others, update_slave_config.
<stub> Sorry - I'll start again
<magicaltrout> hehe like I said, in reality I don't think i have a problem, my implementation might jump through a few extra hoops but I think it works the logs look okay: https://gist.github.com/buggtb/3b65eb1672dc602c98ac
<stub> The implementation seems fine.
<stub> Apart from the bad choice of method names, but that is just opinion :)
<magicaltrout> yeah thats just because they've evolved over time :)
<magicaltrout> I'm gonna clean all that up in a bit
<magicaltrout> they start doing one thing then cory_fu makes me rework it and they do something else :P
<stub> All your units might restart at the same time atm. Is that a problem?
<magicaltrout> restart?
<magicaltrout> oh the service
<magicaltrout> yeah i had wondered about that
<magicaltrout> because the slaves register with the master
<magicaltrout> currently i'm not sure what happens in the app if the master is unavailable, whether they retry or not
<stub> If you want to avoid that, check out the coordinator layer
<stub> I've only used it for top level charms though. This seems to be a layer?
<stub> https://github.com/stub42/layer-coordinator has the docs
<magicaltrout> yeah just reading the readme, thanks!
<stub> Of course, rolling restarts mean your slaves are running for longer with outdated credentials so it might not be useful :)
<magicaltrout> how do I copy files to a unit during an amulet test?
<magicaltrout> can I run juju scp somehow?
<sparkiegeek> magicaltrout: yes, you can - see https://github.com/juju/amulet/blob/master/amulet/sentry.py#L69 for how amulet itself uses juju scp
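Amulet's file copy is just a thin wrapper around the juju CLI. A minimal sketch of that idea (the `scp_to_unit` name and the injectable `run` parameter are illustrative, not part of amulet's API):

```python
import subprocess

def scp_to_unit(unit, src, dest, run=subprocess.check_call):
    """Copy a local file to a unit by shelling out to `juju scp`,
    the same approach amulet's sentry takes internally.

    `run` is injectable purely so the built command can be inspected
    in tests without a live Juju environment.
    """
    run(['juju', 'scp', src, '{}:{}'.format(unit, dest)])
```

In a real amulet test you would more likely go through the sentry object, but the command it builds underneath looks like this.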
<magicaltrout> aaah I misread the comment
<magicaltrout> I thought it said "juju scp doesn't work"
<magicaltrout> :)
<magicaltrout> thanks sparkiegeek
<jamespage> beisner, I think I'm always getting a 'full' amulet execution atm
<jamespage> no initial smoke...
<beisner> jamespage, indeed.  one too many triggers on that pipeline.  that should be fixed shortly.
<tinwood> gnuoy, have you got a moment?  I'm having problems testing keystone.
<gnuoy> tinwood, defo
<tinwood> gnuoy, so I'm trying to run the 015-basic-trusty-icehouse and it's hanging because rabbitmq-server is failing.  Have you seen anything like that?
<gnuoy> tinwood, yes. It'll be reverse DNS
<tinwood> ah, as in the rabbitmq can't work out its domain name
<tinwood> I see.  I've been having DNS problems - that gives me a new direction to look at.
<jcastro> We'll do office hours in about 1 hour!
<magicaltrout> I'm trying to test my charm, but it involves changing ports/passwords etc
<magicaltrout> whats the best way to reset the state between each test?
<magicaltrout> just like an amulet remove() type teardown?
<jamespage> gnuoy, coreycb: hey can we chat about aodh briefly?
<coreycb> jamespage, yes
<gnuoy> I have a 15mins window before child shuffling
<jamespage> coreycb, gnuoy: ok so sitrep - alarming is split from ceilometer -> aodh
<gnuoy> yep
<coreycb> yep
<jamespage> ceilometer have removed old code from mitaka; aodh has its own api etc...
<gnuoy> splendid
<jamespage> so I suggest we drop aodh from the ceilometer charm altogether and produce a new aodh one
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg
<gnuoy> yep
<jamespage> or we can piggy back then together
<gnuoy> split gets my vote
<jcastro> ^^^ Hangout for the office hours in ~15 minutes
<jamespage> gnuoy, ok I'll do the work to drop the aodh bits for now
<coreycb> split makes sense to me since separate apis
<gnuoy> jamespage, as coreycb has said before, it's the upgrade that's worrying
<coreycb> jamespage, that sounds good, I put those there before I realized there were separate apis
<jamespage> gnuoy, well people will lose alarming from ceilometer
<jamespage> we'll need to release note to add aodh
<gnuoy> jamespage, what about existing data in the db?
<tinwood> thanks gnuoy, reverse dns fixed, tests proceeding.
<gnuoy> tinwood, tip top
<magicaltrout> blimey laptop upgrade from trusty to xenial didn't completely brick it
<tinwood> gnuoy, yay! new pause/resume works with keystone :)
 * tinwood does a little dance
<sparkiegeek> tinwood: woot!
<beisner> wraskelly wrabbits
 * tinwood now has to work out how to do the git review bit.
<marcoceppi> OFFICE HOURS! Starting in just 5 mins
<rick_h__> marcoceppi: linky me please
<rick_h__> marcoceppi: so I can join and get setup
<marcoceppi> http://ubuntuonair.com/ - https://plus.google.com/hangouts/_/hoaevent/AP36tYeW6FuePgoCJC8HN9v_cpHfg1mQpsdKPmc5OIRopOb0NN1qAg
<magicaltrout> hello bundletester environment resets....
<cholcombe> question about charmhelpers related_units.  It looks like from the code it's valid to call this function without any arguments.  However i'm getting a CalledProcessError saying I must specify a relation id.  Maybe the API changed?
<magicaltrout> says it happens after every test
<magicaltrout> in python land is a test a single method a test? or is it class level?
<cholcombe> magicaltrout, every def fn is a test
<magicaltrout> hmm
<magicaltrout> thanks cholcombe
<cholcombe> magicaltrout, i think the class is just an easy way to group related tests
<magicaltrout> yup, same as java-land then, cool
<cholcombe> right
 * magicaltrout goes off to find the cause of run away procs then
<cholcombe> lazypwr, do you know if related_units can be called without any args?
<magicaltrout> rick_h__: is that a vanity mirror above your head? ;)
<magicaltrout> ooh mic stuff
<rick_h__> magicaltrout: yea, do I sound decent? :)
<magicaltrout> hehe
<rick_h__> but love the vanity mirror idea :)
<magicaltrout> better than my dodgy videos
<magicaltrout> I like a mirror, juju grooming
<lazyPower> cholcombe -  nah i'm pretty sure you have to either a) only call that when you're in a relation-* hook, b) provide the rel_id so it can query for active conversations
<magicaltrout> marcoceppi: for those of us used to the gui, there should be a rendering of juju status within the UI, i find myself flipping back and forth
<beisner> https://github.com/openstack/?utf8=%E2%9C%93&query=charm-
<beisner> https://github.com/openstack-charmers/openstack-community/blob/master/README.dev-charms.md
<beisner> https://jujucharms.com/u/openstack-charmers-next/
<marcoceppi> magicaltrout: good point, I'll bring it up
<bdx_> jcastro, thanks for stressing this
<jcastro> \o/
<magicaltrout> testing people
<magicaltrout> if you set a config option in amulet
<magicaltrout> whats the python way of waiting a while for it to take effect? :)P
<beisner> magicaltrout, in the openstack charms, we address that by waiting for the extended workload status message, which is where the charm declares itself done, ready and settled.  that requires that all of the charms in the deployment possess such logic.
<beisner> ex.  https://github.com/openstack/charm-keystone/blob/master/tests/basic_deployment.py#L39
<aisrael> tvansteenburgh: I'm seeing an LXC test failure that looks like it's a problem in the environment: http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/2862/console
<beisner> magicaltrout, we do that to avoid race conditions in the tests.  naturally, you want the charm to be done doing its work before poking at it.
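The settle-before-poking pattern beisner describes boils down to polling a readiness condition with a timeout. A generic sketch (the `wait_until` name is hypothetical; in practice the predicate would check the unit's workload status message):

```python
import time

def wait_until(predicate, timeout=300, interval=5):
    """Poll predicate() until it returns True or the timeout expires.

    In an amulet-style test the predicate would inspect the workload
    status output for the charm's "ready" message, so the test never
    races against hooks that are still running.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    raise TimeoutError('condition not met within {}s'.format(timeout))
```

Amulet's own wait helpers (discussed below in the channel) are purpose-built versions of this loop.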
<tvansteenburgh> aisrael: thanks, fixed
<aisrael> tvansteenburgh: no no, thank *you*
<magicaltrout> beisner: good idea thanks!
<beisner> marcoceppi - is there a vanilla wait-for-readiness amulet method?
<bdx_> officehours: so, each openstack tenant would correspond to a separate cloud, or separate controller?
<bdx_> marcoceppi: yes
<tvansteenburgh> beisner, yes
<tvansteenburgh> beisner https://github.com/juju/amulet/blob/master/amulet/sentry.py#L307
<tvansteenburgh> beisner, also https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380
<beisner> tvansteenburgh, right, that waits for a specific status.  what i'm asking is actually for charms that don't use extended status.
<beisner> such as mongodb or mysql
<bdx_> rick_h: awsome!
<sparkiegeek> sounds fishy to me
<bdx_> awesome
<tvansteenburgh> beisner wait_for_status doesn't depend on extended status i don't think
<jamespage> beisner, hey so I have a number of 'rollup' reviews up to drop old release configuration files...
<beisner> tvansteenburgh, yep looks like that's the one magicaltrout ^
<jamespage> could you take a peek? I've been doing a recheck-full
<magicaltrout> rick_h__: are you going to make cloud metadata pluggable eventually? so we can inject new instance types etc without upgrading Juju?
<tvansteenburgh> beisner, magicaltrout: there is also sentry.wait() which waits for hooks to complete
<tvansteenburgh> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L345
<beisner> ah yes, thanks tvansteenburgh.  we've wrapped all of those amulet helpers in other test helpers so i've lost memory of their names. :-)
<beisner> jamespage, ok so on those, clear to land if all passing?
<magicaltrout> hmm
<magicaltrout> thanks beisner tvansteenburgh
<jamespage> beisner, yah
<bdx_> rick_h, officehours: what is the status of lxd <--> network spaces?
<jamespage> bdx_, next two weeks I think...
<jamespage> so not quite but almost
<bdx_> jamespage: awesome! exciting!
<lazyPower> YES!
<lazyPower> \O/
<lazyPower> rick_h__ - is it hard being that awesome?
<magicaltrout> lol
<beisner> cool stuff, bigdata :)   https://jujucharms.com/big-data
<rick_h__> lazyPower: exhausting :P
<jamespage> beisner, just trying to de-cruft old stuff in between things...
<rick_h__> lazyPower: helps to have tons of awesome people around
<lazyPower> i'll let mbruzek know you said that ;D
 * rick_h__ runs fast to keep up with them all
<beisner> jamespage, +1 for spring cleaning
<mbruzek> heyo
<deanman> Hello, I'm trying to use juju with the manual provider to connect to an already bootstrapped environment but for some reason it uses my personal key instead of the juju key. Is there a way to define which key is used when running juju commands?
<tinwood> jamespage, do I need to be an Openstack Foundation member before doing a git review request?
<jamespage> tinwood, yes
<tinwood> jamespage, as a person, I'm guessing - i.e. I have to physically join myself.
<bdx_> officehours: thanks everyone!
<jamespage> tinwood, yes
<tinwood> Thanks jamespage, I'm staring at the form now :)
<cherylj> hey lazyPower, I'm going to co-opt your bug 1553059 for just the last part of your problem - providing a way to clean up the cache.  I'm fixing the help text now, but I'd like to use that bug to track the cache.yaml cleanup
<mup> Bug #1553059: Help output when killing a `shared model` is incorrect <destroy-model> <juju-release-support> <juju-core:Triaged> <https://launchpad.net/bugs/1553059>
<lazyPower> oh you betchya
<lazyPower> its a polymorphic issue anyway :)
<cherylj> so I'll be changing the title
<lazyPower> duck-typed problems ftw
<bdx_> officehours: As a side note, I'm giving a presentation of juju-openstack-ha at a portland openstack meetup this month .... I need to hammer out a few issues I'm experiencing with different services under ha .... is there someone that would be willing to work with me on this a bit so my demo might be legit?
<magicaltrout> tinwood: beisner just some terminology clarification if I want to catch a password change for example, I can set that block under message and then use Amulet wait_for_status?
<magicaltrout> sorry not tinwood !:P
<magicaltrout> tvansteenburgh: --^
<magicaltrout> or is status literally the thing in the services that relays the status?
<magicaltrout> (I realise the irony in that question)
<tvansteenburgh> magicaltrout: not sure what you mean by "catch it"
<tvansteenburgh> you mean you want to change config and wait for the change to complete?
<magicaltrout> tvansteenburgh: I need to change a config option and wait for it to actually happen on the unit before proceeding in my test
<Gil> ERROR cannot retrieve charm "cs:~hazmat/trusty/etcd-6": cannot get archive: Get https://api.jujucharms.com/charmstore/v4/~hazmat/trusty/etcd-6/archive: dial tcp 162.213.33.121:443: connection timed out
<tvansteenburgh> magicaltrout: then you want to make the change and then call sentry.wait()
<magicaltrout>  /o\
<magicaltrout> alrighty thats easier :P
<tvansteenburgh> magicaltrout: more specifically, deployment.sentry.wait()
<magicaltrout> on a slightly different subject, there are no amulet python api docs published yet are there?
<tinwood> np magicaltrout :)
<magicaltrout> I had a prod around but I've ended up grepping the source
<tvansteenburgh> magicaltrout: sadly, no :(
<tvansteenburgh> magicaltrout: there are some docs but they are lagging behind the source
<magicaltrout> no probs
 * tvansteenburgh makes card to generate api docs for amulet
<icey> beisner: it's merged: https://review.openstack.org/#/c/287446/
<icey> !
<beisner> indeed \o/
<marcoceppi> jcastro: let me know when the video lands
<jcastro> marcoceppi: Almost, you done editing the notes?
<jcastro> I need to paste those in
<jcastro> https://www.youtube.com/watch?v=zPLW7cGrrjE&feature=youtu.be
<jcastro> marcoceppi: ^^^ unlisted yet so we can fix the description
<marcoceppi> I don't have edit access
<marcoceppi> jcastro ^
<jcastro> you signed in? You're listed as a "manager"
<marcoceppi> jcastro: which account?
<jcastro> https://plus.google.com/u/0/b/103184405956510785630/+MarcoCeppi/
<jcastro> that one
<marcoceppi> OIC
<magicaltrout> can someone tell me in simplistic terms
<magicaltrout> what an environment reset entails in bundletester?
<magicaltrout> its supposed to be a machine tear down isn't it? from my look at the code
<marcoceppi> magicaltrout: yeah, bundletester -TT is the 1.x equivalent of destroy-model in 2.0
<marcoceppi> it kills machines, services, units, etc, but keeps the bootstrap node
<magicaltrout> and according to the README reset is true by default
<magicaltrout> so after every test I should get a fresh machine?
<jcastro> magicaltrout: are you indexing the video or you want me to do that?
<magicaltrout> not me :P
<marcoceppi> jcastro: I'll index
<beisner> jamespage, include ceilometer in those +2s?
<magicaltrout> aww yeah, when you think its deploying new code.... but it aint \o/
<stub> magicaltrout: Every bundletester target/test gets a fresh environment. What sort of tests are you writing?
<magicaltrout> hey stub, yeah it helps if I deploy more than boiler plate I guess :)
<jcastro> anyone know if we have the juju->juju2 update-alternatives bits documented? Can't seem to find anything
<lazyPower> jcastro - i know that marco did a blog post on it, bu ti dont think we have it officially documented in any capacity, no
 * rick_h__ can't recall where it was, release notes or what
<marcoceppi> release notes and my blog, lazyPower rick_h__
<magicaltrout> marcoceppi.com
<magicaltrout> what he said
<lazyPower> marcoceppi do i constitute a committee now?
<jamespage> beisner, do you want me to pickup the xenial-enable branches on monday?
<jamespage> I can work through those but they all need a fullcheck
<beisner> jamespage, yep, fulls are running.  it's not time-sensitive.  i was mainly needing something to double check triggers and workflow and thought i'd see if those are ready to flip on.
<beisner> i think they are, but we shall see :-)
<jamespage> beisner, \o/
<jamespage> beisner, nice work this week btw - I think the transition has gone pretty smoothly
<lazyPower> Gil - getting along ok w/ the new pointer to ETCD?
<beisner> jamespage, thanks :-)  and thanks for all your good work on it too!
<Gil> lazyPower no.  i'm working on getting juju upgraded to 2.0.x which you indicated was a pre-req, and that's not going well either .  I'm on 1.25.3 atm
<lazyPower> Gil - ok instead of going through a 2.0 upgrade path
<lazyPower> you can also build the charm from source, theres only 1 repository to clone, then you can use a local charm until 2.0 lands as stable
<lazyPower> unless you *want* to beta :) Then in which case, disco! i'll lend a hand where i can
<Gil> uhhhh
<lazyPower> lets go for the path of least resistance
<Gil> sounds good
<lazyPower> have you built any charms before? literally with `charm build`
<lazyPower> Gil - if you clone this repo https://github.com/chuckbutler/layer-etcd   and run `charm build -o $JUJU_REPOSITORY`  it will output the charm you're upgrading to 2.0 to get to in your charm repo (assuming that env var is set) you can then deploy with `juju deploy local:trusty/etcd`
<Gil> no but I think I could - i watched marco's youtube classes and did some work bringing charms down locally to get started on building an "oraclexe" charm
<Gil> cloning repos I do know
<Gil> from github
<Gil> that's a start
<lazyPower> \o/ woo, its a great start
<lazyPower> the rest is downhill from there
<marcoceppi> Step 1 clone, Step 2 ... step 3 profit
<Gil> export JUJU_REPOSITORY=/home/gstanden/charms/trusty
<Gil> charm build -o $JUJU_REPOSITORY
<Gil> returned 4 lines of output, no errors
<Gil> Added charm "local:trusty/etcd-0" to the environment.
<Gil> seems to have worked
<aisrael> tvansteenburgh: Looks like a few items in the review queue that are merged but still showing up: http://review.juju.solutions/review/2429 and http://review.juju.solutions/review/2435 are the ones I've seen
<Gil> machine 2 hasn't fired up yet though...
<lazyPower> Gil - ah, interesting. i'm surprised that did work, $JUJU_REPOSITORY is typically the directory just before series. it must have recursed into the tree and found it.
<tvansteenburgh> aisrael: thanks, updated
<Gil> to what should I have set JUJU_REPOSITORY?
<aisrael> kwmonroe: Are you still working on the zulu8 review? (it's currently locked)
<kwmonroe> yeah aisrael
<lazyPower> based off your last paste - /home/gstanden/charms
<aisrael> kwmonroe: ack, thanks!
<kwmonroe> np, it's really a no-op, just need to move tests from ./trunk to his ./source branch.  it'll be done today.  thanks for checking ;)
<kwmonroe> well, not so much a no-op, just a not-much-op
<Gil> something may still not be right ... normally when I deploy a non-lxc charm a machine starts up (maas).  The metal is not starting - and I know that my maas setup is solid so I'm wondering if something is still not right
<lazyPower> yeah that doesn't sound promising Gil
<Gil> juju status does show the etcd deployment status
<lazyPower> Gil - however the machine seems to be left in pending?
<cholcombe> marcoceppi, quick question about juju relation-list.  On ceph-mon it's not showing the related units and just blowing up
<marcoceppi> cholcombe: when are you calling relation-list?
<cholcombe> hookenv.related_units()
<cholcombe> oh when
<Gil> I ran "juju remove-service etcd" then deleted machine 2 using the juju gui, then retried after setting JUJU_REPOSITORY to /home/gstanden/charms
<cholcombe> marcoceppi, i'm calling it inside a config-changed hook
<Gil> then I got: WARNING failed to load charm at "/home/gstanden/charms/trusty/deps": open /home/gstanden/charms/trusty/deps/metadata.yaml: no such file or directory
<Gil> so I put JUJU_REPOSITORY back to what I had before...and then retried...and it did launch a machine this time
<lazyPower> Gil - thats known behavior. the deps directory is a cache that gets used when building layers
<marcoceppi> cholcombe: you have to give it the relation you want to list
<lazyPower> it pulls in the remote interface-layers, and required layers that you dont have locally to assemble the charm - all of which are listed on http://interfaces.juju.solutions
<cholcombe> marcoceppi, ah ok interesting
<Gil>  machine: "3"
<marcoceppi> cholcombe: so you can only call relation_list without parameters inside of a relation hook, otherwise you have to pass it a context
<cholcombe> marcoceppi, got it.  inside config-changed i'm just trying to see who else is in the cluster and get a list of unit names
<aisrael> kwmonroe: I saw you were having troubles with the xenial vagrant images. Did you run into this? http://pastebin.ubuntu.com/15282513/
<aisrael> kwmonroe: and if so, did you find a workaround?
<kwmonroe> yup aisrael: https://bugs.launchpad.net/ubuntu/+source/livecd-rootfs/+bug/1538547
<mup> Bug #1538547: vagrant box fails with private network interface <livecd-rootfs (Ubuntu):Confirmed for utlemming> <https://launchpad.net/bugs/1538547>
<kwmonroe> aisrael: but that didn't stop me from being able to 'vagrant ssh xenial'
<aisrael> kwmonroe: aha, I can get in.
<kwmonroe> aisrael: yeah, looks like it's borked, but it's just messin with you.  once you get in, you can add ubuntu-xenial to the 127.0.0.1 line of /etc/hosts to get rid of the "failure in name resolution"
<aisrael> kwmonroe: heh, I just hit that bug.
<cholcombe> marcoceppi, are the relation ids that relation-list wants just the names of the relations?
<marcoceppi> cholcombe: yes
<cholcombe> marcoceppi, looks like hookenv.relations()['mon']['mon:1'].keys() returns the right information
<neiljerram> jamespage, hi - a question if I may about the new Git/Gerrit-based process: will changes at https://git.openstack.org/cgit/openstack/charm-neutron-api be mirrored into https://code.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/next ?
<jamespage> neiljerram, yes but there is some lag
<neiljerram> jamespage, Or alternatively, is there a way in a bundle .yaml file to specify using the Git source directly?
<marcoceppi> cholcombe: mon:1 is not a hard coded key
<marcoceppi> don't hard code that
<cholcombe> marcoceppi, i won't
<marcoceppi> cholcombe: is this a peers relation?
<neiljerram> jamespage, I'm guessing you mean around once per day?
<jamespage> neiljerram, ultimately those will end up under https://jujucharms.com/u/openstack-charmers-next/ and
<jamespage> beisner, whats the schedule on the git->bzr sync?
<jamespage> neiljerram, I think its more frequent than that
<cholcombe> marcoceppi, yeah
<marcoceppi> cholcombe: k
<neiljerram> jamespage, Well that's plenty frequent enough, anyway.  Thanks.
<beisner> jamespage, every 4hrs
<jamespage> okies
<jamespage> neiljerram, ultimately when all the bits for charm push are in place, all of those branches are redundant and charm-store publication becomes a post commit task in the ci train
<neiljerram> jamespage, Right, so then the process will go from git.openstack.org directly to  https://jujucharms.com/u/openstack-charmers-next/ ?
<Gil> lazyPower  - there is alot I can do without getting into etcd at this point.  I will just work on my oraclexe charm for now on my stable trusty.  I've alot to learn still
<lazyPower> Gil - ok, if you need any help with etcd/flannel/etc. let me know and i'll lend a hand :)
<lazyPower> thanks for taking a look
<Gil> cool thanks!
<beisner> neiljerram, it has a few hops and syncs along the way, but yes, the tip of master of a charm at git.openstack.org flows to github.com to launchpad and then to the openstack-charmers-next space in the charm store. :-)
<beisner> unless it goes through albuquerque, then i'm not sure
<neiljerram> beisner, thanks!
<neiljerram> :-)
<beisner> neiljerram, yw.  thanks for those maintenance updates on the neutron-api calico bits.
<magicaltrout> right fscking tests.... lets get you working
<magicaltrout> charmers
<marcoceppi> magicaltrout you rang?
<magicaltrout> can i run a test class on a unit thats already running?
<magicaltrout> rather than wait for bundletester to mess around
<tvansteenburgh> magicaltrout: if you run your amulet test against an env that already has the services deployed, it'll use them instead of deploying new ones
<tvansteenburgh> magicaltrout: note that if you have multiple test files (in the tests/ dir), bundletester will reset the environment between files
<tvansteenburgh> magicaltrout: you can prevent that by setting `reset: false` in the tests.yaml file
<tvansteenburgh> https://github.com/juju-solutions/bundletester#testsyaml
<magicaltrout> ah thanks tvansteenburgh never noticed you could just execute them, assumed i'd have to dump some more stuff in there
<magicaltrout> I'm clearly doing something stupid: is there anything wrong with this line: https://gist.github.com/buggtb/5620a4b5abf403e7b997#file-brokentest-py-L39 ?
<lazyPower> magicaltrout - nope, that looks correct to me
<lazyPower> magicaltrout - matter of fact, i do something similar here https://github.com/chuckbutler/docker-charm/blob/master/tests/10-deploy-test#L68
<magicaltrout> bah! thanks lazyPower
<magicaltrout> oooh wtf
<magicaltrout> this is where the whole compile deploy, suck from Launchpad stuff gets messed up
<magicaltrout> somehow with bundletester I have current rev tests but old charm reactive code
 * magicaltrout sobs into his rum and soda
<jrwren> magicaltrout: its ok. its not worth drinking about. I too have experienced this pain.
<magicaltrout> hehe
<magicaltrout> i sit there wondering why the config isn't executing
<magicaltrout> it is, but the old code didn't work properly and it runs and fails
<magicaltrout> silly automated tests
<magicaltrout> clearly not my fault in the slightest :P
 * magicaltrout double checks launchpad before running this time
<magicaltrout> okay wtf
<magicaltrout> does bundletester cache charms or something?
<magicaltrout> why does
<magicaltrout> bundletester -t lp:~f-tom-n/charms/trusty/pentahodataintegration/trunk
<magicaltrout> when it spins up give me old code?
<magicaltrout> the code in my reactive pdi.py is stale
<magicaltrout> and i have no clue why
<lazyPower> there no older copy of the charm hanging around somewhere in $JUJU_REPOSITORY?
<lazyPower> i've had issues where i thought i was going to be clever and just mv a charmdir, cloned, and kicked off a test only to have the one i mv'd get deployed
<tvansteenburgh> magicaltrout: did you figure it out?
<magicaltrout> yeah i've just this second had one of those "duh" moments
<magicaltrout> my test is what I have locally but the charm itself is pulled out of the charm store...... *massive face palm*
<magicaltrout> i guess this is why fridays were invented
<tvansteenburgh> heh
<magicaltrout> its just one of those things isn't it "wtf, my code is in front of me... why is it not right damnit?!" "oh yeah because its pulling the source from somewhere else"......
<magicaltrout> bah
<magicaltrout> got another talk accepted at ApacheCon that will use juju \o/
<tvansteenburgh> magicaltrout: nice
<magicaltrout> yeah, i have a lot of writing to do :P
<tvansteenburgh> you know you can run bundletester against your local source right?
<magicaltrout> well i kinda figured thats what it was there for. but my setup was "borrowed" from kwmonroe's ubuntu-dev env and has cls.d.add('pdi', 'cs:~f-tom-n/trusty/pentahodataintegration') wired in the top
<tvansteenburgh> also, to answer your earlier question about the env reset, it destroys everything except the bootstrap node between each test file execution
<tvansteenburgh> (same as `juju-deployer -TT`)
<tvansteenburgh> magicaltrout: is that the same charm in which these tests reside?
<tvansteenburgh> you can just do cls.d.add('pentahodataintegration')
<magicaltrout> thanks tvansteenburgh that will work when it gets pushed up to the canonical run tests as well I assume?
<tvansteenburgh> yep
<magicaltrout> marvelous
<tych0> howdy cats. if i do a, `juju bootstrap gce gce`, i get a, `ERROR cloud "gce" not found`. this is with a current trunk build of juju with patches on top, and my environments.yaml has suitable config (it worked in 1.25 and older 2.0s). i must be doing something dumb, but i'm not sure what
<rick_h__> tych0: environments.yaml is no longer used
<tych0> rick_h__: yeah, i figured as much. are there docs on how to migrate to whatever the new thing is?
<rick_h__> tych0: see juju list-clouds list-credentials and the beta1 release notes
<rick_h__> tych0: release notes emailed
<tych0> rick_h__: cool, thanks
<rick_h__> cherylj: do we have a start to the "getting started" tych0 could try?
<magicaltrout> okay tvansteenburgh if I run a config change, what does sentry.wait actually wait for?
<tvansteenburgh> magicaltrout: hooks to finish executing
<magicaltrout> hmm fair enough
<magicaltrout> the config gets executed okay now, but the test gets to the next step too early
<tvansteenburgh> magicaltrout: happy to look at your code if you want
<magicaltrout> thanks tvansteenburgh
<magicaltrout> https://github.com/OSBI/layer-pdi/blob/master/tests/01-deploy.py#L39
<magicaltrout> 39 and 40 need to stop a process and wait until its actually happened
<magicaltrout> that process is stopped via https://github.com/OSBI/layer-pdi/blob/master/reactive/pdi.py#L53
<magicaltrout> which then drops down to line 152
<magicaltrout> just kills a pid
<magicaltrout> but the next line in my test checks its stopped and it returns a pid
<magicaltrout> but if you login to the machine it has actually been destroyed
<tvansteenburgh> magicaltrout: i don't follow. the config change is supposed to kill the proc right?
<tvansteenburgh> and it's getting killed?
<magicaltrout> tvansteenburgh: that is correct
<magicaltrout> but in the test when I check for the process, it returns a pid
<tvansteenburgh> oic
<magicaltrout> but if you log in to the box after it's failed and check for the pid it's not there
<magicaltrout> so its like the next line executes too soon
<magicaltrout> before the pkill has finished doing its thing
<tvansteenburgh> yeah, i wonder if wait() is getting called and returns before the hook even starts
<tvansteenburgh> i wonder if you could use wait_for_messages() instead
<tvansteenburgh> https://github.com/juju/amulet/blob/master/amulet/sentry.py#L380
<tvansteenburgh> although then you might end up changing your charm to support the test, which is backwards
<magicaltrout> well i don't mind
<magicaltrout> nothings finalised
<magicaltrout> so extended status, is that like when i do status_set('active', 'PDI Installed. Carte Server Disabled.')
<magicaltrout> its the blurb afterwards?
<tvansteenburgh> yep
<magicaltrout> okay so if I wait for the disabled message it should hang because that certainly comes after the pkill
<tvansteenburgh> magicaltrout: cool, try it!
<magicaltrout> other dumb question tvansteenburgh do I need to do anything other than talisman = Talisman([], timeout=self.timeout)
<magicaltrout> to get a talisman object?
<tvansteenburgh> magicaltrout: generally you don't create your own, you get it from d.sentry
<tvansteenburgh> d.sentry is a Talisman object
<magicaltrout> ah right
<tych0> rick_h__: thanks, i got it sorted. my last question is do you (or anyone) have any idea where i can put the enable-os-upgrade: false flag now to speed up deploys?
<tych0> arosales: might know? ^^
#juju 2016-03-05
<magicaltrout> woop figured it out tvansteenburgh, after looking at a hadoop test I found out that because pgrep was coming from an external call, it was picking itself up recursively! Needed to grep it out!
<magicaltrout> thanks for your help and advice today.
<tvansteenburgh> magicaltrout: ah, glad you figured it out. happy to help!
<arosales> tych0! :-)
<arosales> tych0: sorry missed your ping
<arosales> tych0: are you running the latest beta or the alpha?
<cherylj> tych0: you can pass the enable-os-upgrade flag in the bootstrap command:    `juju bootstrap <controller name> google --config enable-os-upgrade=false`
<tych0> arosales: np, from source so latest everything :)
<tych0> cherylj: cool, thanks!
<cherylj> tych0: I'll be back around this evening (after I do bedtime for the kiddo)
 * arosales was just reading release notes
<arosales> cherylj: thanks
<cherylj> tych0: You can check out a presentation I did for the STS folks if you get stuck on other bootstrappy things
<cherylj> tych0: https://docs.google.com/presentation/d/1XnKjnBpCYY44Bw9EMnvkY8tYfhPPl-0I__HibXhYWS0/edit#slide=id.p3
<arosales> cherylj: will there be a way in juju 2.0 to keep these settings?
<cherylj> arosales: they should be applied to the model config after bootstrap
<cherylj> arosales: if you're seeing that they're not, I would think that's a bug
<tych0> cherylj: ah, no worries. i actually have a branch coming up to make things a bit nicer for juju/lxd image handling
<cherylj> tych0: Look starting on slide 14 for the bootstrap help
<tych0> cherylj: but i need something to be merged into lxd first, and everyone on my team is eating dinner :(
<tych0> cherylj: cool, will check it out thanks
<arosales> cherylj: sorry I was saying will there be a file on the client local filesystem to keep these changes through controller destroys, or will it always have to be passed in via the command line?
<cherylj> arosales: you will always need to supply it, but you can pass in a file with config to bootstrap
<cherylj> like juju bootstrap foo aws --config=/path/to/my/config.yaml
<cherylj> arosales: that was one of my gripes, but now I have files for config for each type of cloud I bootstrap
<arosales> cherylj: ah interesting and that config.yaml will probably have conventions similar to env.yaml in 1.25?
<arosales> ya I don't like editing env.yaml file, but once I have it working I don't have to touch it
<cherylj> arosales: yeah, checkout slide 15 in that deck
<cherylj> I show the contents of my config file
<arosales> so setting up a cloud is tedious, but then I do like having my setting for each cloud env persist
<arosales> cherylj: ok thanks for the pointer
<arosales> cherylj: post 2.0 may be a usability point we gather feedback on, but for now doesn't seem like that big of a deal
<arosales> cherylj: thanks for the reply here
<arosales> cherylj: and have a good weekend
<cherylj> np!  It took me a while to experiment enough to put that deck together.  If it help people get started, then it's definitely worth it
<cherylj> you too!
<arosales> cherylj: one other reminder: we should meld your slides with rick_h__'s (the ones he did for the summit) and then mail the list. Very helpful info
<cherylj> arosales: I'll be using that deck to generate a "Getting started" section in the release notes
<cherylj> once we release beta2
<arosales> cherylj: very cool --thanks
 * cherylj disappears for a bit
<arosales> I'll let you get to your kiddo :-)
<metsuke> will 16.04 be available for bootstrapping as soon as it comes out or will there be some lag time until you can use it in juju?
<cherylj> metsuke: you can use xenial now if you use daily streams.  What cloud are you using for bootstrap?  aws?  local / lxd?  maas?
<metsuke> cherylj: we wiped our deployment, but we plan on using a private cloud with maas, juju deploying openstack and lxd hypervisor
<cherylj> metsuke: if you change the boot images Sync URL to http://maas.ubuntu.com/images/ephemeral-v2/daily/, you'll see 16.04 as an option for your MAAS images
<cherylj> metsuke: but I guess you need them in glance for openstack
<cherylj> you'll specify a default-series: xenial in your environments.yaml for that environment (if you're using juju 1.25)
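As a sketch of the 1.25-era setup cherylj mentions, an environments.yaml stanza with default-series set might look like the following. The MAAS server address and OAuth key are placeholders, not working values:

```shell
# juju 1.25 reads environments.yaml; default-series picks the OS release
# that new machines get. Replace the placeholders with real values.
cat > environments.yaml <<'EOF'
environments:
  maas:
    type: maas
    maas-server: "http://<maas-ip>/MAAS/"
    maas-oauth: "<maas-api-key>"
    default-series: xenial
EOF
grep "default-series" environments.yaml
```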
<metsuke> we want to use juju2...if we can get it to work, but if all we need to do is change that url then we should be set, thanks!
<cherylj> metsuke: there's a lot that's changed to bootstrap for juju2.  Want some help getting it set up?
<metsuke> we were able to get it up and running and got to the login screen
<cherylj> oh great!
<metsuke> as soon as we entered the credentials, it just spun forever
<cherylj> oh hmm, not so great
<cherylj> are you talking about logging into the maas controller?  or a juju bootstrap node?
<metsuke> unfortunately we were on a deadline so we didn't look into it too much, but since we re-wiped again, we will try juju 2 again
<metsuke> logging into the juju-gui charm
<cherylj> oooh
<cherylj> are you using 2.0-beta1?
<metsuke> yes, that was the one
<cherylj> there were some API changes that have made juju incompatible with the gui, but the gui team is working hard to get it fixed.  I thought I saw that they were going to release the updated version soon
<cherylj> let me see if I can find that announcement
<metsuke> that would explain it then.  We really like juju and we think it would fit perfectly with our organization.  We just need to work out the kinks ;)
<rick_h__> metsuke: there's a beta juju gui charm that works on 2.0
<rick_h__> metsuke: cs:~juju/trusty/juju-gui
<rick_h__> cherylj: ^
<cherylj> thanks, rick_h__ !
<cherylj> I was trying to dig that up
<metsuke> do we need a different repo for that?
<rick_h__> metsuke: cherylj see http://ubuntuonair.com
<rick_h__> for a lot of talking on the new stuff, etc from today
<metsuke> very cool
<rick_h__> metsuke: no, it's just not on the short main namespace until it leaves beta (cs:juju-gui)
<rick_h__> only on trusty/etc in beta
<rick_h__> for the deployed charm
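Putting rick_h__'s pointers together: deploying the beta GUI charm is just a normal deploy using the fully-qualified URL from the ~juju namespace. The juju commands are commented out because they need a live juju 2.0 controller:

```shell
# The beta charm lives under the ~juju user namespace until it is
# promoted to the short cs:juju-gui name.
GUI_CHARM="cs:~juju/trusty/juju-gui"
# juju deploy "$GUI_CHARM"
# juju expose juju-gui
echo "$GUI_CHARM"
```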
<nagyz> is there a sahara charm that works? :-)
<marcoceppi> nagyz: I don't think we have a sahara charm at this point in time. We do have apache hadoop charms and a whole suite of big data plugins
<nagyz> marcoceppi, thanks!
<Akarat> hey guys!
<Akarat> Can anyone answer why the files downloaded by the install hook are deleted?
<Akarat> someone?
<jose> what do you mean by deleted? they shouldn't be
<jose> unless specified, of course
<Akarat> downloaded files
<Akarat> they aren't in the home folder
<Akarat> '~'
<jose> no, they shouldn't be
<jose> everything runs in the charm folder, which is where they will be
<jose> unless you specify a download location
<Akarat> do you know where the charm folder is?
<Akarat> in the vm?
<Akarat> the bitnami stack is installed =)
<Akarat> nice
<metsuke> Are there any plans for an Openstack: Manila charm?  Or is there something in Juju that can give the same functionality (shared directories between containers/instances)?
#juju 2016-03-06
<metsuke> also, will there be User Management support in juju 2 (possibly ldap connections)?
<jose> marcoceppi: ping, I'm having troubles with amulet and conflicting dependencies (python3-path and python3-path.py), it doesn't want to install
<marcoceppi> jose: did you dist upgrade to xenial
<jose> marcoceppi: I ran do-release-upgrade and then dist-upgraded just now
<jose> but I've been using xenial for a while now
<marcoceppi> oh, this is amulet
<jose> yep, amulet requires: python3-amulet requires: python3-path python3-path.py
<marcoceppi> jose: I'll look into it tomorrow, for now just use charmbox to isolate deps
<jose> marcoceppi: ok, thanks :)
<nagyz> is there any way in a python venv to install a "global" ssl cert?
<nagyz> given that it doesn't inherit the system ones...
<nagyz> I know it's not a juju question, but someone might know the answer :-)
<lazyPower> nagyz - well you can add a ssl certificate to the CA trust store if thats what you're looking to do?
<nagyz> lazyPower, in the meantime I figured out that requests installs its own cacert.pem into its local folder...
<nagyz> :)
<nagyz> but thanks
<lazyPower> cheers :)
<nagyz> I need to develop a _good_ CLI around MaaS it seems
<nagyz> it's horrible to change interfaces and whatnot around quickly
<nagyz> and I probably want to stay away from bzr :)
<nagyz> so just a layer on top.
<nagyz> unless you guys know of a "superCLI" for it?
<lazyPower> just the maas command
<lazyPower> apt install maas-cli
<nagyz> yeah, which is pretty... basic.
<lazyPower> maybe file a bug with whats missing, so we can at least track it?
<nagyz> I'm actually OK with contributing it in some form
<nagyz> I have a free couple hours :)
<nagyz> is bzr the only option for maas?
<nagyz> or can I do a git PR somehow?
#juju 2017-02-27
<ybaumy> jamespage: it's working with maas and the juju openstack-base bundle. the only thing i had to fix was the network interface name, which i had to change from ens192 to eth1 on the neutron gateway machine
<ybaumy> jamespage: i had to change udev rules and reboot
<ybaumy> jamespage: then it automatically fixed it
<ybaumy> jamespage: i now can go from there
<jamespage> ybaumy: great!
<jamespage> ybaumy: yeah we were discussing the configuration for the external networking last week
<jamespage> ybaumy: at a minimum we might stop attempted auto config of that and make a note in the bundle README that this has to be set afterwards
<jamespage> ybaumy: you can config it via the data-port option on the neutron-gateway charm - so br-ex:ens192 for your case
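jamespage's suggestion as a concrete command. The interface name ens192 comes from ybaumy's machine and will differ elsewhere; the juju command is commented out because it needs a live model with the neutron-gateway charm deployed:

```shell
# Map the external bridge to the physical NIC on the neutron-gateway units.
DATA_PORT="br-ex:ens192"
# juju config neutron-gateway data-port="$DATA_PORT"
echo "$DATA_PORT"
```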
<gabriel> Hi guys, I have a quick question. Are there any existing interfaces in the neutron-gateway charm to restart the neutron-l3-agent and neutron-dhcp-agent openstack services?
<BlackDex> gabriel: use juju run --application neutron-gateway 'service neutron-l3-agent restart ; service neutron-dhcp-agent restart'
<gabriel> BlackDex: thanks for your quick reply. I want to do this operation in a charm. For the moment, I have something like:
<gabriel> for service in ['neutron-dhcp-agent', 'neutron-l3-agent']:
<gabriel>   hookenv.status_set('maintenance', 'Restarting %s' % service)
<gabriel>   host.service_restart(service)
<gabriel> is it the best way to do so ?
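For comparison, gabriel's in-charm loop and BlackDex's ad-hoc `juju run` do the same thing from two sides. The sketch below is the operator-side variant, with the service names taken straight from the conversation; the juju command is commented out because it needs a live model:

```shell
# Restart each agent in turn, mirroring the loop in gabriel's charm code.
SERVICES="neutron-dhcp-agent neutron-l3-agent"
for svc in $SERVICES; do
  echo "restarting $svc"
  # juju run --application neutron-gateway "service $svc restart"
done
```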
<BlackDex> that is a bit out of my league :p. Didn't do that much in charm dev
<gabriel> BlackDex: ok thanks! :p However, it's really close to your way of restarting these services, just from a different point of view
<BlackDex> is it a subordinate charm? Or an external charm which needs to communicate with neutron-gateway?
<BlackDex> for a subordinate there are examples i think, but i don't know for external
<gabriel> yes it is a subordinate charm
<gabriel> but I didn't see any interfaces to do that
<gabriel> for example, in neutron-openvswitch charm, you have the neutron-control interface to restart the services
<gabriel> in fact, i think that it's quite similar :p
<BlackDex> ah :). Well i only did some small changes to existing charms, and that did not include stuff like that
<gabriel> BlackDex: ok, no problems and thanks again! :)
<BlackDex> :) Good luck!
<gabriel> thanks!
<BlackDex> maybe there are some people at openstack-charmers which can help you a bit more :)
<BlackDex> i mean #openstack-charms
<gabriel> oh ok thanks! :)
<cnf> hmz, maas is giving me a headache
<dgonzo> I've spent a couple days working to familiarize myself with juju and charms. I'm still failing to understand how best to write a simple layers charm. The tutorials appear to be rather opinionated but quite variable. I'm coming from a docker/docker-compose background. I'd like to add GPU support to kubernetes worker. I've been following along SaMnCo article:
<dgonzo> https://insights.ubuntu.com/2017/02/15/gpus-kubernetes-for-deep-learning%E2%80%8A-%E2%80%8Apart-13/ but not getting to a working nvidia-smi
<dgonzo> any pointers on how best to write a layer that would just be adding a rather picky driver?
<SaMnCo> dgonzo: I will let the dev team answer the latter part, but can you detail what is not working for Nvidia? What version of my layer do you use? What cloud and what Nvidia card? (Latest gives you 375.26 and works for me for Pascal, but will not on the K20 from AWS, which requires 367, and I didn't model that.)
<SaMnCo> So look at the layer code, try to run it natively on the host, and see if it fails saying that the card is legacy and requires 367
<SaMnCo> If yes, just find out the versions you need. These are set at the very beginning of the layer script
<SaMnCo> And easily manageable
<dgonzo> SaMnCo: It doesn't appear that the drivers are installed. Minimum install required to get GPU availability is going to be something like https://github.com/NVIDIA/nvidia-docker/wiki/Deploy-on-Amazon-EC2#gpu-instance-creation
<dgonzo> last run of your nvidia-cuda layers I got a driver error or driver missing -- sorry it was a few iterations ago
<SaMnCo> So if you use the g2 instances, you are stuck on 367 and the charm will not work. You can check out the commit history; the version just before the last one will work
<dgonzo> ok
<SaMnCo> dgonzo: if you use p2, you will be fine
<SaMnCo> We are working with nVidia to make that better
<SaMnCo> Go back in time to 367.xx essentially
<dgonzo> are you saying there's a specific commit with the 367.xx drivers or I that I should fork and modify to peg at 367.xx?
<dgonzo> ... I am using the g2 series ec2 for cost considerations -- I'm not running these for training but rather for evaluation nodes
<dgonzo> SaMnCo: I see the driver versions in the last commit. Thanks.
<SaMnCo> Yes, previous commit is 367 I think
<SaMnCo> Check it out
<SaMnCo> Or fork the repo and edit the reactive/cuda.sh
<SaMnCo> To repoint to these
<Hetfield> hi all
<Hetfield> in https://jujucharms.com/docs/2.0/models-config i read i can use noproxy for local addresses, but it doesn't seem to work
<Hetfield> juju bootstrap --config noproxy=10.0.0.0/8 maas juju-controller --constraints tags=juju WARNING unknown config field "noproxy" WARNING unknown config field "noproxy"
<Hetfield> and the no_proxy defined in /etc/environment seems ignored too
<kwmonroe> Hetfield: did you mean to type noproxy there?  the config opt is no-proxy, and btw, --config will affect the bootstrap node, you can specify it again if you want your models to use it with "--model-default no-proxy=foo".
<kwmonroe> Hetfield: see more discussion about that here:  https://github.com/juju/docs/issues/1676
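kwmonroe's correction, spelled out: the key is `no-proxy` (hyphenated), and it has to be supplied once for the controller and once as a model default. The bootstrap command is commented out because it needs a reachable MAAS:

```shell
NO_PROXY_RANGE="10.0.0.0/8"
# juju bootstrap maas juju-controller \
#   --config no-proxy="$NO_PROXY_RANGE" \
#   --model-default no-proxy="$NO_PROXY_RANGE" \
#   --constraints tags=juju
echo "no-proxy=$NO_PROXY_RANGE"
```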
<dgonzo> SaMnCo: I built the resource with updated driver. I'm getting an error "hook failed: "install"
<dgonzo> I'm still entering commands line-by-line as the template file in the repo doesn't yet work
<Hetfield> kwmonroe: yes, i need both, for bootstrap and later for day by day
<xpmaven> trying to install Ubuntu OpenStack with Autopilot and receiving the following juju bootstrap failure during installation: "problem with juju bootstrap".
<SaMnCo> dgonzo: hmmm OK. I'll spin up a g2 when I can to reproduce and fix. These Nvidia drivers drive me crazy
<SaMnCo> At MWC right now so not before tomorrow
<xpmaven> from the log file: wait 15s\nAttempt 5 to download tools from https://streams.canonical.com/juju/tools/agent/1.25.6/juju-1.25.6-trusty-amd64.tgz...\ncurl: (56) SSL read: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac, errno 0\ntools from https://streams.canonical.com/juju/tools/agent/1.25.6/juju-1.25.6-trusty-amd64.tgz downloaded: HTTP 200; time 20.655s; size 770048 bytes; speed 37281.000 bytes/s Tools 
<xpmaven> any help would be appreciated ;/
<lazyPower> xpmaven: oo ssl decryption error? is there a proxy or anything between you and the url its attempting to fetch?
<xpmaven> no proxy
<xpmaven> well, unless one considers the MaaS controller a proxy...
<dgonzo> SaMnCo: no problem. Thanks for forging the way on this. Being able to manage a "hybrid" cluster with gpu resources is exactly what i've been trying to do "by hand" even with the frustrations you've got a solution that shows real promise
<lazyPower> xpmaven: nothing that would cause an ssl error that i'm aware of
<lazyPower> xpmaven: the only thing i could think of was a mitm proxy intercepting your ssl query and failing validation. but that doesn't seem to be the case... i'm uncertain. I would certainly file a bug however so we can get the right people looking at it
<xpmaven> thanks
<xpmaven> submitted: https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/1038
<admcleod> xpmaven: if you curl the directory (without the file) does it display an ssl error?
<xpmaven> no it does not
<bdx> anyone familiar with configuring the haproxy charm to do https redirects?
<jrwren> bdx: I am.
<bdx> jrwren: so here, https://gist.github.com/jamesbeedy/fa588b242ccffe5ce52c1e41c895e274#file-haproxy-bundle-yaml-L15
<bdx> I'm trying to add the redirect scheme to the service options, but am getting denied due to what looks like a unicode error bc of my "!" http://paste.ubuntu.com/24079429/
<jrwren> bdx: yeah, quote the whole thing.
<jrwren> service_options: ['redirect scheme https code 301 if !{ ssl_fc }']
<bdx> jrwren: I'll give this a whirl. thanks
<jrwren> bdx: that charm is deceptively awesome. I thought it didn't support a bunch of stuff, but it turns out it supports almost everything. It's great.
<stormmore> o/
<stormmore> wish the maas team were as responsive as you guys!
<bdx> jrwren: I've had similar mileage ... the more I dig in, the better it gets :-)
<kklimonda> is zsh completion for juju 2.0 available somewhere?
<bdx> gah
<bdx> jrwren: I spoke to soon
<bdx> http://paste.ubuntu.com/24079515/
<jrwren> bdx: that didn't work? It must be some yaml thing. In my case it works fine quoted, but only when I don't use json array syntax (the []) and instead use yaml list syntax
<jrwren> bdx: here is exactly what I use: http://paste.ubuntu.com/24079518/
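jrwren's paste has since expired, so the following is only a guess at the shape he describes: YAML list syntax with the whole option single-quoted so the `!` is not taken as a YAML tag. The service name and ports are invented:

```shell
# Hypothetical haproxy charm `services` config. The single quotes keep
# YAML from interpreting `!{ ssl_fc }` as a tag.
cat > haproxy-services.yaml <<'EOF'
- service_name: webapp
  service_host: 0.0.0.0
  service_port: 80
  service_options:
    - 'redirect scheme https code 301 if !{ ssl_fc }'
EOF
# juju config haproxy services="$(cat haproxy-services.yaml)"
grep "redirect scheme" haproxy-services.yaml
```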
<bdx> jrwren: ahh nice, I'll give that a try
<bdx> jrwren: sadly, same result using your config
<bdx> jrwren: I'm wondering, are you using a trusty haproxy?
<jrwren> bdx: xenial
<rick_h> jacekn: ping, trying to use your grafana charm but getting a failure to get wheeze from packagecloud?
<bdx> jrwren: got it working, thanks
<jrwren> bdx: great!
<stormmore> really wish I could figure out why maas-enlist isn't working :-/
<h00pz> hey guys I need to customize the charmstore openstack bundle, can someone point me in the right direction to start doing some charm coding etc
#juju 2017-02-28
<miken> I'm having trouble removing a service in a deploy - I've `juju remove-application my-service` which worked, juju status shows the workload as terminated...
<miken> but the instance never terminates. I assume because subordinates are still active, but I can't stop them with `juju remove-unit`, as they're subordinates.
<miken> So I'm not sure how I'm meant to free up the underlying instance?
<lazyPower> miken: sounds like a hook is trapped in error state on the subordinate
<lazyPower> miken: can you try juju resolved --no-retry on the application in question?
<miken> Nope, juju status doesn't show any errors...
 * miken tries...
<anastasiamac> miken: does juju status --format=yaml show errors?
<miken> ERROR unit "sca-cn-fe/0" is not in an error state
 * miken checks
<miken> Nope `juju status --format yaml | grep error` is empty.
 * miken pastes status
<lazyPower> weird, usually when a subordinate fails to clean itself up its due to either itself having hook errors or a related unit has relationship-* hook errors.
<miken> lazyPower: But there were no errors - it looks like the primary charm terminated correctly... here's the main part of status: http://paste.ubuntu.com/24081333/
<lazyPower> miken: is it the nrpe unit or the logstash-forwarder unit thats hanging around (or both?)
<miken> lazyPower: all of the subordinates are idle, as if they're not even aware they should be stopping.
 * miken checks juju log on a subordinate there.
<lazyPower> Ok, so both are lingering.  How about the unit/agent logs? I'm curious if there's anything in there complaining something's not quite right.
<miken> The unit-landscape-6 log has:
<miken> ERROR juju.api.watcher watcher.go:87 error trying to stop watcher: connection is shut down
<miken> followed by lots of
<miken> WARNING juju.network network.go:447 cannot get "lxdbr0" addresses: route ip+net: no such network interface (ignoring)
<miken> (not sure why that's relevant though - these are not lxd units, so I assume the warning is just a warning)
<kklimonda> is there a known bug in juju 2.0.x where some lxd containers boot up with a 10.0.0.X IP address?
<kjackal> Good morning Juju world!
<kklimonda> I guess I just ran out of IP addresses in MAAS for all the containers
<jacekn> rick_h: hey. Still need help with the grafana charm?
<rick_h> jacekn: no, thank you though. Turns out the apt-cache didn't like going out to get the debs over https and so needed some tweaking
<jacekn> ok
<Zic> hi, is running conjure-up in an LXD machine supported/advisable?
<Zic> (LXD in LXD so)
<Zic> oops: I mean, running conjure-up in "localhost" mode
<rick_h> Zic: yes, you can run conjure-up in localhost mode and it's supported, as it does special tweaks in the case of things like k8s and such to make nested containers work on the lxd profile and such
 * magicaltrout high-fives rick_h for getting "and such" into a sentence twice!
 * rick_h hasn't had his coffee yet
<magicaltrout> hehe
<rick_h> and now that /me knows he's being watched...will have to proofread more :P
<magicaltrout> ... and such
<stokachu> Zic, running inside an LXD machine is also doable https://stgraber.org/2017/01/13/kubernetes-inside-lxd/ but support for that is limited
<stokachu> and support being technical support from the conjure-up guys
<jamespage> if there are any charmers with a few mins
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/misc-percona-changes/+merge/318253
<jamespage> could do with a review - trying to make the ram usage in PXC a bit more sane
<cory_fu> tinwood: Ping
<cory_fu> tinwood: stub and I touched base and it seems like the discussion going on in the PR is going well, so we decided a short meeting was sufficient.
<cory_fu> tvansteenburgh1: ^
<tinwood> cory_fu, sorry, I missed the ping.  Didn't get a reminder from the calendar.  Sorry I missed the meeting :(
<cory_fu> tinwood: No worries.  I'll make sure to send out a reminder before the next one.  But, as stub pointed out, it was going to be short and sweet regardless.  :)
<tinwood> cory_fu, ok thanks.
<jamesd_> hey, I'm trying to add constraints to the kubernetes bundle.yaml as I want to scale up etcd and the worker nodes. Do you use the machine: tag to map constraints to machines and not services?
<stub> thedac: ping (or anyone who knows about JUJU_BINARY and JUJU_VERSION)
<cnf> wow, maas is a pain in the special place... :/
<Zic> hmm, I just deployed the kubernetes-e2e charm, add-relation to my masters & easyrsa via juju cli, and run the e2e test through run-action
<Zic> with show-action-output, I find the report as flatfile
<Zic> but how can I have this kind of display https://k8s-gubernator.appspot.com/build/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/0222031634000/ ?
<Zic> I just have .log and .xml files as a result
<thedac> stub: I commented on the MP. Those are my inventions. It helps me when I have multiple versions of juju strewn about that I need to test. I can yank that bit if necessary.
<stub> thedac: I can land it if you need to use juju-wait instead of 'juju wait' for whatever reason, as it is no real skin off my nose. But this command should be moving to core, at which point everywhere you call juju-wait becomes techdebt
<thedac> stub: Yeah, we do use juju-wait. So, for now that would give some breathing room
<stub> Ok. I'll land it in a tick then.
<thedac> thanks
<stub> thedac: per your review comment, when I tested this you never have to ensure that the juju you are using is first in the path. When juju calls the plugin, it has adjusted the path for you. I think it would fail, though, if you have multiple juju binaries in the one directory with names other than 'juju'
<stub> (at which point the plugin architecture falls flat on its face)
<lazyPower> Zic: - those flat files are ingested into gubernator
<lazyPower> Zic: i would love to tell you that it's easy to host your own, but it's got some AppEngine specifics built in; marcoceppi took a look not long ago
<lazyPower> Zic: so the short answer is it's non-trivial. you can parse the data yourself though and display results, it's all junit xml
<Zic> lazyPower: ok, thanks, and does an easier way to display this junit xml exist? (I don't know anything about JUnit, actually...)
<rick_h> bdx: you coming to the juju show tomorrow?
<lazyPower> Zic: i know jenkins has a junit parser
<lazyPower> thats about where my knowledge of junit ends tbh
<stub> thedac: up on pypi and packages rebuilt in my ppa
<thedac> stub: thank you. That is great
<cnf> hmm, juju doesn't seem to like socks5 proxies
<cnf> ERROR Get http://<ip>:5240/MAAS/api/1.0/version/: http: error connecting to proxy http://socks5://127.0.0.1:3128/: dial tcp: lookup socks5: no such host
<lazyPower> cnf: isnt that just socks5:// not http://socks5://?
<cnf> lazyPower: yeah, it's set to just socks5://
<cnf> juju is making that into http://socks5://
<cnf> $ echo $http_proxy
<cnf> socks5://127.0.0.1:3128/
<lazyPower> cnf: aww :( yeah thats our bad then. Could you file a bug for that? https://launchpad.net/juju/+filebug
<cnf> when i get home, i'm leaving :P
<zeestrat> rick_h: Are y'all doing the Juju show right now or is the YouTube calendar acting weird?
<cnf> lazyPower: https://bugs.launchpad.net/juju/+bug/1668727
<mup> Bug #1668727: juju commands does not understand socks5:// as a proxy <juju:New> <https://launchpad.net/bugs/1668727>
<rick_h> zeestrat: tomorrow
<lazyPower> cnf: thank you!
<rick_h> zeestrat: should say it's got a day yet
<cnf> np
<zeestrat> rick_h: Their timer just ticked down here. it's February 28 so it's probably leap year stuff
<rick_h> zeestrat: ok, I fail. it said the 28th, moved it to march 1 apologies for the confusion
<rick_h> zeestrat: I could have sworn I grabbed the wed date but must have had a lovely off by one error
<zeestrat> rick_h: As we all know, there are two hard things in computer science: cache invalidation, naming things, and off-by-one errors.
<cnf> tsk, naming things is easy
<cnf> that's what uuid-gen is for!
<mskalka> Hey #Juju, is it possible to alias a charm in a bundle.yaml? Akin to 'juju deploy <some charm> <user specified name>'
<rick_h> mskalka: so the key in the yaml is the name that is used
<rick_h> mskalka: so in https://api.jujucharms.com/charmstore/v5/django/archive/bundle.yaml I could s/python-django to just django and it'd be that name in the model
<mskalka> rick_h: Ah, perfect! Thanks for the quick answer
<rick_h> mskalka: np
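rick_h's substitution, made concrete in a minimal bundle sketch. The unit count is illustrative, and note that older bundles use a `services:` key where newer ones use `applications:`:

```shell
# The mapping key ("django") is the application name juju uses in the
# model; the charm field still points at the python-django charm.
cat > bundle.yaml <<'EOF'
applications:
  django:
    charm: cs:python-django
    num_units: 1
EOF
# juju deploy ./bundle.yaml
grep -n "django" bundle.yaml
```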
<lazyPower> rick_h: lookit you lurking in #juju being all helpful :D
<bdx> is the hosted controller under high load right now?
<rick_h> jrwren: uiteam ^
<hatch> looking
<rick_h> bdx: there was some aws outage going on but not aware of any load items atm. I'm on my way out the door but folks will peek at it
<jrwren> bdx: for which cloud?
<bdx> us-east-1
<hatch> bdx nothing showing here, can you elaborate on the issue you are seeing?
<hatch> oh aws is having issues right now
<bdx> yeah we were experiencing s3 outages in us-east-1 all day
<jrwren> controller shouldn't be impacted, but your ability to add units may be, because archives.ubuntu is s3 hosted AFAIK
<hatch> heh I was JUST typing that out :)
<jcastro> I could have sworn we moved them off of s3 a while back for other reasons
<bdx> hatch: hard to tell if it's the controller that's inducing the lag, I would just say in general things are a bit slow today... just wondering how the controller is holding up as well
<jcastro> it's a major outage, who knows what else is happening there right now
<hatch> bdx so far so good - I'll poke it a bit and report back
<hatch> bdx so it looks like the aws s3 issues are causing issues on fresh bootstraps
<hatch> on aws
<bdx> ok, thanks
<stormmore> that might explain the issues I was having with security earlier
<brandor5> Hello everyone: I'm using juju to install an openstack poc and have some troubles with networking... Is there a better place to seek advice than #openstack ?
<zeestrat> brandor5: Are you using Juju/MAAS?
<brandor5> zeestrat: yep
<zeestrat> brandor5: Then here, or #openstack-charms would be the right place
<brandor5> zeestrat: awesome, thank you very much
<zeestrat> brandor5: No problem. A word to the wise. I think a bunch of the openstack-charmers are in the EMEA timezone so there might not be so much action around this time.
<brandor5> ah, thanks for the advice :)
<zeestrat> brandor5: Last tip before I'm off, I'd add some more details about the network/interfaces for the hosts in question from MAAS in #openstack-charms :) Good luck!
<brandor5> zeestrat: much appreciated!
<cholcombe> when trying to deploy with juju 2.1 i'm getting an odd problem today.  ERROR unknown option "series"
<cholcombe> unless i'm going crazy, that used to work fine with juju 2.0.x
#juju 2017-03-01
<cholcombe> ok. looks like you can't specify the series in the config.yaml.  It has to be on the cmd line
<Budgie^Smore> lazyPower so I am running into a problem with my infra-in-a-box maas environment right now hence why I haven't come back to the issues with the k8s cluster :-/
<Budgie^Smore> I am thinking of switching out the virt tech I am using for it
<lutostag> Cynerva: any charm out there using a debug-layer currently?
<lutostag> (one I can easily test-deploy to see if new feature in crashdump works with it)?
<lazyPower> lutostag: kubernetes-master/worker/etcd
<lazyPower> however etcd may not have landed yet.... i'd need to check the charm store
<lutostag> lazyPower: that'll work, thanks
<KeyboardSquid> Anyone online? Looking for some help troubleshooting a bootstrap issue
<KeyboardSquid> Juju Version: 2.0.2-xenial-amd64
<KeyboardSquid> When running "juju bootstrap lxd lxd-juju" I get the following error:
<KeyboardSquid> error: cannot load ssh client keys: open /home/keyboardsquid/.local/share/juju/ssh: permission denied
<stormmore> KeyboardSquid, what's the output from ls -la /home/keyboardsquid/.local/share/juju/ssh ?
<KeyboardSquid> "Cannot open directory /home/keyboardsquid/.local/share/juju/ssh"
<KeyboardSquid> looks like the owner is listed as User: Root, Group:root
<KeyboardSquid> looking online, this seems to be an issue in earlier versions of juju, but was supposed to be fixed after version 1.8.0
<KeyboardSquid> I am connected to this machine via SSH if that matters
<stormmore> you can do a "chown -R keyboardsquid. /home/keyboardsquid/.local/"
<stormmore> that should fix the permissions for all the folders in the directory and should fix your problem
<KeyboardSquid> ok, i can try that. Is there any reason .local would be owned by root in the first place?
<stormmore> no
<stormmore> nothing in your home dir should really be owned by root
<stormmore> root can walk in regardless anyway
<KeyboardSquid> Boom, looks like I'm good to go. Thanks!
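For anyone hitting the same permission error, stormmore's fix amounts to the following. It is shown against a scratch directory so it is safe to run verbatim; on a real system the target is $HOME/.local and you would need sudo:

```shell
# Recreate the layout in a scratch dir, then reclaim ownership the way
# `sudo chown -R $USER ~/.local` would on the real home directory.
mkdir -p scratch-home/.local/share/juju/ssh
chown -R "$(id -un)" scratch-home/.local
# Verify nothing is left owned by another user (prints nothing if clean):
find scratch-home/.local ! -user "$(id -un)" -print
```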
<stormmore> no worries :)
<kjackal> Good morning Juju world!
<eeemil> How does Juju decide which subnet to use within a specific space? I'm deploying to MaaS where each machine has 2 NICs: one NIC is exposed to the Internet and one NIC is for internal communication. I want to deploy openstack-base. If I have 2 separate spaces (one external, one internal), Juju seems to become confused. If I have 1 space with both external and internal subnets, I can't communicate with some units as they get internal IPs listed as public...
<cnf> meuwning
<cnf> lazyPower: so it seems the missing socks support isn't a bug ^^;
<anrah> Has anyone been able to bootstrap controller with 2.1 using OpenStack as cloud provider?
<Zic> hello, does the NGINX Ingress Controller deployed by CDK support this https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/customization#using-annotations ?
<cnf> hmm, wow
<cnf> can you bootstrap a juju controller manually?
<cnf> because if you use maas, it claims an entire machine for this
<cnf> hmm
<zeestrat> cnf: regarding juju controllers with maas. to reduce footprint on smaller deployments, we've used kvm's on maas controllers to colocate juju controllers
<cnf> hmm
<cnf> and you set the power mode to manual?
<zeestrat> we use the Virsh power mode. see https://docs.ubuntu.com/maas/2.1/en/installconfig-add-nodes#kvm-guest-nodes
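A sketch of the colocation zeestrat describes: carve a juju controller out of the MAAS host itself as a PXE-booting KVM guest, then enlist it with the virsh power type. The names, sizes, and addresses are illustrative, and exact virt-install flags may differ with your versions:

```shell
# Create a PXE-booting KVM guest on the MAAS host (commented out: needs
# libvirt installed and a bridge on the PXE network).
# virt-install --name juju-controller --memory 4096 --disk size=40 \
#   --network bridge=br0 --pxe --boot network,hd --noautoconsole

# Then, in MAAS, set the machine's power type to "Virsh" with:
POWER_ADDRESS="qemu+ssh://ubuntu@<maas-host>/system"
POWER_ID="juju-controller"    # the libvirt domain name
echo "$POWER_ADDRESS $POWER_ID"
```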
<cnf> hmm, interesting
<cnf> thanks
<zeestrat> no problem
<cnf> can you use a local socket as the power address?
<cnf> hmm, maas GUI really doesn't give a lot of feedback when it doesn't like some entry :P
<zeestrat> cnf: not sure, I've only used qemu+ssh with an IP.
<cnf> zeestrat: ssh to the localhost?
<zeestrat> yeah, can do that too I imagine.
<zeestrat> the #maas guys can probably help you more if you get into the nitty gritty :)
<cnf> yeah, i'll ask there :P
<cnf> i haven't used kvm in ages, either
<cnf> zeestrat: how did you create the kvm guest node? and make it pxe boot?
<Zic> -> to answer my question from earlier: the nginx-ingress-controller used by CDK does not support Annotations like the one from NGINX Inc., but it's not important; I used this howto and it works: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/custom-configuration
<Zic> TL;DR, just use a ConfigMap instead of Annotations
<cnf> hmm, i always get stuck on "Fetching Juju GUI 2.4.2"
<cnf> hmz
<cnf> where is that gui downloaded? is that a remote command?
<cnf> how do i tell it to use a remote proxy?
<cnf> does juju expect direct ssh access to hosts it creates?
<marcoceppi> cnf: you can set the proxy values when bootstrapping, so it can use that during bootstrap
<marcoceppi> cnf: ssh access is needed during the bootstrap process, at the moment
<cnf> bah
<cnf> that's just not feasible here
<cnf> why can't it just respect my ssh config
<marcoceppi> cnf: not sure about the ssh portion - as in if it does use a config or not
<marcoceppi> Also, I'm not sure why we even need SSH on first launch
<cnf> i am about 4 hops separated from direct ssh access
<marcoceppi> cnf: what provider are you using?
<cnf> MAAS
<marcoceppi> let me check
<cnf> my main goal is to install and manage openstack
<marcoceppi> sounds reasonable
<cnf> it's been a rocky road so far...
<marcoceppi> cnf: lets see if I can help curb that
<cnf> that would be nice :P
<cnf> marcoceppi: btw, https://bugs.launchpad.net/juju/+bug/1668727 is also mine
<mup> Bug #1668727: juju does not support socks5 as a proxy <juju:Triaged> <https://launchpad.net/bugs/1668727>
<marcoceppi> cnf: There's a Juju Show later today, I'll make sure to bring it up
<cnf> oh, nice
<cnf> marcoceppi: if you need ssh, it'd be nice if i could specify an ssh wrapper, or at least get my ssh_config used
<marcoceppi> cnf: yeah, I think so
<disposable2> have 4 machines ready in MAAS. i want to control maas with juju BUT i don't want to sacrifice 1 of the 4 machines to be the controller. is it possible to have the controller in lxd?
<cnf> i'm going to take a break, and think on how i can bypass this temporarily
<disposable2> s/control it/control maas
<cnf> disposable2: i'm putting the controller in kvm on the maas machine
<cnf> well, i'm trying to, juju isn't cooperating much atm :P
<cnf> disposable2: maas can work with KVM just fine
<cnf> disposable2: https://docs.ubuntu.com/maas/2.1/en/installconfig-add-nodes#kvm-guest-nodes
<disposable2> cnf: thanks, but the question is somewhat different. i want to know if the controller HAS to be in the same 'cloud' as the cloud it is controlling
<cnf> it has to run somewhere
<cnf> if you don't want it on the metal, AND you don't want it on the MAAS controller
<cnf> where would you put it?
<disposable2> cnf: into lxd 'cloud'
<cnf> lxd is just a local instance
<cnf> on your laptop
<cnf> or on something running
<disposable2> i'm not going to sacrifice a 60K server with 1TB ram to be a controller.
<marcoceppi> disposable2: the controller has to be reachable by other machines
<cnf> the only thing running is the maas controller
<cnf> disposable2: so, as i said, run it in KVM on the MAAS controller
<cnf> disposable2: you said you have a MAAS working, run it on the same machine the MAAS controller is on
<marcoceppi> ^ +1
<marcoceppi> I run three KVM on my 5 node maas cluster to just get more density
<disposable2> marcoceppi: so if my lxd machines are using a bridged interface on a common network, then bootstrapping my maas-controller in lxd will work?
<cnf> disposable2: you have nothing to run lxd on, atm
<disposable2> marcoceppi: my main problem is with the extremely superficial juju documentation. "man juju" doesn't clearly explain what i can or cannot do with 'juju bootstrap'
<marcoceppi> disposable2: it's possible, but if you bootstrap lxd juju will expect lxd as your model, let me check if you can bootstrap one cloud and point at another
<disposable2> marcoceppi, cnf: i guess, i can add a cheap node to maas and run kvm on it if that's the only solution.
<cnf> disposable2: ...
<cnf> disposable2: use the MAAS server...
<cnf> you already HAVE that
<cnf> disposable2: what is the maas controller running on, atm?
<disposable2> cnf: i understand what you're trying to say, my question was meant to be a generic one - i.e. "can the controller of 1 cloud backend be running on another cloud backend?" (if networking isn't an issue)
<marcoceppi> disposable2: I know you can do things like, regions across a controller cloud
<marcoceppi> disposable2: not sure about cross controller, trying now
<rick_h> marcoceppi: no, can't do that atm
<marcoceppi> ah :(
<rick_h> marcoceppi: too much chance for things to go boom with networking and dupe 10.XXX addresses and such
<rick_h> you can do manual machines if you want to
<rick_h> e.g. add-machine ssh... and point at a machine on another cloud but it's kind of a mess
<marcoceppi> rick_h: right.
<disposable2> rick_h, marcoceppi: thanks
<aisrael> rick_h, how is manual provider a mess right now? iow, what should we be keeping an eye out for?
<rick_h> aisrael: I just mean that mixing a manual machine into an AWS model
<rick_h> aisrael: e.g. it's not as nice as "juju add-unit"
<aisrael> rick_h, Gotcha. It's "complicated". :D
<rick_h> aisrael: since to get a machine from another cloud you'd need to get that instance, setup SSH, and then add-machine your new GCE instance into your AWS model so that you can cross cloud bits
<rick_h> aisrael: yea, it's a messy UX I guess. Sorry for the confusion there.
<brandor5> Hello everyone: I've tried asking my question in #openstack-charmers and #openstack but haven't gotten any replies, so I'm going to try here... I'm trying to set up an openstack POC using maas/juju and I'm hitting a snag when trying to use vlan networks (for floating and provider networks)... flat networks work fine but when using a vlan network for a floating external pool, i don't see traffic jumping from the router namespace out to
<cnf> hmm
<cnf> marcoceppi: so any suggestions on how to get going for now?
<dakj> Hello guys, I need help. I'm trying to deploy Landscape Dense-Maas on Ubuntu 14.04 LTS but I get this error on haproxy: "Status: error - hook failed: "config-changed"
<dakj> IP address: none
<dakj> Public address: none". Someone can help me, please?
<hackedbellini> Hey guys! I have a setup here using lxd. I'm trying to make some changes to cloud-init's user-data (more specifically, add a dns search option in it), but it doesn't seem to be getting propagated
<hackedbellini> I'm doing the changes like this: lxc config edit juju-449b90-6
<hackedbellini> after that, I try to restart the lxc (I even tried restarting the host itself), but it is still using the old config
<hackedbellini> when I look at /var/lib/cloud/instance/user-data.txt, it is still in the old version too
<hackedbellini> am I missing something here?
<jrwren> i do not think you can change user-data once a system is booted. its a one time thing AFAIK.
<jrwren> lxc is behaving like other clouds in this regard. AFAIK you can't change it on AWS, Azure or GCE either.
<hackedbellini> jrwren: hrm, so is there any other way of doing that?
<hackedbellini> I really need to make those changes
<jrwren> hackedbellini: there is some way to copy cloud-config to a host and make cloud-init evaluate and execute its modules based on it. I don't recall how, but IIRC it is on stack overflow.
<hackedbellini> jrwren: ok, I'll check it. Do you have any idea of what kind of keywords I should use for the search? I just don't know how exactly to describe the problem to search it on google =P
<jrwren> hackedbellini: run cloud-init with cloud-config after boot?
<jrwren> hackedbellini: http://stackoverflow.com/questions/6475374/how-do-i-make-cloud-init-startup-scripts-run-every-time-my-ec2-instance-boots#10455027 :)
<hackedbellini> jrwren: oh thanks! I'll take a look
<hackedbellini> jrwren: I don't know if those will help me. Let me ask you a question and maybe the answer to it will point me in the right direction:
<hackedbellini> the config from /etc/hosts are generated by cloud-init. They are based on /etc/cloud/templates/hosts.tmpl. When I look at that template, I see that cloud-init will use the fqdn/hostname from its configuration. In this case, how can I change the hostname?
<hackedbellini> since my changes are not propagated, what would be the way to change it? Changing it on /var/lib/cloud/instance/user-data.txt would do the trick?
<jrwren> yes, I'd overwrite that file.
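[editor's note] A note on the overwrite jrwren suggests: hostname and /etc/hosts handling are driven by a few top-level cloud-config keys, so the edit amounts to something like this minimal sketch (the names and domain are placeholders):

```yaml
#cloud-config
hostname: jenkins-0              # placeholder hostname
fqdn: jenkins-0.example.com      # placeholder domain your mail server accepts
manage_etc_hosts: true           # render /etc/hosts from hosts.tmpl on boot
```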
<hackedbellini> jrwren: ok, I did. So, how can I "trigger" the update of /etc/hosts now? I updated that file and restarted the lxc, but nothing
<hackedbellini> because if I know how I can trigger that I probably can trigger the dns changes if I make them on that file too
<dakj> Can anyone help me?
<lazyPower> dakj: what substrate and juju version are you using?
<lazyPower> *cloud
<dakj> Lazy power: 2.0
<lazyPower> dakj: and what cloud?
<dakj> lazypower: MAAS
<lazyPower> dakj: can you pastebin the logs from haproxy?  juju ssh haproxy/0 && pastebinit /var/log/juju/unit-haproxy-0.log
<dakj> I've opened a post here (http://askubuntu.com/questions/886533/deploy-landscape-status-error-hook-failed-install-ip-address-none-public)
<lazyPower> dakj: ah actually your debug-log --replay command would work as well
<dakj> lazypower: in that post I had a problem with postgresql, but after re-making everything the same issue is on haproxy
<lazyPower> dakj: is it possible the vm is configured without proper networking?
<dakj> lazypower: look that http://askubuntu.com/questions/881208/deploy-landscape-gui-via-juju-gui-on-ubuntu-14-04lts-server
<hackedbellini> jrwren: btw, if I change /var/lib/cloud/instance/cloud-config.txt or /var/lib/cloud/instance/user-data.txt, when I restart the container, it gets reverted to the previous version...
<dakj> Lazy power: all nodes are configured well on MAAS and their status is correct
<jrwren> hackedbellini: right. it's not really designed for this. you are off the rails. Maybe better to find a different way to accomplish what you need?
<hackedbellini> jrwren: ok, but now I'm even more lost hahaha
<jrwren> hackedbellini: when you first asked your question, I didn't realize you were using juju, I thought this was lxcontainers irc channel, so I gave a very poor response for a juju context.
<jrwren> hackedbellini: I thought you were using lxc/lxd directly.
<hackedbellini> jrwren: oh, probably my fault, I think I wasn't very clear about that. So, since I'm using juju, that makes what I want to do easier or harder?
<jrwren> hackedbellini: neither, it just makes it different, or possibly unsupported.
<jrwren> hackedbellini: I think the juju-way would be to use a juju charm to do whatever custom server config you desire.
<hackedbellini> jrwren: hrm I see. Just to be sure, one last question then: What I'm doing right now is hard-coding changes to the templates in /etc/cloud/templates. I need some changes on /etc/resolv.conf and /etc/hosts. Are you sure there's no easy way to change those variables anywhere?
<jrwren> hackedbellini: AFAIK there is no juju easy way. Depending on the behavior of your models, you could do things like use `juju run` to keep them up to date.
<jrwren> hackedbellini: I think resolvconf may be what is overwriting /etc/resolv.conf; editing an /etc/resolvconf/ template may be enough to persist that. The /etc/hosts file, I'm surprised is overwritten.
<hackedbellini> jrwren: I see. Thanks anyway =P. And yeah, it is on every boot
<jrwren> hackedbellini: to what end are these changes important? does this effect the workload of those units and the charms which they run?
<hackedbellini> jrwren: kind of. The hosts one is because the "localdomain" domain that juju used was being rejected by our mail server. It was expecting a certain domain or no domain, and because of that I needed to change it. The dns one is because we have a local dns server here and we need to use it for search, or else some of our services will not find some hostnames using
<hackedbellini> some local domains
<jrwren> hackedbellini: is there a reason DHCP isn't pointing you to local DNS by default?
<dakj> Lazy power: sorry, I lost the connection
<dakj> lazyPower: any sugget
<dakj> Suggest?
<hackedbellini> jrwren: It is pointing to the host machine, but the dns server is running on another one
<jrwren> hackedbellini: maybe the host machine is misconfigured then? its DNS upstream could be that important other one.
<hackedbellini> jrwren: hrm, probably. Is it something I can configure on /etc/default/lxc-bridge?
<hackedbellini> this is my current content:
<hackedbellini> https://www.irccloud.com/pastebin/1Z1Oovem/
<jrwren> hackedbellini: I'm not sure. does `dpkg-reconfigure -p medium lxd` ask about DNS at all?
<hackedbellini> jrwren: no, but there's a "Path to an extra dnsmasq configuration file" in that file. Maybe it is for that?
<jrwren> hackedbellini: sounds great, yes. then find the dnsmasq config to point to upstream DNS
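[editor's note] For reference, the "extra dnsmasq configuration file" option in /etc/default/lxd-bridge (the LXD_CONFILE setting) points at a plain dnsmasq config file, and forwarding to an upstream DNS server is one directive. The file path and server address below are placeholders:

```conf
# e.g. /etc/lxd/dnsmasq.conf, referenced by LXD_CONFILE in /etc/default/lxd-bridge
server=10.0.0.53        # placeholder: your local DNS server
```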
<hackedbellini> jrwren: lets see if that will do the trick
<lazyPower> dakj: not sure, sorry. Sounds like there's an issue with one of the units' config in maas though, if it's consistent but affects a different application on each deployment
<lazyPower> dakj: i suspect you need to assign the networks the unit is part of
<lazyPower> eg: edit the unit in question "precipitating-petunia.maas" for example, and check those interfaces, then assign each interface to a fabric.
<bdx> brandor5: hey, whats up
<brandor5> bdx: hello :)
<bdx> brandor5: I don't think vlan type networks are supported as external provider network type in openstack
<brandor5> bdx: that's weird, I'm pretty sure that in a hand rolled (not maas/juju) install I had them working without issue :/
<bdx> brandor5: I have tried to get that working a while back (mitaka), as I couldn't find any info on it either, and had no luck
<bdx> brandor5: when you create an "external" network in openstack .... you can't also set it as type vlan
<bdx> the api commands just won't take ....
<bdx> brandor5: please let us know, if you get this working, as I think many in the community have difficulty here
<bdx> brandor5: I'll give it a shot again if I have time in the next day or so
<brandor5> bdx: I'm almost 100% that I've done it in the past when not using the charms
<bdx> brandor5: the "charms" have nothing to do with the way openstack networking works once it is setup really
<bdx> brandor5: the charms let you specify at a higher level what traffic should go over what interface, so that when openstack is deployed you have the capability to jump right in and start using it
<brandor5> bdx: when you say 'external' you mean network that provides floating ips?
<bdx> brandor5: yes, in the context of a traditional openstack "external" network
<brandor5> bdx: ok, that's fair enough. I can't remember if I had an external network using vlans... but I did definitely have 'provider' networks using vlans... is that supported with the charms?
<bdx> brandor5: yes, out of the box, the openstack charms support vxlan,gre,vlan, and flat network types using neutron-openvswitch/neutron-gateway
<brandor5> bdx: maybe that's what is causing some of my confusion? I'm not able to get provider vlan networks functioning either
<bdx> brandor5: are you specifying that network type when you create your networks via openstack api?
<brandor5> bdx: yes
<bdx> brandor5: can I see that command?
<brandor5> neutron net-create --provider:network_type vlan --provider:segmentation_id 4002 --provider:physical_network physnet1 --router:external dtint
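[editor's note] For a floating-IP pool, the external network typically also needs a subnet with DHCP disabled. A generic sketch of the follow-up command (CIDR, gateway, pool range, and names are all placeholders); the command string is built and printed rather than executed:

```shell
# CIDR, gateway, pool range, and names are placeholders.
SUBNET_CMD='neutron subnet-create dtint 203.0.113.0/24 --name dtint-subnet \
  --disable-dhcp --gateway 203.0.113.1 \
  --allocation-pool start=203.0.113.100,end=203.0.113.200'
echo "$SUBNET_CMD"
```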
<hackedbellini> jrwren: it worked! Btw, in the process, when I was trying to rerun my cloud-init config, I tried something that made my juju agent get lost. The container is working, I can even "juju ssh" to it, bu on "juju status" while others show as "started" it shows as "down". Is there something I can do about that?
<bdx> brandor5: what is the result of that command?
<brandor5> bdx: it creates a network
<jrwren> hackedbellini: restart the juju machine agent. there should be a juju service for it.
<bdx> oh ... interesting
<bdx> brandor5: have you ensured all of your physical routing up to the interface is correct, e.g. the vlan is getting there?
<hackedbellini> jrwren: probably it is worse than that. I restarted the server hosting the lxcs
<hackedbellini> jrwren: Let me tell you what I did
<bdx> brandor5: the interface to which physnet1 is assigned
<brandor5> bdx: yes, i have... tcpdump at multiple locations
<bdx> ok
<bdx> brandor5: so, what is not working?
<brandor5> bdx: I would expect that when I spin up an instance attached to that network, I'd be able to see traffic and communicate with our real-world systems on it
<rick_h> REMINDER juju show coming in 33 minutes!!! Some big stuff coming!
<rick_h> hatch: bac arosales marcoceppi lazyPower mbruzek kwmonroe and anyone else reminder ^
<hackedbellini> jrwren: I was trying to find the answer on stackoverflow that I followed. But well, I removed the "/var/lib/cloud/instances/juju-449b90-6/" directory and ran "cloud-init init" again. After that the problem started. Like I said, the container is working fine, but juju is not able to connect to its agent
<bdx> brandor5: you spin up instances in an internal network which uses that same router as its gateway right?
<hatch> oh I'll be there :)
<bdx> brandor5: I'm not aware that the dhcp mechanism is running on the external network by default
<jrwren> hackedbellini: that... might be very bad. rather than try to recover, can you remove that juju unit and add a new one for that application?
<brandor5> bdx: sorry, I've sent you the wrong command above... that's from when I was trying to set up a vlan external network
<hackedbellini> jrwren: and one even stranger thing. I was trying to do a restart on the service but it doesn't seem to exist. There's only a "juju-clean-shutdown" service, it should have 2 more for the machine and the unit
<bdx> which should be ok though ... as there is nothing stating that creating vlan external provider networks isn't a thing
<brandor5> bdx: now I'm confused
<hackedbellini> jrwren: as a last resort I can do that, but I would rather not have to. The service is easy to deploy, but the data migration would take me some time to do
<jrwren> hackedbellini: right, and it's the juju-defined cloud-config which creates those, but if someone removed that... you are really off the rails :)
<bdx> brandor5: I haven't been able to do it, but I'm finding docs that say its a thing
<jrwren> hackedbellini: ah, the data is local to the unit? ugh.
<hackedbellini> jrwren: yeah, unfortunately. It's a jenkins unit btw, lots of configurations for lots of jobs
<brandor5> bdx: you just told me that i can't create an external network that is of vlan type
<hackedbellini> jrwren: can I copy some file from other container and maybe make that work? hahaha
<brandor5> bdx: sorry for the delay there, wanted to scroll back up and make sure I'm not really losing it... :D
<bdx> ha, brandor5, I tried very hard to get that working .... it very well may be possible, but it's an area that entirely stumped me .... it's possible that it wasn't supported in kilo/mitaka and now it is though
<bdx> its an amorphous beast
<mimizone> hi all.
<brandor5> bdx: like I said earlier, I'm almost 100% that I've at least done provider networks that way... and I'm also fairly confident that I had external networks set up like that too... and this would have been on kilo and liberty
<arosales> rick_h: unfortunately I wont be able to make the live show today
<mimizone> is there a way to give a hint to juju 2.1 on which IP to use for the node/machines? instead of trying to guess by connecting to all interfaces, and then be at the mercy of timing.
<rick_h> arosales: booo :P ok, thanks for the heads up
<hackedbellini> jrwren: you know what, I'll recreate that machine :)
<rick_h> mimizone: on maas you can use endpoint bindings for this
<rick_h> mimizone: what substrate are you looking at currently?
<arosales> rick_h: looking forward to the reply
<bdx> rick_h: when are you guys going to rev the beta controller?
<mimizone> rick_h: not sure what you mean.
<rick_h> bdx soon, there was an email yesterday about if we should go forward with 2.1 controllers. so I think it's getting ready to happen
<rick_h> juju show hangout url: https://hangouts.google.com/call/w3ojiephyrbslk44jhq4wbfb2me and the streaming url: https://www.youtube.com/watch?v=RdWCtGk_4rU
<bac> hey rick_h, is it almost time for the Juju Show?
<hatch> 12m bac :)
<rick_h> bac: yes, links above ^
<bac> thanks
<lazyPower> bac: in case it was missed: https://www.youtube.com/watch?v=RdWCtGk_4rU
<rick_h> well that was rude google
<rick_h> juju show hangout url: https://hangouts.google.com/call/w3ojiephyrbslk44jhq4wbfb2me and the streaming url: https://www.youtube.com/watch?v=RdWCtGk_4rU
<rick_h> I got booted and had to relogin/etc again
<mimizone> so I am not clear on how network bindings would help picking the right network interface/IP when adding machines to a juju controller.
<rick_h> mimizone: sec, starting the show but can chat after
<mimizone> oh ok sorry
<zeestrat> rick_h: Any updates to the Juju charm store to show the bindings available now that they need to be set explicitly for containers in 2.1?
<stormmore> o/ juju world
<zeestrat> rick_h: bindings for each charm that is
<hackedbellini> hey guys! Any reason why my mediawiki deployment is listed like this on juju status: mediawiki/1*                  blocked   idle   13       10.67.62.191    80/tcp              Database required
<hackedbellini> it is saying that "Database required", but it is working fine
<lazyPower> heyo stormmore. You're just in time for the juju show https://www.youtube.com/watch?v=RdWCtGk_4rU
<stormmore> wonder if the "work across multiple clouds" will come to regular juju
<mimizone> using juju 2.1, I see errors like this on one of the machines, blocking the creation of lxd containers.
<mimizone> container provisioner for lxd: setting up container dependencies on host machine: not found
<mimizone> ERROR juju.worker runner.go:210 exited "1-container-watcher": worker "1-container-watcher" exited: setting up container dependencies on host machine: not found
<lazyPower> Hooo look at that hot kubernetes action
<mimizone> ERROR juju.provisioner container_initialisation.go:116 starting container provisioner for lxd: setting up container dependencies on host machine: not found
<stormmore> lazyPower, now now, no need to get a big head :P
<lazyPower> stormmore: i'm not entirely certain thats possible... my head is already pretty big
<stormmore> lazyPower, ah but there is physical and egoistical size :P
<mimizone> good stuff I see on the show by the way :)
<stormmore> I wish I was working with juju and not maas right now :-/
<lazyPower> stormmore: pack a 1/2 punch and do both
<stormmore> lazyPower, I am but I am fighting an inconsistency in the maas installer :-/
<lazyPower> doh
<stormmore> lazyPower, yeah it is a weird one where installing maas doesn't register the rack controller with the region controller intermittently
<stormmore> hmm youtube / canonical disconnect?
<rick_h> stormmore: ?
<stormmore> I lost connection to the video and can't get it back
<rick_h> kwmonroe: any link for your CI thing to note?
<rick_h> kwmonroe: docs, CI charm, etc
<kwmonroe> rick_h: the latest cwr charm has the workflow documented in the readme (CWR on Charm Source Pull Requests):  https://jujucharms.com/u/juju-solutions/cwr/
<kwmonroe> rick_h: a better synopsis/blog is coming this afternoon, but it's not quite ready yet
<rick_h> ty kwmonroe
<rick_h> kwmonroe: k, will look out for that and update the readme later on. This is enough to get a first drive at it
<kwmonroe> cool
<stormmore> e2e sounds really cool
<stormmore> when is nagios going to die!
<rick_h> stormmore: hah, one day
<rick_h> mimizone: ping, so where were we?
<mimizone> rick_h: thanks for coming back to me :)
<mimizone> I have 2 issues.
<rick_h> mimizone: so https://jujucharms.com/docs/2.1/network-spaces is the basics of it
<rick_h> mimizone: basically, you can tell Juju to place an application on a machine that has access to different network spaces. Juju then makes sure that the machine that it's placed on has devices on the right spaces, and handles making sure the charm is given the right information about what network device to use for communication/etc
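[editor's note] A sketch of the binding syntax rick_h describes. The space, application, and endpoint names are assumptions, and the commands are printed rather than executed:

```shell
# Bind one endpoint to a space, or set a default space for all endpoints.
BIND_ONE='juju deploy mysql --bind "db=internal-space"'
BIND_ALL='juju deploy haproxy --bind public-space'
echo "$BIND_ONE"
echo "$BIND_ALL"
```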
<mimizone> thanks, I'll look into the spaces stuff. FYI, I am deploying the OPNFV bundle called JOID. It creates the bundle itself with a bunch of spaces/bindings already for the application.
<mimizone> my question regarding that is more about the dns-name showing up in the juju-status I guess
<rick_h> mimizone: ok, aisrael might be able to help with that bundle. He's got some experience with it more.
<rick_h> mimizone: ok, so what's up with the dns-name?
<mimizone> it picks one of the IP address of the machine (they have 7 IPs in my setup).
<mimizone> I would like it to be predictable and pick only for instance what we call the admin network IP.
<rick_h> mimizone: so I think there's a way to set the default binding
<mimizone> right now for instance, some of the machines have a dns-name sitting on the 1G nic card, and others on the 10G ports.
<rick_h> mimizone: so that the default interface used is the one on the admin network you're looking for
<rick_h> mimizone: check out https://lists.ubuntu.com/archives/juju-dev/2017-February/006313.html
<mimizone> rick_h: cool I read that.
<mimizone> rick_h: can I bug you on the second question/issue?
<rick_h> mimizone: what's up?
<mimizone> it's regarding one of the machines not creating the lxd containers. I see error in machine-1.log about missing dependencies. but I don't see any errors in cloud-init-output.log about a bad installation
<mimizone> rick_h: stuff lke this ERROR juju.provisioner container_initialisation.go:116 starting container provisioner for lxd: setting up container dependencies on host machine: not found
<rick_h> mimizone: hmm, anything else in there?
<rick_h> mimizone: the lxd stuff won't be in the cloud-init logs, as containers are set up on demand
<rick_h> mimizone: so it's juju's job. Now, is this on xenial? lxd is there ootb on xenial and I wonder if this is some issue working backwards on trusty?
<mimizone> yep xenial
<mimizone> rick_h: lxd is there and running as a service
<mimizone> rick_h: is there a way to retrigger a provisioning of the machine?
<mimizone> a few more lines from the machine-1.log file in case it's useful
<mimizone> 2017-03-01 20:06:26 DEBUG juju.provisioner container_initialisation.go:185 release lock "machine-lock" for container initialisation
<mimizone> 2017-03-01 20:06:26 WARNING juju.provisioner container_initialisation.go:134 not stopping machine agent container watcher due to error: setting up container dependencies on host machine: not found
<mimizone> 2017-03-01 20:06:26 ERROR juju.provisioner container_initialisation.go:116 starting container provisioner for lxd: setting up container dependencies on host machine: not found
<mimizone> 2017-03-01 20:06:26 INFO juju.worker runner.go:262 stopped "1-container-watcher", err: worker "1-container-watcher" exited: setting up container dependencies on host machine: not found
<mimizone> 2017-03-01 20:06:26 DEBUG juju.worker runner.go:190 "1-container-watcher" done: worker "1-container-watcher" exited: setting up container dependencies on host machine: not found
<mimizone> 2017-03-01 20:06:26 ERROR juju.worker runner.go:210 exited "1-container-watcher": worker "1-container-watcher" exited: setting up container dependencies on host machine: not found
<rick_h> mimizone: hmm, sorry not sure. Would have to file a bug and look into it. It works on other machines?
<mimizone> rick_h: yes, that's the disturbing thing. same hardware, same config pushed by the same maas.
<mimizone> what can juju do when a machine / app is in pending mode?
<mimizone> can it be reinstalled  without destroying everything?
<rick_h> mimizone: so you can remove-machine.
<rick_h> mimizone: and you can --retry-provisioning
<rick_h> sorry, juju retry-provisioning
<rick_h> mimizone: check the output of juju status --format=yaml
<rick_h> mimizone: for that machine
<rick_h> mimizone: or juju show-machine X where X is the machine number the container is on
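[editor's note] The sequence rick_h outlines, collected into one sketch. The machine number is a placeholder, and the commands are printed rather than executed:

```shell
MACHINE=1   # placeholder machine number

echo "juju show-machine $MACHINE --format=yaml"   # inspect provisioning state
echo "juju retry-provisioning $MACHINE"           # only valid in an error state
echo "juju remove-machine $MACHINE"               # last resort; then re-add
```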
<mimizone> machine is said to be running / deployed and all the containers are pending
<mimizone> think I can force a remove-machine and retry-provisioning? seems I can't retry if the machine is not in an error state
<mimizone> oh well, I force the remove-machine, but can't reprovision it then... :)
<hackedbellini> Guys, I'm trying to build a custom livecd inside a lxc container (on my jenkins deployment). I'm having some problems with mounts though
<hackedbellini> for example, when I try to do this: https://help.ubuntu.com/community/LiveCDCustomization#Prepare_and_chroot
<hackedbellini> this happens:
<hackedbellini> https://www.irccloud.com/pastebin/XtnkwFmL/
<hackedbellini> I found that the major problem is that the bind mount to dev is not working as it should
<hackedbellini> for example, I tried creating a /tmp/foobar and "mount --bind /dev /tmp/foobar". It works and I can see the content inside it, but they are all empty
<hackedbellini> for example, if I try to "cat /tmp/foobar/urandom" I get nothing instead of the random content that I get from "/dev/urandom"
<hackedbellini> am I missing something here?
<hackedbellini> nevermind, the problem was that I had to use "mount --rbind" instead of "mount --bind"
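[editor's note] A plausible explanation, for the record: in an unprivileged container each usable device node under /dev is often its own bind mount, and a plain --bind copies only the top-level mount, while --rbind recurses into submounts. A sketch of the chroot prep (the path is a placeholder; mounting needs root, so the commands are printed rather than executed):

```shell
CHROOT=/tmp/livecd-chroot   # placeholder chroot path

# --rbind carries nested mounts (e.g. per-device binds, /dev/pts) along;
# --make-rslave keeps later unmounts from propagating back to the host.
echo "mount --rbind /dev $CHROOT/dev"
echo "mount --make-rslave $CHROOT/dev"
```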
<lazyPower> arosales: I think i missed context? is this for charmbox?
<arosales> lazyPower: yes
<lazyPower> arosales: so long as its still published to pypi, we should have it https://github.com/juju-solutions/charmbox/blob/master/charmbox-setup.sh#L26
<lazyPower> however matrix would be missing
<arosales> ya I was thinking the same that matrix would be missing
<lazyPower> arosales: do you happen to know the official distribution pipeline for matrix? is it installable from pypi?
<arosales> lazyPower: pip install
<arosales> sorry may be a clone, and not pypi
<lazyPower> arosales: thats fine, i'm installing charm-tools from MASTER in here as well, we can certainly add matrix as well
<lazyPower> we're going to keep bloating the base image size, but it's an acceptable tradeoff
<arosales> lazyPower: I think Matrix is a worthy add
<arosales> lazyPower: I am going to confirm with cory_fu and tvansteenburgh on which bundletester release supports native 2.0 deploy and matrix
<arosales> lazyPower: from there perhaps I could work with you to make sure the latest bundletester is in charm box and install matrix
<tvansteenburgh> arosales: 0.11.0 supports native 2.0 deploy
<lazyPower> arosales: sure
<arosales> tvansteenburgh: awesome, cory_fu do you know if 0.11 supports matrix as well?
<tvansteenburgh> arosales: matrix support was added in 0.10.0
<tvansteenburgh> arosales: it does
<arosales> ah ok
<arosales> 0.11.0 has all the goodness then
<arosales> tvansteenburgh: thanks
<lazyPower> tvansteenburgh: i'm pretty sure thats whats in pypi right?
<tvansteenburgh> lazyPower: yep
<lazyPower> sorry split-attention with a customer meeting. i could go look myself...
<lazyPower> thanks
<arosales> lazyPower: so as long as you are pulling in bundletester >= 0.11 we should be good to add in Matrix
<lazyPower> arosales: sounds like we just need to fetch matrix and we're in like flynn
<arosales> lazyPower: agreed, current instructions are at https://github.com/juju-solutions/matrix#running-matrix
<arosales> lazyPower: but cory_fu said he would be open to putting Matrix on pypi
<lazyPower> tvansteenburgh: i see matrix is python3, should we move deployer to python3 as well?
<arosales> deployer
<arosales> ?
<lazyPower> er
<lazyPower> bundletester
<arosales> deployer for 1.x?
<lazyPower> sorry, split attention
<arosales> ah bundletester
<lazyPower> my mistake
<arosales> no worries :-)
<arosales> do we need bundletester at py 3?
<arosales> or nice to have
<lazyPower> that's my open question - it's just moving between pip install and pip3 install
<lazyPower> simple on my end, just wanted confirmation
<arosales> gotcha
<arosales> cory_fu: and tvansteenburgh ^
<arosales> cory_fu: petevg if you guys do get matrix on pypi could you ping the juju list?
<arosales> I think for charmbox we can still clone and install from there for the time being
 * arosales thinks
<cory_fu> lazyPower: I don't think BT will work in py3 just yet, but it would be good to work toward that.
<lazyPower> ok
<lazyPower> i'll leave BT in py2 for now until otherwise notified
<petevg> There's a bug blocking it -- it's noted in a docstring inside of bundletester.
<petevg> arosales: packaging matrix is on my list of TODOs for the near future. Will ping the list when I release it.
<lazyPower> kicking off a build locally to include matrix
<catbus1> Hi, I am trying to juju bootstrap to openstack; when setting up simplestreams I get "juju metadata: unrecognized command". using juju 2.1.0
<cholcombe> is there a way to set the lxd profile with juju?
<jhobbs> catbus1: can you show your full command and output?
<arosales> petevg: thanks!
<catbus1> jhobbs: nm, apt-cache policy showed juju wasn't installed, but I could juju status fine. weird. I installed juju again, and don't get the error message anymore.
<skayskay> hi, I'm seeing 'agent lost' messages, but juju show-status-log doesn't find any history
<skayskay> and I've restarted jujud on my machines to no avail
* hatch changed the topic of #juju to: Juju as a Service Beta now available at jujucharms.com | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
* hatch changed the topic of #juju to: Juju as a Service - Beta now available at jujucharms.com | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
<jhobbs> should that be https://jujucharms.com/beta ?
* hatch changed the topic of #juju to: Juju as a Service - Beta now available at https://jujucharms.com/beta | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
<hatch> jhobbs thx ;)
<skayskay> I thought restarting jujud on machine 0 would bring back the agent
<catbus1> $ juju bootstrap demo-openstack demo-openstack-controller --config tools-metadata-url=http://172.16.0.26:80/swift/v1 --config network=8b53cb07-fdb0-4fc9-9b02-6ea4ccad141e
<catbus1> WARNING unknown config field "tools-metadata-url"
<catbus1> Creating Juju controller "demo-openstack-controller" on demo-openstack/RegionOne
<catbus1> ERROR failed to bootstrap model: no image metadata found
<catbus1> jhobbs: ^^^ do you happen to know what might go wrong?
<jhobbs> catbus1: use image-metadata-url rather than tools-metadata-url
<jhobbs> also it's easier to put the metadata in a local directory than on a http server
<catbus1> jhobbs: I tried both, image-metadata-url and --metadata-source <local file path>, now the error is no image metadata found
<catbus1> oh wait
<catbus1> I think I may get the keystone ip wrong in the cloud config
<catbus1> no, I have it right.
<jhobbs> whats the local path, can you provide the output of tree from it?
<catbus1> jhobbs: I provided the wrong keystone ip in the simplestream index.json.
<catbus1> jhobbs: I can confirm that it is the wrong keystone ip I put in both json files that caused the problem.
<jhobbs> cool catbus1
<jhobbs> glad you got it sorted
<catbus1> --debug is a good friend.
<jhobbs> yeah it's absolutely necessary when using the openstack provider
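For the record, the flow catbus1 converged on looks roughly like this sketch. The image ID and keystone endpoint are placeholders, and getting the keystone URL wrong in the generated metadata is exactly what produced the "no image metadata found" error above:

```shell
# Generate simplestreams image metadata into a local directory
# (<glance-image-id> and <keystone-endpoint> are placeholders):
juju metadata generate-image -d ~/simplestreams -i <glance-image-id> \
    -s xenial -r RegionOne -u <keystone-endpoint>

# Then point bootstrap at the local directory instead of an HTTP server,
# with --debug to see what metadata juju actually finds:
juju bootstrap demo-openstack demo-openstack-controller \
    --metadata-source ~/simplestreams --debug
```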
<kwmonroe> welp hatch, i'm never bootstrapping again :)
<cholcombe> axw you around?
#juju 2017-03-02
<axw> cholcombe: I am around now
<cholcombe> axw: hey!  i was wondering if you had any additional pointers for loopback storage, lxd and juju.  i can share my profile.  i figured out how to edit it.  it still seems to be stuck in allocating
<axw> cholcombe: if you share I'll see what I can work out. also the juju machine agent log for the machine that's trying to create the loop devices please
<cholcombe> axw: http://paste.ubuntu.com/24093220/ here's the juju-default profile
<cholcombe> axw: haha!  I spoke too soon.  It worked this time.
<cholcombe> it took a long time but it eventually went through when i wasn't looking
<axw> cholcombe: :p I thought you would need loop-control though TBH
<cholcombe> loop-control?
<stormmore> OK I am going offline for a little bit... only 108 lines of bash code to help me narrow down why another piece of software is messing up!
<axw> cholcombe: /dev/loop-control. I thought losetup used it to allocate loop devices
<cholcombe> oh right
<cholcombe> yeah it seemed to work without it
<cholcombe> gluster/2  brick/2  /dev/loop3  attached
<cholcombe> that's the juju storage output
<axw> cholcombe: cool. does it work? :)
<cholcombe> gluster says no bricks found so something screwed up but i think it's on my side
<cholcombe> axw: yeah i see it.  i tried to format /dev/loop3 with zfs and it messed up.  it's my issue
<axw> cholcombe: there's a new experimental storage provider for LXD on develop that uses the new LXD storage API. probably a bit early to be usable yet tho
<axw> okey dokey
<cholcombe> axw: thanks for the rubber duck debugging haha
<axw> cholcombe: :)
<hatch> kwmonroe :D
<Budgie^Smore> lazyPower so I found out that for some reason, yet to be determined, MaaS doesn't like VirtualBox VMs created via Vagrant today
<kjackal> Good morning Juju world!
<disposable2> is there a way to select a specific maas node to become the juju cloud controller? i have several physical servers and 1 kvm one. i want the kvm one to be the controller.
<disposable2> i'll answer my own question, somebody correct me if i'm wrong please - "juju bootstrap --to kvmnode.maas mymaas controller"
<cnf> morning
<cnf> so, still kind stuck with juju :/
<zeestrat> disposable2: That works. We have multiple kvm's setup for controllers so we tag them in MAAS. Then to bootstrap we use --bootstrap-constraints tags=[your-tag-here]
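The two placement options mentioned here can be sketched as follows; "kvmnode.maas", "mymaas", and the "juju-controller" tag are placeholder names:

```shell
# Option 1: pin bootstrap to a specific MAAS node by hostname:
juju bootstrap --to kvmnode.maas mymaas mymaas-controller

# Option 2: tag the candidate machines in MAAS, then constrain bootstrap
# to that tag (useful when several nodes are acceptable controllers):
juju bootstrap mymaas mymaas-controller \
    --bootstrap-constraints tags=juju-controller
```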
<jamespage> any charmers around? need a review of https://code.launchpad.net/~james-page/charm-helpers/flush-after-grant/+merge/318750
<disposable2> zeestrat: thanks. unfortunately, i get a rather unhelpful error message at the end: """ERROR failed to bootstrap model: bootstrap instance started but did not change to Deployed state: instance "rfarht" is started but not deployed""". i have no idea where to find more details about what went wrong.
<jamespage> ditto on https://code.launchpad.net/~james-page/charm-helpers/percona-tuning-level/+merge/318755
<cnf> so any way to use juju if you don't have direct ssh access to the ip range it's deploying to?
<kjackal> cnf: can you ssh to the controller and from there to the units deployed?
<cnf> no
<cnf> well, i can through jumphosts
<cnf> but not directly
<cnf> kjackal: but i can't even get the controller going
<cnf> because juju wants to ssh to it, and fails when it can't do that
<magicaltrout> i think its a fair enough requirement to have access to the boxes it wants to control ;)
<cnf> magicaltrout: not directly
<cnf> it's just not a realistic scenario
<cnf> the controller can access whatever it needs to direct
<cnf> but i can't start a controller, because i have no direct ssh access to the machine it is deploying to
<magicaltrout> can't get creative with an socks proxy?
<cnf> well, juju doesn't support SOCKS
<cnf> that's another issue i have open
<cnf> and it won't use http / socks proxies for ssh connections
<cnf> I do have proxied / jumphost access to everything
<cnf> just no way to get juju to use them
<magicaltrout> well no, but you could forward the port to your local machine and do a fake local deploy
<cnf> https://bugs.launchpad.net/juju/+bug/1668727 as a ref
<mup> Bug #1668727: juju does not support socks5 as a proxy <juju:Triaged> <https://launchpad.net/bugs/1668727>
<cnf> magicaltrout: how? juju does a lookup of the controller ip, and decides what to connect to itself
<cnf> and i'm not going to start spoofing ip's etc just to get juju working
<magicaltrout> in that case i'm out of ideas :)
<kjackal> cnf, you have a cloud where you can get VMs from, right?
<cnf> kjackal: MAAS, so it's metal and not vm's, but yes
<cnf> which i need a socks proxy to connect to (which i already have to trick juju into using through a local proxy)
<kjackal> cnf: but you do not have ssh access to the nodes as they come up because the entire rack is behind a firewall blocking ssh. Do I get this correctly?
<cnf> kjackal: before the firewall, the range isn't even routed
<cnf> kjackal: i have firewalled access to the MAAS controller
<cnf> (2 or 3 jumps between me and the maas controller)
<kjackal> Would it be possible to have your juju client in a node (kvm/lxd/physical) inside the rack?
<cnf> in theory, but that's a lot of extra work
<cnf> and not a way i'd want to work in production
<cnf> (which this thankfully isn't)
<cnf> if i could tell it to use specific names, or a proxy defined in my ssh config or something, i'd be golden
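juju itself ignores the client's ssh config here (that is the gap being discussed), but the manual jump chain cnf describes can be expressed with plain OpenSSH; the jumphost names below are placeholders:

```shell
# Manual equivalent of a multi-hop chain: ProxyJump (OpenSSH 7.3+)
# tunnels through two jumphosts to reach the target node:
ssh -o ProxyJump=jump1.example,jump2.example ubuntu@172.20.20.16

# Or route the whole MAAS subnet through the last reachable hop,
# which is what sshuttle attempts later in this log:
sshuttle -r jump2.example 172.20.20.0/24
```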
<cnf> maybe https://bugs.launchpad.net/juju/+bug/1669180 is relevant
<mup> Bug #1669180: proxy-ssh/juju ssh --proxy is ignored <juju:Triaged> <https://launchpad.net/bugs/1669180>
<cnf> though i guess that already assumes a controller
<magicaltrout> conversely cnf you could clone the code from github and add to the ssh module to pick up proxies :)
<cnf> i have 5 other projects that need patches to work, already
<cnf> queue is kinda full
<lazyPower> Budgie^Smore: Interesting. What seems to be the trouble there?
<cnf> hmm, and sshuttle doesn't work, either
<lazyPower> jamespage: what if FLUSH PRIVLEGES fails? its in a try block that doesn't seem to care about any exceptions. Do we not care if it raises?
<lazyPower> pardon me if thats a dumb question... i'm still working on my first cuppa
<magicaltrout> I care :'(
<jamespage> lazyPower: the try/finally is really to ensure the cursor gets cleaned up
<jamespage> if the flush fails, we'll propagate the exception up to the calling process
<lazyPower> ok so we're less interested if it fails and more interested we dont leave a dangling connection
<jamespage> but doing the cleanup on the way
<lazyPower> sgtm, just making sure that was the intent
<jamespage> lazyPower: +1 yah
<lazyPower> lookin @ the innodb bits now
 * jamespage spent a lot of time fixing java code which did not do that....
<lazyPower> jamespage: do you mind if i just approve? are you cool with doing the merge + push?
<jamespage> lazyPower: works for me
<jamespage> lazyPower: if you want to see the innodb tuning stuff in context - https://review.openstack.org/#/c/440333
<lazyPower> http://pad.lv/318750 is approved
<jamespage> lazyPower: muchas gracias
<lazyPower> http://pad.lv/318755  is  approved as well. tests + passing test results
<lazyPower> easy A on this one jamespage
<jamespage> lazyPower: ta
<lazyPower> thanks for being patient while i drug my bum out of bed :)
<jamespage> lazyPower: thankyou for the reviews - having a bug day targetted at percona-cluster
<skayskay> I want to help document to my teammates how to restore a missing secgroup rule for ssh for juju, so I've removed it manually in my environment. it's taking longer than I expected for juju to lose contact with agents
<skayskay> how long should it take?
<kwmonroe> petevg: here's the libjuju issue, in case you didn't see the number on my screen:   https://github.com/juju/python-libjuju/issues/67
<petevg> kwmonroe: thx
<chrome0> Is there a way to change a machine's AZ post-deploy? I'm deploying via MAAS and had AZ=default. Changing it in MAAS doesn't seem to propagate to juju.
<chrome0> *had deployed with AZ=default
<rick_h> chrome0: no, the info goes from maas to the machine during install/setup but then it's static
<rick_h> chrome0: so you'd have to either manually change the info on the host machines or to redeploy them
<chrome0> rick_h : Ah, I'd like to avoid redeploy, but if there's a way to plant that info on the host machine that'd be awesome -- how can I do that?
<rick_h> chrome0: oh heh, I assumed you "changed it in maas" meant that you were looking for some file/output on the systems that said it was AZ=default
<chrome0> rick_h : I'd like to make use of ceph-{mon,osd} failure domain distr. feature, which uses juju AZ info
<rick_h> chrome0: I see, there's no way other than monkeying with the jujudb to change that after deploy since the machine is in an AZ and they don't tend to move much (usually)
<rick_h> chrome0: so no knob for that unfortunately
<chrome0> Ack, was afraid that'd be the case -- thanks
<lazyPower> arosales cory_fu https://hub.docker.com/r/jujusolutions/charmbox/builds/bbohvddrrcadljfmfthwmzg/ -- new charmbox build including the matrix component
<chrome0> FTR, it's no biggie -- the ceph-* charms have an availability_zone setting which allows customization, will use that. Just making sure there's no more "standard" way
<arosales> lazyPower: woot!
<arosales> lazyPower: thanks for doing that. I'll give it a test run today :-)
<lazyPower> lmk if anything goes funk in there and i'll give it some love
<arosales> cory_fu: petevg ^ be great if you guys give it a poke as well.
<petevg> arosales, lazyPower: will try to set aside some time to take a look today.
<lazyPower> thanks petevg
<petevg> np
<cnf> kjackal, lazyPower  so i'm trying to use sshuttle to bypass my restrictions
<cnf> both http and ssh
<cnf> http works, ssh isn't playing ball yet
<lazyPower> cnf: err i'm not sure i have context here
<cnf> lazyPower: yesterday ( and the day before), I was failing to connect to my MAAS, because juju doesn't understand socks5
<cnf> lazyPower: and today, i was failing because juju can't connect directly to the hosts
<lazyPower> cnf: this sounds like you're stitching together a ball of work-arounds because of networks segments?
<cnf> so i'm trying to bypass both using sshuttle
<cnf> lazyPower: yep
<lazyPower> cnf: i would probably advise to setup an OpenVPN service in your lab, that way you can join the network space directly instead of relying on sshuttle, which historically has been really flakey for me.
<cnf> bah, openvpn :(
<lazyPower> well you're seeing the result of sshuttle here where it half works
<cnf> lazyPower: and i'd need openvpn to openvpn
<cnf> because i'm 2 or 3 hops away from it
<lazyPower> cnf: without having a clear view of your network setup, its hard to recommend a proper fix here. Networking can be difficult when you have no context of the topology of the dc
<cnf> i'd like juju to support socks5, and ssh config/wrappers
<cnf> would solve everything
<lazyPower> cnf: have you filed bugs for this?
<cnf> for the socks one, yes
<cnf> https://bugs.launchpad.net/juju/+bug/1668727
<mup> Bug #1668727: juju does not support socks5 as a proxy <juju:Triaged> <https://launchpad.net/bugs/1668727>
<lazyPower> fantastic, and its been looked at / triaged.
<lazyPower> step 1 is good.
<cnf> for ssh, i'm not quite sure what i want to ask for
<cnf> i _think_ juju just wraps my system ssh?
<cnf> not sure
<lazyPower> cnf: if you're referring to juju ssh - it should proxy through the controller and reach the node. you can set this on a model-by-model basis or on the entire controller by setting it as a model default
<lazyPower> proxy-ssh                   default  false
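The model-level vs controller-wide setting lazyPower mentions can be sketched like this (requires a bootstrapped controller, so illustrative only):

```shell
# Make "juju ssh" hop through the controller instead of dialing units
# directly, for the current model:
juju model-config proxy-ssh=true

# Or set it as a default inherited by all new models on the controller:
juju model-defaults proxy-ssh=true
```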
<cnf> lazyPower: i'm trying to install the controller :P
<lazyPower> plot thickens
<cnf> hmm, i _think_ i'm mostly there?
<cnf> juju is just using the wrong ssh key
<lazyPower> cnf: yeah juju ssh is using the system ssh agent. afaik we aren't shipping anything custom there. it just hands off some flags to the ssh client to do the connections.
<cnf> i think
<lazyPower> cnf: it's going to use whatever is in $HOME/.local/share/juju/ssh
<cnf> lazyPower: right, and i use assh to wrap my ssh config
<lazyPower> juju does generate a client keyset for the controller that lives there, vs using your user credentials.
<cnf> so how does it put the ssh key on the controller during bootstrap?
<cnf> the pub one, that is?
<lazyPower> I'm not as familiar with the intricacies of the bootstrap process. I presume its just a scp or cloud-init operation, but i would urge you to ping the juju mailing list about that so a core dev can chime in
<jrwren> afaik its cloud-init, but controller may do something different than other machine units.
<cnf> ok, so atm networkwise i can connect
<cnf> but it is rejecting the auth, from what i can tell
<cnf> hmm
<kwmonroe> tvansteenburgh: minor comment on your release-term hint in https://github.com/juju-solutions/review-queue/pull/77.  otherwise, lgtm.  want me to click the button?
<tvansteenburgh> kwmonroe: looking...
<tvansteenburgh> kwmonroe: yeah, click it
<tvansteenburgh> kwmonroe: thanks
<kwmonroe> clicked.  thank you tvansteenburgh!
<cnf> hmm, it stays stuck on "Attempting to connect to 172.20.20.16:22"
<cnf> but i can ssh manually
<jrwren> cory_fu: please take another look at those two haproxy charm MR. I think I've cleaned them up and corrected them. Thank you.
<cnf> hmm, weird
<cnf> hmm, all manual ssh connection attempts work
<cnf> but juju isn't making it
<cnf> anyway, time to go home
<cnf> i'll try again tomorrow
<cholcombe> with the latest juju-2.1 i'm still unable to bootstrap a localhost lxd cloud
<admcleod> cholcombe: ?!
<cholcombe> admcleod: http://paste.ubuntu.com/24097100/
<cholcombe> i tried a apt-get remove juju juju-2.0 --purge.  deleted all local/share files and reinstalled.  didn't change anything
<bdx> https://aws.amazon.com/message/41926/
<admcleod> cholcombe: juju show-cloud localhost ?
<cholcombe> admcleod: http://paste.ubuntu.com/24097115/
<bdx> ansible playbooks and fat fingered techs .... a lesson learned
<admcleod> cholcombe: remove-cloud and add-cloud?
<admcleod> bdx: hah
<cholcombe> admcleod: lets try
<cholcombe> admcleod: no personal cloud called "localhost" exists lol
<cholcombe> 2.1 really hates me
<admcleod> add?
<cholcombe> it won't let me add it either
<cholcombe> it's not a cloud type
<cholcombe> is there something i can purge to get this back to a clean slate?
<admcleod> you could try ..
<admcleod> cholcombe: put this in lxd.yaml : https://pastebin.canonical.com/181327/
<admcleod> cholcombe: then 'juju bootstrap lxd lxd.yaml'
<admcleod> er sorry
<admcleod> cholcombe: juju add-cloud lxd lxd.yaml
<admcleod> then bootstrap
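The pastebin above is private, so its exact contents are unknown; a minimal lxd.yaml along these lines (an assumption, not the actual paste) is the general shape add-cloud expects:

```shell
# Assumed minimal cloud definition for a local LXD cloud; the real
# pastebin may have carried additional config keys.
cat > lxd.yaml <<'EOF'
clouds:
  lxd:
    type: lxd
EOF

juju add-cloud lxd lxd.yaml
juju bootstrap lxd
```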
<cholcombe> admcleod: ok lets give it a spin
<cholcombe> admcleod: seems to be going with the juju bootstrap lxd lxd.yaml
<cholcombe> yeah now i see it under bootstrap.  cool
<admcleod> cholcombe: cool - no idea what you needed to clean up otherwise...
<cholcombe> i'm not sure.  i rm'd everything i could find but something is remaining
<admcleod> cholcombe: mongo?
<cholcombe> admcleod: yeah i prob need to poke mongo
<cholcombe> i actually don't see a running mongo process
<lazyPower> cholcombe: i've been using the snapped stack since pre 2.1-GA and its been rocking and auto-updating for me.
<Budgie^Smore> lazyPower something to do with the way the hardware presents itself during enlistment from what I can tell
<cholcombe> lazyPower: nice
<lazyPower> Budgie^Smore: thats odd, why would it matter if its a manually created image vs a scripted image?
<Budgie^Smore> lazyPower the VM PXE Boot's but fails to enlist
 * lazyPower throws things at vagrant
<Budgie^Smore> lazyPower that is what I am trying to determine, nothing in the VM config suggests where to look next
<lazyPower> Budgie^Smore: perhaps instead of vagrant use vboxmanage to create the vms?
<lazyPower> give it the pepsi challenge
<lazyPower> headed out for lunch, bbiaf
<Budgie^Smore> lazyPower I just know that if I hand build a VM using the VB cli it works, about the only thing I haven't tested is using vboxheadless vs vboxmanage to start the VMs. that is what I am planning on doing for the other VMs than the MaaS system
<cholcombe> admcleod: i think lxd is wedged
<cholcombe> purge and reinstall seems to have fixed it
<lazyPower> Budgie^Smore: it might be a difference in the image?
<lazyPower> is vagrant using the exact same image that your hand rolled vm is?
<Budgie^Smore> by image what do you mean? the software of the virtual hdd?
<Budgie^Smore> or are we talking the whole 9 yards - virtual machine config, etc.?
<lazyPower> yeah, your base vm image. vagrant uses boxfiles right?
<lazyPower> is that the same as what you're launching by hand?
<Budgie^Smore> yeah I ruled that out when I manually pxe booted the VM as maas-enlist cli just wasn't working and not very helpful in figuring out why
<Budgie^Smore> lazyPower that rules out the software image being the prob
<Budgie^Smore> lazyPower as far as MaaS should have seen it was a machine with "old" software on the drive
<Budgie^Smore> lazyPower but you are right, not hand building the nodes using vboxmanage is just me being lazy
<Budgie^Smore> lazyPower when / if I get time I will troubleshoot further as I still have questions around vboxheadless or the fact that the vagrant vm uses vmdk instead of vdi format but I have a solution to that issue which should be as clean.
<zeestrat> cholcombe: see https://bugs.launchpad.net/juju/+bug/1665056 for localhost issue
<mup> Bug #1665056: interactive boostrap nor 'juju regions'  recognise localhost as a valid cloud <juju:Fix Committed by anastasia-macmood> <https://launchpad.net/bugs/1665056>
<cholcombe> zeestrat: looks like it's going to land in 2.2
<cory_fu> lazyPower: I gave https://review.jujucharms.com/reviews/96 my +1 but I know you've been helping with the design of that as well.  Can you give it a quick look?
<catbus1> Hi, I tried to launch conjure-up, but it just went back to command prompt. It may have something to do with installing conjure-up via apt before, but I have removed it and installed it via snap.
<catbus1> what else do I need to clean before I can launch it successfully?
<cory_fu> catbus1: I haven't heard of it doing that before.  To rule out apt, what do you get when you do `which conjure-up`?
<catbus1> cory_fu: nothing. it returns nothing.
<catbus1> wait
<cory_fu> catbus1: Try doing `hash -r`
<catbus1> I just removed it again via snap. give me a sec.
<cory_fu> Ah
<catbus1> cory_fu: /snap/bin/conjure-up
<cory_fu> catbus1: And what's the exact command you're running that just drops back to the CLI?
<catbus1> 'conjure-up'
<cory_fu> catbus1: That's very strange.  You should at least get some sort of message or the UI for choosing a spell.
<cory_fu> catbus1: Maybe stokachu has some insight
<cory_fu> catbus1: Also, can you check if ~/.cache/conjure-up/conjure-up.log has any info?
<catbus1> cory_fu: the last info from conjure-up.log is dated 2017-02-07 15:26:10.
<lazyPower> cory_fu: looking now
<lazyPower> cory_fu: scanned it and it looks good at first glance. Disclaimer: I have not run deployment tests on this charm
<lazyPower> however i see its using all the latest versions of the componentry and looks well thought out. The last time i looked at this was a much much earlier version. I left an abstain vote comment on the review.
<cory_fu> lazyPower: Fair enough.  I was able to run the tests and they worked with the caveat that I put on my comment  (https://review.jujucharms.com/reviews/96?revision=244) but that was apparently based on the tests in the kubernetes charm?
<lazyPower> cory_fu: yeah that sounds like the correct recommendation
<lazyPower> that or default to it, and allow it to be overridden by env config or something like that for cases of local testing
<lazyPower> SimonKLB: i'm pretty sure the elastisys layer source is available as foss though right?
<cory_fu> lazyPower: Yeah, something like that.  I think updating the test would be ideal, but I'm not sure if I think it's worth holding up promulgation.  What do you think?
<lazyPower> s/elastisys layer/elastisys autoscaler layer/
<lazyPower> So long as you were able to validate, i think its fine to file a bug against the layer/charm and allow a pass for now. Simon's pretty responsive on feedback
<lazyPower> i'm unsure of his TZ though, he may be out for the remainder of the day
<lazyPower> cory_fu: i can spend some time tomorrow running the gamut of tests on this if you want me to be a voting contributor. i see its only got +1 and is blocked on having a second +1
<lazyPower> thats the only reason i abstained is i had not run the functional tests on the charm
<cory_fu> lazyPower: Yeah, that would be great.
<lazyPower> ok i'll pencil this in tomorrow and get some time on it.
<cory_fu> Thanks
<lazyPower> np np
<SimonKLB> lazyPower: still here!
<lazyPower> SimonKLB: *snap* darn ;)
<lazyPower> hehe, hey is the autoscaler layer source publically available?
<SimonKLB> not right now, but i could push it to github if you guys want
<lazyPower> SimonKLB: one of the requirements for promulgation is that the source be licensed under FOSS, which in turn means the source should be easily accessible
<lazyPower> i think we were looking for the layer source more for bug tracking reasons in this context, but the store requirement plays nicely into that ask :)
<SimonKLB> right, should i push it straight away or do you want me to put it off until tomorrow? ;D
<lazyPower> SimonKLB: whatever works for you. I wont +1 it until i've gotten the link
<lazyPower> i'll piggyback off cory_fu's comment and file a bug for the test case so it doesn't get lost in the shuffle
<SimonKLB> lazyPower: alright! how are you handling it btw? i think i grabbed the amulet helper functions from you or mbruzek ?
<lazyPower> SimonKLB: i'm not certain i understand. What would I be handling?
<SimonKLB> how the resource is downloaded in the amulet test
<SimonKLB> (when the charm is local)
<lazyPower> ah, so i would do this per cory's approach.  1) specify the resource path as the default fallback behavior. And allow the user to override that payload using something like os.getenv('SCALER_IMAGE') which would point to either a filepath on disk or remote. The idea is that its not a pre-req to have local before attempting the test, as any ci subsystem is not likely to have that image, ever.
<lazyPower> so you have to account for operator overrides
<cory_fu> Really, this should be handled at the Amulet level, but it would be more difficult to do it there in a clean but generic way, so working around it in the charm is easier.
<SimonKLB> lazyPower: okok! i know that you did something similar in the kubernetes bundle test before to what im doing in the test atm
<cory_fu> But since it's really an Amulet issue, I wouldn't be against pushing back against adding more work-arounds to the charm.  Hence, I don't consider it a blocker
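The fallback approach lazyPower describes can be sketched in a few lines of Python. This is a hedged illustration, not the layer's actual test code: "SCALER_IMAGE" matches the env var lazyPower suggested, but the default value is a placeholder:

```python
import os

# Illustrative default: in practice this would point at the charm store
# revision of the resource, which any CI subsystem can reach.
DEFAULT_RESOURCE = "cs:~elastisys/charmscaler (store resource, latest rev)"


def resolve_resource(env_var="SCALER_IMAGE", default=DEFAULT_RESOURCE):
    """Prefer an operator-supplied override (local path or remote URL)
    from the environment; otherwise fall back to the store default."""
    return os.environ.get(env_var) or default
```

An amulet test can then attach whatever `resolve_resource()` returns, so it works out of the box in CI while still honoring local overrides.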
<SimonKLB> the function still seem to be in there but is unused https://github.com/juju-solutions/bundle-canonical-kubernetes/blob/master/tests/amulet_utils.py#L5
<lazyPower> cory_fu: i think this is yet another case for using libjuju native testing vs amulet.
<cory_fu> lazyPower: Well, this would still be an issue, honestly.  The problem is if you want to test a local copy of the charm it still needs to honor the resources in the store, but the testing framework needs to be smart enough to remember the link and to manage those resources
<lazyPower> ah, good point
<lazyPower> cory_fu: yeah matt and i were grumpy about this when we were piloting it in k8s
<lazyPower> that much i do remember
<SimonKLB> lazyPower: how is it handled now a days?
<SimonKLB> are you working around it somehow that you dont need to use resources in the tests anymore?
<lazyPower> SimonKLB: scripted work-arounds doing a juju-attach in jenkins.
<lazyPower> SimonKLB: we're kind of divergent from existing tooling and have a handfull of bash to lend a hand, we opensourced our build routines over in the juju-solutions namespace... 1 sec and i'll grab a link
<lazyPower> https://github.com/juju-solutions/kubernetes-jenkins
<lazyPower> SimonKLB: https://github.com/juju-solutions/kubernetes-jenkins/blob/master/jenkins-deploy-local-charms.sh
<lazyPower> as well as https://github.com/juju-solutions/kubernetes-jenkins/blob/master/juju-attach-resources.sh
<SimonKLB> lazyPower: ah i actually do our charm CI tests kind of similar, packaging the resources as a part of the test - that way i have access to it locally and also can test it running the latest version of the docker images
<SimonKLB> but ill add the default path to the charmstore archived version of the resource so that it can be tested out of the box like you guys said!
<lazyPower> sounds like theres overlap there. I know that there's work being done on cwr-ci to support resources as well
<lazyPower> SimonKLB:  there might be new tooling in the pipeline to help ease this. Have you been introduced to cwr-ci yet?
<lazyPower> kwmonroe: need your link to your prose on the subject sir ^
<SimonKLB> lazyPower: i've seen it mentioned but i havent deployed it and tried it out myself
<lazyPower> SimonKLB: we are alike in that regard. matt has been the primary driver on our side for that effort. I'll be looking into it in short order for the etcd refactors
<lazyPower> plus we just landed matrix in charmbox this morning, which is a component of that stack
<lazyPower> so its starting to surface in places  :)  almost no excuse now
<SimonKLB> hehe :)
<SimonKLB> i would really like to try out matrix, it sounds like a really cool tool
<lazyPower> then cory_fu can yell at me for submitting stuff even though its broken in cwr-ci :P :P
<lazyPower> <3
<cory_fu> :)
<lazyPower> because thats totally something i would do
<lazyPower> "critical path, fix tests later"
<kwmonroe> cory_fu: what do you think about me revving a cwrbox lxd image to pull in bt-11?
<cory_fu> kwmonroe: +1.  You should be in the approved signers list
<kwmonroe> cool
<cory_fu> kwmonroe: We should probably add petevg and kjackal to that list
<cory_fu> And ses
<SimonKLB> lazyPower, cory_fu: https://github.com/elastisys/layer-charmscaler
<SimonKLB> also, i pushed a new version that falls back on the latest revision of the resource if it's not already on the controller or specified using an env var
<lazyPower> nice SimonKLB, thanks for the quick turnaround
<SimonKLB> lazyPower: if you have time it would be really sweet it you tried it out a bit :) would be fun to hear a user-story of someone that is not already familiar with our software!
<lazyPower> SimonKLB: tomorrow :)
<SimonKLB> awesome :)
<SimonKLB> lazyPower: do you know how to set the Bugs link? i've tried `charm set cs:~elastisys/charmscaler bugs-url=https://github.com/elastisys/layer-charmscaler` but it doesn't seem to have any effect
<lazyPower> SimonKLB: its in there. charm show cs:~elastisys/charmscaler bugs-url
<rick_h> SimonKLB: looks like it's set: charm show cs:~elastisys/charmscaler bugs-url
<rick_h> SimonKLB: but that the apache web cache and such might take it a bit to update on the webui
<SimonKLB> lazyPower: ah, so that is different from the Bugs link on jujucharms.com?
<lazyPower> the web front end may be a little behind if thats what you were looking for. the page fragment cache has to invalidate before it updates.
<rick_h> SimonKLB: no, but it'll influence that link but it's cached
<SimonKLB> right! thanks guys :)
<lazyPower> rick_h: we have truly become one
<rick_h> normally charms don't change much w/o a new revision so very cache-able...unless you do something like change the bugs-url
<rick_h> lazyPower: I feel like I've grown super powers
<lazyPower> O_O
<lazyPower> this implies you think i'm super :D
<lazyPower> <3
<rick_h> you see what I did there :P very sneaky
<cory_fu> tvansteenburgh, petevg: Thanks for the comments on https://github.com/juju/python-libjuju/pull/63  I've added integration tests to the PR
<catbus1> I gave up fixing conjure-up on the maas node. I tried launching conjure-up on a medium-sized openstack 16.04 instance, and I lost the connection to the instance.
#juju 2017-03-03
<kjackal> Good morning Juju world!
<SaMnCo> dgonzo hi. Sorry the MWC swamped all my time this week, but today is the day I actually spin that g2. Upcoming news soon
<cnf> aaand good morning
<cnf> lets try bootstrapping my controller, day 3 :P
<Mmike> Hello, lads. What's the best way to get bundletester? If I pip-install it, it breaks stuff on my system - for instance, pull-lp-source from ubuntu-dev-tools doesn't work anymore
<cnf> Mmike: virtualenvwrapper ftw?
<Mmike> cnf, can we pretend I never asked this question? :)
<cnf> done!
<cnf> :P
<Mmike> :D
<magicaltrout> i'm afraid it will shortly end up here: https://irclogs.ubuntu.com/2017/03/03/%23juju.html ;)
<Zic> magicaltrout: these are not the logs you are seeking!
<Zic> s/seeking/looking for/ :>
<Zic> I missed the VO version :p
<cnf> hmm
<cnf> juju really doesn't want to connect to the bootstrapped controller on ssh
<cnf> and no debug logging on it, it seems :(
<cnf> Attempting to connect to 172.20.20.16:22
<cnf> aaaand stuck
<cnf> hmz :/
<cnf> how do i get more info than --debug --show-log
<cnf> hmm, i guess it's friday afternoon
<cnf> still stuck at Attempting to connect to 172.20.20.16:22
<lazyPower> cnf: and this is still attempting over the sshuttle tunnel?
<cnf> lazyPower: yes
<cnf> it's as close as I can get to a direct connection
<cnf> manual ssh works over it
<lazyPower> I see you've tuned it for verbosity as well
<cnf> yeah, but the ssh bit isn't being very verbose
<cnf> so i don't know what is going on
<cnf> lazyPower: tbh, it seem --show-log doesn't do much beyond what --debug does
<lazyPower> yeah i think thats max verbosity, what you have listed there
<cnf> yeah, sadly it doesn't give any debug messages for the underlying ssh
<Budgie^Smore> My very first juju controller was managed over a sshuttle connection and it worked just fine
<Budgie^Smore> but that was about a year ago
<cnf> yeah, 2.0 seems to have changed some things?
<cnf> idno, i'm new to juju
<cnf> Budgie^Smore: did you have to do anything specific?
<Budgie^Smore> I was using 2.0... not that I remember... whats your sshuttle command look like?
<cnf> sshuttle -r tele-maas --ssh-cmd='assh wrapper ssh' 172.20.20.0/24 195.130.158.250
<Budgie^Smore> hmmm, I wonder if it is your ssh-cmd that is causing it, I didn't use that flag
<cnf> well, without it, it won't work well
<cnf> it needs a few hops to get anywhere
<cnf> Budgie^Smore: but i can manually do "ssh 172.20.20.16" and it works
<cnf> but juju won't connect to it, it seems
<Budgie^Smore> if you can ssh to it then you shouldn't need that flag
<cnf> no, i can't ssh to the target
<cnf> 172.20.20.16 needs to go over the sshuttle
<cnf> it's me ---- machine  ---- MAAS controller --- machine booting to be the juju controller
<Budgie^Smore> oh yeah, sorry barely woke up yet
<Budgie^Smore> however, just because you can manually ssh doesn't mean the wrapper isn't changing things in a way that stops juju from connecting
<cnf> the wrapper is for sshuttle, though
<cory_fu> petevg: Can you make any sense of this error I'm getting from Model.deploy()?  http://pastebin.ubuntu.com/24102747/
<Budgie^Smore> cnf yes but any wrapper can change something in its stream so that it is no longer sending expected packets
<rick_h> cory_fu: usually I get that with bad yaml
<cnf> well, the wrapper is just calling the ssh binary
<rick_h> cory_fu: or an invalid value, a string for a tag or something
<cnf> i don't see how it would affect anything
<Budgie^Smore> or doing something stupid like merging stdout and stderr
<cory_fu> rick_h: But there's no yaml in that request?
<rick_h> cory_fu: e.g. is it a user tag vs a string username/etc
<Budgie^Smore> calling it with what flags?
<petevg> cory_fu: Yes. You're giving it an int or something when it expects a string.
<cnf> Budgie^Smore: all it does is generate ssh parameters like ProxyCommand etc
<cory_fu> petevg: ...  thanks
<cnf> so i can chain hops
<rick_h> cory_fu: "None"
<rick_h> cory_fu: that in there isn't a string or valid value for json
<cory_fu> rick_h: That's a Python None, which would be converted to a null
<Budgie^Smore> cnf I would reckon there is a conflicting flag between what juju is providing and the wrapper
<cory_fu> petevg: Does the "to" field not accept None?
<cnf> Budgie^Smore: but the wrapper is for sshuttle only
<petevg> cory_fu: I don't remember. The code around the "to" field is really annoying and hard to keep in one's head.
<cory_fu> petevg: Docstring says, "If None, a new machine is provisioned." so that seems pretty explicit
<cnf> Budgie^Smore: juju just needs to do /usr/bin/ssh 172.20.20.16
<petevg> cory_fu: interesting. The Python code in python-libjuju/juju/placement.py represents my best understanding of what Go wants for that field.
<petevg> cory_fu: note that it always seems to want things to be packed into an array/list
<cory_fu> petevg: Looks like None gets converted to an empty list
<cory_fu> petevg: I wonder if it's the Constraints param?
<petevg> cory_fu: could be. The version of placement that I'm looking at will just return "None" when it gets passed None, no list involved.
<petevg> ... so maybe that is the error -- maybe it should return [] rather than None in that first check.
<cory_fu> petevg: Well, it won't ever actually get passed None, at least not from Model.deploy, because there's an outer None check that converts it to []
<petevg> cory_fu: Got it.
<cory_fu> petevg: I think it's the constraints.  It's getting passed {} which it thinks is a valid parsed value, but it should probably be None instead
<petevg> cory_fu: interesting. The constraints.py thing that I wrote will pass back None. I'm assuming that it's getting converted into the empty dict in the facade  object ...
<cory_fu> Well, that got me past the ghost failure, but mysql failed
<cory_fu> petevg: http://pastebin.ubuntu.com/24102792/
<petevg> cory_fu: ah. Unless you give it a dict, then it will pass a dict back, without checking to see if it has stuff in it.
<cory_fu> petevg: Yeah.  I was giving it a dict, like a... chump
<petevg> cory_fu: eh. I was passing back an empty dict like a chump :-)
<cory_fu> petevg: Probably worth replacing the 'if None' and 'if == ""' with a single "if not"
<petevg> agreed.
<cory_fu> I'll do that in my conjure-up branch
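The "single `if not`" check cory_fu suggests can be sketched like this; `normalize_constraints` is a hypothetical helper name for illustration, not the actual python-libjuju code, and the string parsing is deliberately simplified:

```python
# Sketch only: collapse the separate `is None` and `== ""` checks into
# one `if not`, which also catches the empty-dict "chump" case.

def normalize_constraints(constraints):
    """Return None for any empty value, else a parsed dict."""
    if not constraints:  # catches None, "", and {} uniformly
        return None
    if isinstance(constraints, str):
        # the real code parses strings like "mem=4G cores=2"; this is a stand-in
        return dict(kv.split('=', 1) for kv in constraints.split())
    return constraints
```

The point of `if not` here is exactly the bug discussed above: an empty dict passed through the old checks unchanged and then reached the API as `{}` instead of being dropped.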
<petevg> That error still says "number" to me, though. I wonder if you need to stringify everything -- "num_units" is an int rather than a string.
<cory_fu> Not according to the docstring nor default arg
<cory_fu> petevg: mysql is the only one that has config values.  Could the 20000 value be the issue?
<cory_fu> petevg: Should we switch regular deploy to use configYAML as well?
<cnf> hmz, this is frustrating
<petevg> cory_fu: interesting. That could be it, too.
<Budgie^Smore> cnf, it probably doesn't just run a plain ssh command, it probably adds some control options, but the devs would have to say for sure
<cory_fu> petevg: +        # stringify all config values for API, and convert to YAML
<cnf> it is doing something weird, that's for sure
<petevg> cory_fu: yeah. That is supposed to fix things. :-/
<Budgie^Smore> and if it isn't that, then it might be key related
<petevg> cory_fu: and I've successfully deployed stuff with configs that look like mysql.
<cory_fu> petevg: Actually, I don't like that we have duplication between Model.deploy and BundleHandler.deploy.  Do you think we could combine those?
<petevg> cory_fu: probably. I think that I only switched to using config_yaml in BundleHandler.
<petevg> ... so if your code is passing through Model.deploy, it might not be stringifying things.
<cory_fu> Right
<Budgie^Smore> silly question probably but did you add your ssh pubkey to maas so it can add it to juju?
<Budgie^Smore> OK I am going to shut up now, that question doesn't make sense :-/ going to go find coffee, bbiab
<cnf> Budgie^Smore: yeah, and i can ssh manually
<cnf> Budgie^Smore: enjoy :P
<Budgie^Smore> something in me still thinks there might be a key issue, i.e. juju not talking to ssh-agent, etc.
<Budgie^Smore> but I am more inclined to think that it is an overlapping ssh flag issue
<cnf> i wish i'd get ssh debug info :/
<cnf> hmm
<cnf> maybe got something closer
<cnf> maybe
<lazyPower> cnf: while i wouldn't normally advocate this... you can try something
<cnf> no, nm
<lazyPower> cnf: you can temporarily rename the ssh bin to something like ssh.orig, and put a wrapper in place that invokes that .orig ssh bin with -vvv
<cnf> hmm
<lazyPower> but i feel like this means we should file a bug to request a flag to dump ssh debug info
<cnf> indeed
<lazyPower> i'm not seeing anything anywhere that indicates we have a flag already to provide that detail
<cnf> --debug should put ssh in -vvv mode
<lazyPower> so, hacky work-around solutions seem to be the path forward for now
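The rename-the-binary workaround lazyPower describes can be sketched as a small wrapper; the `/usr/bin/ssh.orig` path and the importable `build_argv` helper are assumptions for illustration, not anything juju ships:

```python
#!/usr/bin/env python3
# Sketch of the workaround: move the real client to /usr/bin/ssh.orig,
# install this script as /usr/bin/ssh, and every ssh invocation
# (including juju's) then runs with -vvv for full debug output.

import os
import sys

REAL_SSH = '/usr/bin/ssh.orig'  # assumed location of the renamed binary


def build_argv(argv):
    """Insert -vvv right after the program name, keeping all other args."""
    return [REAL_SSH, '-vvv'] + argv[1:]


# The existence check keeps the sketch harmless on machines where the
# rename has not been done.
if __name__ == '__main__' and os.path.exists(REAL_SSH):
    os.execv(REAL_SSH, build_argv(sys.argv))
```

Remember to `chmod +x` the wrapper and to undo the rename afterwards, since this shadows ssh system-wide.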
<lazyPower> cnf: since you filed the other bug i'll file this one ;)
<cnf> haha, thanks :P
<cnf> it is, once again, almost time to go home for me...
<cnf> almost WE
<lazyPower> cnf: https://bugs.launchpad.net/juju/+bug/1669848
<mup> Bug #1669848: When bootstrapping with the --debug flag, ssh should also pass -vvv for debugging ssh issues <juju:New> <https://launchpad.net/bugs/1669848>
<cnf> lazyPower: thanks!
<lazyPower> if you dont mind hitting the top left "This bug affects you" to add some bug heat. You may also want to subscribe for updates so you can follow up with any discussion there.
<lazyPower> SimonKLB: You're next after i get caio some feedback on a PR submit this morning.
<lazyPower> i look forward to this :)
<cnf> ah, i was looking for that!
<SimonKLB> lazyPower: nice!
<cnf> lazyPower: your trick doesn't work! it isn't showing the output :P
<lazyPower> cnf: well it was worth a shot
<cnf> yep
<cnf> well, i'm out of ideas
<cnf> not the way i wanted to start the WE ^^;
<cnf> k, i'm going home
<cnf> thanks for the help lazyPower, Budgie^Smore
<lazyPower> np cnf
<bdx> stub: https://bugs.launchpad.net/postgresql-charm/+bug/1669872
<mup> Bug #1669872: charm install fails installing snapd on trusty  <PostgreSQL Charm:New> <https://launchpad.net/bugs/1669872>
<bdx> from what I can find, snap should install via apt on trusty
<bdx> not sure why I'm not getting it
<bdx> http://paste.ubuntu.com/24103535/
<bdx> stub: in what revision was layer snap introduced to the postgresql charm?
<bdx> stub: ahh, rev 117
<stormmore> ok I am back and in the office
<lazyPower> cory_fu: so i didn't get to it this morning but i just deployed the autoscaler. Did you use this outside of bundletesting?
<cory_fu> lazyPower: Nope
<lazyPower> cory_fu: do you want to see whats up with it? :D
<lazyPower> i'm in a hangout running the paces rn
<cory_fu> Uh, sure
<lazyPower> lol its cool man :) you're busy and its after 5 on a friday
<lazyPower> just making the offer
<lazyPower> but i'm in the batcave
#juju 2017-03-04
<ybaumy> is it normal that br-ex is not up on node 0 after juju deploy openstack-base?
<ybaumy> is it possible to scale out mysql somehow?
<ybaumy> how do i make the juju controller HA
<ybaumy> what if the controller is not accessible anymore
#juju 2017-03-05
<chatter29> hey guys
<chatter29> allah is doing
<chatter29> sun is not doing allah is doing
<chatter29> to accept Islam say that i bear witness that there is no deity worthy of worship except Allah and Muhammad peace be upon him is his slave and messenger
<ybaumy> wtf fcking religious idiots
<ybaumy> i added a second rabbitmq-server unit and now most of my services are down
<ybaumy> anyone can explain that?
<ybaumy> rabbitmq-server/0         active    executing  0/lxd/2  10.14.162.54    5672/tcp        (update-status) Unit is ready and clustered
<ybaumy> rabbitmq-server/1*        waiting   idle       21       10.14.162.75    5672/tcp        Unit has peers, but RabbitMQ not clustered
<ybaumy> hmm one says its clustered
<ybaumy> the other one not
<ybaumy> i see whats happening
<ybaumy> the leader has changed and unit 0 is now upgrading packages maybe will rejoin the cluster afterwards
<ybaumy> ok this is working. but when i try to add an instance now that i have 3 neutron gateways dhcp isnt working anymore
<ybaumy> all 3 services are down
<ybaumy> same goes for vswitch
<ybaumy> https://i.imgur.com/uKFUTql.png
<ybaumy> hmm
<ybaumy> add-unit works somehow but remove-unit mysql doesnt do sh*t .. now i removed the machine and everything broke :D
<ybaumy> 2017-03-05 11:34:31 INFO leader-elected _mysql_exceptions.OperationalError: (1045, "Access denied for user 'root'@'localhost' (using password: YES)")
<ybaumy> 2017-03-05 11:34:31 ERROR juju.worker.uniter.operation runhook.go:107 hook "leader-elected" failed: exit status 1
<ybaumy> jamespage: you here?
<ybaumy> jamespage: how do i customize a charm in the bundle openstack-base
<ybaumy> jamespage: can i juju deploy --config myfile.yaml cs:bundle/openstack-base-49
<ybaumy> and make adjustments there
<rick_h> ybaumy: download the bundle file and edit it? or use the gui to deploy it and it shows up uncomitted and you can make the changes you want.
<ybaumy> rick_h: ah ok thanks for the tip
<ybaumy> rick_h: is there something i have to edit in order to scale out mysql?
<rick_h> ybaumy: should be a num-units key?
<ybaumy> rick_h: i added a second unit but the cluster didnt work when a failover was necessary. there was an access denied message for the root user
<ybaumy> rick_h: the leader couldnt be elected
<ybaumy> because of that i guess
<rick_h> ybaumy: ah sorry. I've not looked at how HA story works on that charm.
<ybaumy> rick_h: hmm is there somebody who can help besides you and jamespage
<rick_h> ybaumy: yea, unfortunately not ATM being Sunday and folks are traveling to a sprint next week.
<rick_h> ybaumy: thedac would be a great person to catch on it.
<rick_h> He might know of others as well.
<ybaumy> rick_h: ok. well i only have time for this on the weekends thats the thing. during the week i have to do my regular work
<ybaumy> but will try to catch him
<jrwren> ybaumy: The mailing list may also be a good resource.
#juju 2018-02-26
<magicaltrout> kjackal: stupid question but seemingly something i've not got in any of my charms... or not that I can find
<magicaltrout> if I set a state in a reactive charm
<magicaltrout> how do i then unset it when the relation goes away?
<stub> magicaltrout: Use the newer Endpoint system and it is @when_not('endpoint.foo.joined'). With the older stuff, it is rather difficult.
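The difference stub points at is that the Endpoint system maintains flags like `endpoint.foo.joined` for you, so a `@when_not` handler can do the cleanup when the relation departs. A toy, self-contained dispatch loop (explicitly not charms.reactive itself; the flag names follow its convention, everything else is a stand-in) illustrating the pattern:

```python
# Minimal simulation of reactive flags + handler dispatch, showing how a
# state set on join gets cleared when the relation flag goes away.

flags = set()      # stand-in for the reactive flag store
handlers = []      # registered (wanted, unwanted, fn) triples
ran = []           # record of which handlers fired, for demonstration


def set_flag(f):
    flags.add(f)


def clear_flag(f):
    flags.discard(f)


def when(*wanted):
    def deco(fn):
        handlers.append((set(wanted), set(), fn))
        return fn
    return deco


def when_not(*unwanted):
    def deco(fn):
        handlers.append((set(), set(unwanted), fn))
        return fn
    return deco


@when('endpoint.foo.joined')
def relation_up():
    ran.append('up')
    set_flag('charm.foo.configured')


@when_not('endpoint.foo.joined')
def relation_gone():
    ran.append('gone')
    clear_flag('charm.foo.configured')  # the cleanup the old style made hard


def dispatch():
    for wanted, unwanted, fn in handlers:
        if wanted <= flags and not (unwanted & flags):
            fn()


# Relation joins: the Endpoint layer would set this flag automatically.
set_flag('endpoint.foo.joined')
dispatch()
# Relation departs: the Endpoint layer clears it, so @when_not fires.
clear_flag('endpoint.foo.joined')
dispatch()
```

With the older hand-rolled states there is no automatically-cleared `joined` flag to hang the `@when_not` on, which is why stub calls it "rather difficult".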
<elmaciej> Morning
<elmaciej> Question about openstack charms - can I mix nova-lxd and nova-kvm hypervisors in one cloud ? it would be really cool
<elmaciej> from charms in store descriptions I see that it is not possible but maybe someone know a workaround :)
<TheAbsentOne> Are there any tomcat juju pro's here? I'm looking for the correct way to deploy the tomcat charm and connect (jndi) to a deployed mysql charm. So that I can deploy a war that uses a db from the mysql server.
<zeestrat> elmaciej: If you mean running kvm hypervisor on one host and lxd hypervisor on another host, then I imagine you should be able to deploy multiple different instances of the nova-compute charm, e.g. one for kvm and one lxd on different hosts. jamespage could probably confirm.
<elmaciej> zeestrat:  ok, so quickly - I prepare two charm configs and deploy juju deploy --config nova-lxd.yaml nova-compute --to 1 and juju deploy --config nova-kvm.yaml nova-compute --to 2
<zeestrat> elmaciej: Almost. You need to define the different deployments as different applications in juju. So you'd get something like `juju deploy nova-compute nova-lxd --config nova-lxd.yaml --to 1` where `nova-lxd` is the name of that application/deployment. Likewise with `juju deploy nova-compute nova-kvm --config nova-kvm.yaml --to 2`. Note that that you will then need to setup relations for these individually with their
<zeestrat> new application name.
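The same two-application layout zeestrat describes can also be expressed in a bundle; this is an illustrative sketch (the application names, `virt-type` values and machine numbers are assumptions, not taken from the chat):

```yaml
# Two applications deployed from the same nova-compute charm,
# one per hypervisor type, each with its own name and config.
applications:
  nova-kvm:
    charm: cs:nova-compute
    num_units: 1
    options:
      virt-type: kvm
    to: ["1"]
  nova-lxd:
    charm: cs:nova-compute
    num_units: 1
    options:
      virt-type: lxd
    to: ["2"]
```

As noted above, relations then have to be added per application name (`nova-kvm`, `nova-lxd`), not once for `nova-compute`.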
<gsimondon> Is there a recommended production setup for bare metal for canonical kubernetes with juju?
<elmaciej> zeestrat: that's cool, but honestly when I think about it what are the pros/cons in having nova lxd hypervisor ? just wondering also on this one to deploy manually: https://docs.openstack.org/zun/queens/install/overview.html
<kjackal> magicaltrout: You could have a look here: https://github.com/juju-solutions/interface-kube-control/blob/master/requires.py#L52
<kjackal> gsimondon: what kind of setup would you expect? MaaS I guess would be nice to have so you can put cdk on top
<zeestrat> elmaciej: Depends on your use case. Are you comparing lxd to kvm?
<elmaciej> well from end user point of view. it will be still available as an instance in horizon so only things like latency etc will be better on lxd
<elmaciej> right?
<gsimondon> kjackal: nothing like MaaS, we have a custom solution for maas
<gsimondon> kjackal: basically I know I can create a manual cloud by adding machines but when you spin up apps (units) juju tries to create a new machine
<gsimondon> kjackal: that's the part that's confusing me in this bare metal setup.
<kjackal> gsimondon: let me understand something, you have MaaS already setup?
<gsimondon> kjackal: if you are talking about maas.io - no, we are not using that
<kjackal> ok, so you you want to use the manual provider over machines you already have provided
<gsimondon> kjackal: yes.
<kjackal> You need to pre-provision any machines: https://jujucharms.com/docs/2.2/clouds-manual and then...
<kjackal> deploy the cdk bundle: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
<kjackal> But you might need to change this bundle a bit
<gsimondon> kjackal: I did that multiple times when I was evaluating this distribution the first time. Basically my questions would be 1) It sucks that I have to specify where to deploy apps. Is it possible to not do that? 2) If I deploy stuff from machine A, how can I use that machine to deploy apps to it? When you do juju status it will show only two machines that I manually added using add-machine but machine A
<gsimondon> won't be available
<gsimondon> kjackal: 3) Is LXD something worth considering for isolation of components in this setup?
<kjackal> gsimondon: yes, creating lxd containers will work
<kjackal> so to your questions: 1) in the manual provider you have to pre-provision your machines, so you should be explicit on where each app should go 2) I guess you could add Machine A as a cloud machine with "juju ssh ubuntu@machine-A"
<kjackal> 3) lxd should work, I know we do this trick in orange boxes when we do not have enough nodes
<gsimondon> kjackal: Thanks, very helpful. Do you use LXD just for kubernetes master and workers or basically for everything being deployed by juju?
<kjackal> gsimondon: have a look at how easyrsa is deployed in this bundle https://api.jujucharms.com/charmstore/v5/kubernetes-core/archive/bundle.yaml it says --to: lxd:0 This might work for you
<kjackal> My suggestion would be not to place workers inside an lxd container, although it should work you may run into trouble when trying to expose a docker container running inside that lxd container
<gsimondon> Any rule of thumb on what to put in LXD and what to keep out so far?
<zeestrat> elmaciej: Yeah, so it depends if you and your users want those lxd advantages. Canonical has a nice marketing page for that: https://www.ubuntu.com/containers/lxd
<kjackal> gsimondon: not rules, but do not put the etcds on lxd containers on the same host machine :)
<gsimondon> kjackal: yeah, that wouldn't be so smart. :)
<pekkari> kwmonroe: are you, or do you know, a moderator of the bigdata mailing list?
<kjackal> pekkari: I saw your message do you have the charm somewhere so we can see it in action?
<kjackal> We would only need to deploy zookeeper, right?
<pekkari> for the testing it's enough to deploy zookeeper, kjackal
<pekkari> I was firing the charm in my laptop, so no environment we can share and see unless we get on a hangout and show it to you
<kjackal> I think I know what is wrong with it
<kjackal> you have somewhere:
<kjackal> @when('zookeeper.started', 'leadership.is_leader', 'zkpeer.changed')
<kjackal> def check_cluster_changed(zkpeer)
<pekkari> any suggestion is really appreciated, as I'm running out of ideas
<kjackal> no, turns out I do not...
<kjackal> its not what i thought it was, sorry
<pekkari> sorry kjackal, my irc just crashed
<pekkari> I'm all ears
<kjackal> no, turns out I do not know what is wrong with it
<magicaltrout> http://canacopegdl.com/images/all-ears/all-ears-9.jpg
<kjackal> magicaltrout: :)
<kjackal> is github down for you people or it only me?
<magicaltrout> https://cdn.head-fi.org/a/6941207_thumb.png or maybe this one
<magicaltrout> just you
<kjackal> xm... it is possible to get banned from gh :)
<magicaltrout> with you
<magicaltrout> anything is possible
<kjackal> I am afraid so...
<pekkari> kjackal: I'm starting to think it's platform related, I just tried to use xenial machines and here I can see the hook triggered with no problem: http://dpaste.com/25CB8NJ
<pekkari> for similarity with the end user I was using trusty
<kjackal> Is it possible at some point you had a build with <method_name>(self) missing the self?
<pekkari> no, I don't think so, this very same build I'm deploying now is what I was testing last friday, no modification on the code
<elmaciej> zeestrat: kjackal: Guys, do you know when queens goes official with the juju charms? queens is going to be released in 2 days and it brings the 18.04 support
<zeestrat> elmaciej: https://docs.openstack.org/charm-guide/latest/release-schedule.html
<elmaciej> zeestrat: that's cool! thx
<zeestrat> elmaciej: If it ain't in the OpenStack Charm Guide, then you can ask the folks on the mailing list (https://docs.openstack.org/charm-guide/latest/find-us.html) or in #openstack-charms
<pekkari> kjackal: contribution ready, thanks for the help!
<kjackal> did nothing, we are the ones to thank you
<elmaciej> last question for today guys, is there a way to set default user and password when adding a machine to the model ?
<pmatulis> elmaciej, just for add-machine?
<elmaciej> yes, I need a user to login directly not by ssh only
<elmaciej> pmatulis: I know that these images are using the cloud-init so I was wondering on two options - rebuild the image and change the cloud.cfg or somehow passing the user data to create user during the creation
<zeestrat> elmaciej: passing user data already has some support in 2.3.1+. https://bugs.launchpad.net/juju/+bug/1535891 Note the caveats such as not being able to use the users section. Please feel free to add a new bug/request for that.
<mup> Bug #1535891: Feature request: Custom/user definable cloud-init user-data <cpe-onsite> <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1535891>
<elmaciej> zeestrat: Thanks for that. I'm trying to modify default curtin preseed to add the default user with oneliner
<bdx> kwmonroe: I have a requirement for hiveserver2 to be hooked up to zookeeper
<bdx> I find here https://www.cloudera.com/documentation/cdh/5-0-x/CDH5-Installation-Guide/cdh5ig_hiveserver2_configure.html, the configuration that need be mended
<bdx> kwmonroe: I'm wondering if you have any suggestions toward managing the config xml for the bigtop software
<bdx> ?
<kwmonroe> bdx: that's gonna be tricky.  the hive-site.xml that bigtop uses is here: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml, but as you can see, none of those properties are available there.  that means the charm doesn't have an easy way to set/change/manage them.
<kwmonroe> bdx: so the first step to do this right would be to open an issue for bigtop to expose those config options: https://issues.apache.org/jira/secure/Dashboard.jspa -- that requires you to have an apache jira account.  if you don't have one, it's of course easy to create, or i can open the jira for you.
<kwmonroe> and then step 2 would be to get the hive charm to support the zookeeper interface.  that likely wouldn't be too difficult.  we'd do it very much like we do zookeeper's relation for hbase:  https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hbase/layer-hbase/lib/charms/layer/bigtop_hbase.py#L30
<kwmonroe> and pass in the appropriate overrides to set the values in hive-site.xml.
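The "step 2" kwmonroe outlines (charm-side code passing zookeeper relation data through as bigtop overrides, like the hbase layer does) could be sketched roughly as below. The override keys here are hypothetical; the real names depend on what the hadoop_hive puppet module ends up exposing:

```python
# Sketch only: turn zookeeper relation data into puppet override values
# for hive-site.xml, in the style of the hbase layer linked above.

def build_zk_overrides(zk_units):
    """zk_units: list of (host, port) pairs from the zookeeper relation.

    Returns a dict of bigtop puppet overrides (key names are assumed,
    not confirmed against the actual hadoop_hive module).
    """
    quorum = ','.join('{}:{}'.format(host, port) for host, port in zk_units)
    return {
        'hadoop_hive::common_config::hive_zookeeper_quorum': quorum,
        # concurrency only makes sense once a quorum is available
        'hadoop_hive::common_config::hive_support_concurrency': bool(zk_units),
    }
```

A reactive handler on the zookeeper interface would call something like this and feed the result into `Bigtop().render_site_yaml(..., overrides=...)`, mirroring the hbase code path.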
<bdx> kwmonroe: "so the first step to do this right would be to open an issue for bigtop to expose those config options" - the config options already seem to exist in hive-site.xml
<bdx> https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml#L39
<bdx> ahhh, the concurrency option is missing
<kwmonroe> bdx: also, line 39 isn't hive.zookeeper.quorum :)  read it again
<bdx> ooooo
<bdx> good eye
<bdx> darn
<bdx> so, cloudera just drops those properties right in there then?
<kwmonroe> well, hive upstream probably has those options too.. the full hive-site.xml is like 1000 lines.  bigtop only uses a subset of those things that *they* support as being changeable from the default.
<kwmonroe> so we just need to get bigtop to include options related to hive.zookeeper, and then the charms can manage them.
<kwmonroe> bdx: i need to run my kid to an appt, but should be back on in about an hour to hear you curse the day you ever mentioned expanding hive to support zookeeper.  biab :)
<bdx> :) ok
<bdx> https://imgur.com/a/f1W3r - something to think about while you are waiting at your appt
<magicaltrout> my biggest issue with kwmonroe's work.... is it puts hadoop in the hands of people who don't need it but don't know better ;)
<magicaltrout> shut it all down kwmonroe
<bdx> hey
<bdx> coming from someone who has run hdfs on / for the entirety of all time
<bdx> magicaltrout: letting me down holmes
<magicaltrout> haha :)
<magicaltrout> i'm not really directing it at you bdx, but people still bug me with "I have data, i need to deploy hadoop" and it irks :)
<magicaltrout> because then they a) don't have data that requires hadoop b) don't have the volume for hadoop c) don't have the budget for hadoop d) don't have a clue how to write stuff to run on hadoop e) don't know how to monitor hadoop
<magicaltrout> when all they really need is a column store DB at most
<bdx> I see the same thing
<bdx> happens across all technologies
<magicaltrout> true dat
<magicaltrout> I like the ones in here
<jujuuser> hello, I'm trying to deploy openstack
<magicaltrout> oh really, hows that going?
<bdx> magicaltrout: we are moving a ~10TB dataset through a slew of pipelines mon
<jujuuser> well I have a 5 server setup, but its asking to deploy stuff via floppy disk
<bdx> shoot!
<magicaltrout> really? What are you deploying on
<jujuuser> Pentium 4 machines I have lying around in my lab
<bdx> jujuuser: where does it say "floppy disk"?
<magicaltrout> hehe
<magicaltrout> thats honestly a real story from last year
<magicaltrout> he was having issues because his juju powered openstack cluster was trying to boot from  floppy disk
<bdx> no
<bdx> lol I believe you
<bdx> jujuuser: make sure the minimum requirements are met all the way around
<magicaltrout> hehe
<magicaltrout> I am excited about DruidIO coming to the Apache Foundation
<magicaltrout> I'm gonna get that charmed up
<magicaltrout> for high volume timeseries stuff
<magicaltrout> that'll be cool
<magicaltrout> i'm also tidying up my apache-drill charm which makes hooking up SQL apps to all these platforms much easier
<bdx> magicaltrout: apache-drill just hooks up to a nosql database and lets you query it with SQL?
<bdx> magicaltrout: what happens when you hook it up to hdfs?
<magicaltrout> yeah its a distributed SQL interface for NOSQL platforms
<magicaltrout> flat files, hdfs, hive, hbase, kafka, s3 are the most common ones
<magicaltrout> so on hdfs you can query a variety of different file formats, it'll do stuff like auto schema discovery for json/csv style files, if they follow a naming strategy it can do cross file queries etc
<magicaltrout> you can convert files to parquet DB's quickly and easily for more effective querying at scale
<magicaltrout> it'll push down what it can to the underlying query engine and make up the rest in memory unlike a lot of the JDBC interfaces out there
<magicaltrout> sponsored and largely developed by MapR and a few Dell folk IIRC
<magicaltrout> also provides ODBC & REST interfaces for other uses
<magicaltrout> and data federation capabilities so you can do stuff like "select * from mysql left join (select * from hbase)"
<magicaltrout> it leverages a lot of Apache Calcite which provides the SQL interface, and has a bunch of other adapters to do SQL over X
<magicaltrout> https://calcite.apache.org/docs/adapter.html
<bdx> super cool
<bdx> for anyone that cares to deploy volumes > 1T https://bugs.launchpad.net/juju/+bug/1751909
<mup> Bug #1751909: turn up the volume - artificial quotas? <juju:New> <https://launchpad.net/bugs/1751909>
<kwmonroe> 1T is too much
<bdx> no, 1T works
<bdx> but anything over seems to fail
<bdx> orr ooo
<bdx> 1000 GB works
<bdx> possibly 1T would fail
<kwmonroe> here's the thing bdx, when you get past a couple gigs, you really need hdfs.  magicaltrout, back me up.
<bdx> oooh sorry ... I'm crossing wires here, ^ is unrelated to hdfs
<magicaltrout> when IRC needs a joy emoticon
<kwmonroe> no bdx, i'm just being hilarious.  /me just finished backscroll of how i'm pushing hadoop on people that don't need it.
<bdx> ahh yes yes
<bdx> aha
<kwmonroe> magicaltrout: are you still looking at ranger?
<magicaltrout> yeah slowly
<kwmonroe> magicaltrout: would ranger's inclusion in bigtop help you?
<magicaltrout> absolutely
<kwmonroe> magicaltrout: please add ranger to https://github.com/apache/bigtop/tree/master/bigtop-packages/src to include it in bigtop.
<kwmonroe> ;)
<kwmonroe> i'm chuckling quietly
<kwmonroe> who am i kidding, i'm totes LOL.  get it done magicaltrout.
<magicaltrout> :'(
<magicaltrout> sad times
<magicaltrout> i will get around to doing something with it soon enough, I've got the LDAP bits and pieces that need finishing up because they'll work for Saiku as well
<magicaltrout> and I had the start of a ranger snap
<magicaltrout> but then I got sidetracked with the day job
<magicaltrout> but now i'm circling back around to Hadoop -> Drill -> Saiku stuff for JAAS, i have an excuse to look at it again
<kwmonroe> magicaltrout: i know somebody on the bigtop PMC if you need support.
<magicaltrout> i heard the Bigtop PMC was just a bunch of jerks
<magicaltrout> We'll see, i need to get it in somehow, but I also have DruidIO, Janusgraph and some other stuff on my high priority todo list
<magicaltrout> Not sure where the chips will land with all of that
<magicaltrout> plus once the contracts are sorted out at JPL I'm supposed to be getting 80+ hrs a week and 20 odd on a separate cancer research project
<magicaltrout> so I need to resource them and sleep at some point ;)
#juju 2018-02-27
<heckles1000> #luigi
<heckles1000> whoops
<bdx> kwmonroe: I'm creating the issue for the bigtop project on apache jira
<bdx> should I tag it as "New Feature"?
<bdx> kwmonroe: https://issues.apache.org/jira/browse/BIGTOP-3007
<bdx> kwmonroe: let me know if theres anything else you want/don't want to see on there and I'll add/remove it
<kwmonroe> perfect bdx!  thank you!!
<bdx> kwmonroe: np, thank you
<bdx> kwmonroe: would I be able to fork->branch the juju-solutions/bigtop repo and just add those properties to the hive config to get started working on implementing those bits between the charms?
<bdx> or is there something else
<kwmonroe> bdx: you probably want to fork upstream, right? https://github.com/apache/bigtop
<bdx> ahh, but the juju-solutions fork has the layers .....
<bdx> hmmm, possibly I need to do a bit more digging to understand how these pieces fit together
<kwmonroe> bdx: upstream has the layers too: https://github.com/apache/bigtop/tree/master/bigtop-packages/src/charm
<kwmonroe> fwiw, the upstream layers are the source of truth for the charms in the store -- not our juju-soln/bigtop fork
<bdx> ahhh gotcha
<bdx> kwmonroe: should these properties be conditional in the xml template similar to hbase.zookeeper.quorum?
<bdx> im thinking so
<bdx> ahh, possibly hive.support.concurrency should default to false
<bdx> and hive.zookeeper.quorum should get the conditional
<kwmonroe> yeah bdx, i think you're on the right track.. for reference, here's how we added 3 new options in the puppet scripts:  https://github.com/apache/bigtop/pull/238/files
<kwmonroe> bdx: you'd do something similar to initialize the vars in hadoop_hive's init.pp: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/manifests/init.pp#L48
<kwmonroe> and then conditionally use those vars in the hive-site.xml: https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
<kwmonroe> bdx: as for the default values, i would set concurrency to false by default (since that is the default): https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.support.concurrency
<kwmonroe> and make hive.zk.quorum an empty string
<kwmonroe> ^^ both of those in hadoop_hive's init.pp common_config class def
<kwmonroe> bdx: then write out the concurrency property and conditional zk.quorum property in hadoop_hive's templates/hive-site.xml
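The template change kwmonroe describes might look roughly like this ERB fragment, modeled on the existing hbase.zookeeper.quorum guard; the variable names and exact guard condition are sketches, not the merged bigtop code:

```xml
<!-- sketch only: emit the quorum property just when a quorum is set,
     and always emit concurrency with its init.pp default (false) -->
<% if @hive_zookeeper_quorum != "" -%>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value><%= @hive_zookeeper_quorum %></value>
  </property>
<% end -%>
  <property>
    <name>hive.support.concurrency</name>
    <value><%= @hive_support_concurrency %></value>
  </property>
```

Defaulting the quorum to an empty string in init.pp, as suggested above, is what makes the simple inequality guard work.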
<bdx> done
<bdx> https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-deploy/puppet/modules/hadoop_hive/manifests/init.pp#L50,L51
<bdx> https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml#L45,L57
<kwmonroe> perfect
<kwmonroe> see?!?!  big data is easy!
<kwmonroe> can i get an amen magicaltrout?
<bdx> kwmonroe: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/lib/charms/layer/bigtop_hive.py
<bdx> and https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py
<boucherv> Some issue with the snaps website and api ? https://snapcraft.io/ - 503 Service Temporarily Unavailable nginx/1.13.7
<kwmonroe> looks great bdx!  when you're ready, you can PR that against the bigtop repo.  be sure to follow the PR title guidelines so jira will auto-link your pr to the ticket:  https://github.com/juju-solutions/bigdata-community/wiki/Contributing#pull-request-formatting
<bdx> kwmonroe: awesome, thanks
<kwmonroe> boucherv: i was seeing 502/503 too, but it seems to be back for me now.
<boucherv> kwmonroe: Yes It seems to be back now! ty
<bdx> kwmonroe: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hive/layer-hive/lib/charms/layer/bigtop_hive.py#L61,L63
<bdx> (self.dist_config.path('hive_conf') / 'hive-env.sh.template')
<bdx> I can't seem to find hive-env.sh or hive-env.sh.template
<bdx> should I create this file?
<bdx> I feel like I'm missing something, as the charm should have hit the error I'm seeing long ago if that file didn't exist
<bdx> possibly I borked something https://paste.ubuntu.com/p/W9tN3Ygyn9/
<kwmonroe> bdx: you are correct... i can't find ref to that template either.  wth!  /me looks deeper down the rabbit hole.
<bdx> kwmonroe: https://paste.ubuntu.com/p/CSscBbmwtf/
<bdx> when I deploy the upstream hive charm
<bdx> I see the files
<bdx> and it doesnt error
<zeestrat> rick_h: Are requests to subscribe to juju@lists.ubuntu.com moderated or should you get a confirmation mail asap? Trying to change my mail, but getting ain't no confirmation love :(
<rick_h> zeestrat: thought it just auto went through.
<zeestrat> rick_h: Alright, thanks. I'll give it a day before I start nagging.
<magicaltrout> i've changed my email address on the juju list before
<magicaltrout> it took about 3 seconds
<kwmonroe> aight bdx, i think i know what's up.  first of all, those template files come from the upstream tgz that gets used during the bigtop deb build, specifically here: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hive/install_hive.sh#L161
<bdx> gotcha, was just figuring that out myself
<kwmonroe> so bigtop doesn't explicitly have a hive-env.sh like it does for hive-site.xml, but it does get there from the deb.
<kwmonroe> bdx: the next part was a bit of a mystery.. i was able to confirm the failure when i used hive-21 (from the candidate channel) and hadoop-plugin-31 (from the stable channel)
<kwmonroe> the problem is that the stable channel plugin is still set to use bigtop-1.2.0, whereas the candidate charms are bigtop-1.2.1.
<kwmonroe> this is a problem because hive will set up its puppet hieradata on initial install and get ready to run puppet apply using the config from bigtop-1.2.1.
<kwmonroe> then the plugin comes along and overwrites all that config with bigtop-1.2.0
<kwmonroe> when the plugin does this, the puppet config is no longer including the "hive" role, and so puppet apply never does an "apt install hive"
<bdx> oh man lol
<kwmonroe> without installing the package, puppet apply succeeds, but since hive didn't actually get installed, there's no /etc/hive/conf, and hence the failure in the hive charm's install method
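The failure mode kwmonroe walks through (the stable plugin's bigtop-1.2.0 hieradata clobbering the hive charm's 1.2.1 config and dropping the "hive" role, so puppet apply never installs the package) can be modelled with a toy Python sketch; the dict keys here are illustrative and not the real hieradata schema:

```python
# Toy model of the config clobbering described above: the hive charm writes
# hieradata for bigtop-1.2.1 that includes the "hive" role; the stable
# plugin then overwrites the whole config for bigtop-1.2.0 without that
# role, so the (simulated) puppet apply never installs the hive package.

def puppet_apply(hieradata):
    """Pretend puppet run: return the set of packages it would install."""
    return set(hieradata.get("roles", []))

hieradata = {"bigtop_version": "1.2.1", "roles": ["hive"]}
# the plugin comes along and overwrites all of that config
hieradata = {"bigtop_version": "1.2.0", "roles": []}

installed = puppet_apply(hieradata)
# puppet apply "succeeds", but hive was never installed, so no /etc/hive/conf
```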
<bdx> nice
<kwmonroe> so, to confirm that, would you confirm if you used a stable hadoop-plugin with a candidate (or beta / edge) version of the hive charm?
<bdx> that makes sense from what little I know about the process
<bdx> yes, I'll give that a try now
<bdx> can I just build hive from upstream apache/bigtop and use that?
<magicaltrout> you make kwmonroe when you suggest such things
<magicaltrout> cry
<magicaltrout> missed the important word
<magicaltrout> should drink less gin
<kwmonroe> you sure can bdx -- but if you do, make sure you use hadoop-plugin-33 (from candidate) so everybody's on bigtop-1.2.1
<bdx> ok
<kwmonroe> heh magicaltrout.  big data is still easy!  where's my amen?
<magicaltrout> amen!
<kwmonroe> i am complete
<kwmonroe> now to track down the knucklehead that didn't release candidate charms to stable like 2 months ago.
<bdx> kwmonroe: you seeing this https://paste.ubuntu.com/p/K75vMzTgrV/
<bdx> kwmonroe: http://paste.ubuntu.com/p/Pc9kQF35Zg/
<bdx> I don't think we will hit the error on hive because it's stuck waiting on hbase
<bdx> hbase hits it though
<bdx> ohhhh
<bdx> I see why
<bdx> I needed to deploy candidate hbase too
<bdx> my bad
<bdx> dah
<kwmonroe> yeah, sorry about that bdx, i'm doing a rebuild now to see if anything has changed recently.. should get those up to stable after CI runs through.
<bdx> cool
<kwmonroe> but yeah, everything needs to be -c candidate for now
<beisner> hi all - in case you haven't seen our announcement on the openstack-dev ML:  1802 openstack charms release is rescheduled for Thu Mar 8.  hop over to freenode #openstack-charms if you'd like to interact or track it more closely.  cheers!
<magicaltrout> isn't openstack 2010?
<magicaltrout> hey beisner hows the kettle?
<beisner> push a button, receive hot water
<magicaltrout> amazing kit, right? ;)
<beisner> it has single-handedly increased my personal throughput more than ... any beer ever has.
<jhobbs> zojirushi?
<beisner> no, baby steps.  mine's from walmart, with training wheels.  but that zojirushi looks sweet1
<beisner> !
<jhobbs> hehe
<beisner> 2010's openstack isn't one that you'd enjoy much, magicaltrout
<magicaltrout> awwww
<magicaltrout> I'm forced to run CDK on Openstack at the moment
<magicaltrout> Openstack makes magicaltrout sad
<beisner> whyfor?
<magicaltrout> not for any particular reason
<magicaltrout> other than its just not baremetal :)
<beisner> ha!  i hear you.
<magicaltrout> on the plus side, CDK support for GPUs via openstack seems to work a treat
<magicaltrout> which does make me a little less sad
<beisner> the world still needs virtual machines.  look!  even you do right now, apparently.
<beisner> nice
<magicaltrout> only cause I don't get my own way all the time :P
<magicaltrout> i'm fixing the world
<magicaltrout> one kettle at a time!
<beisner> word.
<magicaltrout> that said, docker gets on my tits as well
<magicaltrout> i'm hard to please ;)
<beisner> is that like ruby on rails?  docker on ____ ?
<magicaltrout> haha
<beisner> you may have something there.
<magicaltrout> kwmonroe... do you own a kettle?
<beisner> gotta run.  i'll leave you to "gpu research" aka ethereum mining, magicaltrout ;-)
<magicaltrout> works a treat
<magicaltrout> on government GPUs ;)
<magicaltrout> have fun
<beisner> cheers
<kwmonroe> i do not magicaltrout
<magicaltrout> it's a Texan miracle!
<bdx> kwmonroe: concerning zookeeper <-> hive, do you have any suggestions on how to go about making the relation between the two optional?
<bdx> I have a few ideas I'm playing around with atm
<bdx> 1) conditional 'use-zookeeper' or equivalent charm config, 2) call Hive.install() when zookeeper.joined and make a separate handler to get the zookeeper hosts and store them in unitdata so that install_hive() doesn't depend on 'zookeeper.ready'
<bdx> I have #1 working
<bdx> but the more I think about it, the less I like it (I'm sure you will agree here)
<bdx> so, working on #2
<kwmonroe> bdx: yeah, boo #1.  i fail to see how the zk relation isn't already optional.  your install handler isn't gated with zookeeper.ready: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L66
<bdx> ahh, yeah sorry, uncommitted changes
<kwmonroe> heh, well, don't guard it, and zookeeper will be optional :)
<bdx> I was not getting the zk properties in hive-site.xml with my add_hive branch
<bdx> ok ... I'll keep poking around with what I have in that branch then ... possibly I'm over thinking it
<bdx> the fact that install_hive() doesn't depend on 'zookeeper.ready'
<kwmonroe> ah, yeah bdx.. so here's the thing.  without the @when zk.ready, you'll enter that install_hive handler over and over... and *only* when the deployment matrix changes will anything actually happen.
<bdx> yeah
<kwmonroe> so, i would suggest you move this https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L110,L112 up
<kwmonroe> and add hive_install-args to the dep matrix here: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L89
<kwmonroe> otherwise, you're hitting this: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L103
<bdx> oh totally
<kwmonroe> and returning because the matrix didn't change
#juju 2018-02-28
<bdx> I just need to add zks to the deployment matrix
<kwmonroe> yup
<kwmonroe> then, if hbase or zks change, install_hive will run through the actual hive.install(**args) method
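The pattern being described (an unguarded install handler that runs on every dispatch but only acts when the deployment matrix changes) is what charms.reactive's data_changed helper provides; below is a self-contained sketch of the same idea, with a module-level dict standing in for charm unitdata and the handler returning a string instead of actually calling Hive().install():

```python
# Self-contained sketch of the data_changed pattern: the handler may run on
# every dispatch, but hive.install(**args) only fires when the deployment
# matrix actually changed. A module-level dict stands in for unitdata.

import hashlib
import json

_cache = {}

def data_changed(key, value):
    """Return True (and record the new value) if value differs from last time."""
    digest = hashlib.sha1(json.dumps(value, sort_keys=True).encode()).hexdigest()
    changed = _cache.get(key) != digest
    _cache[key] = digest
    return changed

def install_hive(hbase_host=None, zk_units=()):
    """Unguarded handler: zks are in the matrix, so zk changes re-trigger."""
    matrix = {"hbase": hbase_host, "zookeepers": sorted(zk_units)}
    if not data_changed("hive.install-args", matrix):
        return "no-op"          # matrix unchanged, nothing to do
    return "hive.install"       # would call Hive().install(**matrix)
```

Calling install_hive() twice with the same arguments does the real work only once; adding a zookeeper unit (or changing hbase) makes the matrix differ and re-triggers the install path.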
 * kwmonroe wanders off to dinner
<magicaltrout> dinner?!
<bdx> kwmonroe: https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py - still giving the same result (not getting the hive zk config in the hive-site.xml)
<bdx> possibly I've missed adding something somewhere to get those picked up all the way
<bdx> I feel its close
<bdx> oooo I think I see it
<bdx> well nm
<bdx> willing to bet that https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hbase/layer-hbase/reactive/hbase.py#L70 matters, i.e. that it's named 'zookeepers' somewhere outside of my peripheral vision
<bdx> I've decided to name mine zks https://github.com/jamesbeedy/bigtop/blob/hive_config_add_zookeeper/bigtop-packages/src/charm/hive/layer-hive/reactive/hive.py#L95
<bdx> bet if I s/zks/zookeepers/ it will be gold
<bdx> oh man .... these layers don't have a "series" tag in the metadata.yaml
<bdx> I've been deploying the wrong thing all this time GRRRR
<bdx> it defaults to repo/builds/trusty lol oh man
<bdx> at least hive will be super consistent with hbase :)
<bdx> oooooh still not workin, but I bet I need to add entries here https://github.com/apache/bigtop/blob/be9a183b4db8f183c14cc9a4ed853cf7bbbab2e5/bigtop-deploy/puppet/hieradata/bigtop/cluster.yaml#L191
<bdx> kwmonroe: I found the source of the issue
<bdx> cat /etc/hive/conf/hive-site.xml | https://paste.ubuntu.com/p/fm7mMsyvJS/
<bdx> as you can see I'm not getting the vars that I want in there, even though the template has them https://github.com/jamesbeedy/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
<bdx> and the facts are there on the filesystem - cat /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/hieradata/site.yaml | http://paste.ubuntu.com/p/dK9mhgmYvc/
<bdx> so I figured puppet must be using some template it pulls from upstream
<bdx> instead of what's in the bigtop repo
<bdx> which is correct
<bdx> and here is the template that is used https://paste.ubuntu.com/p/T5dBbw8hzm/
<bdx>  /home/ubuntu/bigtop.release/bigtop-1.2.1/bigtop-deploy/puppet/modules/hadoop_hive/templates/hive-site.xml
<bdx> lol
<bdx> you could have just told me
<bdx> either way
<bdx> I think what I have in my hive_config_add_zookeeper branch will work just fine
<bdx> once we have the correct template there
<bdx> alright I can stop now
<bdx> thanks for your help today
<bdx> here's the PR btw https://github.com/apache/bigtop/pull/344/files
<bdx> setting 'bigtop_version: master' was all I needed
<bdx> geh ... that did not allow me to do what I thought it would... still no way to get what's in my branch onto the instance
<bdx> the bigtop-repo resource is the way!!
<gsimondo1> Hi. Trying to juju bootstrap with a LXC container as controller. Fails because of publickey. What's the conventional way of going about manual clouds and keys with juju?
<petevg> gsimondo1: do you mean that you're doing "juju bootstrap localhost"? Or are you doing something else?
<petevg> gsimondo1: With some of the versions of juju 2, you needed to add the credential manually, with "juju add-credential localhost".
<petevg> I think that's been fixed with the latest releases, though.
<petevg> gsimondo1: you might want to try "snap install juju" to get the latest juju, if you don't want to mess around with adding the credential.
<magicaltrout> petevg ... he lives
<petevg> magicaltrout: I'm even in your time zone for a little while.
<magicaltrout> jesus
<petevg> Enjoying lovely snowy Dublin.
<magicaltrout> yeah its pretty bleak
<magicaltrout> i'm stuck at home today
<magicaltrout> kids are pissing me off
<petevg> Snow day!
<magicaltrout> kill me
<petevg> magicaltrout: it'll melt eventually. It's supposed to warm up and rain by the weekend :-)
<gsimondo1> petevg: doing juju bootstrap manual/ip.address where ip.address is an address of an LXC container
<gsimondo1> petevg: juju uses that default ubuntu user. I was looking into configuration options to mess with that.
<gsimondo1> petevg: temporary solution is copying ssh keys in the container before running juju bootstrap
<gsimondo1> petevg: currently thinking and looking if there are some conf management tool integrations for this stack - can't find any. not sure if it's a good direction too
<gsimondo1> petevg: also running juju 2.3.4-xenial-amd64. why would I reinstall?
<petevg> gsimondo1: if you're up to date, no need to reinstall.
<petevg> I'm afraid that I don't know manual providers well.
<petevg> Anyone else have a suggestion?
<magicaltrout> i use manual clouds all the time, but i usually have a precanned image kicking around with a key in that becomes the base image
<gsimondo1> magicaltrout: that makes a lot of sense
<gsimondo1> magicaltrout: do you do cross-host lxc networking? using flannel by any chance?
<magicaltrout> afraid not gsimondo1
<petevg> gsimondo1: according to my lunch companions, you are doing the correct thing by copying the key into the image.
<magicaltrout> ircing on your lunch break
<magicaltrout> what a geek
<gsimondo1> haha
<petevg> Since juju doesn't create the machine, it can't drop the key on the machine by itself
<petevg> A geek is what I am :-)
<gsimondo1> petevg: any way to make it use a different user than ubuntu?
<magicaltrout> yeah
<magicaltrout> you just alter the ssh command
<magicaltrout> when adding the machine
<magicaltrout> user@
<gsimondo1> gotcha thanks
<gsimondo1> that's what I did to set it up but just making sure I'm not hacking around it too much
<magicaltrout> na what you're doing sounds pretty standard
<petevg> It sounds like you've got stuff figured out. We're always glad to verify, though :-)
<gsimondo1> I'm more than grateful! :)
<magicaltrout> don't ask petevg he only knows openstack
<magicaltrout> he's been corrupted
<gsimondo1> spoiled lovechild?
<magicaltrout> those were the days... when kevin used to have team members....
<petevg> My mind has been expanded :-p
<magicaltrout> as I told beisner last night
<magicaltrout> openstack is so 2010
<gsimondo1> do I feel a pitch for MaaS coming up?
<magicaltrout> ha
<magicaltrout> i'm only bitter because i have to run CDK on openstack
<magicaltrout> Kubernetes on VMs.....
<magicaltrout> cause thats a great idea
<gsimondo1> I suddenly feel lucky using this LXC cross host networking with questionable amount of testing that went into it
<petevg> Exciting!
<magicaltrout> everyone loves software networks with questionable testing
<magicaltrout> its what petevg lives for
<petevg> That just means I get to have fun writing tests :-)
<petevg> Abs making test frameworks...
<petevg> *and
<petevg> Sometimes, I write non test code, too...
<magicaltrout> I'll refer you at this point to my point about you IRCing at lunch
<magicaltrout> alright folks
<magicaltrout> need a hand here
<magicaltrout> kjackal: or someone
<magicaltrout> i need to set these GC variables
<magicaltrout> and if I update the snaps args file by hand
<magicaltrout> your stuff zaps it
<magicaltrout> anyone got any bright ideas?
<kwmonroe> magicaltrout: what "stuff" zaps it?  you talking about the k8s snaps?
<kwmonroe> if so, ryebot loves the k8s snaps...
<magicaltrout> i dunno if its the snap
<magicaltrout> i suspect its the charm
<stub> cory_fu: https://github.com/stub42/juju-relation-pgsql/pull/1/files if you are interested. The new Endpoint version of the pgsql interface is better in all respects, I think. It's more confusing than it needs to be due to backwards compatibility.
<ryebot> magicaltrout: which snap/config are we talking about?
<magicaltrout> kubelet ryebot
<magicaltrout> the charm config doesn't let me set GC attributes
<kwmonroe> bdx: about the series stuff... we don't set that in the individual charms since all the bigtop charms are xenial-only.  that's defined in https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml.  but yeah, you have to build with charm build --series xenial :)
<magicaltrout> but then when I append the args file for kubelet at some point they get removed
<ryebot> magicaltrout: okay cool - there's a kubelet-extra-args option on kubernetes-workere
<ryebot> worker*
<magicaltrout> aah
<kwmonroe> k8s: almost as easy as big data.
<ryebot> magicaltrout: if you have any trouble with it hit me up and I'll help out
<magicaltrout> oh
<magicaltrout> hold up
<kwmonroe> run ryebot
<magicaltrout> thats the bug we filed last week
<magicaltrout> which impacts this
<ryebot> welp, that sounds like trouble
<ryebot> let me take a look
<magicaltrout> ha
<magicaltrout> you guys put some logic into the charm and filter out flags you don't grep
<magicaltrout> but the GC settings aren't picked up
<magicaltrout> or so the Greek told me
<magicaltrout> i take most things he says with a pinch of salt
<magicaltrout> but this seemed to ring true
<ryebot> magicaltrout: looks like this one? https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/498
<magicaltrout> yeah thats me
<ryebot> magicaltrout: give me a few minutes to dig into this
<magicaltrout> thanks
<ryebot> magicaltrout: Yeah this is a bit of a pickle. Have you looked into the --eviction-hard and --eviction-soft cli options for kubelet?
<ryebot> They're supposed to be the replacements for the deprecated gc flags.
<magicaltrout> well
<magicaltrout> no
<magicaltrout> can i set them? :P
<ryebot> You should be able to, since they're in the --help output :)
<ryebot> Also look at --eviction-minimum-reclaim	
<magicaltrout> alright, i'll give it a go and see if it stops the NIST folks complaining
<magicaltrout> they want max-dead-containers set and container-ttl set
<magicaltrout> but i'll see if i can get away with it
<ryebot> magicaltrout: If it helps, there's a deprecation/replacement table here: https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/
<ryebot> hopefully that's convincing
<magicaltrout> ha
<magicaltrout> yeah i know that inside out these days :P
<magicaltrout> alright thanks ryebot i'll translate what we have and see how i get on
<ryebot> haha sorry to hear that xD
<ryebot> alright good luck!
<kwmonroe> hey cory_fu, does charm build still require layers to be at the top level of a repo?
#juju 2018-03-01
<TheAbsentOne> Hi all, any juju gui developers here? Is it possible to add nodes in the gui that represent small entities (not services, but rather parts of services)? I'm researching the usefulness of abstracting a database/table from the actual server, for example in modelling techniques (like juju)
<TheAbsentOne> So one of the goals I want to achieve is having a nodes that actually pass information from the relations. For example I would create a wordpress node, a mysql node and a pseudo database node. The relation would then be from wordpress to the db and from db to the mysql one (instead of wordpress to mysql).
<gsimondon> What are the LXD specific configurations that conjure-up sets up when deploying CDK?
<gsimondon> I keep on seeing that note but nobody is actually getting into what specifically that is. I do not want to use conjure-up but I'd like to understand what I have to configure manually or using some other tooling.
<gsimondon> I've installed https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml manually by creating lxc containers manually, adding the machines to juju and then deploying the necessary apps and setting permissions.
<gsimondon> However, kubernetes-master is stuck in "Waiting for kube-system pods to start"
<gsimondon> And one kubernetes-worker that I have is stuck in "Waiting for kubelet, kube-proxy to start."
<gsimondon> Of note is that the cni relation between these two components and flannel is established but I do not see an ip address that cni is bound to on neither
<gsimondon> So the fourth step in the CDK spell for conjure-up is dealing with CNI. However, I can see only EC2 implementation. Can any dev shed a light on this please?
<petevg> TheAbsentOne: nodes map directly to charms, so to do what you'd want to do with juju as it currently exists, you'd need to write a charm that acted as an intermediary for the relation data. It might make sense to write it as a subordinate to the database. That would be an unusual use case for a charm, however.
<magicaltrout> TheAbsentOne: the pattern for that currently is that when you connect wordpress to mysql ...
<magicaltrout> alright petevg's geeking
<magicaltrout> i'll leave him to it
<petevg> magicaltrout: that's pretty much all I've got. A node is a charm, so you can write a charm. :-)
<magicaltrout> TheAbsentOne: what would probably make more sense is you extend wordpress or whatever, so when a mysql relation provides connection details or whatever, you could have a hook that adds stuff to the provided database for you
<TheAbsentOne> Yes, as a first implementation I want to make it a subordinate; do you by any chance have a good example of how to implement that? The problem I'm encountering, though, is exactly that (the fact that nodes map onto charms): I kinda want to create nodes that represent something way smaller
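For reference, a subordinate charm is mostly a metadata.yaml declaration; the sketch below is hypothetical (the charm name, endpoint names, and summary are invented), showing the intermediary idea with a `scope: container` relation that deploys it alongside its principal:

```yaml
# metadata.yaml for a hypothetical "db-abstraction" subordinate that sits
# between wordpress and mysql and filters/augments the relation data.
name: db-abstraction
summary: Pseudo-node exposing a single database/table as a relation endpoint
subordinate: true
requires:
  backend:
    interface: mysql
    scope: container   # deploys into the principal's machine (e.g. mysql)
provides:
  db:
    interface: mysql   # wordpress relates to this instead of mysql directly
```

The subordinate's hooks would then rewrite or restrict the relation data as it passes through, which is the "abstraction of a table" behaviour being discussed.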
<TheAbsentOne> petevg: magicaltrout : yes exactly but in the gui you won't be able to visualise that well I think?
<petevg> TheAbsentOne: correct. The GUI implicitly displays relation data on the edges of the graph. I'm not sure how happy that makes you, from a formal perspective, though.
<TheAbsentOne> The goal of my research/dissertation is to actually look for a way to add a layer/functionality to juju so that non-operations people can actually model the services, and to solve the problem of 2 services connecting to the same table
<magicaltrout> i once got shown that gsimondon and i have no idea on github where it is :)
<magicaltrout> petevg might know where the CDK conjure up recipe is stored
<magicaltrout> its inside that somewhere
<TheAbsentOne> petevg: wouldn't it be interesting to have the ability to add something in the gui that is not a real service/charm but an abstraction of an entity of a service/charm? ^^ like a topic or queue for message brokers, or databases/tables for database servers
<gsimondon> magicaltrout: some teams preferring launchpad is confusing the hell out of me
<gsimondon> :D
<magicaltrout> join the club
<magicaltrout> ah hold on
<magicaltrout> progress
<magicaltrout> https://github.com/conjure-up/spells/blob/master/canonical-kubernetes/steps/00_process-providertype/lxd-profile.yaml
<magicaltrout> thats what i got shown gsimondon
<gsimondon> git cloned https://github.com/conjure-up/spells/tree/master/canonical-kubernetes 30 minutes ago and jumped 4 steps ahead instead of opening that folder
<gsimondon> let's see if that fixes the CNI setup
<magicaltrout> ha
<SuneK> I added a new user and granted him superuser, and access to models. However when running juju status, he gets the following error: "ERROR current model for controller mycontroller not found"
<SuneK> Any ideas? juju show-controller works
<petevg> TheAbsentOne: it would definitely be interesting :-) Juju tries to avoid having a deep understanding of the services that it deploys, however. Everything is just charms and relation data. That makes it flexible and powerful, but does make it trickier to do what you're looking to do.
<petevg> gsimondon, magicaltrout: the conjure-up spell for k8s lives here https://github.com/conjure-up/spells/tree/master/canonical-kubernetes  It's definitely more machine friendly than human friendly, but it does document what conjure-up does when it deploys k8s.
<petevg> nm. I'm behind on the conversation. Looks like you already found it!
<kjackal> gsimondon: magicaltrout: here a a short description on the lxd profile https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD
<gsimondon> kjackal: thanks. useful
<kjackal> do you know if there is a way to figure out if the provider is localhost from within a charm?
<petevg> kjackal: if there is a way, I don't know it. I think the best practice might be to put a flag in the config that you reference.
<petevg> Then in the README, you say "if you have condition x, set param x in the config."
<kjackal> :)
<petevg> That way, you're doing something based on an environmental requirement, rather than hard coding provider specific behavior in the charm.
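petevg's suggestion (a config flag the operator sets, rather than the charm sniffing the provider) would look something like the following charm config.yaml fragment; the option name and wording here are hypothetical, not an existing kubernetes-worker option:

```yaml
# Hypothetical config.yaml entry: the operator sets this on localhost/LXD
# clouds instead of the charm trying to detect the provider itself.
options:
  proxy-userspace-mode:
    type: boolean
    default: false
    description: |
      Set to true on LXD/localhost deployments so kube-proxy runs with
      proxy-mode=userspace (see README for when this is required).
```

The README would then document the environmental condition ("if deploying on LXD, set this option"), keeping provider-specific behaviour out of the charm code.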
<petevg> kjackal: do I want to know why you want to know the provider? :-)
<kjackal> there is this issue: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/505 petevg,
<kjackal> on lxd deployments k8s will probably need to set proxy-extra-args="proxy-mode=userspace"
<petevg> kjackal: you've already got the config param, then :-)
<petevg> kjackal: The more I learn about this, the more it sounds like something to handle at the ops/conjure-up level. A charm really doesn't want to be aware of the provider.
<kjackal> I would also prefer to not address this inside the charm, but conjure-up is only one way to deploy bundles
<petevg> kjackal: true. And it is frustrating to try to deploy something and get weird errors and then get told to look in conjure-up. gsimondon can probably relate :-)
<magicaltrout> i for one try and avoid our conjure-up overlords :)
<petevg> kjackal: does it fail to talk to the Cluster IP service in a distinct way? Or is it just a generic connection failure?
<petevg> You could have it switch proxy modes if it failed to connect.
<petevg> But then you might end up with a bug where you erroneously change proxy mode when you have a transient connection failure, or are still getting set up.
<TheAbsentOne> thanks for the answers though petevg and magicaltrout when my formal problem statement is finished I'll post it here as well for the expertise of you guys ^^
<petevg> TheAbsentOne: You're welcome! I'll look forward to the problem statement.
<magicaltrout> don't listen to him
<magicaltrout> he's paid to say that....
<TheAbsentOne> as long as my dissertation leads to something I don't mind forced answers or not ;)
<magicaltrout> what course you doing TheAbsentOne ?
<petevg> I'm actually being paid to be at a meeting right now. IRC is just me having fun :-)
<magicaltrout> don't invest your cash in Canonical
<magicaltrout> they're easily distracted
<petevg> It's multitasking!
 * magicaltrout watches his pennies draining away
<TheAbsentOne> well it's actually my research to obtain my master degree
<magicaltrout> nice
<petevg> Excellent.
<magicaltrout> i failed university 3 times :)
<TheAbsentOne> My origin is kinda the same origin as the tengu-team :)
<magicaltrout> ah those weirdos
<TheAbsentOne> hehe xD
<TheAbsentOne> I suppose most thesis goals/subjects need to be a bit weird sometimes
<magicaltrout> i wouldn't know!
<TheAbsentOne> It's kinda hard getting everything clear and my juju experience wasn't too big either so I hope I'll manage
<TheAbsentOne> but it's good to know there is an IRC ^^
<magicaltrout> indeed part of my pull towards juju was the fact you get access to the poor developers who have to write the code
<magicaltrout> so many open source projects run by corps don't give you that facility and you end up talking to other folks on the outside whilst we all try and peer in :)
<TheAbsentOne> very true, but right now I'm having a hard time finding state-of-the-art examples and approaches, as juju changed/evolved throughout different versions
<gsimondon> kjackal: petevg: the instructions for configuring LXD profile for CDK were all that's needed to get CNI running in a manual setup. just an update
<kjackal> :)
<petevg> Awesome :-)
#juju 2018-03-02
<magicaltrout> got stuck in the snow this morning
<magicaltrout> i'm clearly winning at life
<magicaltrout> kjackal did you get stuck in the snow this morning?
<kjackal> sure, if that pleases you?
<magicaltrout> that doesn't sound like a truthful answer
<kjackal> I haven't checked, but I suspect the freezer is the only place I can find ice around here.
<kjackal> I hear you got a proper winter this time
<magicaltrout> *sob*
<petevg> Nobody on these islands seems to know how to deal with snow. And here I thought that it wasn't that uncommon. magicaltrout: Doctor Who has had several episodes depicting London in the snow. Are you saying that the show isn't a documentary? ;-)
<magicaltrout> it snowed once when i worked in london
<magicaltrout> and it was a disaster cause everyone would get off the tube and compact it to ice
<magicaltrout> then everyone else would fall over
<magicaltrout> this is the first real snow for me in 6 years
<magicaltrout> so no one is equipped for it because its not worth the expenditure for once every 6 years
<petevg> There's a potted tree lying on its side in the hotel courtyard. I think that sums things up pretty succinctly ...
<magicaltrout> https://www.dropbox.com/s/1bvu8tlgblurjr4/20180302_100418_1.gif?dl=0
<magicaltrout> welcome to the end of my road
<gsimondon> having recently moved to Berlin, this windy cold climate with no way to reproduce coziness of at least having some snow outside is the worst aspect of the city
<magicaltrout> i quite like the snow, i just wish it were more regular so the country could still function
<magicaltrout> i'm bored of seeing my bedroom and my daughters desk this week
<zeestrat> Over here in Norway, people complain when there's little snow up at the cabin and now they complain because it's too much (there's a cabin there I swear): https://gfx.nrk.no/mHFTfRnMLLszzsA2SypmhAy7z3oFvUbNUS7bf-r367CA
<magicaltrout> lol
<magicaltrout> no pleasing some people
<rick_h> magicaltrout: I shoveled snow, does that count?
<zeestrat> If anyone needs a good excuse for splurging on a snow blower: https://www.health.harvard.edu/blog/can-shoveling-snow-put-your-heart-at-risk-2017120612887
<cory_fu> kwmonroe: I only just saw your question from the other day.  Charm layers no, base layers yes, unfortunately
<rick_h> zeestrat: except for this line in there :P "it's possible that the connection between snowfall and heart trouble has nothing to do with shoveling snow. Perhaps there is some other snow-related risk factor, such as the use of a snow blower that is the real culprit"
<zeestrat> rick_h: Naughty you for leaving out the latter part of that sentence ", but I'm having a hard time coming up with other plausible explanations. Can you?" ;)
<rick_h> zeestrat: hey, I can't copy all of the text :P
<rick_h> that would be improper use of IRC
<zeestrat> rick_h: Haha! At least you can get a nice fallback career writing clickbaity press releases for research institutions with that attitude :P
<rick_h> zeestrat: how'd you know my secret ambition in life?!
<zeestrat> rick_h: Well, it's obvious where the money is with all those cold fusion plants and everlasting batteries just around the corner.
<rick_h> zeestrat: have to look out for the future
<zeestrat> rick_h: Had a nice trip I hope? Do y'all need any more info from me regarding https://lists.ubuntu.com/archives/juju/2018-February/009887.html?
<rick_h> zeestrat: yes, very nice trip. Did some mountain biking and came home to a ton of snow :(
<rick_h> zeestrat: I had a call about that this morning with some folks that work on the back end and I'm currently putting a doc together identifying the different gaps as they cross different systems.
<rick_h> zeestrat: I'd love to chat with you and tom once that's up about what we can do short term and longer term to smooth out the process
<rick_h> zeestrat: in particular we have a tool in place we use internally called "agent auth" where you create an agent with private/public keys so that services can talk to each other in a trusted way and wonder if that'll work for your needs (even though ideally we'd just generate tokens like a GH setup)
<rick_h> but one exists already to a great extent and one is all new code/etc
<rick_h> I scared zeestrat away
<magicaltrout> or made him cry
<zeestrat> rick_h: No, rather my SO and her impending arrival (I promised to start making dinner...)
<rick_h> zeestrat: ah, well dinner with SO > * (at least that's the line you must repeat"
<rick_h> and wtf...I start with a ( and end with a "
 * rick_h gets moar coffee
<zeestrat> You got at least 4 different chars in there so well done
<magicaltrout> noooo
<magicaltrout> ignore them all
<zeestrat> ^ Sounds good though. Glad to chat and help out further with that. I imagine we're not the only ones that would benefit from ripping out some gnarly CI scripts.
<rick_h> zeestrat: definitely not alone on it
<magicaltrout> rick_h: can you come shovel some snow for me whilst i wait for that day to come?
<magicaltrout> took/forced the kids out for a walk an hour ago
<magicaltrout> they officially hate me
<magicaltrout> I believe i'm winning
<rick_h> magicaltrout: sure, I'll get right out there.
<magicaltrout> thanks
<rick_h> magicaltrout: lol have to know true hate so that you can buy their love back later without any guilt
<magicaltrout> well
<magicaltrout> i remember my parents forcing me out for walks in the snow
<magicaltrout> i call it indirect retribution
<rick_h> "and with that walk over I declare...Extra Screen Time!"
<rick_h> hah, we lived in the south so we were locked out of the house. I'd read books on the front porch.
<magicaltrout> i believe if i locked my kids out of the house these days i'd have social services knocking on my door ;)
<rick_h> yea...scary times
<rick_h> they might have to wave to folks walking through the neighborhood
<magicaltrout> i know right... hey kids, you need to go interact with that stranger
<rick_h> hah, you need to awkwardly be polite with other random adults like we all had to
<bdx> k, sent
#juju 2020-02-24
<timClicks> is it possible to delete a charm from the charm store? https://discourse.jujucharms.com/t/-/2682
<rick_h> timClicks:  have to go through an RT to IS
<timClicks> that's what I thought
<timClicks> thanks for confirming
<rick_h> timClicks:  replied
<evhan> Does juju have any concept of operation timeouts? e.g. when I deploy a change and it takes longer than ${SOME_CONFIGURABLE_DURATION}, the app/unit/whatever is put into error status automatically?
<evhan> Apart from bootstrap and actions, I mean. More a general one for charm operations.
<timClicks> hpidcock: hey btw is the aborting an action's work finished
<hpidcock> timClicks: no, the bulk of it has, just the last pieces will be finished this week
<timClicks> all good (just writing that doc)
<thumper> evhan: no
<thumper> evhan: when the charm can do entirely arbitrary things in hooks, it is virtually impossible to come up with any sane default
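Since Juju itself ships no generic hook timeout (as thumper explains, hooks can do arbitrary work), the usual option is to impose one from outside when driving juju from a script or CI. A minimal Python sketch of such an external watchdog; the command and the timeout values here are illustrative, not a juju API:

```python
import subprocess

def run_with_timeout(cmd, timeout_s):
    """Run a command, treating anything slower than timeout_s as a failure.

    This is an external watchdog, not a Juju feature: Juju deliberately has
    no default operation timeout, so the caller supplies the policy.
    """
    try:
        # check=True also turns a non-zero exit into an exception
        return subprocess.run(cmd, timeout=timeout_s, check=True)
    except subprocess.TimeoutExpired:
        # the child is killed by subprocess.run; caller decides how to react
        raise RuntimeError(f"{cmd[0]} exceeded {timeout_s}s")

# a fast command completes within the window and is returned normally
result = run_with_timeout(["true"], timeout_s=5)
```

The same wrapper works around any CLI invocation (e.g. a deploy-and-wait script) when a configurable duration is wanted.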
<thumper> A few PRs if people are bored...
<thumper> https://github.com/juju/juju/pull/11244
<thumper> https://github.com/juju/juju/pull/11197
<tlm> lgtm thumper added one comment to second PR
<thumper> tlm: thanks
<anastasiamac> wow tlm is bored already?
<tlm> it was a trap
<anastasiamac> :)
<tlm> I never get bored. Just break something
<thumper> tlm: the constant is Superuser, but yes, I agree and will update
<thumper> wow... make format is a mistake
<thumper> spending all its time going through the vendor directory
<achilleasa> stickupkid: is it possible to specify an lxd profile when bootstrapping? I am trying to limit the disk assigned to the containers
<achilleasa> (and having "disk" in a charm lxd profile fails validation)
<stickupkid> achilleasa, modify your default or juju-default
<stickupkid> achilleasa, that's the profile names
<stickupkid> achilleasa, but the answer to your original question, nope
<achilleasa> that won't work on CI where the profiles are shared... hmmm I need to find another way...
<stickupkid> achilleasa, https://github.com/juju/juju/pull/11245
<achilleasa> stickupkid: '-t' is for tee?
<stickupkid> -t for test
<stickupkid> achilleasa, we can't use tee, as we're shell and not bash and we can't do piping
<stickupkid> with failures
<achilleasa> stickupkid: note that you have a static analysis failure for copyright ;-)
<stickupkid> achilleasa, argh, no tput in github
<stickupkid> achilleasa, me https://live.staticflickr.com/3438/4593531893_f67a757fa1_n.jpg
<stickupkid> achilleasa, fixed, turns out TERM isn't set in github, and I can't find what to use instead
<stickupkid> achilleasa, I'd rather use tput etc rather than some weird echo setup... ah well, i'll fix that another day
<achilleasa> stickupkid: what if you export TERM=xterm?
<stickupkid> it hates me
<stickupkid> I'm sure I tried that, but I'll give it another go
<rick_h> stickupkid:  ahhhh, don't feel so unloved :P
<stickupkid> hahaha
<rick_h> and morning party folks
<stickupkid> forcing tty might be an option
<achilleasa> stickupkid: so my workaround for limiting disk space for logs is, stop jujud-*, mount tmpfs as /var/log/juju and restart jujud... let's see if I can get the acceptance test to run faster :D
<rick_h> achilleasa:  cheater lol
<stickupkid> achilleasa, i feel sick and amazed at the same time
<skay> help. I wrote a charm for an app, and up until now I've only deployed the app with one unit. I tried deploying it with two units and it can't handle the db connection properly.
<skay> one of the two units will have the expected state for a bit, then they both eventually settle into a state where they report the db not being connected
<skay> Here's the code that checks for connections and sets a connected state, https://paste.ubuntu.com/p/PvGsNVPN7K/
<skay> could someone help by reviewing that?
<skay> note, I wrote that code 2 years ago
<skay> I went ahead and made a post for it https://discourse.jujucharms.com/t/my-charm-cannot-handle-a-db-relation-when-it-is-deployed-to-multiple-units/2685
<stickupkid> skay, best way to get eyes on it +1
<rick_h> skay:  yea, best thing would be to look at something like it
<skay> rick_h: any recommendations?
<rick_h> skay:  sec thinking...first thought was keystone but that might be a big beast to jump into
<skay> (I'm looking through noisy logs right now)
<skay> how often do unit logs get rotated?
<rick_h> X days or Xmb but can't recall the numbers off the top of my head
<skay> I wonder if that workaround I have in there to use db.master.available instead of db.available is the problem
<skay> I haven't worked on the code in 2 odd years and I have a link to a thread on the forums from back then
<skay> (thank goodness I left comments and have a readme file with stuff in it.)
<rick_h> comments ftw
<rick_h> skay:  what db are you talking to?
<skay> rick_h: my memory of the postgresql charm is very foggy, but when it first joins, it creates a database according to the config settings for the database name and role?
<skay> rick_h: so, that one. and then my charm figures out it is connected and sets up django and runs migrations.
<rick_h> skay:  no, I thought it created a new db, user, and password and sends it back on the relation data
<skay> by any chance do you know a good django charm I could look at?
<rick_h> skay:  so it can be used more than once (one db serves many applications)
<skay> rick_h: ah, so, it uses the new db, user, and password for the app I've related it to
<skay> that's what I meant
<rick_h> skay:  right, and then I think (and here's where it's just what I think) you'd use your charm to pass that info into any units that needed it as peer relation data
<rick_h> skay:  but maybe not, you'd just get the same relation data on each unit
<skay> rick_h: brb, short standup
<rick_h> rgr
<rick_h> skay:  https://jaas.ai/search?requires=pgsql (I'd look at landscape, mailman, maybe vault)
<skay> rick_h: I'll take a look
<skay> rick_h: I was assuming that each unit for the same app would just be able to get the same relation data
<skay> rick_h: the mailman3-web-charm is pretty readable. It checks for leadership in a few instances - e.g. before running django migrations. in other cases it does not. One big difference is that it does not use hooks except in one case, upgrade-charm.
<skay> rick_h: I'm assuming that with multiple units, the only unit specific code that would run would be when checking for leadership
<skay> I'm thinking that my unit may not be reporting state accurately, and that I should rethink how I'm setting/unsetting flags and reporting status
<skay> what's the recommended practice right now. use reactive states rather than hooks?
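A pure-Python model of the multi-unit pattern being discussed: every unit receives the same relation data, but one-time work such as Django migrations is gated on leadership (as the mailman3-web charm does). The function and field names below are illustrative stand-ins, not charms.reactive or the postgresql interface's API:

```python
def handle_db_available(unit_is_leader, relation_data, actions):
    """Each unit configures itself from the shared relation data,
    but only the leader runs one-time work like schema migrations."""
    # every unit sees identical relation data, so all can write their config
    actions.append(("write_db_config", relation_data["host"]))
    if unit_is_leader:
        # leadership check prevents N units racing the same migration
        actions.append(("run_migrations", relation_data["dbname"]))
    return actions

shared = {"host": "10.0.0.2", "dbname": "app"}   # hypothetical relation data
leader_actions = handle_db_available(True, shared, [])
follower_actions = handle_db_available(False, shared, [])
```

Here the leader ends up with both a config write and a migration, while followers only write config, which is the behaviour a flaky flag/status setup can break.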
<dvntstph> howdy do... feel like such a noob, been years since I've used irc
<rick_h> dvntstph:  howdy
<skay> I have a question about the postgresql charm. I deployed it to an environment without altering the default backup_dir.
<skay> and then, when I realized that storage didn't point to the parent directory of that, I changed the config to point to a location in storage
<skay> my question is, should I have to do anything other than change the config? if not, I have to report that it didn't work. backups are still going to the old directory
<rick_h> skay:  hmm, can look at the "submit a bug" in https://jaas.ai/postgresql and see if there's something to it
<skay> rick_h: https://bugs.launchpad.net/postgresql-charm/+bug/1864549
<mup> Bug #1864549: changing backup_dir config did not result in backups going to the new value <PostgreSQL Charm:New> <https://launchpad.net/bugs/1864549>
<babbageclunk> quick review for that juju/utils/tar missing dir bug? https://github.com/juju/utils/pull/309
<anastasiamac> babbageclunk: already done?
<babbageclunk> anastasiamac: thanks!
<anastasiamac> no worries i did it when u proposed ;D
<babbageclunk> hmm, looks like no mergebot watching juju/utils... adding
<anastasiamac> thnx!
<hml> babbageclunk: there should be jobs for that… i remember the pain of adding them.  :-)
<babbageclunk> hml: hmm, you're right! Why aren't they working then? <digs>
<hml> babbageclunk: that, i'm not sure of.
<babbageclunk> maybe cred rolling? seems unlikely though - would have been noticed yesterday
<babbageclunk> setup looks the same as on juju-restore ones that I know were working last week
<hml> babbageclunk: looks liek https://jenkins.juju.canonical.com/view/github/job/github-check-merge-juju-utils/ is running, did you nudge something?
<babbageclunk> hml: yeah, I fired that off - not sure what it's going to try to build, I expected it to ask for some parameters
<hml> babbageclunk: 13 is me, i aborted it.  expected parameters not to start.  :-D
<babbageclunk> hah sounds like we're both trying that
<hml> babbageclunk: i need to run away, so i'll let you have all the fun
<babbageclunk> hml: thanks! ;) catch you tomorrow
<tlm> do we have an example of util function for tests that can check the type of error before I make something ?
<anastasiamac> tlm: m not sure wot u mean... we have smth like c.Assert(err, jc.Satisfies, os.IsNotExit)
<anastasiamac> tlm: is it wot u r after?
<anastasiamac> Exist*
<tlm> it is, cheers anastasiamac
<anastasiamac> \o/
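The gocheck idiom above (`c.Assert(err, jc.Satisfies, os.IsNotExist)`) asserts that an error satisfies a predicate rather than equalling a concrete value. A rough Python equivalent of the same test-helper pattern; the helper names are made up for illustration, not a juju testing API:

```python
def assert_satisfies(value, predicate):
    """Fail unless predicate(value) is true, mirroring gocheck's jc.Satisfies."""
    if not predicate(value):
        name = getattr(predicate, "__name__", repr(predicate))
        raise AssertionError(f"{value!r} does not satisfy {name}")

def is_not_found(err):
    # predicate checks the error's type, not its message or identity
    return isinstance(err, FileNotFoundError)

assert_satisfies(FileNotFoundError(2, "missing"), is_not_found)
```

The advantage over comparing errors directly is that wrapped or differently-worded errors still pass as long as the predicate holds.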
<kelvinliu> wallyworld: got this PR to upgrade podspec v3, could u take a look? thanks https://github.com/juju/juju/pull/11240
<wallyworld> sure
<wallyworld> kelvinliu: +1. i think we should rename the envConfig etc in this branch as well
#juju 2020-02-25
<kelvinliu> wallyworld: what do u mean rename env config, HO?
<wallyworld> ok
<babbageclunk> launchpad down?
<wallyworld> yup
<babbageclunk> I've got the page now, looks like it was just slow
<babbageclunk> I mean, very slow
<anastasiamac> yes.. very very slow
<tlm> slow here also
<wallyworld> kelvinliu: i merged your ppa packaging branch
<kelvinliu> ty, so i probably don't have permission to merge.
<wallyworld> maybe yeah, not 100% sure off hand
<hpidcock> wallyworld: thanks for the approval on the 2.7 merge
 * thumper screams into the ether
 * thumper digs for old PR to put back something
<wallyworld> tlm: would prefer not to export the k8s client struct. the pattern we use is to declare at the point of use an interface that represents the exposed methods we want
<wallyworld> make testing much cleaner and easier
<wallyworld> reduces explicit coupling between layers
<tlm> I have removed most of those couplings this morning for testing, may need to discuss this more though
<wallyworld> ok, i'll wait for pr to be fully ready before poking my nose in
<tlm> no, feedback is great. I realised that mistake this morning. I still have the export there so may need a hand to remove it later this afternoon as there are three cases it's still solving
<thumper> two PRs up for review: https://github.com/juju/juju/pull/11248 and https://github.com/juju/juju/pull/11247
<thumper> I know they'll clash, but I'll deal with that
<thumper> these are needed for new model summary watcher for dashboard
 * thumper EODs
<achilleasa> jam: thanks for the feedback. I will push extra commits to address your points
<jam> achilleasa: sorry it took so long
<stickupkid> fixing the issue where integration tests fail on exit
<achilleasa> jam: pushed additional commits and answered some of the comments; can you please take another look?
<achilleasa> can someone point me at the code that cleans up the agents/unit-XYZ-W folder once the unit is removed?
<hml> Anyone know what Commands are in terms of the uniter resolving hooks?
<stickupkid> manadart, rick_h got microstack working in a vm
<stickupkid> muhahaha
<manadart> stickupkid: Noice.
<rick_h> stickupkid:  with vm provisioning?
<rick_h> stickupkid:  the nested virt I was expecting to fail to get VMs in the microstack
<stickupkid> rick_h, we'll soon see
<rick_h> stickupkid:  ah ok gotcha
<rick_h> stickupkid:  good stuff on progress
<stickupkid> rick_h, going to launch a cirros image and see what happens
<stickupkid> works for me
<stickupkid> hey it's fast if you give it 18cores
<rick_h> stickupkid:  lol
<stickupkid> right i'm a bit shocked, wasn't expecting it to work... now how to make this usable
<rick_h> stickupkid:  still might be worth keeping an eye on the snapshot idea (though maybe a lxd snapshot) to not have to wait for it to install/setup each time
<rick_h> stickupkid:  but optimization I guess at that point
<manadart> stickupkid: https://github.com/juju/juju/pull/11249
<stickupkid> manadart, why not 2.7?
<manadart> stickupkid: It goes on top of https://github.com/juju/juju/pull/11208.
<stickupkid> manadart, fair
<skay> ouch. I am grepping through the postgresql-charm src and cannot find where it sets any flags other than postgresql.upgrade.systemd. where are db.master.available and db.connected set?
<skay> oh, it's in the interface codebase
<achilleasa> rick_h: Is the reboot detection logic meaningful for CAAS workloads? I think not but looking for +1s
<rick_h> achilleasa:  no, since pods are expected to die/come back all the time not sure it's in juju's hands but more the k8s hands as the official scheduler of things
<achilleasa> rick_h: my thoughts exactly.
<rick_h> I'd think juju doing things would mess with k8s scheduler expectations
<achilleasa> to answer my previous question, the agent data gets purged via a call to something that implements worker/deployer/deployer.go:47
<achilleasa> of course it would be called "RecallUnit" to make searching for it harder :D
<achilleasa> (the actual implementation lives in simple.go in the same package)
<rick_h> lol "RecallUnit" that's new
<rick_h> "come home unit, come home!"
<achilleasa> stickupkid: is there a place to put common dependencies for workers?
<achilleasa> this is for something that will be exposed via the dep engine but don't want to put in workers/
<stickupkid> achilleasa, not sure tbh, not that I'm aware of
<achilleasa> stickupkid: got a few min for a quick ho?
<stickupkid> sure
<anastasiamac> babbageclunk: there is a PR for juju-restore to stop/start agent
<babbageclunk> anastasiamac: oops, missed that - looking now
<anastasiamac> babbageclunk: \o/
#juju 2020-02-26
<kelvinliu> wallyworld: got 1min to discuss files field for configmap/secret?
<wallyworld> kelvinliu: just doing a discourse reply, give me a few minutes
<kelvinliu> yup
<wallyworld> kelvinliu: free now
<kelvinliu> stdup?
<babbageclunk> before I roll my own, does anyone know a good way to read a series of bson docs from a reader or []byte? Seems fiddly (read length, slice off that many bytes, prepend length bytes, pass to unmarshal, rinse, repeat), but I can't see anything in mgo.v2/bson to do it for me.
<babbageclunk> I guess there's nothing stopping me from using a different bson library that does it
<babbageclunk> actually I don't need to unmarshal the docs, just count them
<babbageclunk> so I guess I'll just do it.
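Counting concatenated BSON documents only needs the framing babbageclunk describes: each document opens with a little-endian int32 holding its own total length (prefix included), so you can hop from document to document without unmarshalling. A sketch of that loop, in Python rather than the Go used in juju/utils:

```python
import struct

def count_bson_docs(data: bytes) -> int:
    """Count BSON documents in a buffer without unmarshalling them.

    Every BSON document starts with a little-endian int32 giving its total
    size (including the four length bytes themselves)."""
    count = offset = 0
    while offset < len(data):
        if offset + 4 > len(data):
            raise ValueError("truncated length prefix")
        (length,) = struct.unpack_from("<i", data, offset)
        # a valid document is at least 5 bytes: 4 length bytes + NUL terminator
        if length < 5 or offset + length > len(data):
            raise ValueError("corrupt or truncated document")
        offset += length
        count += 1
    return count

# smallest valid BSON document: length 5 = 4 prefix bytes + 1 NUL terminator
empty_doc = b"\x05\x00\x00\x00\x00"
```

For example, `count_bson_docs(empty_doc * 3)` returns 3, and a buffer ending mid-document raises instead of miscounting.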
<anastasiamac> i wonder if something similar was done for upgrading from juju1.x to juju 2.x?...
<babbageclunk> I don't think so - I certainly didn't
<wallyworld> kelvinliu: just checking, is the PR ready to look at or do you need to do some more to it?
<kelvinliu> wallyworld: yes, It's ready to rv, im filling the QA steps now
<wallyworld> ok, will look
<wallyworld> kelvinliu: looks very nice, ty
<kelvinliu> wallyworld:  ty for rv, im doing last bit change, remove the `Image` in v3.
<kelvinliu> so only ImageDetails is supported in v3
<wallyworld> kelvinliu: i think we should first check
<wallyworld> am am pretty sure a few charms still use it
<wallyworld> can we delay that change till we talk to folks next week
<kelvinliu> i thought we had decided to deprecate Image from v2
<kelvinliu> ok
<wallyworld> kelvinliu: yeah, i think we just need to be cautious with this one
<wallyworld> we can talk to ken and the osm folks
<wallyworld> who will be most impacted
<kelvinliu> sure
<achilleasa> jam: I have inlined (haven't pushed the commit yet) the annotate helpers as you suggested but I am not sure how to change (or if its worth changing) this block: https://github.com/juju/juju/blob/3fcf0a1ffea4fdf2ef6f36c0ef0e2d143d12dee6/state/unit.go#L1497-L1515 any ideas?
<stickupkid> fun morning this morning, trying to work out how multipass wasn't working after a restarting - hit this issue: https://github.com/canonical/multipass/issues/1304
<jam> achilleasa: if the 'helper' is significant in size, then its fine to pull out. I was mostly arguing against the one-liner that actually obfuscates what is going on
<jam> achilleasa: so I'd keep 1497 as a helper
<achilleasa> jam: keep it inside the func or extract it out? It's only used by that method so I would prefer to keep it where it currently is
<achilleasa> jam: I pushed the commit that inlines all annotations except L1497. Can you take another look?
<rick_h> morning party folks
<manadart> stickupkid: Got time to review https://github.com/juju/juju/pull/11251?
<stickupkid> manadart, yarp
<manadart> stickupkid: I just realised since we're not passing it, we don't need the channel. I will make it a bool again.
<hml> stickupkid: https://github.com/juju/juju/pull/11252 quick pr review pls?
<achilleasa> hml: can you help me with the creation of an upgrade step?
<hml> achilleasa:  sure… i have mtg with rick so 1/2 hour?
<achilleasa> sure thing. ping when you have some time
<hml> achilleasa: back - jump in Daily?
<achilleasa> hml: omw
<dvntstph_> anyone recently tried to bootstrap a controller using letsencrypt auto dns
<dvntstph_> I see they're no longer getting signed
<dvntstph_> letsencrypt pushing back saying that ACMEV1 is no longer supported
<rick_h> dvntstph_:  looks like they deprecated how we use it, we'll have to update it looks like
<dvntstph_> :,(
<dvntstph_> thanks rick. wait for next release?
<rick_h> dvntstph_:  is there a bug? If you have a sec and can file it that'd be great
<rick_h> and yea, we'll have to fix it/release a fix unfortunately
<dvntstph_> yeah I believe it could be tagged as a bug. According to letsencrypt docs they will no longer honour certs issued from the ACMEv1 api
<dvntstph_> newly bootstrapped controllers fail to get the certificate signed as a result
<dvntstph_> how does canonical bug tracking work. do you log an issue on git or where?
<dvntstph_> ok opened bug. always second guess myself whether I've created them correctly
<stickupkid> manadart, I think I've got most of the process sorted, need to work on the actual add space test https://github.com/juju/juju/pull/11253/files#diff-c92860a8a8efa87fa5be705d9bc71c80
#juju 2020-02-27
<wallyworld> kelvinliu: babbageclunk: forgot to check, it shouldn't be too hard to support tag constraints in vsphere i hope. main thing is to be able to query the nodes for what tags they have i think?
<wallyworld> ie as per the discourse discussion
<kelvinliu> tags is marked as validator.RegisterUnsupported(unsupportedConstraints), i'm not sure why we decided to not support it
<babbageclunk> wallyworld: just reading about tag handling in the vsphere api, I haven't seen it before
<wallyworld> kelvinliu: i'm guessing because we didn't yet implement the api calls to ask vsphere about tags
<wallyworld> i reckon we could do something (he says all handwavy)
<kelvinliu> currently, we just fetch all instances then filter tags in  metadata field from client side, https://github.com/juju/juju/blob/develop/provider/vsphere/environ_instance.go#L88
<wallyworld> so we could do that server side too and support tag placement
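The client-side approach kelvinliu links (fetch all instances, then match on metadata tags) reduces to a set-containment filter. A provider-agnostic Python sketch of that check; the dict shape and field names are assumptions for illustration, not the vsphere client's types:

```python
def filter_by_tags(instances, required_tags):
    """Keep only instances whose metadata carries every required tag."""
    required = set(required_tags)
    return [inst for inst in instances
            if required.issubset(inst.get("tags", ()))]

# hypothetical instance records as the provider might return them
instances = [
    {"id": "vm-1", "tags": {"juju", "gpu"}},
    {"id": "vm-2", "tags": {"juju"}},
]
gpu_nodes = filter_by_tags(instances, ["gpu"])  # only vm-1 matches
```

Doing the same match server side (as suggested above) would avoid fetching every instance just to discard most of them.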
<wallyworld> hpidcock: in the for loop to handle the process cancel/kill/exit, i think we'd want to cap the amount of times we retry kill and not get notification of process exit? with a suitable user facing message surfaced
<wallyworld> kelvinliu: tlm: hpidcock: i have a stateful set issue if any of you guys have time for a HO in standup
<tlm> ok
<kelvinliu> yep
<kelvinliu> Updates to the volume claim template are not currently permitted. A feature request to permit this is open at #69041
<mup> Bug #69041: Beagle: German translation incomplete <beagle (Ubuntu):Fix Released by ubuntu-l10n-de> <https://launchpad.net/bugs/69041>
<kelvinliu> wallyworld: tlm https://github.com/kubernetes/kubernetes/issues/85955
<skay> what do I do when my unit thinks there is a relation when there isn't? https://paste.ubuntu.com/p/w5b8Hw82tZ/
<rick_h> skay:  um don't know? and it skipped it so all good? :)
<skay> rick_h: no, juju status says that the agent is in error
<skay> I'm trying to figure out why and that's a suspicious thing in the logs
<skay> I restarted the service for that unit and the service for the machine, btw
<rick_h> skay:  oh hmmm, what does juju status --relations show? and are there > 1 units (peer relation?)
<skay> rick_h: there's only 1 postgresql unit. https://paste.ubuntu.com/p/t5kBzghkg2/
<skay> I did recently remove a relation I no longer needed. Previously there was pgbouncer. I removed it and connected things to postgresql directly
<skay> and since this is my test environment, I take down units that are connected to it willy-nilly and then spin up new units. sometimes apps
<skay> I like how this unit is on machine 101. it is juju failed 101. I should learn some lessons from this
<skay> if I had more ranks in dadjoke I would be able to make a good-worse joke than that
<skay> rick_h: do you have any troubleshooting tips for this? I am at a loss.
<rick_h> skay:  sorry, getting pulled in a few directions atm and we've got a bunch of folks out of the office today
<rick_h> skay:  no, I mean I would mark it --resolved and try to see if you can get past it
<rick_h> I'm not sure what is up with "skipping" but then an error
<skay> rick_h: I tried marking it as resolved. I can ask again later when things are less hectic
<skay> it's not in an error state, it's 'active' and 'failed'
<skay> I just noticed the yaml status has a better message. 'message: resolver loop error'
<rick_h> skay:  oh sorry, I thought you mentioned it was in an agent error
<rick_h> skay:  hmmm, can you paste more of the unit log on there then please?
<skay> brb standup
<rick_h> k
<skay> (the only thing I see in the log is the thing I pasted. I'm tailing it. I'll restart it to see if it has different output after)
<rick_h> skay:  ok
<skay> rick_h: I've been tailing the postgres unit's log for a while now and those two lines are the only thing that show up.
<rick_h> achilleasa:  do you have any ideas around this agent error skay is seeing? https://paste.ubuntu.com/p/w5b8Hw82tZ/
<skay> achilleasa: here's a snippit from the status. the juju-status messages is 'resolver loop error' https://paste.ubuntu.com/p/GxVq58pH3z/
<achilleasa> skay: looking
<achilleasa> skay: which juju version are you using on the controller?
<skay> achilleasa: 2.6.10
<skay> they will be upgrading the controller soon
<achilleasa> skay rick_h: so there are a couple of places in (https://github.com/juju/juju/blob/2.6/worker/uniter/relation/relations.go) where this error is raised but there is not enough context to figure out which one is it (best guess is L383 or L400)
<achilleasa> maybe you could try to remove-unit --force to get rid of the stuck unit and spin up a new one?
<skay> achilleasa: ouch. that's my postgresql unit.  it's not extremely painful since i don't care about the database in this environment, but if it happens in a real environment it would be painful
<achilleasa> skay: can you share a mongo dump with me? maybe I can track down which relation name is associated with the 303 ID
<skay> achilleasa: I do not have access to the controller. would I need that? if I don't, then are there docs on how to get a dump?
<achilleasa> skay: you might be able to use "juju create-backup" (see https://jaas.ai/docs/controller-backups)
<hml> achilleasa: i've reviewed 11255 and added comments.  still have qa to do
<hml> achilleasa:  one is more of an observation and question, rather than a request to change as it's a set pattern in the code there.  :-/
<achilleasa> hml: I keep messing up the import stanzas... :-(
<hml> achilleasa:  doesn't help that the static analysis job isn't working correctly for imports either.
<hml> mine get minimized, so i don't see them until i push the code up to GH
<achilleasa> I will clean up the commits and force-push the right version
<achilleasa> hml: I think I fixed the stanza issues; can you take another look?
<hml> achilleasa:  sure
<hml> achilleasa:  the qa isn't working for me.  ho?
<achilleasa> hml: omw
<hml> achilleasa:  https://pastebin.canonical.com/p/kS82HDydk7/ model errors
<hml> achilleasa:  https://pastebin.canonical.com/p/BQxgqD7zqC/ controller errors
<gnuoy> I'm after some advice if anyone has a sec. I have a charm with a hook erroring with:
<gnuoy> 2020-02-27 17:49:12 ERROR juju.worker.uniter.operation runhook.go:132 hook "ceph-client-relation-changed" failed: could not write settings from "ceph-client-relation-changed" to relation 0: permission denied
<gnuoy> if I resolve the hook it works
<gnuoy> sorry, I meant:
<gnuoy> if I resolve the hook using a debug-hook session the error goes away
<rick_h> gnuoy:  ooh, I think achilleasa just fixed this one
<gnuoy> if I do it without a debug-hook session it persists
<rick_h> gnuoy:  single unit?
<gnuoy> always the non-leader of a two unit deploy in my case
<rick_h> gnuoy:  hmmm yea non-leaders can't write leader data. Sounds like a charm logic problem then
<gnuoy> right, but why does the debug-hook session make a difference ?
<rick_h> gnuoy:  the charm should be checking if it's the leader before trying to write the data
<gnuoy> yep, it should, and I believe it is
<rick_h> gnuoy:  ok so the question is why does it not do it with debug-hooks?
<gnuoy> yes. Full disclosure I'm using the operator framework and the bug could lie in there. but its hard to track down when using debug-hooks seems to make it go away
<gnuoy> I'm ssh'd onto the unit and happily resolving the hook and reproducing the error
<gnuoy> rick_h, I don't want to waste your time, the bug is almost certainly outside of juju. Just wonder if anything springs to mind about the difference in hook env when using debug-hooks ?
<rick_h> gnuoy:  thinking but confused tbh...if you're ssh'd to the unit I would think you'd not have hook context and have issues
<gnuoy> rick_h, oh, I'm ssh'd in just to observe what's happening, not executing the hook from the ssh session
<rick_h> debug-hooks sets up the hook context, but I can't think of why it would affect leader data type stuff...unless maybe it's working around a check somehow?
<gnuoy> I wonder if this is a focal'ism
<hml> gnuoy: which version of juju?
<gnuoy> I tried with 2.8.1 and 2.7.3
<hml> gnuoy:  there should be no difference between the context during debug-hook and the regular hook execute then.
<hml> previously the only diff i know of was an env var not set with debug hook
<gnuoy> hmm, ok, I must be doing something truly stupid
<hml> but nothing to do with leadership
<hml> gnuoy: it's always possible something is off with not debug-hooks.  definitely shouldn't be seeing a diff in hook execute
<gnuoy> fwiw https://paste.ubuntu.com/p/5df6Yw6XFH/
<hml> gnuoy:  are there any errors in the juju debug-log for that model?
<gnuoy> hml, does juju just use the exit code of the hook to determine if the hook worked ?
<hml> gnuoy: yes
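The contract hml confirms is simply process exit status: the uniter marks a hook failed exactly when its process exits non-zero, regardless of what it printed. A quick illustration with stand-in hook bodies (these are throwaway shell one-liners, not real charm hooks):

```python
import subprocess

# Juju treats a hook as failed iff its process exits non-zero; stdout and
# stderr content do not matter. Simulate a passing and a failing hook run:
ok = subprocess.run(["sh", "-c", "echo configured; exit 0"],
                    capture_output=True)
bad = subprocess.run(["sh", "-c", "echo oops >&2; exit 1"],
                     capture_output=True)

assert ok.returncode == 0   # unit would proceed normally
assert bad.returncode == 1  # unit would enter a "hook failed" state
```

This is why a hook that logs an error but still exits 0 (as in the debug-hooks mystery above) looks successful to Juju.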
<gnuoy> hml, I'm going to stop using up your time and go do some more digging, thanks for the ideas
<hml> gnuoy:  have fun.  i need to lunch, but will be back later
<tlm> morning
<rick_h> morning tlm
#juju 2020-02-28
<wallyworld_> kelvinliu: if you have a chance at some stage, here's that PR https://github.com/juju/juju/pull/11258
<kelvinliu> yep
<wallyworld_> kelvinliu: sorry, i didin't see your comments before i hit merge
<wallyworld_> i thought it had been approved
<wallyworld_> i managed to hit abort just in time
<kelvinliu> wallyworld_: nws, just a few minor suggestions
<hml> can i pls get a quick review of https://github.com/juju/juju/pull/11256
<achilleasa> hml: I have rebased 11255. Can you take another look at the pod-spec commits and verify that the k8s QA steps now work for you?
<hml> achilleasa: sure
<hml> achilleasa:  working on the qa - for some reason i'm pulling down the old code.  instead of the new…
<achilleasa> make install followed by make for the operator before bootstrapping seems to do the trick for me
<hml> achilleasa:  not.. the actual code, i do a git log after i pull down the pr and get commits from feb 26 and feb 17 instead of feb 28
<achilleasa> hml: those are the right dates. I had the PR on the backburner until part 1 landed
<hml> achilleasa:  hrm… okay.  checking commit shas right now.
<achilleasa> so it's just rebase on top of different head; it preserves the original commit dates unless you explicitly override
<achilleasa> you should see dcf42c9 as the head for my branch locally
<hml> achilleasa:  the shas are okay.  trying the iaas charm again
<achilleasa> manadart: where can I find the cleanup script for manual machines?
<manadart> achilleasa: I don't think there is one.
<achilleasa> manadart: it's the one where we clean up the juju data so the machine can be reprovisioned. IIRC you patched it recently
<manadart> achilleasa: Ah, that one. https://github.com/juju/juju/blob/develop/scripts/removejujuservices.bash
<achilleasa> manadart: thanks
<manadart> This is also installed into /usr/sbin IIRC, via a cloud init during provisioning.
<achilleasa> manadart: is it overwritten when upgrading?
<manadart> achilleasa: No, there is no associated upgrade step.
<achilleasa> wonder whether it's worth adding one... probably should
<achilleasa> though if you get rid of the uniter state, even if you reprovision the start hook won't fire because the unit will not have been started... it's probably safe to skip the upgrade
<hml> achilleasa:  qa was good - approved
<achilleasa> thanks!
