#juju 2012-05-14
<SpamapS> james_w: any service period, but it only monitors SSH and PING right now
<SpamapS> james_w: I am working out host groups, and then I will flesh out some ideas for monitoring things more deeply.
<james_w> SpamapS, ah, that sort of ping :-)
<james_w> SpamapS, I was talking to marcoceppi and m_3 about deeper monitoring last week
<james_w> I'm very interested in getting that working
<SpamapS> james_w: I was thinking that to really be effective there needs to be some abstract ways to say "Monitor me like this"
<SpamapS> james_w: so that we don't lock in to nagios
<james_w> SpamapS, yeah, to a point
<james_w> I think an interface for "here's a nagios plugin, run it" is important too
<SpamapS> meh, thats what the nrpe subordinate is for
<james_w> abstract is fantastic, but we can't abstract everything
<SpamapS> oh you want to feed the whole thing back to nagios?
<james_w> then my use case might be solved already :-(
<james_w> err :-)
<james_w> we have some application-specific plugins that we would want to expose if the admin was using nagios
<SpamapS> no I misunderstood who would be handing out plugins
<SpamapS> but to that point, a 'myapp-nagios-plugins' subordinate *would* work
<james_w> "run this plugin every 5 minutes if the admin wants to monitor"
<james_w> yeah, I think subordinates will be the answer
<james_w> with some nagios-plugin interface maybe
<james_w> I don't know nagios well enough
<james_w> I've never used it, just written plugins
<SpamapS> I'm undergoing a personal refresher .. it was the core of things for me for a long time, but I got it down to the point where I didn't have to do anything with it ;)
<SpamapS> james_w: I think its fair to have a 'monitoring' interface which passes back abstract plugin names.. and then nagios will just make a best effort to monitor what it can.. and you can extend its capabilities with a subordinate
<james_w> that sounds reasonable
<SpamapS> so a website would say things like 'http;url=foo http;url=foo2'
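The abstract "monitor me like this" spec SpamapS sketches could be parsed along these lines — a hypothetical sketch assuming space-separated `type;key=value` tokens taken from his example, not an actual juju interface:

```shell
# Hypothetical sketch: split an abstract monitoring spec into per-check
# entries. The "type;key=value" token format is an assumption drawn from
# the example in the discussion, not a real juju relation interface.
spec='http;url=foo http;url=foo2'
for entry in $spec; do
    kind=${entry%%;*}   # check type, e.g. "http"
    args=${entry#*;}    # check arguments, e.g. "url=foo"
    echo "define check: $kind ($args)"
done
```

The monitoring charm (nagios here, but anything implementing the interface) would then map each `kind` to whatever plugin it has available, making a best effort as described above.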
<SpamapS> alright, my nagios charm now groups all machines into a hostgroup named after the service
<SpamapS> Ok since the old nagios charm was basically completely broken, just went ahead and pushed mine into the store.
<SpamapS> ihashacks: nagios is now infinitely more useful.. though still needs a lot of work to be able to do full monitoring.
<imbrandon> morning all
<melmoth> how to investigate what is going on if juju status shows a mysql service in "pending" state for more than 30 min after deploying it (lxc) ?
<marcoceppi> melmoth: it'd be a good idea to see if the master container has been created or not. If you're on a slower internet connection it can take some time to create the first image
<melmoth> i can connect to the console of the mysql service with lxc-console
<marcoceppi> there's a machine-agent.log file in your data-dir path for your local provider
<melmoth> so yes, it is
<marcoceppi> melmoth: well, what does /var/log/juju/mysql-service-0.log show?
<koolhead17> melmoth, aah thats what i was writing :)
<koolhead17> marcoceppi, hey there
<melmoth> hello koolhead17 :)
 * koolhead17 has lost his sleep
<marcoceppi> o/ koolhead17
<koolhead17> :P
<melmoth> http://pastebin.com/8bQ6dq7R
<melmoth> (the log file is elsewhere, but it matches this machine)
<melmoth> last time i checked when i had a problem, the last line was this "lxc_start - invalid pid for SIGCHLD" stuff
<melmoth> so, i try to destroy my environment, starting again
<melmoth> still stuck
<melmoth> (to summarise, i cannot launch a mysql service on precise, lxc. The service stays in the pending state)
<melmoth> any help appreciated, because i have no idea what to change or where to look
<m_3> melmoth: wow, haven't seen that error for lxc
<m_3> melmoth: I almost always recommend clearing your lxc cache when there are problems
<melmoth> i did
<m_3> melmoth: `sudo rm -Rf /var/cache/lxc/`
<melmoth> i'm now retrying with the ppa
<m_3> melmoth: ah, yeah... check juju version, and your libvirtd group membership
<melmoth> i m in libvirtd group
<melmoth> juju was the latest official one available in precise, now trying with 0.5+bzr535-1juju5~precise1
<m_3> melmoth: after that it might be networking... `virsh net-list --all` should show only a default network that's active
<melmoth> which i think is a new one (got a juju update after adding the ppa for charm-tools)
<m_3> melmoth: but give it time to try to come up with the new version
<melmoth> well, it show another network that is active, and i sort of need it for other purpose
<m_3> melmoth: make sure the `default` libvirt network is on 192.168.122.0 (you can do this with `ps auwx | grep dnsmasq`)
<m_3> melmoth: also that 192.168.122.0 is on `virbr0` (via `ip addr show`)
<m_3> unfortunately, that's pretty picky at the moment (known problem... bugs are filed)
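m_3's checks can be condensed into a small script. This sketch runs against captured sample output so the parsing is visible offline; on a real host you would feed in the actual `ip addr show virbr0` output instead of the illustrative text below:

```shell
# Sketch of the local-provider network sanity check: verify that virbr0
# carries the 192.168.122.0/24 default libvirt network. The $sample text is
# illustrative captured output; replace with: sample=$(ip addr show virbr0)
sample='5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0'
subnet=$(printf '%s\n' "$sample" | awk '/inet /{print $2}')
case $subnet in
    192.168.122.*) echo "default libvirt network looks OK for juju" ;;
    *)             echo "WARNING: virbr0 is not on 192.168.122.0/24" ;;
esac
```

`virsh net-list --all` and `ps auwx | grep dnsmasq`, as suggested above, cover the remaining checks (only the `default` network active, dnsmasq serving that range).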
<melmoth> seems ok
<melmoth> the only stuff i changed on the default network is to disable dhcp on it
<melmoth> (which i also need for some other purpose)
<m_3> melmoth: hmmmm.... well that will be a problem for the juju local provider
<m_3> melmoth: two possibilities, depending on your setup:
<m_3> melmoth: a.) move your custom stuff up to a different libvirt network so juju can have `default/virbr0/192.168.122.0`
<m_3> (I know that totally sucks)
<m_3> melmoth: b.) perhaps you can spin up the juju local provider in a VM on that machine?
<balloons> is there a video walkthrough that explains juju at all? Something like setting up juju and deploying wordpress? I don't want a long video, just something to "show" the power and purpose of juju
<m_3> melmoth: b might be easier... it won't necessarily perform as well as on the bare metal, but... it'd at least get up
<melmoth> i think i might give b a try if i fail with the current test
 * balloons is trying to share juju with others
<melmoth> so at least i m sure i m using all vanilla stuff and can break everything without caring about my desktop :)
<m_3> balloons: yeah, I think we've got a good one up on juju.ubuntu.com... I'd have to look
<m_3> melmoth: yeah, sorry... we're pushing to get this particular problem with lxc fixed
<balloons> m_3, ahh.. thanks, I see the demo links @ the top
<balloons> hmm, they do all seem to be older though with a focus on ensemble
<m_3> melmoth: b should be fine if you can get the vm to think it's using something called `virbr0`
<nathwill> you can change the network device in the lxc container config...
<m_3> balloons: yes, so much has changed recently with the charm store landing too... it's a much easier story... `juju deploy mysql` deploys directly from the store instead of having to download charms and specify local repos
<m_3> nathwill: can you answer some of the askubuntu questions on this?  haven't a clue what to recommend as the best workaround for when you have existing libvirt networks
<balloons> m_3, yes.. sounds like an opportunity for someone to update :-)
<balloons> thanks
<m_3> nathwill: I think the code's actually looking for the literal `virbr0` though... and not the lxc default :(
<nathwill> right
<nathwill> if you want to change that, you can
<m_3> balloons: oh please!  that'd be awesome
<nathwill> in the container config... and probably in the template container to affect new units
<nathwill> though i haven't tested that
<nathwill> but i went through doing the same thing in my lxc instances to make sure they're all on the same network (192. vs 10.)
<m_3> nathwill: imo correct fix is for juju cli to a.) use lxc networking instead of libvirt (in precise), and b.) add a new dedicated lxc network for juju that doesn't conflict with any address spaces in use at the time
<nathwill> m_3, i agree that would be spiffy
<m_3> nathwill: cool... let's capture that b/c I think lots of people would benefit from your tests
<m_3> imo currently the biggest issue people have with juju... and it's often during their _first_ experience with it... it's a huge priority
 * m_3 coffee
<LemU_> Hi! I would need help... I was following this guide https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Deploying_Ubuntu_Cloud_Infrastructure_with_Juju and I'm almost at the end of the guide, but cloud-publish-tarball ubuntu-11.10-beta1-server-cloudimg-amd64.tar.gz images isn't working.
<LemU_> It just gives me: Unable to run euca--describe-images.  Is environment for euca- set up?
<negronjl> 'morning all
<m_3> negronjl: morning
<negronjl> m_3: 'morning ... Ubuflu in full swing over here :/
<m_3> LemU_: looking at the doc now... euca-tools is pretty common for all kinds of ec2-api-based clouds
<m_3> negronjl: ouch
<m_3> bummer man
<nyr0x> hey, what's the purpose of 'machine 0'? i figured out that it is some kind of management node of the environment. i want to deploy a local environment with orchestra and i have 10 compute nodes that juju will take care of. but i don't want to lose one of the nodes doing almost nothing... so how much power does this 'machine 0' need? could i setup a small vm handling this task?
<LemU_> m_3 yeah I know, I was able to use euca-tools when I manually installed an openstack cloud some time ago, and I thought I understood how it works, but it seems I'm missing something..
<m_3> LemU_: I'd make sure you're following MaaS-based docs... the docs might have changed over the past month with 12.04 landing with Maas
<m_3> nyr0x: juju refers to `machine 0` as the bootstrap node... it runs zookeeper and some agents that juju depends on
<LemU_> m_3 I have everything else working here, I just cant add images on my cloud.
<m_3> nyr0x: one thing to watch out for... `orchestra` has been deprecated and the juju bare-metal provider is called MaaS (Metal as a Service) now
<LemU_> or to be more specific I can't reach my euca tools or something like that
<m_3> LemU_: euca-upload-bundle is out-of-band from juju... and my experience with it is stale.  perhaps somebody on #ubuntu-server or #ubuntu-cloud can help debug?
<LemU_> m_3 Ok, I'll try that, thanks for help !
<m_3> nyr0x: really small VM would work great for the juju bootstrap node btw
<m_3> nyr0x: the only real requirement is that zookeeper is java-based... maybe one-cpu, 256M?  maybe 512M? dunno... you won't be stressing it with 10 nodes
<m_3> LemU_: sure thing
<nyr0x> m_3: thx, then i can use the server handling the login node etc. (playing around to deploy a hpc-cluster)
<m_3> nyr0x: cool... keep us posted about progress
<nyr0x> m_3: once everything is running i will publish a series of blogposts about the setup
<m_3> nyr0x: cool... check out the minimum requirements for the `maas server` in the docs... it'll need cobbler, dhcpd, zookeeper, juju agents.  MaaS is much easier to get working than the old `orchestra` docs
<m_3> nyr0x: iirc it's a bootup option from the standard server iso
<SpamapS> m_3: 512M minimum.. zk keeps *everything* in RAM
<SpamapS> nyr0x: ^^
<m_3> SpamapS: thanks
<SpamapS> I think the t1.micro is a decent guide for what works as the tiniest possible node 0
<SpamapS> m_3: back home and ready to rock?
<m_3> SpamapS: only time I've looked was 130M used by zk for quite a bit more nodes... but then elbow room for other processes would imply at least 512 to be safe
<m_3> SpamapS: well back home at least :)... wrote part of a blender render-farm charm last night just to get away from it all
<SpamapS> Hah nice
<SpamapS> m_3: yeah I entertained myself by bringing nagios into the 21st century last night. ;)
<m_3> I saw that
<m_3> SpamapS: james_w and I were discussing over breakfast Saturday
<nyr0x> is it possible to bootstrap 2 environments to handle 2 different sets of machines?
<nathwill> nyr0x, yeah, from what i've seen, you use -e envname to specify the environment for your commands.
<m_3> nyr0x: yup
<melmoth> just to be sure: on precise is it better to use the ppa version of juju or is the default one supposed to be "good enough" ?
<m_3> nyr0x: I have like 12 different environments set up... maybe 4 running at any given time
<SpamapS> though that may require 2 separate maas's
<nyr0x> m_3: how do you specify which environment uses which machines? different management classes? because i have 2 kinds of machines in my network... compute nodes and I/O nodes; they are made of different hardware
<m_3> nyr0x SpamapS: yes, mine are all ec2/openstack... no maas setups yet
<SpamapS> nyr0x: the ability to "tag" machines is feature #1 that maas needs to add soon
<SpamapS> I'm actually quite shocked it wasn't added first.
<m_3> nyr0x: but constraints can be specified now in juju (http://fewbar.com/2012/04/juju-constraints-unbinds-your-machines/)  I think that works with MaaS now
<SpamapS> m_3: constraints only allows *name* :(
<SpamapS> for maas
<m_3> bummer
<SpamapS> nyr0x: you can control which nodes are available at what time by only accepting/commissioning nodes right before you give them to juju
<SpamapS> nyr0x: in the very near future maas will have classification features that will make that permanent
<nyr0x> SpamapS: sounds promising
<Jarmo> My team has been setting up a Cloud with MAAS and Juju; we did get an openstack environment running, but we can't add images...  cloud-publish-tarball ./ubuntu-11.10-beta1-server-cloudimg-amd64.tar.gz images , gives us only this: Unable to run euca--describe-images. Is environment for euca- set up?
<SpamapS> Jarmo: I'm not sure I understand what you're even trying to do..
<SpamapS> Jarmo: it sounds like cloud-publish-tarball needs you to have your environment variables setup to access openstack via the euca-* tools
<SpamapS> Jarmo: that means setting env variables like EC2_URL, EC2_ACCESS_KEY, EC2_SECRET_KEY, probably S3_URL too
<Jarmo> SpamapS check end of this guide: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<Jarmo> I think those cred files are setting variables?
<SpamapS> Jarmo: yes the '.ec2rc.sh' bit should set those
<SpamapS> Jarmo: if you can't run 'euca-describe-images' then something is wrong w/ your OpenStack setup
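A sketch of the variables SpamapS lists above; every value here is a placeholder, and a real setup would get them by sourcing the `.ec2rc.sh` downloaded from the dashboard rather than exporting them by hand:

```shell
# Illustrative placeholders only: the environment variables the euca-* tools
# (and thus cloud-publish-tarball) need. A real setup sources the downloaded
# .ec2rc.sh; the endpoints and keys below are made up for the example.
export EC2_URL='http://192.168.1.100:8773/services/Cloud'
export EC2_ACCESS_KEY='EXAMPLEACCESSKEY'
export EC2_SECRET_KEY='EXAMPLESECRETKEY'
export S3_URL='http://192.168.1.100:3333'
# once these point at a healthy OpenStack, euca-describe-images should work
```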
<Jarmo> hmmm, ok...
<SpamapS> Jarmo: you may want to ask in #ubuntu-server and #ubuntu-cloud too. Lots of the devs who work on Ubuntu Cloud hang out in there
<Jarmo> It's like this: i have used openstack manually and I think I know how to use eucatools etc.... I just can't tell whether there's a way to do things manually on a maas + juju environment too.... I'll try ubuntu-cloud next :) THX :)
<Jarmo> or no: #ubuntu-cloud Cannot join channel (+i) - you must be invited
<SpamapS> wha?!
<SpamapS> Oh
<SpamapS> it was shut down
<SpamapS> which makes sense
<Jarmo> ahaa :D
<SpamapS> Jarmo: ok, yeah so just try #ubuntu-server
<Jarmo> ok, thanks
<m_3> SpamapS jcastro: we're getting quite a bit of action on the homebrew-juju osx cli... what criteria do we use to give someone the rights to merge Pull Requests into trunk?
<m_3> that's not charm-contributors or charmers... it's more juju itself
<jcastro> that's a good question
<m_3> s/quite a bit of action/action/... but still
<SpamapS> m_3: Its up to Brandon IMO.
<SpamapS> m_3: unless we want to start dedicating hardware and time to it.
<m_3> SpamapS: kinda think we should... (it's important to juju overall)
<m_3> I'll send something to the list
<jcastro> yeah this sounds important enough to get list-wide buy in
<Jarmo> BTW, would love it if there were a charm for creating images for openstack, or an "add image" button on openstack-dashboard...
<Jarmo> I think i'll try to do it after I get my system working perfectly.. just need to understand what goes wrong atm
<m_3> Jarmo: We do have http://cloud-images.ubuntu.com/ which includes openstack images
<Jarmo> but those images need handling with eucatools manually; would love a way to put them in just by clicking a button and saying "use this image"
<Jarmo> My team has been working on setting up a Cloud with MAAS & juju; the only thing we have problems with is adding images
<Jarmo> or reaching eucatools, not 100% sure what exactly goes wrong
<Jarmo> maybe authenticating (damn that is hard word)
<Jarmo> but must say that I love charms, makes setting up openstack much easier & faster...
<Jarmo> ..but it is harder to tell what goes wrong at the moment
<SpamapS> m_3: well if we're going to pull it in.. we need to move it to launchpad so we don't have to maintain 2 teams.
<SpamapS> m_3: then all of ~charmers can own it like all the rest of these tools
 * SpamapS braces for the flame of bzr hate
<Jarmo> :D
<SpamapS> Jarmo: there's an active effort to have an external glance server which hosts the Ubuntu images, and then a feature to point your openstack at said glance server.
<marcoceppi> SpamapS: tbh it needs to be in LP because that's where the Juju core is stored
<SpamapS> marcoceppi: yeah. It gets complicated spreading out between github and launchpad
<SpamapS> if nothing else we need a project page so we can track bugs that are caused by the brew packaging
<Jarmo> there is a guide : https://help.ubuntu.com/community/UbuntuCloudInfrastructure which says EC2 API  To begin using the EC2 API, select Settings-> EC2 Credentials -> Download EC2 Credentials in the Openstack dashboard. Save the file (eg, /home/adam/openstack/"). We can then unzip these and begin using our cloud... Does this mean when I use those files on any computer which is on same network I should have access for eucatools? or how you unde
<SpamapS> Jarmo: well you have to source the ec2env.sh or whatever it is called, but then yes
<Jarmo> ok, then I did understand right...
<SpamapS> Jarmo: note that the 12.04 OpenStack dashboard also has an option in there for downloading a juju environments.yaml
<Jarmo> (have been working 11+ hours so im not anymore so sure where I do my mistakes :P )
<Jarmo> SpamapS about downloading juju environments.yaml: i didn't find how this would be useful/how this feature should be used? can you enlighten me where this helps?
<SpamapS> Jarmo: download it, put it in ~/.juju and type 'juju bootstrap' and you should get a working juju
<SpamapS> Jarmo: instead of having to figure out all the config options for juju like ec2-uri, access-key, etc. etc.
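For illustration, a minimal sketch of what such an environments.yaml might contain for a private, ec2-compatible OpenStack (the provider juju of this era used). Every value is a placeholder, and the file is written to a temp dir here rather than `~/.juju`; the dashboard-downloaded copy would carry your real endpoints and keys:

```shell
# Sketch only: placeholder environments.yaml for an ec2-compatible private
# cloud, written to a temp dir so this is safe to run anywhere.
dir=$(mktemp -d)
cat > "$dir/environments.yaml" <<'EOF'
environments:
  openstack:
    type: ec2
    ec2-uri: http://192.168.1.100:8773/services/Cloud
    s3-uri: http://192.168.1.100:3333
    access-key: EXAMPLEACCESSKEY
    secret-key: EXAMPLESECRETKEY
    control-bucket: juju-example-bucket
    admin-secret: example-admin-secret
    default-series: precise
EOF
# then: copy it to ~/.juju/environments.yaml and run `juju bootstrap`
echo "wrote $dir/environments.yaml"
```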
<Jarmo> ahaa, I have to give that a try tomorrow :)
<m_3> SpamapS: sent 'osx client' to the list
<Jarmo> hmmm, if I follow this "guide" I don't find too many spots where I could have gone wrong...
<Jarmo> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<ihashacks> SpamapS: tyvm for your attention to the issues I send here. I bzr branch'ed your nagios/trunk but I don't know how to tell juju to use it as a local repository
<ihashacks> ... or do I zip it up as a .charm and stick it in ~/.juju/cache ?
<SpamapS> ihashacks: juju deploy --repository ~/charms local:nagios
<SpamapS> ihashacks: you need a dir under ~/charms with the release of ubuntu (precise) and put nagios in there
<SpamapS> ihashacks: also you can set $JUJU_REPOSITORY and drop the --repository ~/charms (I put this in my .profile so I never have to use --repository)
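The layout SpamapS describes can be sketched like this (a temp dir stands in for `~/charms`, so nothing on the machine is touched):

```shell
# Sketch of the local repository layout juju expects:
#   $JUJU_REPOSITORY/<series>/<charm-name>/
export JUJU_REPOSITORY=$(mktemp -d)
mkdir -p "$JUJU_REPOSITORY/precise/nagios"
# with the bzr branch checked out in that dir, both forms work:
#   juju deploy --repository "$JUJU_REPOSITORY" local:nagios
#   juju deploy local:nagios          # picks up $JUJU_REPOSITORY
ls "$JUJU_REPOSITORY/precise"
```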
<SpamapS> m_3: woot thanks
<ihashacks> I did --repository= ... needed to get rid of the = ... derp!
<SpamapS> Jarmo: hopefully the people in #ubuntu-server will have more insight into your issue
<SpamapS> ihashacks: no, the = is optional
<SpamapS> ihashacks: though I think it won't work with --repository=[space]dir
<Jarmo> was hoping that too, but they were kinda silent guys :D Was so hoping that cloud channel would have been open :D
<Destreyf> ihashacks: i had issues with local repositories, my workaround was --repository=. local:<charm name>
<m_3> SpamapS: btw, those're just packaging formulas... they pull the cli itself from lp
<SpamapS> m_3: Yeah I know. I'm fine with leaving them there, but its going to be harder and harder to support split-admin of teams who might want to help w/ both.
<SpamapS> m_3: can cross that bridge when we come to it.
<SpamapS> but IMO, this is a fail from Debian that we don't want to repeat. they have 19 ways to maintain packaging in git/svn/bzr .. even heavy divergence between teams that use git.
<SpamapS> So, the more diversity in tools, the greater the barrier there is to contribution.
<ihashacks> Wait, it wasn't the = either. It didn't like "local:nagios" had to use just "nagios"
<SpamapS> ihashacks: using just 'nagios' will deploy from the main charm store
<SpamapS> ihashacks: which, actually, is now my charm ;)
<SpamapS> because that old one was.. well.. just that.. really old.. clearly hadn't been used ever
<ihashacks> ...clearly :)
<ihashacks> Ok that must have happened in the last hour then because I tried earlier and it still showed nagios0 instead of (now) nagios1
<Destreyf> SpamapS: i ran across some references on LaunchPad regarding Machine specific deployment, the last post talks about a Go Port?
<Destreyf> SpamapS: https://bugs.launchpad.net/juju/+bug/806241 <- to be specific
<_mup_> Bug #806241: It should be possible to deploy multiple units  to a machine (unit placement) <production> <juju:Confirmed> < https://launchpad.net/bugs/806241 >
<SpamapS> Destreyf: right, there is a rewrite of juju underway to the go language.
<SpamapS> Destreyf: bzr branch lp:juju/go  to check it out
<SpamapS> ihashacks: that number has nothing to do with the charm version
<Jarmo> that sounds really smart idea!
<SpamapS> ihashacks: you should see revision 20 if you have mine, or 2 if you have the old one
<Destreyf> that is the Unit number in the environment.
<Destreyf> so if you have done "add-unit" you'd see another.
<Destreyf> SpamapS: i shall look at juju/go thanks!
<SpamapS> Destreyf: its quite a ways off. What is it that you want to run two of on one machine?
<SpamapS> Destreyf: there are ways of deploying extra things, called subordinates.. that usually handles most of the cases that people want.
<Destreyf> We have a 4 Node 2U server, that we're going to use as the center of our cloud, and we're wanting to deploy openstack across it using juju to allow for easier configuration and setup of the nova-compute nodes
 * negronjl is back from lunch
<SpamapS> Destreyf: keep in mind that with VMs and the cloud, there's not so much of a point in running two primary things on a box as there was when we had only discrete servers
<Destreyf> OpenStack setup requires 8 servers for its operation, which is more than i have at this moment.
<SpamapS> Destreyf: so you want to put mysql, rabbit, etc. on one box?
<Destreyf> SpamapS: Basically i'm wanting to have MySQL and Rabbit on one box, and also on each "Compute" node having the nova-volume as well as we're wanting to offer a backup solution to some of our clients as well.
<Destreyf> whoo, almost closed my IRC window
<m_3> balloons: ah.. just saw that the videos I was pointing you to were commented out.  They were BrightTalk webinars and required an account to watch.  as such, they don't belong as-is on the front page of juju.ubuntu.com
<ihashacks> Not the unit number but the number in the actual charm cache (at least that's what I think it is) http://paste.ubuntu.com/987680/
<SpamapS> Destreyf: yeah I think juju is going to remain weak for small bare metal deployments for the near term future.
<SpamapS> ihashacks: when you use local: there is no charm cache
<m_3> balloons: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CFMQtwIwAA&url=http%3A%2F%2Fwww.brighttalk.com%2Fwebcast%2F6793%2F39309&ei=5VSxT9eqKuTq2QWGrLzpCA&usg=AFQjCNG9kG4AGdC45c6tql-e3xEpb8iXng
<balloons> m_3, I don't think I found the brighttalk videos
<Destreyf> SpamapS: He didn't use local
<SpamapS> ihashacks: oh right but you used the store.
<Destreyf> ihashacks: the _1 vs _0 is referencing the unit number
<SpamapS> ihashacks: juju status should show the charm: for the service.. if it says it is version 20 then you have the new one.
<balloons> interesting
<balloons> your having a webcast?
<ihashacks> I question that because in that directory I only have a 0 for wordpress but I have 6 units for wordpress in the current test I'm running.
<Destreyf> ihashacks: if you issue juju status, you'll see a nagios/1 that's the unit number, you'll notice you probably have 2 of them
<m_3> balloons: we'll try to get it either signup-free or redo them as pure screencasts
<Destreyf> the cache directory probably has 2 versions cached for nagios, and only 1 for Wordpress
<balloons> m_3, yes, definitely looks cool.. but a bit silly to have to register to view the old talks
<ihashacks> Right, so the number relates to versions, not units. What I was inferring was that I saw a new version in the cache and therefore got the new Nagios version.
<m_3> balloons: here it is without all the tracking crap http://www.brighttalk.com/webcast/6793/39309
<ihashacks> I think we're all on the same page and just don't realise it. :)
<Destreyf> lol probably
<SpamapS> ihashacks: indeed we are :)
<SpamapS> Destreyf: back to your questions. There *are* ways around your problem with current juju. But none of them involve deploying the stock charms. You have to make some minor tweaks.
<SpamapS> Destreyf: the simplest thing you could do would be to fork the current charms and add 'subordinate: true' and a 'requires: { local-relation: { interface: juju-info, scope: container } }' to the metadata.
<balloons> m_3, interesting container they have on that page
<Destreyf> I have no problems forking the Charms :P
<SpamapS> Destreyf: then you would deploy the 'ubuntu' charm and add-relation to put those services together on one box. "juju deploy ubuntu rabbit-and-mysql ; juju deploy mysql-subordinate ; juju deploy rabbitmq-server-subordinate  ...
<SpamapS> Destreyf: we have a problem encouraging you to fork the charms, because we'd like to encourage collaboration, not forking. :)
<SpamapS> Destreyf: the second step of that is 'juju add-relation rabbit-and-mysql mysql-subordinate ; juju add-relation rabbit-and-mysql rabbitmq-server-subordinate'
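A hypothetical sketch of the forked charm's metadata.yaml, combining the two additions SpamapS names (`subordinate: true` plus a container-scoped `juju-info` relation); the charm name and relation name follow his example and are illustrative, and the file is written to a temp path purely for demonstration:

```shell
# Hypothetical metadata.yaml for a forked "mysql-subordinate" charm, per the
# discussion above: the stock charm's metadata plus the subordinate flag and
# a container-scoped juju-info requires entry.
meta=$(mktemp)
cat > "$meta" <<'EOF'
name: mysql-subordinate
subordinate: true
requires:
  local-relation:
    interface: juju-info
    scope: container
EOF
grep -q 'subordinate: true' "$meta" && echo "subordinate flag set"
```

After deploying it alongside the empty `ubuntu` charm, the `juju add-relation` steps above place the subordinate's units into the principal's containers.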
<koolhead17> melmoth, around
<Destreyf> so mysql-subordinate is the charm?
<Destreyf> and the rabbit-and-mysql is a reference name to a machine?
<SpamapS> Destreyf: not a machine, a service
<SpamapS> Destreyf: but yes, the others would be the names of the -subordinate forked charms
<SpamapS> IMO we should give you the option to do this at runtime, since its so simple, but for the moment, we don't.
<Destreyf> Okay, i think i have an understanding now.
<SpamapS> Destreyf: so the 'ubuntu' charm is a special empty charm
<Destreyf> SpamapS: its been great talking to you, i had wondered if i'd run into you directly after seeing you post on launchpad
<SpamapS> Destreyf: did we meet?
<SpamapS> I lose track. :-/
<Destreyf> No, just your posts are always very well done, and full of useful information.
<Destreyf> :P
<SpamapS> I try. ;)
<Destreyf> i've been working through getting the MAAS setup for the better part of 2 weeks; after dealing with some hardware issues i've gotten a lot more working. i've only had one other problem, which nukes juju out of the box: for some reason, when MAAS provisions a box and it boots for cloud-init, the apt sources are ubuntu-mirror.localdomain
<Destreyf> and juju/zookeeper and others are unable to install
<Destreyf> but that's a 30 second fix with ssh after the MAAS provisions :P
<jorge___> hi! just to confirm, if I want to use a mirror in the instances created by juju, there is no support yet? I have to use some ppa?
<SpamapS> jorge___: right, there are some tricks you can do but ultimately you're stuck with whatever cloud-init chooses for your mirror.
<Destreyf> SpamapS: i've got to run, i'll be playing with Subordinates, thank you very much for your time, if i get a chance, i'll see what i can contribute back to the community once i get up and running.
<SpamapS> jorge___: on EC2 thats an S3 based mirror.. outside EC2 it will usually just be archive.ubuntu.com
<jcastro> SpamapS: hey I noticed you've been promulgating while we were away, nice!
<jcastro> 78 charms!
<SpamapS> Destreyf: your interest is a *great* start. :)
<SpamapS> jcastro: Yeah I promulgated a few of the charm contest entries.
<jcastro> well done!
<melmoth> koolhead17, i m back (got mysql started at least)
<koolhead17> melmoth, cool. because i see SpamapS here :)
<koolhead17> jcastro, hello sir
<jcastro> hi!
<jorge___> SpamapS: hum, ok! I was using 11.10 and now I'm using 12.04, in a private cloud. Before, I modified the image to point to my mirror. So, I have to do it again. I've read next releases are going to bring some features such as an apt-mirror, proxy and parameters to cloud-init, as when we call nova boot -f <file>. Am I right?
<SpamapS> jorge___: https://launchpad.net/juju/+milestone/galapagos  That is the current milestone's bugs.. and I see bug 897645 is there with a branch awaiting review...
<_mup_> Bug #897645: juju should support an apt proxy or alternate mirror for private clouds <cloud-init:Fix Released> <juju:In Progress by hazmat> < https://launchpad.net/bugs/897645 >
<SpamapS> actually my bad, its got a branch, but not proposed for merge just yet
<jorge___> SpamapS: ok! Thanks. I've written a description about a specific problem here. Maybe it can be helpful ... I don't know if it is a bug. Someone in openstack list told this. I dont know. http://pastebin.com/SnC4GLEi
<SpamapS> jorge___: I saw your issue on the openstack list. I think its because juju over-uses groups and openstack isn't quite able to keep up with the way juju is abusing groups ;)
<SpamapS> anyway, lunch time
<koolhead17> is someone participating in or already registered for the juju charm session at LISA12?
<koolhead17> https://www.usenix.org/conference/lisa12/call-for-participation
<koolhead17> https://www.usenix.org/conference/lisa12/workshops-training-program-and-bofs#training
<jcastro> hazmat is registering for lisa12
<SpamapS> koolhead17: we are submitting talks, and I hope we'll do a full charm school
<koolhead17> looks like the place for a charm school
<koolhead17> hazmat, few more days left :)
<SpamapS> yeah its close to me too, so I can give one ad-hoc in the hallway if they say no ;)
 * SpamapS goes to lunch
<marcoceppi> hazmat: good luck at LISA - land of neck beards, perl, BOFH, and 1990's Linux Guy
<koolhead17> marcoceppi, i saw some of them at uds too :D
<SpamapS> marcoceppi: according to Robbie W, the crowd from LISA11 was super excited about juju
<marcoceppi> SpamapS: wow, I guess it's changed a lot from LISA08
<SpamapS> marcoceppi: they're a great target, as they're not interested in "learning" the cloud, but they're going to get dragged into it anyway
<SpamapS> best that we be the grease in those old squeaky wheels
<SpamapS> marcoceppi: I thought the same about it too, but Robbie found that the crowd was realizing they can't just keep buying boxes ... the cloud is coming :)
<marcoceppi> Cool, that's a great audience to target then
<koolhead17> SpamapS, i read the mention of the term "Cloud" on the site, which was kind of a happy feeling :P
<jcastro> SpamapS: m_3: hangout invite incoming!
<Destreyf> SpamapS: sorry to pop back in, on the requires, i see a JSON string, but in MySQL the requires statement is straight YAML, do i need to just convert the JSON to YAML?
<SpamapS> Destreyf: heh.. json is a subset of yaml ;)
<Destreyf> lol hadn't ever thought of it that way.
<thomi> Hi - I've been writing a juju charm to deploy quassel-core, and I'd love someone to take a look at it and give me a few pointers. What's the recommended way to do this? Just push a branch to launchad?
<SpamapS> thomi: https://juju.ubuntu.com/Charms has some pointers
<jcastro> there are steps there to follow ^
<thomi> ahhhhh, those steps aren't in the "create a charm" tutorial...
<thomi> awesome, thanks.
<Destreyf> SpamapS: sorry, that was a stupid question on my part :P
<Destreyf> when i bootstrap my server, i get "agent-state: not-started" and it doesn't change.
<Destreyf> does that just mean i need to deploy, or?
<SpamapS> Destreyf: your servers may be busy doing stuff
<Destreyf> it shouldn't be, its a fresh box that had been setup last week and hasn't been touched yet.
<Destreyf> i'm getting an error that occurs every minute in juju debug-log
<Destreyf> http://pastie.org/private/lw3emg4ygqdznfi1ozs9q
<Destreyf> and just so it can be seen: http://pastie.org/private/q69fcgvfnt498sgmifu3a <- that's the provision.py just in case.
<SpamapS> Destreyf: that means you don't have enough nodes in maas
<SpamapS> Destreyf: make sure they're 'accept and commission'ed
<Destreyf> http://i47.tinypic.com/2ykj1ap.png <- 3 nodes all say ready
<Destreyf> i did try the PPA of juju at one time, however i reverted back, maybe something related?
<Destreyf> hmm, i tried to do a cloud-init by hand on the node, and am getting a 401 back from the instance-id, something else must be going on.
<_mup_> Bug #999338 was filed: juju should complain/error on unknown charm metadata <juju:Confirmed> < https://launchpad.net/bugs/999338 >
<ihashacks> SpamapS: all works well with the add-relation for the nagios charm unless you have two units for a service
<ihashacks> "members" are separated by a newline then a , instead of just a ,
<ihashacks> I'm thinking an rstrip is needed somewhere around line 74 of /var/lib/juju/units/skymon-0/charm/hooks/common.py
<koolhead17> jcastro, ping
<SpamapS> ihashacks: thanks, I'll fix that
<ihashacks> np. You keep updating, I'll keep testing. :)
<Destreyf> SpamapS: Working on a fresh install of ubuntu 12.04. :P
<Destreyf> SpamapS: off i go to the server room again. :D
<SpamapS> Destreyf: no remote kvm?
<SpamapS> ihashacks: Hmm.. its working fine for me..
<SpamapS>     members ip-10-252-6-67.us-west-2.compute.internal,ip-10-252-85-84.us-west-2.compute.internal
<SpamapS> Hrm.. I'm starting to think that internal hostname is not as useful as service unit..
<SpamapS> mysql-0 looks better than ip-x9134815901
<imbrandon> heh
<ihashacks> members skysql-0
<ihashacks> ,skysql-1
<ihashacks> that is how my "members" look in the generated hostgroup files
<SpamapS> ihashacks: weird
<ihashacks> instead of members skysql-0,skysql-1
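The fix ihashacks suggests can be sketched in a couple of lines of shell (a hypothetical sketch; the real change belongs in the charm's common.py): strip the stray newlines while joining, instead of concatenating the raw unit names:

```shell
# unit names as the local provider apparently hands them over,
# each on its own line (i.e. carrying a trailing newline)
units="skysql-0
skysql-1
skysql-2"

# paste removes the newlines and inserts the commas in one step,
# producing the single-line "members" entry nagios expects
members=$(printf '%s\n' "$units" | paste -sd, -)
echo "members $members"
```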
<SpamapS> ihashacks: are you using local provider? maas?
<ihashacks> I get that if I 1) add-relation to a single-unit service (works) and then add-unit a second (fails) or 2) add-relation to a new service that already has two units (fails)
<ihashacks> local provider (which I understand to be the red-headed step-child right now) :)
<SpamapS> ihashacks: I added a unit to a working service. Got no newlines in there.
<SpamapS> ihashacks: I actually did most of my dev on the local provider
<SpamapS> but I didn't test add-unit
 * SpamapS tries it in local provider
 * SpamapS lands changes in charm-tools to set a 'maintainer' field.
<ihashacks> Manually removed newline in the hostgroup, Nagios starts up. juju add-unit a 3rd unit and the original newline plus a second new one is there.
<ihashacks> members skysql-0
<ihashacks> grrrr
<ihashacks> ,skysql-1
<ihashacks> ,skysql-2
<SpamapS> ihashacks: I'm debugging now
<SpamapS> ihashacks: ok, confirmed this only happens on local provider.. now to figure out why
<SpamapS> I think I know
<SpamapS> ihashacks: ok, fix pushed to lp:charms/nagios
<SpamapS> ihashacks: it takes a while for that to make it to the live charm store
<ihashacks> I've noticed, http://jujucharms.com/charms/precise/nagios still shows the notes from 2011/10/12 even though I know you fixed it. :)
<ihashacks> THANK YOU for the help. When this shows up in the store I'll beat on it again and let  you know what else I find.
<SpamapS> ihashacks: unfortunately, they are not linked..
<SpamapS> ihashacks: so sometimes the store has stuff that is not in jujucharms.com
<SpamapS> and vice-versa
<SpamapS> hazmat: ^^ note that this charm is woefully out of date on jujucharms.com .. any ideas?
<Destreyf_> Hola
 * hazmat pokes around
<Destreyf_> SpamapS: Hey, i had a question in regards to the deploy command you listed earlier.
<Destreyf_> "juju deploy ubuntu rabbit-and-mysql" is ubuntu an actual charm, or do i use any charm in its place?
#juju 2012-05-15
<SpamapS> Destreyf_: its an actual charm that deploys nothing
<SpamapS> Destreyf_: you could deploy one of your regular charms as the primary.. but this keeps them all on the same level as subordinates.
<Destreyf_> when i run that command i get an "2012-05-14 17:55:48,163 ERROR Error processing 'cs:precise/ubuntu': entry not found"
<Destreyf_> do i need to specify lp:charms/precise/ubuntu ?
<SpamapS> Destreyf_: that charm is broken, I forgot. Try 'bzr branch lp:charms/precise/ubuntu' into your local repo and then deploy it with 'local:ubuntu'
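Spelled out as a full command sequence (the ~/charms repository path is an arbitrary choice, and the exact deploy flags may differ by juju version; the local: prefix just tells juju to look in your repository instead of the store):

```shell
mkdir -p ~/charms/precise
cd ~/charms/precise
bzr branch lp:charms/precise/ubuntu
juju deploy --repository ~/charms local:ubuntu rabbit-and-mysql
```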
<Destreyf_> :P i didn't think i was crazy
<Destreyf_> i just got everything setup again so i had just tried it once
<Destreyf_> kk time to do the magic of booting :P
 * SpamapS lands maintainer checks for 'charm proof'
<Destreyf_> btw powerwake does not ever work for me, though that may be the supermicro being a POS :P
<SpamapS> https://code.launchpad.net/~clint-fewbar/juju/add-maintainer/+merge/105738
<SpamapS> docs update for the maintainer field
<Destreyf_> SpamapS: you're a busy person :P
<bkerensa> SpamapS: :P I cant merge ^ you guys fixed the team privileges
<bkerensa> either way looks good
<SpamapS> bkerensa: who fixed the team privileges?
<SpamapS> damnit, lets make up our mind!
<SpamapS> bkerensa: no, I just did it wrong
<SpamapS> so confusing that there is an lp:juju/docs different from lp:~juju/juju/docs
<SpamapS> bkerensa: You don't need to merge other peoples' merge proposals. Just +1 them
<SpamapS> bkerensa: https://code.launchpad.net/~clint-fewbar/juju/add-maintainer/+merge/105742 .. *that* one you can Mark as Approved :)
<hazmat> SpamapS, fixed thanks for bringing that to my attention
<james_w> can someone explain to me the difference between -departed and -broken? The docs don't make sense to me
<james_w> in particular I want to run something each time a unit leaves the relation so that I can drop its address from a config file
<james_w> I don't care how the unit left in this case
<james_w> hmm, it looks like -departed is what I want, but it doesn't get run on remove-relation?
<james_w> and it seems I can't relation-get in the relation-departed hook to find out the info about what to remove?
<james_w> m_3, SpamapS: lp:~james-w/charms/precise/nagios-nrpe-server/trunk has a sketch of what I was thinking for the nrpe charm. The main concept still missing is how the charm tells nagios what to check and when
<james_w> I think it's a case where we may want to move to structured data in the relation info, as it's fairly complex to specify all of the needed info (check name, command, frequency etc.)
<hazmat> james_w, departed is a remote unit is gone
<hazmat> james_w, broken is the relation is gone
<hazmat> ie. remove-unit vs remove-relation
<james_w> hazmat, and there is not one that does both?
<james_w> so I should symlink them or something to handle both cases?
<SpamapS> james_w: you can't really handle departed the same as broken usually
<SpamapS> james_w: when broken fires, all the units are are already gone. All you have is the relation ID
<james_w> SpamapS, ok
<james_w> so it should clear everything
<james_w> what about getting the relation info when departed fires?
<SpamapS> james_w: right, if you look at what I just recently did with the nagios charm.. I prefixed everything on disk with $JUJU_RELATION_ID, and on broken, I just rm -f /etc/nagios3/conf.d/$JUJU_RELATION_ID-*.cfg
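The scheme is easy to demonstrate in miniature (the directory, relation ID, and file names below are illustrative, not lifted from the nagios charm):

```shell
# during -changed hooks, every generated file carries the relation ID prefix
confdir=$(mktemp -d)                 # stand-in for /etc/nagios3/conf.d
JUJU_RELATION_ID="monitoring:0"      # juju exports this inside a real hook
touch "$confdir/$JUJU_RELATION_ID-host.cfg" \
      "$confdir/$JUJU_RELATION_ID-services.cfg"
touch "$confdir/website:1-host.cfg"  # config owned by some other relation

# the -broken hook then needs no per-unit bookkeeping at all: one glob
# removes everything this relation ever wrote, and nothing else
rm -f "$confdir/$JUJU_RELATION_ID"-*.cfg
```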
<james_w> SpamapS, cool idea
<SpamapS> simple is what makes the charm world go round :)
<james_w> SpamapS, I'll have to amend slightly in this case, as it's a single value to append to, so I'll have to use puppet or something as a layer of indirection
<SpamapS> james_w: dotdee works if puppet feels like too big of a hammer
<james_w> or cat :-)
<SpamapS> james_w: re the structured data.. this is needed for the general monitoring case as well.. I wonder if we can make use of the same interface
<james_w> SpamapS, that would be cool
<james_w> the structure in this case is {'check_name': ..., 'script': ..., 'frequency': ...}
<james_w> if other things can consume the script which would have nagios plugin semantics then it could be re-used
<SpamapS> NRPE does make things complicated in this light... hrm
<SpamapS> I think I could spend 2 weeks straight making the monitoring story really solid.
<SpamapS> I feel like that, and backups, need to get much much better
<james_w> the sketch I have has an service->nrpe interface and an nrpe<->nagios interface
<james_w> that would rock
<james_w> I can't decide if the nrpe<->nagios interface is any more than the interface you put in the nagios charm, but that one isn't currently extensible
<james_w> having a generic monitoring interface would be *fantastic*
<SpamapS> So what you really want is for a service to provide its own plugin
<SpamapS> which is definitely something NRPE was made for
<james_w> that's my primary use case currently, yeah
<SpamapS> For that case, I can see NRPE just swallowing and running whatever you give it from your service. I still want that to map to something that we can generically identify on Nagios.
<james_w> yeah
<james_w> the nrpe->nagios interface can just list check_nrpe!check_foo things for the host in question, and it should all just work
<james_w> whereas non nrpe would be things like check_ssh
<james_w> so it seems like there is one 'nagios-checks' interface that would suffice for both
<james_w> in addition to magic for http/juju-info/etc. interfaces
<SpamapS> so primary service sets this to nrpe: plugin=/usr/share/foo/plugin.py monitor_type=unique_to_this_service .. then nrpe says to nagios  monitor_type=nrpe args=unique_to_this_service .. and nagios says "Oh I know how to do NRPE" and just monitors that
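As a hook-environment sketch (these commands only run inside actual juju hooks, and every setting name here is illustrative rather than an agreed interface):

```shell
# on the primary service, in its monitoring relation-joined hook:
relation-set plugin=/usr/share/foo/plugin.py monitor_type=unique_to_this_service

# on the nrpe subordinate, re-exported over its relation to nagios:
relation-set monitor_type=nrpe args=unique_to_this_service

# nagios's relation-changed hook reads it back and, recognizing
# monitor_type=nrpe, emits a check_nrpe service definition:
relation-get monitor_type
```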
<james_w> yeah, sounds like that would work
<SpamapS> I think a description would be helpful too.
<SpamapS> rather, alias
<SpamapS> but I'd call it description. Basically "What to show the pager duty guy at 3am"
<SpamapS>     echo "command[$NAME]=$COMMAND" >> /etc/nagios/nrpe.d/plugin-$NAME.cfg
<SpamapS> You can stick the $JUJU_RELATION_ID right there :0
<james_w> yeah
<james_w> it's the other side that requires the indirection
<james_w> "append private-address to allowed_hosts"
<james_w> other relations sorry
<SpamapS> right
<ihashacks> SpamapS: you're charm shows as latest in http://jujucharms.com/charms/precise/nagios
<ihashacks> Not sure if that's your or hazmat's doing
<ihashacks> aaaand, the latest Nagios charm looks good: members skypress-1,skypress-0
<ihashacks> ...until I find something else broken. ;-)
<imbrandon> SpamapS: whats the proper way to use run-as-hook that would drop me in a shell in say the config-changed hook context ?
<melmoth> grumble...I was thinking about trying to charm condor...
<melmoth> https://bugs.launchpad.net/ubuntu/+source/condor/+bug/919671
<_mup_> Bug #919671: Please remove condor from ubuntu precise <apport-bug> <ftbfs> <i386> <oneiric> <running-unity> <condor (Ubuntu):Fix Released> < https://launchpad.net/bugs/919671 >
<melmoth> manual installation seems to be a bit painful... got to find something else...
<melmoth> anyone knows of a scheduler similar to condor that is already packaged on 12.04 ?
<Madkiss> hi there.
<Madkiss> I have a simple question for you guys, I think. If installing a charm with juju is "like installing a package", then ... what do I need charm for? ;-)
<Madkiss> or juju, for that matter
<Madkiss> maybe I'm just not seeing the obvious.
<melmoth> Madkiss, to deploy machines.
<melmoth> like, you want a reverse proxy, n http server backends and 1 mysql db used by those.
<melmoth> once your charms are written for those services, you can install them as easily as you would apt-get install something.
<melmoth> (well, you need to deploy them, and then define their relationships, that's about it)
<Madkiss> Okay. So it's mainly about getting the right configuration stuff into my system?
<melmoth> plural
<melmoth> into systemS
<melmoth> that are deployed in your cloud.
<Madkiss> Okay. And what's the relation between Juju and Orchestra?
<melmoth> orchestra has been ...cancelled.
<melmoth> the new stuff is now called Maas.
<melmoth> and with Maas, you can use juju to deploy bare metal boxes.
<melmoth> so you deploy your cloud infrastructure with juju, and then you deploy your vms with juju
<SpamapS> Madkiss: the reason for charms is that in the modern computing world, you want to run things across many machines, not just one.
<jcastro> SpamapS: what's the TLDR; status on hpcloud/juju?
<SpamapS> Madkiss: getting things setup to talk between two machines often involves multiple steps where you need to run things on one box, and then on another... and back and forth. So, ask for a database to be created, and a user, create a database user and the database, then create the schema. This distributed configuration is easy w/ juju.
<SpamapS> jcastro: HP cloud does not expose S3, so it cannot store charms or the "map" that we use to find the bootstrap machine.
<jcastro> I knew that part
<SpamapS> jcastro: the 5 line change I had to allow us to use a non HP S3 is pretty much a hack.. but I may propose it for trunk.
<jcastro> ok so we're actively working on it though right?
<SpamapS> jcastro: yes :)
<jcastro> negronjl: here's "the big list" distro uses: http://reqorts.qa.ubuntu.com/reports/sponsoring/
<jcastro> SpamapS: so we're going to have to move to a "subscribe a team" instead of "use a tag" for charms
<jcastro> which I think is fine
<SpamapS> jcastro: totally. We can even have a bug bot that automatically converts new-charm to the subscription.
<jcastro> https://launchpad.net/ubuntu-sponsoring is the code
<jcastro> SpamapS: in this case, it's ~charmers instead of ~ubuntu-sponsors right?
<jcastro> we don't need another team do we?
<SpamapS> good question
<SpamapS> I think we do
<SpamapS> jcastro: we want to be able to unsubscribe the team when we want it off the queue
<SpamapS> jcastro: there may be good reasons to subscribe ~charmers that don't include sponsorship
<SpamapS> tho I can't think of one now
<jcastro> well, in cases where I want a charmer to look at something ... feels "queueish" to me
<SpamapS> jcastro: test it out now.. because I think ~charmers has an implicit subscription to all the bugs anyway
<SpamapS> jcastro: so that may be one reason
<jcastro> Indeed
<jcastro> ok, so in cases like this, I am sure this was discussed at length when they did it for distro
<jcastro> so like, why deviate, it's probably like that for a reason
<SpamapS> I suspect launchpad limitations before "sane rational thought" ;)
<mars> Hi guys, any word on landing bug 958312 in precise?  A dev on my team using the distro version of juju was bitten by a runaway log file this morning.
<_mup_> Bug #958312: Change zk logging configuration <juju:Fix Released by hazmat> <juju (Ubuntu):Triaged> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/958312 >
<SpamapS> mars: thats enough of a push for me. I'll start working on an SRU
<koolhead11> jcastro, did you get my mail sir. :)
<jcastro> yes, forwarded it on
<mars> SpamapS, ok, thanks for looking into it
<koolhead11> jcastro, cool. :)
<SpamapS> imbrandon: run-as-hook is jimbaker's thing.. but I believe you just run 'jitsu run-as-hook ...'
<SpamapS> imbrandon: to be clear, you have to run it on the box with the unit agent
<jimbaker> SpamapS, imbrandon - you can use 'jitsu run-as-hook' on a juju machine OR a client box that's running the juju cli
<SpamapS> jimbaker: thats a bit crackful
<SpamapS> jimbaker: and as soon as we implement real ACL's, that won't work
<jimbaker> running it on a juju machine could be useful for working with cron, for example; on your client box, for doing introspection or triggering exec to run as a debug hook
<SpamapS> jimbaker: you're relying on wide-open-zk
<jimbaker> SpamapS, it's just a tool ;)
<SpamapS> jimbaker: yes, and I like that it goes beyond the usual limits
<SpamapS> but.. a bit crackful nonetheless
<jimbaker> SpamapS, zk acls change this, and that's good. but it could be still useful then
<SpamapS> actually you'll probably just spin up w/ the admin secret ;)
<jimbaker> SpamapS, yes, certainly for the admin side of things. and for running on a juju machine, presumably restricted by the same acls as that user agent
<spidersddd> I am trying to test juju on a private openstack cloud and am wondering how to set the ec2 api target.  Can anyone help?
<SpamapS> spidersddd: if you have essex dashboard setup it will give you a pre-populated environments.yaml
<spidersddd> This is diablo with keystone, but I have an Essex test cloud I can get it from
<spidersddd> What is the process?
<SpamapS> spidersddd: its in the same place as you get your credentials from
<spidersddd> I will check it out.  Thank you.
<SpamapS> spidersddd: http://askubuntu.com/questions/94150/how-do-i-use-openstack-and-keystone-with-juju
<jcastro> negronjl: look, I subscribed you to the bug about "the big list"
<senior7515> how does one clone the juju charm repo for the latest ubuntu
<SpamapS> senior7515: you can use 'charm getall' from charm-tools
<SpamapS> senior7515: its *very* slow
<SpamapS> senior7515: there are 78 charms now.. but we expect there will be 100's soon.. thousands some day
<SpamapS> senior7515: you can get individual charms with 'charm get name-of-charm'
<SpamapS> jimbaker: can you please add something to https://bugs.launchpad.net/juju/+bug/992329 explaining why it's important and what the impact of the bug is? I need that for SRU's.
<_mup_> Bug #992329: Ensure Invoker.start is called from UnitRelationLifecycle usage. <juju:Fix Released by jimbaker> < https://launchpad.net/bugs/992329 >
<spidersddd> I have been working on getting juju working with a openstack Diablo install with keystone and having no luck.  Can someone help me out with a more verbose method in juju?
<negronjl> jcastro: ok
<spidersddd> All I am getting back is :
<spidersddd> 2012-05-15 11:16:56,425 DEBUG Initializing juju status runtime
<spidersddd> Traceback (most recent call last):
<spidersddd> Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
<spidersddd> 2012-05-15 11:16:56,510 ERROR Traceback (most recent call last):
<spidersddd> Failure: twisted.internet.error.ConnectionRefusedError: Connection was refused by other side: 111: Connection refused.
<spidersddd> Connection was refused by other side: 111: Connection refused.
<spidersddd> 2012-05-15 11:16:56,510 ERROR Connection was refused by other side: 111: Connection refused.
<spidersddd> I know the packets are making it to port 3333 and the response is making it all the way back to the juju initiating host.
<jujutest> can one specify the VPS deployment subnet with juju on EC2
<jujutest> VPC**
<SpamapS> jujutest: we have not tried VPC
<SpamapS> jujutest: I suspect it does not work
<jujutest> got you. Thanks a lot!
<jcastro> negronjl: hey so all the code exists, etc. I guess we just need to integrate it?
<SpamapS> spidersddd: that means zookeeper isn't working most likely.
<negronjl> jcastro:  even better :)
<SpamapS> spidersddd: I assume you're getting that message back after bootstrap succeeded?
<jcastro> negronjl: also, jono ended up with the ~charmers bottle of rum, so he's holding onto it for us
<SpamapS> spidersddd: also, do you have an S3 component to your diablo?
<SpamapS> spidersddd: juju needs an S3 (nova-objectstore will suffice)
<spidersddd> We have swift backed glance.
<SpamapS> spidersddd: ok, so are you pointing s3-uri to the swift S3 frontend?
<spidersddd> Glance frontend
<SpamapS> spidersddd: glance is not S3 :)
<spidersddd> Got it.  Swift it is.
<SpamapS> spidersddd: http://docs.openstack.org/trunk/openstack-object-storage/admin/content/configuring-openstack-object-storage-with-s3_api.html
<SpamapS> spidersddd: specifically you need your swift proxy server to have the snippet listed there
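For reference, the snippet amounts to enabling the swift3 middleware in the proxy pipeline, roughly like this (an illustrative fragment only; exact filter and egg names vary between swift releases, so defer to the linked doc):

```ini
# /etc/swift/proxy-server.conf (fragment)
[pipeline:main]
pipeline = healthcheck cache swift3 authtoken keystone proxy-server

[filter:swift3]
use = egg:swift#swift3
```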
<hazmat> SpamapS, i would recommend folks use the store instead of charm get.. unless they specifically want to develop a charm
<spidersddd> Thank you.  No S3 support in our setup.
<SpamapS> hazmat: I would not
<SpamapS> hazmat: until there is a switch-charm .. the store is just for playing IMO
<SpamapS> hazmat: inability to fix your charm == fail in production
<_mup_> juju/trunk r536 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] add relation topology check was only verifying one endpoint for user exc instead of topo exc [r=jimbaker]
<Destreyf> SpamapS: you online?
<SpamapS> Destreyf: I am, though I am dist-upgrading so I may disappear ;)
<Destreyf> lol, figures.  Have you ever heard of the MAAS interface saying "Duplicate Mac" with no nodes appearing in the list (i used the enlist function and the node never showed up, but now i can't add manually either)
<Destreyf> Or do you know who i can talk to in order to get some feedback on the whole MAAS provisioning, as i can't get them to install properly 50% of the time, (grub rescue prompt, with out of disk when accessing (hda0,2)/boot/)
<SpamapS> Destreyf: I've never used MaaS
<SpamapS> Destreyf: #ubuntu-server , ping roaksoax or Daviey
<Destreyf> SpamapS: how do you use Juju then :P
<Destreyf> (i don't have amazon ec2)
<SpamapS> Destreyf: ec2
<SpamapS> Destreyf: and the local provider for testing stuff
<Destreyf> Ah, i never got local provider to work either :P
<Destreyf> it just always hung on the first deploy command when it built the container
<nathwill> destreyf: disable ufw and see if it makes a difference
<Destreyf> already did that
<nathwill> ah
<Destreyf> no such luck
<nathwill> then i got no clue
<Destreyf> that was on my home machine
<Destreyf> inside of virtualbox on an SSD array
<Destreyf> but, i'm not playing with the local containers :P i'm working on a MAAS deployment.
<SpamapS> Destreyf: hung or errored out?
<Destreyf> Hung
<Destreyf> left it running for 6 and 1/2 hours
<SpamapS> Destreyf: that sounds like errored out, not hung :)
<SpamapS> just didn't report the error where you could find it ;)
<Destreyf> well the juju -v debug-log didn't show anything, neither did the machine-agent.log inside of the data-dir
<Destreyf> and that was in an attempt to deploy just mysql+wordpress as the examples show
<SpamapS> Destreyf: there are like, 4 more logs
<SpamapS> Destreyf: a known problem w/ the local provider
<SpamapS> debug-log needs to be all inclusive
<SpamapS> Destreyf: there's master-customize.log , and then the unit logs.. :-P
<Destreyf> Ah, i hadn't looked there, however i tried several more times to no avail :P
<SpamapS> Destreyf: you are not alone :)
<Destreyf> But i did see a lot of people saying that the local provider was buggy and to just try a couple times :P
<Destreyf> its just juju, zookeeper, and zookeeperd that i need for Juju correct?
<SpamapS> Destreyf: for local? You don't even need zookeeperd
 * SpamapS heads to lunch
<Destreyf> well i'm trying a from scratch install of stuff
<Destreyf> so i'll be dealing with MAAS
<MarkDude> jcastro, ping
<jcastro> MarkDude: hi
<MarkDude> Hey dude. I figure you may be recovered enough from UDS now. Now you get the after-UDS fun
 * MarkDude wants to see how to get the Juju in Fedora thing rolling
<MarkDude> I have one guy already signed on, and also Brandon said he would help
<MarkDude> Im gonna need a bit more. I will need at least 2 points of contact for Juju folks. Im guessing you are one
<MarkDude> I can then see about getting the rest of the details sorted on my side
<_mup_> juju/local-cloud-img r489 committed by kapil.thangavelu@canonical.com
<_mup_> cloud init file is passed through to lxc lib layer
<jcastro> MarkDude: when you say point of contact
<jcastro> do you mean technical or just otherwise?
<MarkDude> Well both of those would be nice
<jcastro> also is it possible to put a list as a point of contact?
<jcastro> sure, put me down for one
<jcastro> hazmat: you fine being the technical POC for fedora folks for pyjuju?
<MarkDude> But mostly I figure having a few people would be good.
 * MarkDude assumes he should join some sort of mailing list for juju (never have too many of those)
<jcastro> https://lists.ubuntu.com/mailman/listinfo/juju
<MarkDude> Ty
<hazmat> jcastro, sure
 * MarkDude likes the idea of more stuff like this
<jcastro> MarkDude: here's your guy: https://launchpad.net/~hazmat
<MarkDude> We are all on the same penguin team :D
<MarkDude> My LP account is sooooooo not updated.
<MarkDude> Perfect guys.
<nathwill> so if we're currently submitting a charm, would y'all want we should add "maintainer" to avoid a need to update later?
<jcastro> yeah it's probably a good idea to do that now
 * MarkDude is going to complete his report on going to UDS. And then see about doing some more public things on my side of it, hopefully getting more people interested.
<nathwill> k.
<MarkDude> The report starts: it was hella fun :D
<MarkDude> nathwill, you should have been at UDS
 * MarkDude will see you at OSCON
<nathwill> markdude, i know :) half a dozen people have been on me about missing it :P
<nathwill> markdude, yes you will
<nathwill> we'll be the ones throwing eggs at you ;)
<MarkDude> Sure, I deserve it :P
<MarkDude> I will Scotchgard my penguin suit
<MarkDude> Thx jcastro hazmat
<hazmat> nathwill, yeah.. the charm lint tool requires it now for things going into the 'official' namespace ( as opposed to ~personal)
<MarkDude> jcastro, here is the link for the Open Pixel Cup, the open source game thing - http://lpc.opengameart.org/ It's a great project that needs a bit more publicity
<nathwill> hazmat, y'all got that pushed already? nice :)
<MarkDude> wink wink. nudge nudge :D
<hazmat> nathwill, SpamapS did the magic
<MarkDude> ttyl
<nathwill> well cool beans folks. thx :) i'll get the pending ones updated asap
<_mup_> juju/unit-address-changes r513 committed by kapil.thangavelu@canonical.com
<_mup_> unit address updates on agent restart, periodically checks for changes, and updates relations accordingly
<_mup_> juju/docs/rest-api-spec r24 committed by kapil.thangavelu@canonical.com
<_mup_> old rest spec revisited
<SpamapS> nathwill: I'll be sending out emails to maintainers this week before assigning them to all the charms in the store...
<SpamapS> nathwill: IIRC you have at least one charm in the store.. right?
<nathwill> SpamapS, i have 2 in new-charm queue, none in store yet
<SpamapS> nathwill: alright good, then def add them ASAP. :)
<nathwill> :) yeah, will do as soon as i get to my box w/ my ssh keys
<thomi> m_3: got a second?
<m_3> thomi: hey
<thomi> m_3: Hi, thanks for reviewing the quassel-core charm (at: https://bugs.launchpad.net/charms/+bug/999439). I've updated the branch with the requested changes, but I'm not sure if I need the relation and interface bit.
<_mup_> Bug #999439: Need charm for quassel-core <new-charm> <Juju Charms Collection:Incomplete by thomir> < https://launchpad.net/bugs/999439 >
<thomi> the only thing that can connect to the core is the 'quassel-client' package, so it's probably not reusable
<thomi> ...I also don't understand what an 'interface' is in this context.
<hazmat> thomi, its the communication protocol for a relation
<thomi> ahh ok
<SpamapS> thomi: you can probably not provide an implementation for quassel client since it looks like they're all GUI and thats a rare case..
<hazmat> thomi, ie.. the 'mysql' interface defines a protocol where on join of a new service, it creates a  db, user and sets that on its relation, and the other side can read that info in its rel changed hook
<thomi> ok, that makes sense
<thomi> SpamapS: right, since the client isn't ever going to be on the cloud, it doesn't make sense to define that interface I guess
<SpamapS> well thats saying a lot I think
<SpamapS> what if you wanted to make a robotic tester for the client?
<SpamapS> pop up a cloud instance with VNC as the X server and some program that remote-controlled it as part of automated testing
<thomi> ahh, OK
<SpamapS> or what about bare metal for an IRC kiosk? :)
<SpamapS> *rare* cases
<SpamapS> but not "never" :)
<thomi> fair enough :)
<thomi> I'll get on it :)
<SpamapS> seems like the server part of it is pretty tiny
<jcastro> SpamapS: hey ninja, I saw a branch be proposed for subordinates and ports.
 * jcastro wants to demo mod_spdy everyday
<imbrandon> heh
<imbrandon> quassel-client is also on android and windows too, so it's the only client in the archive
<imbrandon> but not the only client
<m_3> thomi: I was thinking maybe a detachable client that might live in the cloud.  it's not that big of a deal, but it's probably worth doing.  biggest thing was to remove the template relations and then make the metadata match the set of hooks you implement
<thomi> m_3: yup, makes sense now that I know what it does
<james_w> anyone know what the cause of http://paste.ubuntu.com/989853/ is ?
<james_w> oh oh! I bet bash isn't installed
<james_w> ah, not at /usr/bin/bash
<SpamapS> heh.. is it ever there?
<nathwill> SpamapS: /usr/local/bin/bash on this FreeBSD box :P
<james_w> SpamapS, no :-)
<james_w> SpamapS, I hadn't noticed before though as my config-changed hook wasn't executable, and so not running
<SpamapS> james_w: charm proof should have squawked at you for that
<james_w> W: all charms should provide at least one thing
<james_w> heh
<SpamapS> thats it?
<SpamapS> it should warn about non executable hooks
 * SpamapS checks
<SpamapS> aha!
<SpamapS> it checks install, start, and stop
<SpamapS> but not config-changed
<SpamapS> fixed in trunk
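The check in question is only a few lines of shell (a standalone sketch, not charm proof's actual implementation):

```shell
# a fake charm whose config-changed hook was never chmod +x'ed
charm=$(mktemp -d)
mkdir -p "$charm/hooks"
printf '#!/bin/sh\nexit 0\n' > "$charm/hooks/config-changed"
printf '#!/bin/sh\nexit 0\n' > "$charm/hooks/install"
chmod +x "$charm/hooks/install"

# warn about every hook file that is not executable
warnings=$(for hook in "$charm"/hooks/*; do
    [ -x "$hook" ] || echo "W: hook is not executable: $(basename "$hook")"
done)
echo "$warnings"
```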
<james_w> SpamapS, cool thanks
<james_w> plus I'd fixed it already
<james_w> but I wanted to see what it said, and found that to be a rather amusing warning
<nathwill> SpamapS, would it then be considered bad form to include a non-executable data file in hooks?
<m_3> nathwill: put it somewhere else in the charm
<m_3> like /data maybe
<m_3> nathwill: but not hard/fast rule... just my pref to only have hooks in hooks/
<nathwill> hrm. alright... i only needed one file and was trying to avoid making extra top-level dirs, but.. definitely makes sense
<nathwill> adding to my to-fix list, lol
<m_3> :)
<m_3> CWD of hook during execution is top-level charm dir (aka $CHARM_DIR)
<hazmat> also avail as env var of the name noted
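A toy layout makes the point (every path here is made up):

```shell
# a fake charm tree with one data file shipped outside hooks/
charm=$(mktemp -d)
mkdir -p "$charm/hooks" "$charm/data"
echo 'server_name example.com;' > "$charm/data/site.conf.tmpl"

# juju runs hooks with CWD at the charm root and CHARM_DIR exported;
# simulate that environment here
cd "$charm"
CHARM_DIR=$charm

# so a hook can read the file by charm-relative path...
rendered1=$(cat data/site.conf.tmpl)
# ...or via the environment variable, interchangeably
rendered2=$(cat "$CHARM_DIR/data/site.conf.tmpl")
```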
<hazmat> juju should be warning about non-exec hooks as well
<hazmat> i noticed go-juju just transparently makes them exec
<james_w> https://bugs.launchpad.net/charms/+bug/999990
<_mup_> Bug #999990: New charm: tarmac <Juju Charms Collection:New> < https://launchpad.net/bugs/999990 >
<james_w> \o/ bug number
<ajmitch> so close
<james_w> quick everyone try and get the good ones!
<m_3> hazmat: several charms have hooks that're just links to a single non-hook-named file.. usually in the hooks/ dir... just a note that we probably don't wanna break that
<m_3> james_w: wow... makes you wanna submit a few more bugs to see what happens with rollover huh?
<hazmat> m_3, noted
<hazmat> pg sequences go for a while beyond 100000!
<james_w> we'll find out in ~30 minutes, so hopefully it doesn't go boom!
<hazmat> but it would be fun to claim the num ;-)
<m_3> yup
<james_w> 999993
#juju 2012-05-16
<jimbaker> pretty cool to see the odometer roll over
<SpamapS> jimbaker: unless the odometer explodes
 * SpamapS takes cover for the next 25 minutes
<_mup_> Bug #1000007 was filed: juju add-relations should raise a StateChange error instead of Internaltopologyerror <juju:New> < https://launchpad.net/bugs/1000007 >
<nathwill> so is maintainer strictly an email, or will it take the standard: First Last <user@domain.tld>
<nathwill> oh nm... just saw the docs update
<imbrandon> #100000 was well played :)
<_mup_> Bug #100000: There are still too many bug reports <lp-bugs> <Launchpad itself:Invalid> < https://launchpad.net/bugs/100000 >
<imbrandon> oops
<imbrandon> the right one
<imbrandon> :)
<SpamapS> imbrandon: bug #1000000 was also well played :)
<_mup_> Bug #1000000: For every bug on Launchpad, 67 iPads are sold <Edubuntu:Triaged> < https://launchpad.net/bugs/1000000 >
<imbrandon> yea thats what i meant to link :)
<SpamapS> well the one linked was actually nicely done too :)
<imbrandon> yea
<imbrandon> hehe
<SpamapS> imbrandon: you done with turbo-drupal yet?
<imbrandon> actually i was testing it earlier
<SpamapS> imbrandon: I was wondering something actually
<imbrandon> it still has 1 or 2 bugs, i thought i was done
<SpamapS> imbrandon: there's a PPA with PHP 5.4 for precise..
<vrturbo> hi all, looking for a little help with juju and the ubuntu openstack deployment
<SpamapS> imbrandon: would be cool to see if it improves the performance :)
<imbrandon> SpamapS: which one, i havent found one that works well
<SpamapS> vrturbo: we can try, though I don't know if there are any maas experts here.
<imbrandon> yea 5.4 is generally 30% faster
<SpamapS> imbrandon: Ondrej Sury's is good
<imbrandon> oh no apc in that one
<vrturbo> Ok I followed the MAAS juju install from https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<imbrandon> apc segfaults
<SpamapS> https://launchpad.net/~ondrej/+archive/php5
<SpamapS> imbrandon: that one?
<imbrandon> yea i tried that one
<imbrandon> all works but apc segfaults
<SpamapS> imbrandon: weird. Well I'll be merging 5.4.3 into quantal soon
<vrturbo> all servers install it's when I added the relations via juju that things broke
<imbrandon> nice, maybe we can -backports
<SpamapS> vrturbo: how did things break?
<SpamapS> imbrandon: no, the rdeps are too many
<imbrandon> doh
<SpamapS> imbrandon: you'd have to smoke test everything that Depends: php5
<SpamapS> at last count it was 120 or so packages
<imbrandon> yea but with 5 years it may be worth the extra effort
<imbrandon> no rush but a goal
<vrturbo>  relation-errors: shared-db: - mysql
<SpamapS> imbrandon: you have to test them all *every* time you upload
<imbrandon> hrm yea , smoke tests can be automated tho
<SpamapS> vrturbo: ok, if you go to the unit with the errors you should have the error logged in /var/lib/juju/units/servicename-0/charm.log
<vrturbo> that pointed me to some error with DB
<SpamapS> imbrandon: that would be an *epic* win
<imbrandon> just a thought anyhow
<imbrandon> yea
<imbrandon> definately
<SpamapS> imbrandon: I'd get that into the daily jenkins QA for Ubuntu if you did that. :)
<imbrandon> i'll work towards it , see how far i can get
<imbrandon> good thing is the way 5.4 breaks things is pretty consistent
<vrturbo> on the node running the service or the node I ran the juju command from ?
<imbrandon> but yea i *thought* the super drupal one was done, but i ran into a few snags, i'd say about 3 more hours of work, so like 1 to 1.5 days
<SpamapS> vrturbo: on the node running the service
<SpamapS> vrturbo: juju ssh servicename/0 should work :)
<SpamapS> vrturbo: you can also do 'juju debug-hooks servicename/0' and then 'juju resolved --retry servicename shared-db' to get a chance to re-run the hook in a terminal window manually
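The cycle SpamapS describes, as a hedged sketch (the service name `servicename` and relation `shared-db` are just the examples from this conversation; this assumes a live juju 0.x environment):

```shell
# Open a tmux session on the unit; juju opens a new window per hook event.
juju debug-hooks servicename/0

# From a second terminal, ask juju to re-fire the failed relation hook;
# instead of running unattended, it lands in the debug-hooks session.
juju resolved --retry servicename shared-db

# In the tmux window that appears, run the hook by hand, e.g.:
#   ./hooks/shared-db-relation-changed
# under whatever debugger/strace/logging you need.
```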
<SpamapS> vrturbo: btw, which service has the error?
<SpamapS> vrturbo: would be simplest if you just did 'juju status | pastebinit' btw :)
<vrturbo> mysql didn't create correctly
<vrturbo> so keystone, then nove, glance etc
<vrturbo> just looking at the log now
<imbrandon> vrturbo: trying to constrain it to micro ? heh
<SpamapS> vrturbo: ah so everybody got errors?
<vrturbo> yeah, I logged into mysql this morning to try and debug and the databases were there
<vrturbo> but no tables
<SpamapS> vrturbo: so, nothing in the charm.logs then?
<vrturbo> running a debug on the relation of mysql and cloudcontroler showed the below
<vrturbo> hook.output ERROR: 2012-05-15 02:17:46 CRITICAL nova [-] (OperationalError) (1044, "Access denied for user 'nova'@'' to database 'nova'") None None
<vrturbo> so I tried to login directly
<vrturbo> via mysql
<vrturbo> and the nova and keystone users couldn't login to mysql, changing the grant permissions fixed that, then the db sync on the services created the DB structure
<vrturbo> but I think that something broke between mysql and keystone right at the start, I'm still looking at the logs
<imbrandon> mmmm more charming music , err music to charm to :) http://www.youtube.com/watch?v=wX1wPLjPhlc
<imbrandon> i should put out a charmers itunes playlist :)
<SpamapS> vrturbo: right, mysql should be granting those users before sending them back to nova
<vrturbo> yeah I saw a similar error in keystone
<vrturbo> just trying the hook debug you recommended
<vrturbo> juju resolved --retry keystone/0 shared-db
<vrturbo> Marked unit 'keystone/0' relation 'shared-db' as resolved
<vrturbo> 'resolved' command finished successfully
<vrturbo> the debug didn't show much
<vrturbo> 'juju debug-hooks keystone/0' didn't show much
<vrturbo> keystone and shared db look to be ok now but what did the command do ?
<vrturbo> juju resolved --retry keystone/0 shared-db ?
<hazmat> it specifies a relation on a unit to repair
<_mup_> juju/proposed-support r485 committed by kapil.thangavelu@canonical.com
<_mup_> proposed pocket support
<SpamapS> vrturbo: its not going to show much, its going to open up a tmux session and a new window for each hook ...
<SpamapS> vrturbo: at the bottom you should see something like shared-db-relation-changed .. which is telling you that its time to run hooks/shared-db-relation-changed (if it exists) .. this gives you a chance to run the hook with a debugger/logs/strace/etc.
<SpamapS> hazmat: debug-hooks is really poorly documented. We need screenshots actually.
<_mup_> juju/proposed-support r486 committed by kapil.thangavelu@canonical.com
<_mup_> use proposed option when configuring cloud-init
<hazmat> +1 screenshots would help
<hazmat> its confusing till its understood
<imbrandon> btw SpamapS i got to spend ALL day sat, chillin with TSA officers at SFO, not by choice, made it into Kansas City at 2am ( should have landed at , ohh , 330pm local )
<imbrandon> heh
<ajmitch> imbrandon: lucky you!
<imbrandon> yea
<SpamapS> imbrandon: they didn't like the cut of your jib?
<imbrandon> heh, never really explained why, just held me for a bit
<nathwill> imbrandon, you wear a beard? they're beard bigots
<imbrandon> heh, well a 5 day beard :)
<nathwill> see
<imbrandon> or a goatee , wouldn't really call that a beard anyhow, more just to cover the scar from 18 stitches on my chin :)
<imbrandon> mmm ok, more caffeine, cant sleep so i might as well roll with it and get some work done
<imbrandon> brb
<SpamapS> I have a chin beard
<SpamapS> never been detained
<imbrandon> yea that was the first time i have, flown countless times
<vrturbo> not sure why glance would say the relation to keystone is ok when the other way around it shows an error
<hazmat> i think i've gotten it twice.. once for carrying vitamins in my carryon (clearly bomb material).. and the other for wearing baggy pants, which they couldn't see through with their fancy machine
<imbrandon> nice
<vrturbo> keystone/0:  agent-state: started  relation-errors: identity-service: - glance - nova-cloud-controller
<hazmat> SpamapS, they never detain DDs ;-)
<imbrandon> :)
<vrturbo> how can I remove the service and reinstall it via juju ?
<vrturbo> I want to try blow away the mysql and keystone and reinstall them
<imbrandon> vrturbo: should be able to destroy-service <servicename>
<imbrandon> then terminate machine for those machines
<imbrandon> then just juju deploy again
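Put together, the redeploy cycle imbrandon describes looks roughly like this (service names and the machine number are illustrative; the machine number comes from `juju status` and differs per environment):

```shell
juju destroy-service mysql         # remove the broken service and its units
juju terminate-machine 3           # wipe the machine that hosted it
                                   # (on MAAS this releases/reinstalls the node)
juju deploy mysql                  # fresh deploy onto a new machine
juju add-relation keystone mysql   # re-establish the relation that errored
```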
<vrturbo> what does the terminate machine do ? reinstall ?
<imbrandon> totally wipes the machine
<imbrandon> if its ec2 it terminates them and spins a new one up
<imbrandon> not sure on other providers, but the equiv
<vrturbo> ok I'm using MAAS but I'll give it a try
<imbrandon> yea likely a reinstall then
<imbrandon> SpamapS: looks like the debian php pkg team may have worked out the php-apc segfault
<imbrandon> i'm trying it again now, but there was an update a day or so before uds that seems to have fixed it maybe
<imbrandon> that i havent tried yet
<imbrandon> nice, yea segfault averted now, time to try it out on some real world stories
<ajmitch> damn, juju filled up my disk again
<imbrandon> wow
<imbrandon> small disk or bug ?
<ajmitch> bug, machine-agent.log goes out of control
<imbrandon> ahh
<ajmitch> it's a small disk (SSD) as well, which doesn't help
<imbrandon> true, combo ko
<ajmitch> https://bugs.launchpad.net/juju/+bug/958312 is the bug afaik
<_mup_> Bug #958312: Change zk logging configuration <juju:Fix Released by hazmat> <juju (Ubuntu):Fix Released> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/958312 >
<hazmat> SpamapS, thanks
<hazmat> doh.. oh not released
<hazmat> ajmitch, destroy-environment on local providers
<hazmat> before rebooting
<hazmat> or after rebooting
<ajmitch> yeah I'll need to remember to do that
<hazmat> thats the workaround until the fix is merged, or use the ppa
<hazmat> er. its merged not released though
<hazmat> it should get SRU'd
<hazmat> SpamapS, pretty please
 * ajmitch hopes so :)
<imbrandon> hrm i've never written this kinda tests before , this may be a learning experience in itself
<imbrandon> SpamapS: oh btw after seeing the MBA in action i'm headed to the Apple store today when they open to get me one, gonna replace both that POS i dragged along and my MBP ( destined for craigslist )
<vrturbo> still a problem after I redeploy mysql and keystone
<vrturbo> 2012-05-16 02:07:22,975: unit.relation.lifecycle@WARNING: Error in shared-db-relation-changed hook: Error processing '/var/lib/juju/units/keystone-1/ch
<vrturbo> keystoneclient.exceptions.ClientException: An unexpected error prevented the server from fulfilling your request.
<vrturbo> (OperationalError) (1044, "Access denied for user 'keystone'@'' to database 'keystone'") None None (HTTP 500)
<vrturbo> so database issue with the de-relate script
<vrturbo> should I be worried about this " Hook does not exist, skipping /var/lib/juju/units/keystone-1/charm/hooks/config-changed"
<ihashacks> "Bootstrap aborted because file storage is not writable: The supplied storage credentials were not accepted by the server"
<ihashacks> ... when trying to use with MaaS: https://wiki.ubuntu.com/ServerTeam/MAAS/Juju
<imbrandon> SpamapS: if you want a quick and easy one to ring the prom bell for, https://bugs.launchpad.net/charms/+bug/1000088 is just a slight variation on the newrelic-php charm for the sysmond
<_mup_> Bug #1000088: charm needed: newrelic sysmond <new-charm> <Juju Charms Collection:Confirmed for imbrandon> < https://launchpad.net/bugs/1000088 >
<vrturbo> looks like my issue was a problem with the .local domain
<vrturbo> I put a .com domain onto the nodes via the MAAS server and the db-relation between keystone and mysql works
<imbrandon> nice
<vrturbo> alright all, my work day is over, home time, thanks for your help
<imbrandon> SpamapS: hrm i might have opened a can of worms with this php testing BUT i think one that may turn into something much larger and is much needed the more I'm thinking about how to best go about it
<imbrandon> since i'm thinking new testcases for each project will need to be written, not just use the phpunit tests that exist now ( if at all )
<imbrandon> really though , 5.4 or not this might be a worthy project , esp given this will be for an lts
<imbrandon> just not sure exactly how to best get true coverage yet ... i'm sure i'll be pickin your brain about it at some point
<koolhead11> marcoceppi, ping
<gmb> SpamapS, around?
<koolhead11> gmb, i think it be too early for him :)
<anwak> Hi
<Jarmo> Hi, does someone know easy way to change openstack so it wont try use AWS, trying to use local... Did install it with maas + juju... and can't find what to change and where..
<koolhead17> hi all
<Jarmo> Hi, does someone know easy way to configure openstack so it wont try use AWS, trying to use local... Did install it with maas + juju... and can't find what to change and where..
<koolhead17> Jarmo, did you try juju docs
<koolhead17> i think there is a section which describes about openstack based config file
<nathwill> is there any juju search functionality to search the charm store?
<nathwill> that would be a righteous feature...
<koolhead17> nathwill, i suppose yes am not 100% sure though
<nathwill> yeah, i'm not seeing anything in the man page.
<nathwill> was thinking like `apt-cache search x`output
<koolhead17> nathwill, am not sure about that but it be cool feature SpamapS ^^ :)
<ihashacks> I'm going crazy trying to get juju + maas working happily.
<ihashacks> Here's my environments.yaml http://paste.ubuntu.com/990792/
<ihashacks> keep getting this on juju bootstrap: The supplied storage credentials were not accepted by the server
<ihashacks> I made sure the MAAS key for oauth is good and even created a new one.
<ihashacks> Here is my maas.log http://paste.ubuntu.com/990810/
<ihashacks> ...and all I can find through web search are references to previous commits to maas and/or juju which I believe are merged now.
<ihashacks> s/${}//
<ihashacks> facepalm
<m_3> nathwill: charmstore search is spec'd out but not implemented yet... perhaps in 12.10, but I don't know the exact timeline
<nathwill> sweet
<m_3> ihashacks: man... haven't used maas yet, it's on the tbd :(
<nathwill> thanks for the info m_3
<m_3> ihashacks: digging through the maas code, it looks like the maas api either doesn't have the correct paths for the file store or doesn't have the mac_addrs recorded in the filesstore correctly
<m_3> ihashacks: maybe look for path config stuff... (?)
<m_3> ihashacks: also, there may be more folks familiar with maas in #ubuntu-server
<m_3> at least for now
<negronjl> 'morning all
<m_3> negronjl: morning
<negronjl> m_3: 'morning
<m_3> negronjl: hey... so are you cool with doing a charmschool sorta g+ hangout / ajaxterm in spanish sometime?
<negronjl> m_3: absolutely
<m_3> negronjl: awesome!
<Jarmo> koolhead17 can u give me link?
<Jarmo> Hi, does someone know easy way to configure openstack so it wont try use AWS, trying to use local... Did install it with maas + juju... and can't find what to change and where.. After installing it.... or before.... what ever is the easiest way
<Jarmo> koolhead17 can u give me link? or did you mean https://juju.ubuntu.com/docs/getting-started.html ? ...i can't find that ..yaml file mentioned there... I Think it should be there somewhere, because Dashboard prints it... but how i can change it..
<koolhead17> Jarmo, let me check it once
<Jarmo> my 1st .yaml file was something like thisone: environments:   maas:     type: maas     maas-server: 'http://maas.server.ip:80/MAAS'     maas-oauth: '${maas-api-key}'     admin-secret: 'nothing'     default-series: precise
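Jarmo's flattened paste, re-indented as it would appear in ~/.juju/environments.yaml (the `${maas-api-key}` placeholder is left exactly as he typed it):

```yaml
environments:
  maas:
    type: maas
    maas-server: 'http://maas.server.ip:80/MAAS'
    maas-oauth: '${maas-api-key}'
    admin-secret: 'nothing'
    default-series: precise
```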
<Jarmo> so i think there should be different kind of .yaml file after installiation, because dashboard can print it..
<koolhead17> Jarmo, i have no exp of maas so cant help you much
<koolhead17> :(
<Jarmo> that's ok, but i think it should be (almost) at same location than with openstack installiation.. maybe... :D
<Jarmo> there must be some logic with its location :D
<Jarmo> Note for myself :hmmm... maybe at nodes /.juju/environments.yaml at node computer....
<koolhead17> Jarmo, wondering if a maas environment gets created there
<koolhead17> and for that you have the yaml file used
<Jarmo> but I was wondering: I think I have my own api server up wich I could use, if only I would know how to modify "creds" wich openstack dashboard gives me..
<Jarmo> koolhead17: this is guide wich i did follow, then I did hear it uses AWS: https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<koolhead17> Jarmo, nopes that guide uses openstack AFAIK
<Jarmo> AFAIK?
<koolhead17> as far as i know :P
<Jarmo> on that guide this part gives me headache : uec-publish-tarball ./ubuntu-11.10-beta1-server-cloudimg-amd64.tar.gz images (actually cloud-publish-tarball) then it only says "not authorized"
<koolhead17> yes uec-publish is no more used
<Jarmo> I was asking about that problem yesterday here, and ppl told me it happens because i dont have AWS account
<koolhead17> well in that case i have no idea :(
<Jarmo> same here :/ But I'll keep looking that, wanted just share that problem, hoping some1 had figured how to deal with it :D
<koolhead17> jcastro, http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage do we not need to tell juju where my charms are lying?
<koolhead17> juju deploy local:mysql  <-- is it automatically going to fetch mysql charm from juju repo?
<Jarmo> yeah
<Jarmo> wich u did download with bzr command
<koolhead17> Jarmo, so it means i have to provide my charm source path
<Jarmo> https://help.ubuntu.com/community/UbuntuCloudInfrastructure    ---> Deploying Ubuntu Cloud Infrastructure with Juju explains it imo
<Jarmo> but you had to bzr those files for your computer to do deploy from local
<koolhead17> hmm. so i meant the askubuntu docs needs some modification
<Jarmo> ...I dont totally understand what you mean.... but I did deploy some files just with command juju deploy mysql... etc but some files didnt work with that command, so i had to bzr those files to my computer and then deploy --repository=. local:<charm name> them
<Jarmo> hope that answers for your question :)
<koolhead17> hmm
<koolhead17> :P
<SpamapS> koolhead17: local usage (using the local provider) does not mean local: charm usage
<SpamapS> koolhead17: the question you linked is focused on the local provider, not local charms
<koolhead17> SpamapS, ok so when am trying juju deploy local:mysql it should work without throwing any error
<koolhead17> but it asks for the source location for the charm
<Jarmo> did u bzr it?
<Jarmo> u need to tell where that file is
<Jarmo> and normally it is your repo folder on root
<SpamapS> koolhead17: no, I'm just saying the question you linked is not relevant.
<Jarmo> like i did have precise folder at root
<SpamapS> koolhead17: you need either --repository path/to/charms or export JUJU_REPOSITORY=path/to/charms
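In practice that means a repository directory laid out as `<series>/<charm>` (paths and the `lp:charms/mysql` branch below are illustrative; this assumes bzr and a juju environment are available):

```shell
# Make a local charm repository and grab a charm to modify.
mkdir -p ~/charms/precise
bzr branch lp:charms/mysql ~/charms/precise/mysql

# Either flag form works:
juju deploy --repository ~/charms local:mysql
# ...or set it once for the shell session:
export JUJU_REPOSITORY=~/charms
juju deploy local:mysql
```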
<koolhead17> SpamapS, thats exactly what i was mentioning
<koolhead17> but if i follow http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage am going to hit error saying path not found
<Jarmo> I dont think  that guide works, atleast that way didn't work for me
<Jarmo> or atleast something must be done before that
<jcastro> which part doesn't work
<jcastro> he just updated it
<koolhead17> jcastro, all deploy commands need repository part
<koolhead17> path
<jcastro> no they don't
<jcastro> "juju deploy mysql" grabs mysql from the charm store
<koolhead17> jcastro, juju deploy local:mysql
<koolhead17> i suppose asks for --repository
<Jarmo> :O hmmm
<jcastro> right
<jcastro> that's if you want to deploy the charm you've downloaded by hand
<jcastro> but you don't need to do that unless you're modifying the charm or something
<Jarmo> yeah try without local:
<koolhead17> jcastro, so here local does not mean LXC per se but custom charms
<koolhead17> hot it
<imbrandon> no
<koolhead17> *got it
<jcastro> no this means LXC
<jcastro> it grabs the mysql charm from the store and deploys it locally on your LXC container
<koolhead17> jcastro, juju deploy local:mysql thrown error asking for where is my repository
<jcastro> right
<koolhead17> am i supposed to install some additional pkg too
<jcastro> no
<jcastro> you do juju deploy --repository whatever local:mysql
<jcastro> ok let's back up
<jcastro> what are you trying to do
<koolhead17> jcastro, i simply want to run mysql charm in my lcx
<koolhead17> lxc
<koolhead17> :)
<jcastro> juju deploy mysql
<koolhead17> jcastro, cool. so when am doing local:charm name am trying to deploy a charm located in my local machine with some repository path
<koolhead17> am i correct?
<jcastro> correct
<koolhead17> cool. confusion solved :)
<koolhead17> so local is for the location on charm but not anything with LXC
<koolhead17> got it
<koolhead17> SpamapS, thanks i got it finally what you mentioned :)
<james_w> am I right that two services can establish multiple relations using different interfaces at the same time?
<imbrandon> yes
<james_w> cool
<james_w> thanks imbrandon
<imbrandon> np
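For instance (a hedged sketch; the charm, relation, and interface names here are made up), a single charm's metadata.yaml can declare several relations over different interfaces, and a pair of services may establish any or all of them simultaneously:

```yaml
# hypothetical provider charm metadata.yaml
name: mydb
provides:
  db:                      # relation name
    interface: mysql       # one interface...
  monitoring:
    interface: monitoring  # ...and a second, usable at the same time
```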
<pdtpatrick> Im getting the following error: http://pastie.org/3922991
<pdtpatrick> following this guide: https://wiki.ubuntu.com/ServerTeam/MAAS/Juju
<pdtpatrick> here's my environments settings
<pdtpatrick> http://pastie.org/3923026
#juju 2012-05-17
<ihashacks> pdtpatrick: do you have at least two nodes ready in your maas?
<imbrandon> jcastro: column80 ROCKS! dude, seriously. full stop
<bkerensa> jcastro: your images are oversized :P
<SpamapS> bkerensa: I bet you say that to all the community members...
<SpamapS> ;)
<bkerensa> heh
<bkerensa> SpamapS: I didnt get to meet you at UDS.... You're like a ninja
<SpamapS> ceph charm is basically totally rewritten and pointing at upstream "daily crack" repos now
<SpamapS> bkerensa: I descended upon Oakland, and then poof! smoke bomb..
<imbrandon> SpamapS: I totally intended to buy a MBA today at the apple store, but digressed and got an iPad3
<imbrandon> :)
<imbrandon> i am however looking at ways to boot an android kernel on it already ( its doable but a PITA and will likely wait until some of my more important projects are done )
<imbrandon> heh
<SpamapS> imbrandon: haha
<SpamapS> imbrandon: ipad3 is just a MBA w/o a keyboard and less CPU/RAM ;)
<imbrandon> yea
<imbrandon> well mostly the ram
<imbrandon> cpu its pretty comparable
<SpamapS> oh, and a crap OS ;)
<imbrandon> a5x ( x 2 ) isnt a joke :)
<SpamapS> No joke, but its no i7
<imbrandon> yea , the os isnt nice, well it IS nice, but not NICE
<SpamapS> try running vms on that a5x :)
<imbrandon> whoa you have a i7 in the mba ?
<SpamapS> yeah :)
<imbrandon> the ones i looked at today were i3 dual core
<imbrandon> :(
<imbrandon> hrm
<SpamapS> Wait no I have i5
<imbrandon> that was actually a big factor , i5 quad ?
<SpamapS> I forget now
<SpamapS> been looking at getting an i7 desktop
<imbrandon> cuz i "intend" to use this as my mobile machine 100% , with bluetooth keyboard
<imbrandon> and sell the MBP and give the old crappy mb to someone in need
<SpamapS> yeah whatever floats your mobile boat :)
<imbrandon> hrm, i may head back tomorrow for another look, they are good about returns
<imbrandon> well i want something ultra portable but 99% of the time is in ssh, even ios can do that
<imbrandon> and the tablet formfactor is so nice
<imbrandon> honestly if the asus transformers weren't so crippled ( even more than apple ) that would be perfect imho
<imbrandon> or a ubuntu/android phone with laptop "dock"
<imbrandon> but i think i'm ahead of the market a little bit on that one
<imbrandon> anyhow, onto some "real" business
<imbrandon> dude i've been looking into ways to get some "all the world php" coverage
<imbrandon> its not looking pretty at all without a whole project unto its self
<imbrandon> unless you have some ideas that i'm missing, i may have once again put my foot into mouth BUT the more i think about it the more i like the idea be it for 5.4 or just cuz
<imbrandon> basically i'm thinking it needs to be an extension of the php packaging group in debian, because at minimum packaging-level knowledge of each project is needed to get even a smoke test , i *think*
<imbrandon> not that it wouldn't be a good thing, but a large undertaking of a good thing
<SpamapS> imbrandon: indeed, you will have a hard time figuring out the url of the exposed app
<SpamapS> bbl
<SpamapS> anyway, I'm out
<imbrandon> oh and i did retest the debian pkg teams ppa, apc is now fixed
<imbrandon> l8tr
<imbrandon> SpamapS: ( i know you're out, possibly rhetorical question for later as I know the policy but just wondering your thoughts ) what about a "semi official but totally unofficial" PPA for 12.04 that not only is the 5.4 upgrade but a parallel install of 5.4 in /usr/local/ ( yes a .deb no no ) or /opt ( another no no ) , apache/nginx/lighttpd all support multi php version installations
<imbrandon> something along the lines of multi python's ( but not technically the same obviously but policy wise )
<imbrandon> and even trying to put/push that into 12.10 since for the full life of 12.10 5.3 nd 5.4 will be supported
 * imbrandon just needs to get back on the debian and ubuntu php mailing lists
<bkerensa> >.<
<imbrandon> sup bkerensa
<bkerensa> nada just trying to find some packaging work ;P
<imbrandon> bkerensa: juju charm or package package ?
<bkerensa> imbrandon: debian package ;)
 * bkerensa is perhaps submitting a patch for ia32-libs-multiarch atm
<bkerensa> :D
<imbrandon> bkerensa: wanna try something that i'm not even 100% sure is possible but if it is should be simple-ish , brew ( homebrew ) for Linux :)
<imbrandon> it works , havent tested it all, but its just a little bit of ruby, mostly if anything needs patches for OS X assumed paths like /Library and /private/etc VS /etc
<imbrandon> :)
<imbrandon> i have it on my todo list but its WAY WAY down at the bottom
<imbrandon> i'm thinking that I can do automated testing for the OS X "port" in Ubuntu once its available
<imbrandon> bkerensa: if you are feeling froggy here is the installer script for OS X to give you an idea, i've never done any Ruby packaging myself so no idea on the policies https://github.com/mxcl/homebrew/blob/master/Library/Contributions/install_homebrew.rb
<bkerensa> ;p
<imbrandon> but at one point there was a linux port attempt and all patches are now in upstream code
<imbrandon> so if any patches at all are needed its minimal
<bkerensa> imbrandon: https://bugs.launchpad.net/ubuntu/+source/ia32-libs/+bug/1000541
<bkerensa> >.<
<_mup_> Bug #1000541: ia32-libs-multiarch depends on gstreamer0.10-fluendo-mp3, causing problems when installing packages from partner <ia32-libs (Ubuntu):Triaged by vorlon> <ia32-libs (Ubuntu Precise):Triaged by vorlon> <ia32-libs (Ubuntu Quantal):Triaged by vorlon> < https://launchpad.net/bugs/1000541 >
<imbrandon> bkerensa: did you test that it works as intended without that dep ? if so i'll review/sponsor it here in a hour or two
<imbrandon> well into Q i will, you'll need a bit more red tape after to get it into -updates
<imbrandon> specifically [Regression potential]
<imbrandon> Dropping the dependency might cause wine to FTBFS, contrary to Scott's assurances
<bkerensa> imbrandon: yeah :) slangasek is in my loco
<bkerensa> ;p
<bkerensa> but the issue is building it
<bkerensa> so I can test it
<imbrandon> bkerensa: kk i'll look in a bit and lets take the rest to -motu chan as its more relevant there
<bkerensa> kk
<hazmat> james_w, a client can have multiple rels of the same interface to a server so long as they're named differently
<james_w> hazmat, and presumably multiple rels of different interfaces
<hazmat> yup
<james_w> cool, thanks
 * SpamapS had an aha! moment on the way back home and is now almost done making a ceph charm that does not suck
<SpamapS> btw we really need automatic leader election and a "service bucket" for things that need to be atomically consistent between all nodes of a peer relation
<hazmat> SpamapS, woot re ceph charm
<hazmat> SpamapS, doesn't charm helper have a leader election script?
<SpamapS> hazmat: it will not work
<hazmat> and what do you mean by the latter?
<SpamapS> we confirmed it
<SpamapS> relation-list can sometimes *not* show the lowest member of a peer relationship
<hazmat> SpamapS, huh?
<SpamapS> I think it happens if the lowest member is lagging and hasn't started yet
<hazmat> oh.. out of order joins
<hazmat> SpamapS, and on the service bucket?
<SpamapS> right, so that was my only way of semi-atomically knowing enough to pick one place to run the bootstrap bits on
<SpamapS> hazmat: ceph needs to pick a UUID for the filesystem...
<SpamapS> hazmat: right now I have to just generate it on one box..
<SpamapS> hazmat: and then the other boxes have to know which box to expect it from.. based on a config item
<SpamapS> hazmat: so a bucket that I can say something like config-set --if-empty fsid=`uuid` ;uuid=`config-get uuid`
<SpamapS> hazmat: basically the same problem.. there are things that are just global to the service that are not human generated
<hazmat> SpamapS, so cluster wide atomic cas op
<hazmat> oops redundant
<SpamapS> hazmat: CAS would be cool too.. but is slightly different as it implies a more stringent mode of operation. This is just "ADD"
<SpamapS> hazmat: either one would solve the problem CEPH has. Actually, ceph is doing this upstream in their network protocol because their crowbar+chef solution needs it too.
<SpamapS> hazmat: but I can see other complex storage solutions being hard to get right w/ the current peer relation scheme.
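SpamapS's `config-set --if-empty` doesn't exist in juju; as a purely local sketch of the "ADD" semantics he's proposing, with a plain file standing in for the shared service bucket (the `set_if_empty`/`get` helper names are made up for illustration):

```shell
#!/bin/sh
# Local sketch of the proposed "ADD if empty" service-bucket semantics.
# A plain file stands in for the shared store; nothing here is a real
# juju command, and set_if_empty/get are hypothetical helper names.
BUCKET=./service-bucket
rm -f "$BUCKET"                       # start the demo fresh

set_if_empty() {    # set_if_empty KEY VALUE -- first writer of KEY wins
    grep -q "^$1=" "$BUCKET" 2>/dev/null || echo "$1=$2" >> "$BUCKET"
}

get() {             # get KEY -- print the stored value
    sed -n "s/^$1=//p" "$BUCKET"
}

# Unit A generates the ceph fsid; unit B's later attempt is a no-op.
set_if_empty fsid "$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)"
set_if_empty fsid "second-writer-loses"
get fsid            # prints the uuid chosen by the first writer
```

The real feature would need this to be atomic across the ZK-backed relation state, which is exactly what the current hook tools can't provide.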
<SpamapS> hazmat: having relation-* accept -r btw, has been really helpful
<hazmat> SpamapS, no doubt
 * SpamapS realizes redshift wasn't working.. turns it on, and realizes he will probably fall asleep within 20 minutes now
<SpamapS> hazmat: I'm getting around it now by having a config option to specify which unit ID to run the initialization on
<SpamapS> hazmat: and the README just states.. wait until you have added all your initial ceph monitors.. and then run 'juju set ceph-service initializing-unit=ceph-service/0'
<SpamapS> hazmat: its a one time thing, once it has run, the service can manage its own quorums and memberships
<SpamapS> hazmat: there seems to be a bug with upgrade-charm too..
<SpamapS> services:
<SpamapS>   ceph-mon:
<SpamapS>     charm: local:precise/ceph-mon-92
<SpamapS> root@ip-10-252-23-71:/var/lib/juju/units/ceph-mon-16/charm# cat revision
<SpamapS> 91
<hazmat> SpamapS, hmm.. i can have a look at that in the morrow.. i'm past my bedtime
<hazmat> SpamapS, if you could though pls grab the unit-agent log for it
<SpamapS> hazmat: I did.. I think its working normally
<SpamapS> hazmat: the version is set to the one its supposed to be
<SpamapS> hazmat: then there was an error
<SpamapS> hazmat: 'nite
<imbrandon> ok, so from the install hook ( or specifically a bash script called by the install hook ) can I config-set a variable , and assuming so is that the best way to inform a user ( juju user ) of a randomly generated admin pass or is there another way that is common/preferred ?
<imbrandon> m_3: i'm gonna email the list with a reply but any thoughts on adding some bits to the official juju Makefile to allow a "make darwin" or similar that would generate a precompiled traditional .pkg for OS X, then we dont have the dependency on Brew and can control easier the way Zookeeper is compiled ( default with brew has no zkpython ) and filesystem permissions on install , the two main problems I've noticed with the current way. Also I'm thinki
<SpamapS> imbrandon: no, but I was just asking for a config-set last night. ;)
<SpamapS> imbrandon: right now the way to communicate passwords back to the admin is juju-log
<imbrandon> SpamapS: :(
<SpamapS> imbrandon: I know. :-P
<imbrandon> what about when another bit may need that same pass ? can i fake it by calling out to "juju config-set $service pass=$pass" ...
<imbrandon> hrm thats nasty tho
<SpamapS> imbrandon: perhaps we should develop a 'password safe' charm that stores all the generated passwords in a secure site that the admin can get access to. ;)
<imbrandon> hehe , not a bad idea actually
<SpamapS> imbrandon: for a large scale site it works.
<imbrandon> along with the ssh key charm too
<imbrandon> still gonna do that $someday
<SpamapS> imbrandon: calling juju set would require enough access creds to find the ZK node and talk to it... not something you'll have unfortunately.
<imbrandon> SpamapS: ok, hrm is there a way for the charm runner to see the log if they were not running debug-log at the time ?
<SpamapS> imbrandon: juju ssh unit/# sudo cat /var/lib/juju/units/unit-#/charm.log (but that doesn't work on local provider.. doh)
<imbrandon> SpamapS: even with access to the socket and such like a cron job ? still even so its too crufty that way
<imbrandon> hrm, what about if i just stored it in a file that was not accessible via the webroot, then added into the readme that if they missed it in the log they need to run "blah ..."
<imbrandon> and blah would be a one liner to ssh to the node and cat the file
<imbrandon> assuming they have ssh key access they should have admin access
<SpamapS> imbrandon: yeah a few charms put the admin creds in /var/lib/juju/something
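A minimal sketch of that pattern: generate an admin password and stash it where the admin can `cat` it later over ssh. The filename and 16-character length are illustrative, and a local path is used so the sketch runs anywhere; a real charm would write under /var/lib/juju/ and announce the location with juju-log.

```shell
#!/bin/sh
# Hedged sketch of the "creds file under /var/lib/juju" pattern the
# channel describes; path/name are hypothetical.
PASS_FILE=./drupal-admin-password     # real charm: somewhere in /var/lib/juju/

# 16 random alphanumeric characters from /dev/urandom.
ADMIN_PASS=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)

umask 077                             # keep the file private to its owner
printf '%s\n' "$ADMIN_PASS" > "$PASS_FILE"

# Inside a hook you would also announce it, e.g.:
#   juju-log "admin password stored in $PASS_FILE"
```

The admin then retrieves it with something like `juju ssh unit/0 sudo cat <file>`, per SpamapS's earlier suggestion.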
<SpamapS> imbrandon: we need config-set.. period
<imbrandon> kk, that seems like the sensible thing to do until we grow a config-set
<imbrandon> apart from the password safe, as that really is a good idea but maybe overkill for the lone sysadmin drupal guy
<imbrandon> or gal
<imbrandon> maybe support for both at some point :) heh
<SpamapS> well we could build it into charm helper...
<imbrandon> ok, well with that bit of information "super drupal" will be ready for peer review today and hopefully promstrangulation
<SpamapS> sweeeet
<imbrandon> still will have lots of iteration as the nginx charm evolves but it will be in a usable form, just not as broken down into subordinates as i'd like to see it eventually
<imbrandon> i figure i've put it off long enough though and will just need to update it as other subordinates make sense to replace bits of it
<imbrandon> btw i never have asked but prom'ing my own charms is taboo correct ?
<imbrandon> i just assumed so
<SpamapS> new charms need to be reviewed always
<SpamapS> not by the author
<SpamapS> actually I don't think thats actually in our "policy" doc
<imbrandon> right, i'd feel better that way anyhow, but just makin sure i was clear
<SpamapS> we need to get that in an rst and in VCS...
<imbrandon> but updates to like newrelic-php i can commit directly ? or how should that work ?
<SpamapS> I'd say thats at your discretion
<imbrandon> also plain newrelic is ready for review too if you didn't notice
<SpamapS> I do request reviews for any change that isn't an obvious improvement.
<imbrandon> kk, well if its minor stuff like i have ready to push ( readme updates and other minor misc stuff ) i'll push direct, larger changesets i'll likely seek review just to cya
<imbrandon> SpamapS: yea in the rst etc we can word it similar to bin-new, as in any charm thats 100% new needs non-packager review, and past that if you're the maintainer listed then its your discretion as you should be vetted not to do something dumb by that point
<imbrandon> or something along those lines
 * imbrandon is semi afk while he implements the passwd bits and does a few last smoke tests
<imbrandon> SpamapS: btw if you didn't see jcastro's blog post about column80.com , zomg go look, soooo nice
<imbrandon> hrm, install hook calling out to the python REST api to run config-set , hahahah ok ok time to just do it the dirty way and moan for the go port to hurry so we can add config-set
<imbrandon> SpamapS: btw does run-as-hook not drop you in a tmux shell like debug-hook ? i could have sworn that marcoceppi showed me that it did at UDS but i could have my commands mixed up a bit as I didn't take notes at the time
<jimbaker> imbrandon, run-as-hook does not drop you into a tmux shell. but you can use it to trigger a relational hook exec, and then that could be captured by a debug-hook
<imbrandon> ahhh, did not think of that, i kept add-unit'ing
<imbrandon> heh
<jimbaker> so jitsu run-as-hook mysql/0 relation-set -r OUTPUT-FROM-RELATION-IDS refresh=NEW-VALUE
<imbrandon> the output being possibly wordpress/0 wordpress/1 mysql/0
<imbrandon> ?
<jimbaker> imbrandon, no the output of relation-ids. that would be something like db:0
<imbrandon> ohhhh oh oh ok, yea brain fart
<jimbaker> imbrandon, np
<imbrandon> hrm thats actually pretty powerful
<imbrandon> err very
<jimbaker> imbrandon, yes. perhaps too powerful ;)
<imbrandon> heh
<jimbaker> rev up the chainsaw
<imbrandon> lol , right
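A concrete rendering of jimbaker's sketch above; the service, relation name, and setting are illustrative, and it assumes relation-ids is also usable under run-as-hook:

```
# find the relation id first, then poke the relation from outside a hook
jitsu run-as-hook mysql/0 relation-ids db        # prints something like: db:0
jitsu run-as-hook mysql/0 relation-set -r db:0 refresh=new-value
```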
<imbrandon> ok last dumb question while we're on that train of thought, so i have debug-hook open , i trigger a config-change or relation-change , the tmux opens a window but doesn't actually exec anything right ?
<imbrandon> as in the hook never fires, it just gets ready to
<imbrandon> correct ?
<james_w> SpamapS, hey, https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-charm-unit-tests mentions 'jitsu run-tests' but does that command already exist?
<jimbaker> imbrandon, just flipped back to this screen - what you have in that tmux window is a shell environment where everything is setup so you can type commands as if it were in a hook context
<jimbaker> (which it actually is, just to be clear!)
<imbrandon> jimbaker: right, i got that, just not clear on if the hook itself fires or if its stopped
<jimbaker> imbrandon, the hook is running. but it's the tmux session itself
<imbrandon> pre-execution
<jimbaker> not stopped, not pre-executed. this point in time
<jimbaker> until you exit the shell, the hook is still running. which means no other hooks will run as well, since the hook scheduler serializes the execution of hooks
<imbrandon> ok so the relation hook fires and the console output is dumped into my tmux window and then at some point i have a prompt that i can do additional tasks in the same context ?
<jimbaker> imbrandon, in debug-hook mode, you are running in a hook context. you can take any action that the normal version of the hook would run. try running some relation hook commands. interact with the system in some way
<imbrandon> ok, cool, i thought when the window was presented it had prepped the environment to run whatever hook it was about to run but stopped the execution of the hook itself prior to giving the prompt to me
<imbrandon> yea, pretty sure i have all that down. just wasn't clear on if the hook itself ever actually fired or not
<jimbaker> imbrandon, you can also try running the normal hook script. this can be useful if it's buggy. try running it. maybe insert some extra debugging. whatever. then repeat again if that's feasible to do, until you're finished debugging
<imbrandon> ( not additional hooks , just the first )
<imbrandon> sure, exactly
<james_w> imbrandon, you're right about the effect
<imbrandon> ok cool, that makes things much clearer
<james_w> imbrandon, the normal hook isn't fired
<imbrandon> oh ok ...
<james_w> the subtlety is that it's because the tmux window is itself being run as the hook
<james_w> rather than what normally would be
<imbrandon> ahhh ok thats what i thought but was not clear on
<james_w> so if you want to run the normal hook in tmux you can
<SpamapS> james_w: no that command is planned
<james_w> or you can fire up nethack
<jimbaker> imbrandon, it's subtle but very powerful once you get the hang of it
<james_w> juju doesn't care
<SpamapS> james_w: its the final piece of the puzzle for getting the per-charm tests going
<james_w> SpamapS, ok, thanks
<SpamapS> james_w: re jitsu run-tests
<james_w> SpamapS, the nagios charm is getting mighty complex, so I'd like to write some tests
<imbrandon> ahhh perfect , ok thanks james_w and jimbaker , yea i had most of it under wraps just not that last bit, but perfect
<james_w> (in my local hacking of it)
<jimbaker> imbrandon, cool. and remember not to cut off your foot with jitsu run-as-hook ;)
<imbrandon> hahaha right
<imbrandon> :)
<SpamapS> james_w: you can write tests now, and just put 'local:' in front of the charm name until I get around to writing a wrapper
<james_w> SpamapS, ah, true
<james_w> good idea
 * SpamapS really wishes JUJU_DEFAULT_NS hadn't been turned into "JUJU_EVERY_WAY_WE_MIGHT_WANT_TO_OVERRIDE_CS"
<imbrandon> heh
<SpamapS> james_w: export JUJU_REPOSITORY to get rid of the need for --repository
<james_w> yep
<imbrandon> SpamapS: does repository: work in env.y too ( i think i noticed that in the docs iirc )
<imbrandon> # repositories:
<imbrandon> #    - http://charms.ubuntu.com/collection/ubuntu
<imbrandon> #    - http://charms.ubuntu.com/collection/openstack
<imbrandon> #    - http://charms.ubuntu.com/people/miked
<imbrandon> #    - /var/lib/charms
<imbrandon> crap, bad paste, was intended for pastebin , sorry
<imbrandon> anyhow ^^ like so ?
<SpamapS> imbrandon: no that was never implemented
<imbrandon> darn
<imbrandon> phone, afk brb
<james_w> also, is it ok for a charm to execute commands sent to it from a relation?
<james_w> normally that would be a really bad thing to do, but maybe relations are trusted in juju?
<MarkDude> hazmat, ping
<m_3> imbrandon: I do like having packaging targets be part of the source, but I'm not married to the idea
<SpamapS> james_w: I think its a bad idea personally.
<SpamapS> james_w: put the commands in a package, and dictate that "its time to run those commands we both trust"
<SpamapS> james_w: PPA's are useful for that. :)
<james_w> SpamapS, I think that may be too big of a hurdle in this case
<james_w> arguments to the command would be ok, but then you have to sanitise them
<james_w> it's for saying run "check_http" with these arguments...
<SpamapS> james_w: thats too specific
<SpamapS> james_w: abstract it. "check_type=http arg=url;http://foo/bar"
<SpamapS> james_w: nagios is *not* the only monitoring solution
<SpamapS> and in fact, will likely not be the most popular once we make it easy to choose :)
<james_w> I don't want to try and write an abstract monitoring system
<james_w> I'd love one
<SpamapS> Why not? you can use Nagios as your guide
<james_w> but I'd like to have *one* monitoring system working first
<SpamapS> just don't be specific with cmdline args
<james_w> because that's ok for check_type=http
<james_w> I really care about check_type=custom
<SpamapS> nooooo
<SpamapS> its not custom
<SpamapS> its mysupercoolthing
<james_w> heh
<SpamapS> then add support for mysupercoolthing to nagios, or make a subordinate for that plugin
<SpamapS> otherwise, nagios will just refuse to monitor it
<james_w> right
<SpamapS> You can add a passive check for it to nagios, and just leave it as UNKNOWN forever
<james_w> but then how do you tell nagios to actually monitor it?
<SpamapS> that way its obvious to an admin that nagios can't check mysupercoolthing
<james_w> monitor a service using the plugin
<james_w> check_type=mysupercoolthing arg=what?
<SpamapS> like I said, you either make a subordinate for your super plugin, or you add it directly to the charm
<SpamapS> and then when Reconnoiter becomes popular, it can't do mysupercoolthing, but thats ok
<james_w> because it's then tied to the nagios plugin interface, and so we no longer have a generic monitoring solution
<SpamapS> not really. You probably have other monitors you want run
<james_w> so you are proposing a "monitoring-check" interface that expects "check_type=<type> arg=<args>"
<SpamapS> james_w: something like that yes
<james_w> and if the monitoring system in question doesn't know how to handle the type then it doesn't monitor it?
<SpamapS> james_w: *or* we go the other way and have an interface per monitor
<james_w> but a subordinate can't add interfaces to the primary right?
<SpamapS> james_w: right!
<SpamapS> which is one of the reasons I don't like that plan ;)
<james_w> heh
<SpamapS> james_w: the "doesn't monitor it" is just an idea that I have not thought through. It may be better that some kind of error is generated
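The "best effort, else don't monitor" idea could look something like this on the monitoring charm's side; the declaration format (`check_type=... arg=...`) comes from the discussion above, but the known-type list and the output lines are assumptions, not an implemented interface:

```shell
#!/bin/sh
# Parse an abstract check declaration like "check_type=http arg=url;http://foo/bar"
# and either emit a monitor directive or skip types this monitor can't handle.
parse_check() {
  type="" arg=""
  for kv in $1; do                 # split on whitespace into key=value pairs
    case $kv in
      check_type=*) type=${kv#check_type=} ;;
      arg=*)        arg=${kv#arg=} ;;
    esac
  done
  case $type in
    http|tcp|ssh) echo "monitor $type $arg" ;;   # types we know how to check
    *)            echo "skip $type" ;;           # unknown: best effort, skip
  esac
}

parse_check 'check_type=http arg=url;http://foo/bar'
parse_check 'check_type=mysupercoolthing arg=plugin'
```

Whether "skip" should instead surface an error (or a permanent UNKNOWN passive check, as suggested above) is exactly the open question in the conversation.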
<james_w> so how should arg=whatever be interpreted?
<jcastro> SpamapS: hey did we talk about subordinates opening ports at UDS?
<SpamapS> jcastro: not really. Its a known bug that is being worked on
<jcastro> ok so nothing controversial then, whew. :)
<imbrandon> yea i kinda came to that conclusion with the drupal charm too, i was trying to do too much all in one go, and decided to scale back and get "one monitoring system" working first then figured i could iterate as needed to get it to the sweet spot over time
<imbrandon> jcastro: mornin ;)
<SpamapS> james_w: though its perfectly fine, I think, for a subordinate to be related to in this case..
<james_w> SpamapS, ah, I guess at least for nagios the subordinate could set up the service in nagios' config directly, rather than through an interface
<SpamapS> juju deploy mysuperthing ; juju deploy mysuperthing-nagios-plugin ; juju add-relation my-nagios-service mysuperthing-nagios-plugin ; juju add-relation mysuperthing mysuperthing-nagios-plugin
<imbrandon> anyone else notice the stumbleupon logo is very much like the juju one, even changed its color to orange too
<SpamapS> james_w: perhaps an interface for non-generic plugins is a good thing
<SpamapS> imbrandon: yeah I was shocked when I saw the orange color change. I think that was an accidental confluence ;)
<imbrandon> http://su.edgesuite.net/KwIEec5utYGrKmzXYLgFzg
<imbrandon> yea , used to be blue and green i think
<james_w> SpamapS, that's good I think, assuming that all monitoring services will allow for a subordinate to define a new service to monitor, which they probably will
<james_w> because the app charm only needs to understand the interface to monitor, and the monitoring charm doesn't need to know directly about the new check type
<james_w> I just need to think through nrpe in this arrangement
<SpamapS> james_w: yeah for NRPE you can just put stuff in a place where NRPE will find it.
<SpamapS> much simpler to solve w/ NRPE
<james_w> the tricky bit is getting nagios to monitor the service via nrpe
<james_w> to actually do the check
<james_w> defining the checks is easy I think
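For the "put stuff where NRPE will find it" part, a subordinate could just drop a command definition into NRPE's include directory; the path, plugin name, and thresholds here are all illustrative:

```
# /etc/nagios/nrpe.d/check_mysupercoolthing.cfg
command[check_mysupercoolthing]=/usr/lib/nagios/plugins/check_mysupercoolthing -w 80 -c 90
```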
<jcastro> SpamapS: got a sec to talk juju debian?
<SpamapS> check_type=nrpe arg=...
<SpamapS> james_w: I think NRPE would be a subordinate actually.. its too specific to nagios
<SpamapS> jcastro: sure
<james_w> SpamapS, yeah, agreed
<james_w> in fact it already is :-)
<SpamapS> jcastro: its basically just one of my bazillion TODO's for the next week or two
<SpamapS> james_w: yeah, so you just need to feed back the primary unit id and address info to nagios via the nrpe interface
<james_w> to define the host?
<SpamapS> james_w: yeah, do it some consistent way with the "monitoring" interface so you don't end up with two versions
<james_w> I need to think it through
<jcastro> SpamapS: oh also, we appear to have work items unassigned on this spec: https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-charm-unit-tests
<jcastro> m_3: ^^
<jcastro> just pointing it out, I dunno if that was on purpose or if you guys just put those there opportunistically
<SpamapS> jcastro: huh?
<m_3> jcastro: nope, just haven't been processed yet
<SpamapS> of course we have work items on that spec. :)
<SpamapS> james_w: btw, once you get Nagios working.. you can make add-unit on it work by farming jobs out via the nagios mod_gearman
<james_w> ooh
<SpamapS> james_w: scalable monitoring FTW
<james_w> yeah
<james_w> SpamapS, gearman has its own transport right, it's not built on rabbitmq?
<SpamapS> james_w: correct
<SpamapS> its *far* simpler than AMQP
<james_w> yeah
<imbrandon> ugh, unplanned RL detour , back in an hour or so ( hopefully )
<hazmat> MarkDude, ping
<jcastro> sorry for the blueprint spam fellas, just rearranging bits
 * MarkDude has been looking at a few of the Charms. As near as I can tell, only some of them would be relevant for Fedora
<MarkDude> Differences in system files and such.
<MarkDude> 1. So, does the idea of folks curating the stuff that will be useful sound like a decent long term solution?
<MarkDude> 2. do you have any suggestions for a few very useful charms to include in my proposal
<MarkDude> Keeping in mind I am not a dev, and to some extent am known as *Fedora's Jono* :D
<SpamapS> MarkDude: oh man, how do you wash that off? ;)
 * MarkDude just watched his recent video, and needs to steal the slide that said * I am not a programmer*
<MarkDude> lol SpamapS I just figure beer as a cologne
<SpamapS> MarkDude: The charms have remained simple because they're targeted at Ubuntu. I'm hesitant to introduce big case statements or lots of if statements to enable another platform that is for most purposes "equal" to Ubuntu for the intended task.
<MarkDude> So far, the most interest I have gotten is for helping with rollouts
<SpamapS> MarkDude: However, there may be things that one can do much simpler on Fedora.. (name one?) and for those, what might make sense is to have a store of fedora-only charms..
<hazmat> MarkDude, realistically charms won't port
<SpamapS> MarkDude: the beauty there would be you could deploy the fedora-only thing, and relate it to the ubuntu-only thing, and they'd just hug it out and get the job done. :)
<hazmat> MarkDude, we need a separate collection of charms for fedora
<MarkDude> Ok, that was my read
<MarkDude> A few seemed independent
<hazmat> We don't provide the abstractions for portability, and encourage free form tool usage.. to the extent that individual charms use portable tools they could be considered portable
<hazmat> but thats a case by case analysis
<hazmat> easier to just setup a separate archive for the two and allow ad-hoc cross-pollination and portability convergence
<MarkDude> Ok, thats why I was thinking there are most likely a few of them that would *just work*
<hazmat> i think the initial focus should just be on getting the client running well on fedora
<hazmat> which is mostly just packaging afaics
 * MarkDude can get that part going
<MarkDude> and has a packager or two
<SpamapS> hazmat: I bet we could just use the current "series" thing we have now... cs:beefymiracle/foo sounds too awesome to ignore. :)
<MarkDude> lol
 * MarkDude thinks his *gitbeefymiracle* is the best association I have over there
 * MarkDude appreciates the input (and confirmation)
<SpamapS> MarkDude: IIRC, fedora has recently grown cloud-init capability, has it not?
<MarkDude> plans on now seeking out a few charms to include when I bring the proposal
<MarkDude> I believe so. The new project lead Robyn was chosen due to her work with the CLOUD
<MarkDude> She even had a cloud hat for last FUDcon
<MarkDude> Hmmmm, so this all means I will need to volunteer to be on approval board (or some crap like that) to help curate charms
<MarkDude> prolly sumthin' informal (hopefully)
<MarkDude> Thx   SpamapS hazmat :)
<MarkDude> Keep an eye out for charms that relate to cloud, or rollouts if you could :) I plan on going forth next week :)
<imbrandon> MarkDude !!
<MarkDude> Hello imbrandon
<imbrandon> MarkDude, hey man get me in touch with some .spec guys and i'll at minimum help with the initial "getting it running"
<MarkDude> Sure, sounds good
<imbrandon> that part should be fairly simple, its the all new charm collection thats the bear
 * MarkDude just wanted to immerse himself in trying to understand this whole juju thing
<imbrandon> :)
<MarkDude> Right now tho, the guys that a willing to help, are at FUDcon in Asia
<imbrandon> well pretend you're an ubuntu full time user and anytime you want to type yum type apt-get instead , that should get you 80% there :)
 * MarkDude would also have to pretend he's a full time Fedora user also ;D
<imbrandon> even better yet install apt for rpm as a dependency of juju ( artificial ) and then change only package names in charms , hehe i dount that would work realistically though
<imbrandon> doubt*
<MarkDude> I know an Ubuntu Member/Fedora Ambassador that uses aot for package management
<imbrandon> aot ?
<MarkDude> apt
<imbrandon> ahhh, you type as well as i do i see
<imbrandon> so yea, hazmat and SpamapS are def the fellas to snag when things get a bit more in depth, but like we talked about at uds, just getting the initial rpm built and the cli "client" app running should be fairly straight forward and i could knock it out in an afternoon if i knew a bit more about .spec files
<imbrandon> MarkDude: ^^ but even without knowing much about them it should still be semi easy for that much at least
<imbrandon> in fact, i'm gonna snag a fedora iso to load up in vbox after i push this drupal stuff today
<imbrandon> just to see if i can make it work without actually making the rpm
<imbrandon> i'm sure the deps are all available
<imbrandon> SpamapS: wow, just got pointed at the RPM howto, and its ummmm tiny, as in like 10 printed pages, I was expecting the "Debian New Maintainers Guide" heh
 * imbrandon wipes forehead in welcome relief
<imbrandon> MarkDude: yea tho seriously this will get you a little kick start but i have no intentions to maintain it long term or anything so eventually you'll want to round up a fedora packager
<imbrandon> and there will be many more harder things, but this will enable fedora users to deploy and control juju environments on aws ( same as the mac client )
<izdubar> Yep, mostly it will be sorting thru to get relevance
<hazmat> imbrandon, http://docs.python.org/distutils/builtdist.html
<imbrandon> zomg
<imbrandon> hazmat: i love you
<imbrandon> bdist_rpm
<hazmat> i get that alot ;-)
<imbrandon> maybe python isn't so bad ( did i say that out loud ? ) then again we're not using it on the web just yet so i haven't fully inserted foot into mouth .... yet
<imbrandon> man if bdist_bundle existed then we could use it for OS X too, hrm ... man sooo many things to look into $someday, i wish there was 3 of me
<imbrandon> there is py2app but thats not really for cli apps
<hazmat> imbrandon, there's a few of the larger python apps for osx that ship their own dmg installers ... and have src avail for the process
<imbrandon> hazmat: yea i was actually contemplating that just today
<hazmat> given its cli/client nature and lack of much by way of deps.. not sure its a big deal
<imbrandon> and had mentioned it to m_3, as it would eliminate a lot of the headache ( even just the little headache ) i'm having with brew now
<hazmat> the only c binary dep we have is libzk/zkpython
<hazmat> afaik
<imbrandon> yup
<hazmat> hmm.. i guess twisted as well, but osx ships that for us
<imbrandon> yea osx has twisted
<imbrandon> and lucky its new enough
<imbrandon> and with brew i have to build a special zookeeper anyhow
<imbrandon> as the one that is default doesn't have the c or python bindings
<imbrandon> so really i'm like 80% there, might as well go the rest of the way imho and eliminate the brew headache, it sounded good in theory but is a PITA for anything thats not super simple
<imbrandon> i intended to send an email to the juju list stating as much and see if anyone balked but i highly doubt it
<imbrandon> and dmg/pkg installers are easily both written in python ( i've done them in ruby too ) and buildable on other platforms very very easily
<imbrandon> really they are a sparse filesystem image where the dir layout and installer script are in certain directories and then the whole thing is gzipped
<imbrandon> like a casperfs kinda
<imbrandon> so really like a charm that is shipped via a filesystem.img (a.k.a .dmg) and then when double clicked the OS knows to run hooks/install automagically :)
<imbrandon> but makes it simple to cross build
<imbrandon> an apple .app bundle is really an uncompressed directory and has a binary located at like
<imbrandon> bholtsclaw@ares:~$ ls -l /Applications/Safari.app/Contents/MacOS/Safari
<imbrandon> -rwxr-xr-x  1 root  wheel  34784 Apr 28 20:38 /Applications/Safari.app/Contents/MacOS/Safari
<imbrandon> installers the same way, only .pkg's or .dmg's ( compressed ) instead of .apps
<imbrandon> ok nuff osx FSH school, me gets to work
<MarkDude> Awesome
<imbrandon> 360 deg Panoramic shot during the Goobuntu presentation http://360.io/jh87QX , far from perfect but unique and the first one I'd ever taken
<imbrandon> Super Colin is so uber though the cam refused to capture the awesomeness and blanked that portion out :)
<SpamapS> jcastro: hey how come you reverted the work items in servercloud-q-juju-charmstore-maintenance? I actually am already DONE with one that you marked back to TODO, and one is INPROGRESS that you set to TODO
<imbrandon> mmm yea that reminds me , i need to gather up the few work items i snagged to update the progress and/or remind myself of them adding to my agenda where i'll be nagged constantly
<SpamapS> jcastro: n/m .. fixed.. ;)
<imbrandon> most were minor but still dont wanna let them slip through the cracks
<SpamapS> imbrandon: I think anybody in ubuntu-core-dev gets tracked on status.ubuntu.com
<imbrandon> yea
<imbrandon> but that don't annoy me, sms messages from google cal does :)
<imbrandon> heh
<SpamapS> oh that kind of reminder
<imbrandon> yea , just personal tracking so i dont forget any of them
<SpamapS> imbrandon: blueprint assignee will also bug you :)
<imbrandon> i normally set stuff like this for a weekly self reminder that annoys the hell out of me on the last day if i haven't updated it, seems to work pretty well for me
<imbrandon> oh it will ?
<imbrandon> that may work too
<imbrandon> hehe
<SpamapS> imbrandon: yes, assignee == responsible party
<SpamapS> in theory
 * imbrandon can see jcastro chasing brandon down just to get the satisfaction of annoying him
<imbrandon> i'd like to pick a few more longer term items up, not quite sure where tho, as in more from the php/lamp stack packaging debian/ubuntu stuff or deeper into juju core-ish stuff
<imbrandon> guess i'll see in the next weeks how that plays out
<imbrandon> i just know i won't have time for both, well not and be able to do well AND pay attn to charms ( dont wanna give that bit up )
<imbrandon> SpamapS: i re-looked at my purchase of the ipad vs the mba too, i think i'm gonna stick with the ipad , plus it has good resale value should i change my mind and it will be interesting to see if i can make that workflow for mobile work without too much fuss
<imbrandon> that and i need something to purchase itunes music/videos on now that my machines are back on linux :)
<imbrandon> lol
<matsubara> hi there, is there a juju way of deploying a machine with an attached storage volume on it? I know I can do it through the install hook, but I wonder if there's another way of doing it
<SpamapS> imbrandon: There's always non-itunes music.. U1 for instance. :)
<imbrandon> heh, suppose so, sooo much change tho, would make my head spin :)
<matsubara> SpamapS, hi, on bug https://bugs.launchpad.net/juju/+bug/811226 you mentioned a branch (https://code.launchpad.net/~clint-fewbar/ensemble/reuse-machines/+merge/67417) which makes it possible to start a machine with preconfigured storage. is that branch available somewhere else? I get a 404 when I click the link
<_mup_> Bug #811226: Configurable persistent backing storage <production> <juju:Confirmed> < https://launchpad.net/bugs/811226 >
<SpamapS> matsubara: https://code.launchpad.net/~clint-fewbar/juju/reuse-machines
<SpamapS> matsubara: pretty old now.. probably would need to be re-done
<matsubara> SpamapS, hmm is there any other way to start a machine with a storage volume attached to it? I thought I could through the install hook but that means I'll need to upload my canonistack credentials to the machine so I can use the euca-* tools
<SpamapS> matsubara: you can destroy the service that deploys on it, attach, then deploy/add-unit
<SpamapS> matsubara: you don't actually need that branch
<SpamapS> matsubara: there is a blank 'ubuntu' charm for doing this.. its broken in the store, so you have to grab it from lp:charms/ubuntu .. but then just 'juju deploy local:ubuntu' and then login, shape the machine as you want, and then destroy-service ubuntu and deploy whatever you want
<matsubara> SpamapS, thanks for the tip. one question, if the machine dies, I'd have to re-do the shaping of the machine again, which is what I'm trying to avoid. Let me explain my use case:
<matsubara> SpamapS, I'm deploying a jenkins instance on canonistack and would like to keep JENKINS_HOME on this storage volume, so if the machine dies suddenly, I'd be able to deploy jenkins again, re-attach the volume and have all my data, configuration and jobs there
<SpamapS> matsubara: I wonder if we can get this done without volumes, just by using a backup/restore model
<matsubara> SpamapS, how would that work?
<SpamapS> After reading through the manual.. I'd really like to see a charm written with ansible if it is possible.. http://ansible.github.com/
<SpamapS> The hosts / ssh part isn't useful.. but I like the declarative way to say "start that service, run this command to create that file" etc. etc.
<hazmat> what does that even mean?
<hazmat> SpamapS, ansible doesn't run in the cluster it runs on the client
<hazmat> er. provider
<hazmat> while i dig it.. i'm not sure how it plays together
<SpamapS> hazmat: strip out the ssh'ing part
<SpamapS> hazmat: just run 'sh' instead of 'ssh' :)
<SpamapS> hazmat: becomes a good way to write hooks
<hazmat> ah.. in local mode
<hazmat> SpamapS, cdist didn't float your boat?
<SpamapS> yeah if it has that, we're set
<SpamapS> hazmat: no, its basically what we have with charm-helper-sh
<SpamapS> + the ssh bits
<imbrandon> hrm yea looks like a pretty nice way to write hooks actually
<imbrandon> well syntax and procedure wise
<SpamapS> http://ansible.github.com/playbooks2.html#local-playbooks
<imbrandon> but doesn't that kinda cut down on the "you can do anything"
<imbrandon> with all the yaml
<SpamapS> imbrandon: it does.. which is the point. :)
<imbrandon> Group By Roles
<imbrandon> A system can be in multiple groups. See ref:patterns. Having groups named after things like "webservers" and "dbservers" is repeated in the examples because it's a very powerful concept.
<SpamapS> you can have any color you want, as long as its black
<imbrandon> This allows playbooks to target machines based on role, as well as to assign role specific variables using the group variable system.
<imbrandon> now that is nice
<hazmat> we'd probably need a plugin for template vars ala facter/ohai integration for rel stuff
<imbrandon> but but but i want the green eyed blonde , whaaaaaa you told me i could have anything !!!!
<hazmat> although hopefully that's trivial w/ relation-get --json
<SpamapS> hazmat: sure, looks like modules are easy to write though
<hazmat> SpamapS, yeah.. its pretty simple code
<imbrandon> hrm what about making something like this available as a plugin type thing ?
<SpamapS> http://ansible.github.com/moduledev.html
<SpamapS> could be as simple as 1 line that just execs relation-get --json $@
<imbrandon> as in a module for juju ( non existent ) that implements the best parts of ansible but still leaves the raw "do anything" under the hood
<SpamapS> a module for ansible, which makes it easy to template relation-get vars
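The one-liner module SpamapS describes could be as small as the following; whether ansible would accept it as a module in this exact form is an assumption, and relation-get (with the --json flag mentioned above) only exists inside a juju hook context:

```
#!/bin/sh
# hypothetical ansible module: proxy juju relation data out as JSON
exec relation-get --json "$@"
```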
<imbrandon> a module should be fairly straight forward to design in both go and python i *think* ( saying this never having tried it ) but just a literal import module with proper naming / placement
<SpamapS> heh they even use key=value
<SpamapS> Ok, next charm I write I'm going to experiment w/ ansible
<imbrandon> SpamapS: oh wow and their helper tool is "jinja" ? hehe
<imbrandon> it does look good in some ways , but i think more in the sense i want to steal some of those bits for juju ( we are fallin behind here waiting on GO ) instead of using it inside juju
<imbrandon> because to be honest here if i'm gonna use it inside juju why not just use it all the way
<SpamapS> imbrandon: because it doesn't have a notion of lifecycle management
<imbrandon> ohhhhhh
<SpamapS> imbrandon: you have to get the order *just* right in the playbook
<SpamapS> all serial too
<imbrandon> "I would like (in 0.6 or whenever) for Ansible to offer a quick/dirty https server on some arbitrary port on the overlord instance, and then use a randomized secret to download files to the target from the overlord, say from the ~packages directory." i'm soooo making a subordinate to do that
<imbrandon> SpamapS: apparently its becoming more OO in 0.5 says the author so maybe not for long
<imbrandon> a juju-locker that stores admin passwords and can transfer files to authenticated user via a web interface :) hrm , music to my ears
<imbrandon> SpamapS: https://groups.google.com/forum/?fromgroups#!topic/ansible-project/e6ATPmFmlAk
<imbrandon> this definitely is cool though, if nothing else to get inspired from on some of the bits they are doing right, but i think its goal is really to be juju itself unlike chef etc but more like a true clone ( probably unintentional , least i would hope )
<imbrandon> and actually a very young project, anyone with a bit more ( ok a lot more ) clout than me want to reach out to Michael DeHaan and see about folding the ideas into juju that make sense and then he gets the goodness of a full team beside/behind him
#juju 2012-05-18
<imbrandon> i for one think it would be awesome and probably the only project i've seen yet where it would make sense to do something like that
<imbrandon> ok dinners calling, be back
<car> this is more of a launchpad/bzr question .. but, is it possible to checkout / branch the florence milestone?
<hazmat> car, florence is precise
<car> thx
<hazmat> car, we keep trunk pretty stable (the ppa is a nightly build from trunk with tests)
<hazmat> car, there is one ugly bug in precise with local environments and restarts though
<car> I see
<hazmat> bug 958312
<_mup_> Bug #958312: Change zk logging configuration <juju:Fix Released by hazmat> <juju (Ubuntu):Fix Released> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/958312 >
<hazmat> else it's pretty good
<car> cool, thx
<iSeeDeadPixels> are there any VM charms?
<jcastro> hazmat: heya
<jcastro> juan told me he handed over the code for the sponsorship queue over to you?
<hazmat> jcastro, he did
<jcastro> hazmat: when can we haz? I am wondering how much damage the UDS explosion left, heh
<ninjix> join /#ubuntu-cloud
<ninjix> awh... invite only
<iSeeDeadPixels> ninjix, #ubuntu-server
<jcastro> it just sends you to #ubuntu-server
<iSeeDeadPixels> anyways, MaaS problems?
<ninjix> I am testing with using a cloud-init seed iso. How do you instruct cloud-init not to look for any datasource on the post initial boot?
<ninjix> basically I want to use cloud-init to rapidly provision persistent machines without having to rely on PXE/DHCP/DNS etc...
<jcastro> hazmat: feel free to answer my question. :)
<ninjix> anyone have a link to a maas + juju tutorial?
<jcastro> https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Installing_the_MAAS_server
<jcastro> MarkDude: hey dude, is there an equivalent of ITP bugs in Fedora? Like if I wanted to follow along your progress in a bug report?
<MarkDude> Well 1st I talk to admin folks.
<MarkDude> Then I will share results with juju ML
<MarkDude> Name suggestion for the board that will approve new charms so far?
<imbrandon> btw i failed miserably at making the rpm yesterday, gonna give it another go this afternoon
<MarkDude> Fedora Unified Charms Reop
<MarkDude> repo
<MarkDude> FUCR
<MarkDude> :D
<imbrandon> lol
<ninjix> jcastro: thanks
<MarkDude> We will see if that flies. I mean we created Fedora Unity, just so we could use FU at conferences
 * MarkDude needs to go swear a bit. I have a computer that does everything, but , actually see the hard drive AFTER bios
<hazmat> jcastro, its not quite a drop in, needs some refactoring along the way. if you want a deadline, let's say monday.
<hazmat> hopefully sooner, but that's what i'm comfortable committing to atm
<jcastro> monday sounds awesome to me
<jcastro> SpamapS: Wanna hang out for a bit later?
<jcastro> I have some follow up charm workflow questions for ya
<imbrandon> Probably the most Epic Fooball games to ever take place http://www.flickr.com/photos/imbrandon/7222206314/in/set-72157629786858262/
<imbrandon> Foosball
<SpamapS> jcastro: sure let me just get through the morning email crunch :-P
<jcastro> \o/
<imbrandon> FINALY i'm ready to push the drupal charm
<imbrandon> woot woot, time to merge and do some chovin
<jcastro> who's up for some bikeshedding?
<jcastro> we'll need a group name for the charm reviewers
<jcastro> I'm thinking ... ~charm-reviewers
<jcastro> Or, can we just reuse ~charmers?
<hazmat> beauticians.. snakeoil, farmers, so many possibilities
<jcastro> it would be great if we could get away with not having another team
<hazmat> jcastro, don't we already have a team for this?
 * hazmat needs a mindmap of the teams
<hazmat> venn diagram even
<jcastro> we have ~charmers, which is people who have access to lp:charms
<jcastro> I was just wondering if we switch to this kind of review process if we need a new team, or if we can just use ~charmers, which is what we've been doing
<jcastro> keeping ~charmers makes the most sense to me
<jcastro> and if you're in ~charmers and don't want to do reviews then you can just leave the team
<negronjl> jcastro:  we can just use ~charmers
<negronjl> jcastro:  we would then have to subscribe the charmers to the bugs
<jcastro> right
<jcastro> ~charmers it is then!
<jcastro> DRAFT: https://juju.ubuntu.com/CharmsProposedProcess
<jcastro> SpamapS: that's basically "our process modified to be like ubuntu sponsorship"
<SpamapS> jcastro: can charmers ever be unsubscribed from bugs in /charms ?
<jcastro> good question
<jcastro> let me try it
<SpamapS> jcastro: looks like it can... so yeah, charmers should be fine
<jcastro> yep
<jcastro> I just tried subscribing, refreshing the bug list, and undoing that
<jcastro> man, this will work awesome
<SpamapS> bzr push lp:~your-launchpad-username/charms/precise/nagios/your-branch-name
<jcastro> fyi, the now-out-of-context links in that page will be replaced with the big list.
<SpamapS> s/nagios/fixed-charms-name/
<jcastro> fixed
<SpamapS> To review Ubuntu merge proposals, check out these UDD instructions.
<jcastro> yeah that will be replaced
<SpamapS> ok it looks pretty good to me
<jcastro> ok so barring hazmat getting hit by a truck we should be good to go on this on monday
<jcastro> well, I mean, good for me to propose to the list for monday
<jcastro> SpamapS: should I modify all the bugs now? Or wait
<jcastro> with the subscribing, etc.
<hazmat> jcastro, don't forget the plane crash
<SpamapS> jcastro: wait
<SpamapS> jcastro: lets get the word out that the sponsorship queue will change on day X, and do it all at once.
<SpamapS> jcastro: ok, ready to G+ in 10
<SpamapS> jcastro: actually make that 5
<jcastro> sure!
<jcastro> just fire it up whenever
<negronjl> jcastro: SpamapS:  I just looked at the proposal and it looks good to me
<jcastro> negronjl: hey you mentioned you wanted to try it on aws
<jcastro> do you have an instance up with what it looks  like?
<jcastro> I'm curious to see how doomed we are
<negronjl> jcastro:  I can get that done in just a minute....hold on
<jcastro> hazmat: wanna join us on G+? We have some queue questions
<hazmat> sure
<imbrandon> uht oh :)
<dpb_> hi -- I have a MP that was approved, is there something else I should do?  can I commit?  https://code.launchpad.net/~davidpbritton/juju/dont-proxy-https/+merge/105353
<SpamapS> Hey can somebody review the format of this email? This is what I'm thinking of sending out to all committers to official charms next week:
<SpamapS> http://paste.ubuntu.com/994670/
<SpamapS> dpb_: needs two +1's
<SpamapS> dpb_: Its still considered "Needs review"
<dpb_> oh boy... thx SpamapS
<SpamapS> dpb_: when the second +1 comes in, they will merge it for you since you're not in ~juju
<SpamapS> dpb_: resources are a bit scarce on the review queue at the moment: https://code.launchpad.net/juju/+activereviews
<dpb_> SpamapS: great, I was wondering about that one.
<dpb_> SpamapS: ya, even for my very trivial change. :)
<SpamapS> dpb_: indeed, yours is probably trivial enough
<negronjl> SpamapS: +1 on the email
<SpamapS> negronjl: ty!
<negronjl> SpamapS: I have a question about it:  Do we assume the new maintainer is the one with most commits or do we wait for a response before taking that action ?
<SpamapS> negronjl: we wait
<negronjl> SpamapS: ok
 * negronjl is out to lunch
<SpamapS> negronjl: if nobody steps up, in about 2 weeks, we will evaluate dropping the charm or having ~charmers take temporary ownership
<negronjl> SpamapS: perfect
<SpamapS> negronjl: I will send an email today that says basically "If you don't want to get nagged next week, assign yourself to any charms you are obviously the maintainer of"
 * SpamapS is also out to lunch now
<dpb_> Hey all -- is there any reason why stderr prints trigger an ERROR log message in juju-log?  Seems like it should be informational only.  (unlike a program exit, for instance)
<jimbaker> dpb_, reported in bug 955209 - this will get fixed
<_mup_> Bug #955209: charm.log uses 'Error' for all stderr messages <juju:Confirmed> < https://launchpad.net/bugs/955209 >
<dpb_> jimbaker: sweet!  thx. :)
<imbrandon> SpamapS: i reached out to that asmi.... other cloud tool guy yesterday
<imbrandon> he seemed very very cool and actually was JUST checking out juju and basically said the same thing about it that we did, only reversed
<imbrandon> it was kinda funny
<imbrandon> anyhow i think he is gonna pop in now and then and maybe collaborate on a larger "ideas" type scale
<imbrandon> least that was what we talked about, share the pros and cons with each other's approach etc etc from the get go rather than a long un-spoken silence like some rival projects
<imbrandon> seemed really down to earth
<imbrandon> SpamapS: is it too early to file a MIR for nginx in Q ? heh
<dpb_> hazmat: Hi, was there something else I needed to do for https://code.launchpad.net/~davidpbritton/juju/dont-proxy-https/+merge/105353?
<hazmat> dpb_, nope looks good, i'll land it
<hazmat> just need to get a bug in for it for the SRU process
<dpb_> hazmat: thx!
<hazmat> SpamapS, any objections to landing this one for SRU bug 993034
<_mup_> Bug #993034: lxc deployed units don't support https APT repositories <juju:Confirmed for davidpbritton> < https://launchpad.net/bugs/993034 >
<_mup_> juju/juju-status-expose-hint r537 committed by jim.baker@canonical.com
<_mup_> Adjust juju status so that exposed: false is output for a service if there are open ports but not exposed
<koolhead17> philipballew, hey
<philipballew> koolhead17, hey!
<philipballew> not sleeping?
<koolhead17> nopes :(
<koolhead17> hey hspencer :)
<hspencer> hey koolhead17
<koolhead17> hspencer, how are you doing
<hspencer> doin good
<hspencer> just workin hard
<hspencer> how you?
<koolhead17> am okey not able to sleep :P
<SpamapS> hazmat: that one is perfect for SRU
<SpamapS> imbrandon: go ahead and file the MIR for nginx. :)
<negronjl> SpamapS: ping
#juju 2012-05-19
<imbrandon> jcastro / SpamapS : I assume these are a mixed set of policies for both what a charm should and shouldn't do as well as ones only applying to the charm store, though in some un-specified instances like "must be under a Free license. We recommend GPLv3+. See OSI approved licenses for other licenses."
<imbrandon> i'm mean its really just a nit pick, but since its still proposed thought i would mention it
<imbrandon> i guess i'm asking should we have a "this is what a charm should be" policy doc like this one in addition to the docs and then a second one ( likely just this one here split in 2 ) listing the additional requirements for cs inclusion
<imbrandon> or am i just way off base
<MarkDude> Any sightings of the WTF license?
<MarkDude> A dude made some paper designs for giving away media.  He got soooo mad about the license talk over them ( lasted forever ) he just put a WTF license on it imbrandon
<imbrandon> lol nice
<imbrandon> fsf ( rms ) says its free, not sure about dfsg
<imbrandon> MarkDude: err actually only version 2 i guess
<imbrandon> with an added sentence
<imbrandon> http://www.gnu.org/licenses/license-list.html#WTFPL
<MarkDude> Yep funny license
<MarkDude> Creative Commons had all sorts of fun updating their license.
<MarkDude> 2nd year I went to OSCON I accidentally sat on the speakers part of a circle
<imbrandon> hahah sounds like you pulled a brandon
<MarkDude> Did not realize it was about updating licensing. I did not get up. 15 minutes in I was asked what I had to offer on the talk
 * MarkDude had to then say he sat in wrong seat
 * MarkDude would kill for a pic of me on that panel
<imbrandon> lol nice
<imbrandon> hrm , http://developer.ubuntu.com/resources/programming-languages seems to be missing PHP
<imbrandon> heh
 * MarkDude often wonders what folks are thinking when they walk by me and smile at conferences. its not like I have only done one dumb thing
<imbrandon> hahah right, i can totally relate there
<imbrandon> only at my house tooo, with my friends doing the walking by with a "smile and wave, just smile and wave"
<imbrandon> HAHA
<imbrandon> hrm this questionable individual just texted me and said he had 4 fully working ( booting he means ) with power adapters too ( not guaranteed with him ) Dell Mini H1000's or whatever those 10.1 inch one's were with the n270 atom procs
<imbrandon> and wants 75$ a pop, i'm very tempted
<imbrandon> nah, i hate dealin with that kinda crap, always ends up biting my rear
<imbrandon> SpamapS or jcastro did either of you hit the jmeter talks among the qa fellas in oakland ? I'm wondering how it stacks up to blitz
<imbrandon> but i'm fairly sure too MarkDude that its just me you and that dog covered in mustard right now anyhow, its friday night and i forget ppl have lives :)
<imbrandon> lol
<MarkDude> Yep
<MarkDude> Tomorow is maker Faire
 * MarkDude is old, so he is resting up
<MarkDude> One of the locals is bringing her kid.
<imbrandon> oh nice, there is a local group of maker faire peeps here in KC thats fairly large actually, pulling people in from StL and Chicago
<imbrandon> I could play the old card, barely, but I dont have the heart to, I'm just a geek that spends way too much time not socializing irl, but thankfully not to an intervention point, just almost :)
<MarkDude> Well I am
<MarkDude> at least with social media
<imbrandon> long as my gf drags me out now and then or my buddy patrick threatens to pour whiskey down my throat if i dont come out for beers now and then i'd likely be a happy hermit
 * MarkDude has been called a facebook whore a few times
<imbrandon> lol, i do them all EXCEPT facebook, and twitter only very very little
<MarkDude> Weather is tooooo nice here to stay in
<MarkDude> If I lived in Portland Oregon again, I might do that
 * imbrandon is a bald fair skin freckled face redhead
<MarkDude> then again, they have so much going on
<imbrandon> you think i care about the weather ?
<MarkDude> lol-
<imbrandon> :)
<MarkDude> You heard of the group Dope?
<imbrandon> hell yea
<imbrandon> they are from near here iirc
<MarkDude> That reminds me of a cover they did of NWA
<imbrandon> dunno, in that realm of music, the STAiND and limp bizkit cover of Bring the Noise rocks pretty hard
<imbrandon> not many people , including those two if they ever tried again can pull off a public enemy and anthrax cover
<imbrandon> hrm, now its in my head too /me looks for it in his collection
<koolhead17> SpamapS, around
<imbrandon> sooo close, people are actually asking for Ubuntu support from apple now ... man o man ... so close ... https://discussions.apple.com/message/18383575#18383575
<twobottux> aujuju: How can I access local juju instances from other network hosts? <http://askubuntu.com/questions/139208/how-can-i-access-local-juju-instances-from-other-network-hosts> || Juju native Openstack support [closed] <http://askubuntu.com/questions/138710/juju-native-openstack-support> || Does juju run on non-Ubuntu distributions? <http://askubuntu.com/questions/138575/does-juju-run-on-non-ubuntu-distributions> || Juju opens
<amithkk> Nevermind that please :D
#juju 2012-05-20
<twobottux> Announcement from my owner (amithkk): Hey! I'm 2bottuX, A bot by Amith KK. I'm on 2 ubuntu channels and #2buntu. My Function is to provide AskUbuntu Integration. If you want me in any of your channels watching a tag, msg amithkk
<marcoceppi> I'm back
<amithkk> marcoceppi: You have a question to answer
<amithkk> Oh, you answered <3
<ihashacks> Ok, so after a few weeks of working with Juju + LXC I've decided to step it up to EC2 testing.
<ihashacks> I did my bootstrap but just get tons of "ERROR Invalid SSH key" in "status" or "debug-log"
<ihashacks> All I've found on web searches are a lot of "wait longer," "destroy-environment and try again," and "make sure you have the ssh-keygen setup"
<ihashacks> I know I have my ssh keys setup and working properly, "destroy-environment and try again" seems silly and I don't know how much "wait longer" I need.
<ihashacks> ...except that now the local .ssh/ is empty O_o
<marcoceppi> ihashacks: destroy the environment, re-run ssh-keygen, bootstrap, try again?
<ihashacks> ...trying
<ihashacks> I *know* those damn ssh keys were there at first because you get an error about keys not found if you try to run juju w/o them
<SpamapS> ihashacks: are you getting errors because you don't have a *known* key for the instances?
<SpamapS> ihashacks: thats something different entirely
<lamont> the laptop actually made it home this morning... I'll fetch it from the car shortly
<SpamapS> ihashacks: it would help if you paste binned the errors
<lamont> bah. ECHAN
<ihashacks> SpamapS: series of events 1) run "juju bootstrap" 2) edit environments.yaml 3) run "juju bootstrap" (receive error about missing SSH keys) 4) run ssh-keygen 5) this time "juju bootstrap" completes, node in ec2 running 6) "juju status" received "ERROR Invalid SSH key"
<ihashacks> 7) destroy-environment and start over
<ihashacks> ^^^ worked
<SpamapS> ihashacks: interesting
<SpamapS> ihashacks: I'd be interested to see that ssh key error
<ihashacks> I guess juju warned about SSH key, bootstrapped anyways, then when I tried to run "status" it obviously didn't have a key in the running image for me to authenticate with
<SpamapS> *ahhh*
<ihashacks> and the error received was the generic "you don't have ssh keys" message when you run juju before ssh-keygen
<ihashacks> I did notice an interesting "bug" perhaps in ec2-zone constraints. I set "ec2-zone=a" for a mysql server, the node shows in "juju status" but nothing was ever loaded in ec2
<ihashacks> My dashboard in AWS only shows zones b,c,d for me which is why I think it didn't actually "work" in ec2
<ihashacks> I haven't started with a fresh environment since that issue earlier to confirm that I can repeat the issue.
<marcoceppi> ihashacks: those aren't the same zones I don't think
<marcoceppi> nevermind, I'm confusing regions with zones
<ihashacks> ec2-zone=b and c had the desired result
<SpamapS> ihashacks: Sounds like the provider should use the describeAvailabilityZones call to validate your constraints
<SpamapS> ihashacks: there's a bug to have constraints be pre-checked at some point.
<SpamapS> ihashacks: most likely the provisioning agent was spewing errors once per minute
<ihashacks> This one: https://bugs.launchpad.net/juju/+bug/984640
<_mup_> Bug #984640: Unsatisfied constraints are not reported back to the user <juju:Confirmed> < https://launchpad.net/bugs/984640 >
<ihashacks> ?
<twobottux> Launchpad bug 984640 in juju "Unsatisfied constraints are not reported back to the user" [Medium,Confirmed]
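The pre-check SpamapS suggests (validating an ec2-zone constraint against DescribeAvailabilityZones before provisioning, instead of letting the provisioning agent spew errors forever) could be sketched like this; the function and names are hypothetical, not actual juju provider code:

```python
# Hypothetical zone-constraint pre-check: fail fast if the requested
# availability-zone suffix isn't among the zones the account can use
# (ihashacks' account only offered zones b, c, d).
def check_zone_constraint(requested, available):
    """Raise early if the requested zone suffix isn't available."""
    suffixes = {z[-1] for z in available}  # e.g. 'us-east-1b' -> 'b'
    if requested not in suffixes:
        raise ValueError(
            "ec2-zone=%s not available; choose one of: %s"
            % (requested, ", ".join(sorted(suffixes))))
    return True

zones = ["us-east-1b", "us-east-1c", "us-east-1d"]
check_zone_constraint("b", zones)          # fine, node will launch
try:
    check_zone_constraint("a", zones)      # the silently-hanging case
except ValueError as e:
    print(e)
```

This is the behavior bug #984640 asks for: report the unsatisfiable constraint to the user rather than leaving the unit pending.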
#juju 2013-05-13
<stub> I'm writing a test. Any ideas how to pause until a relation has been completely setup? I think I need to block until all hooks have finished running, but I can't get that information out of juju status.
<marcoceppi_> stub: I saw your merge proposal. We're writing a light weight testing framework to help provide functions like that
<stub> marcoceppi_: lovely.
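Pending the lightweight testing framework marcoceppi_ mentions, a rough sketch of the settle-wait helper stub is asking for might look like this. Since hook-completion state isn't exposed directly, the units' agent-state from parsed `juju status` output is used as a proxy; all names here are illustrative:

```python
import time

def units_settled(status):
    """True when every unit of every service reports 'started'.

    `status` is the dict you'd get from parsing `juju status` output.
    """
    for service in status.get("services", {}).values():
        for unit in service.get("units", {}).values():
            if unit.get("agent-state") != "started":
                return False
    return True

def wait_for_settle(get_status, timeout=300, interval=5):
    """Poll until the environment settles or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if units_settled(get_status()):
            return True
        time.sleep(interval)
    raise RuntimeError("environment did not settle in %ss" % timeout)
```

This is only an approximation: a unit can be "started" while relation hooks are still queued, which is exactly the gap stub is pointing out.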
<drj11> i have a question, about SSL certs and https servers and juju
<marcoceppi_> drj11: fire away
<drj11> where do people generally put their SSL certs?
<drj11> (or any other private config that's a bit too big to fit in a yaml key,value pair)
<drj11> let's say i am making a charm to run our web-app on https. which i am.
<m_3> drj11: a common practice for this has been to a.) freeze the charm locally, then b.) embed large sensitive payloads just-in-time when deploying
<m_3> drj11: we're still working to consolidate libraries and offer a standard best practice for this
<drj11> m_3: okay, that's cool that your working on it
<m_3> drj11: keep an eye on https://launchpad.net/charm-helpers
<drj11> m_3: does juju have any facilities yet for the "just-in-time" thing?
<drj11> ooh
<m_3> drj11: no, that's all tooling around juju atm
<m_3> drj11: and it's one way people are solving that problem
<m_3> drj11: we'll probably have a session on this at virtual uds this week
<drj11> m_3: right. so for now I could deploy the charm, then have a script that copied the SSL cert to the right server. and if i wanted i could wrap that.
<m_3> drj11: yes, that's another approach
<drj11> oh i thought you were saying that's what charm-helpers was, just a standard way of doing that wrapping.
<m_3> drj11: another common problem is when you've gotta have a "private" key out on the server in order to pull from private repos... that's really best when you grab the payload _first_ and then deploy a fatter charm
<m_3> drj11: charm-helpers is mostly a set of tools to call from within a charm
<m_3> drj11: juju plugins will be the place to put together all our wrapper scripts like the one you just described
<m_3> hasn't landed yet
 * m_3 can't wait!
<drj11> m_3 okay. that all sounds good. really just checking i hadn't missed anything obvious.
<drj11> now i can't wait either.
<m_3> drj11: :)
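The "freeze the charm locally, embed the payload just-in-time" flow m_3 describes might look roughly like the sketch below; the `files/` payload directory and the helper name are assumptions for illustration, not a juju standard:

```python
import os
import shutil

def embed_secret(charm_src, local_repo, secret_path):
    """Copy a charm into a local repo and drop a secret (e.g. an SSL
    cert) into it just before deploying the fattened charm."""
    dest = os.path.join(local_repo, os.path.basename(charm_src))
    shutil.copytree(charm_src, dest)              # freeze the charm locally
    files_dir = os.path.join(dest, "files")       # assumed payload location
    os.makedirs(files_dir, exist_ok=True)
    shutil.copy(secret_path, files_dir)           # embed the cert
    return dest

# afterwards, deploy from the local repository, e.g.:
#   juju deploy --repository=<local_repo> local:<charm-name>
```

The install hook would then move the cert from `files/` into place on the unit, which keeps the secret out of the public charm branch.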
<m_3> marcoceppi_: hey project question... shall we keep lp:charm-tools for charm tools?  lp:charm-helpers for charm helpers?
<marcoceppi_> m_3: I think so
<m_3> marcoceppi_: then we need to decide what to do with lp:juju-jitsu
<marcoceppi_> m_3: firey death?
<m_3> ha!
<marcoceppi_> m_3: juju-plugins project probably ;)
<m_3> there're still some scripts we'll wanna salvage as plugins in juju-1.9
<m_3> juju-contrib
<marcoceppi_> Take the things we liked from it and format it to plugins
<m_3> yeah, juju-plugins
<marcoceppi_> something like that, but yeah the import export stuff for sure off the top of my head
<m_3> so we'll make those each separate packages...
<m_3> charm-tools, juju-plugins, and charm-helpers?
<marcoceppi_> m_3: I'm not sure how to best structure the repo. I figured they should, for the most part, live in the same repo
<marcoceppi_> m_3: sounds good to me for now
<m_3> do we still need to package charm-helpers-{sh,py,etc..}?
<m_3> oh, yeah... probably language deps that'd be annoying
<marcoceppi_> m_3: once charm-helpers project takes shape, we'll need to package those (in addition to figuring out other distribution methods) but helpers would likely be packaged by context and not by language
<m_3> ok, so juju-plugins, charm-tools, charm-helpers-sh, charm-helpers-py
<marcoceppi_> so charm-helpers-core (for example) would give you the py/bash installations
<m_3> maybe s/charm-helpers-py/python-charm-helpers/ ?
<marcoceppi_> m_3: probably, again I'm not sure how best to package those semantics wise
<m_3> marcoceppi_: wait, so where's "-core" come in
<marcoceppi_> m_3: that was a poor example
<m_3> well helpers will be a bitch to package
<m_3> b/c of the alternative installs that people want
<m_3> hmmm.... well, maybe not... if we offer both a `pip install` as well as a real package... that might make mew et al happy enough
<marcoceppi_> m_3: I think we're going to be splitting helpers up based on subject matter and not by core language. So having a set of helpers that deal with openstack for instance which would include all the ways to talk to those helpers (ie bash and python starting out)
<m_3> marcoceppi_: yeah, good point
<m_3> ok, we can put off packaging helpers (well, or changing helper packaging)
<marcoceppi_> m_3: package, pip, and branch + install for sure. I know that's a high requirement
<m_3> marcoceppi_: we need to figure out how to maintain a little backwards compat
<m_3> or grep around in the current store charms to see who's using what
<marcoceppi_> m_3: right, the current packaging is good for now, as the new helpers is still early work atm but we will need to find a way to gracefully switch over to the new infrastructure
<m_3> figure out a migration strategy for non-store charms
 * m_3 sad
<m_3> wanted to tear packaging from charm-tools with a vengeance
<m_3> :)
<marcoceppi_> :D in due time sir. Though we could move the packaging over to the new helpers project and mirror the way it works now.
<m_3> well... bits of it
<m_3> no need for charm-tools to be packaged there
<m_3> i.e., charm promulgate and friends
<marcoceppi_> Right, I mean the helpers packaging could be moved
<marcoceppi_> though, that would be tedious
<wedgwood> m_3: marcoceppi_: the new lp:charm-helpers is already set up for both deb and pip packaging
<m_3> we should go through the juju packaging sometime too
<m_3> wedgwood: awesome
<wedgwood> I still need to set up a PPA, but it should build
<m_3> wedgwood: ack... we can tack a ppa onto charm-helpers
<m_3> marcoceppi_: s/tedious/randomly explosive/ :)
<marcoceppi_> m_3: very much so
<hazmat> sidnei, new deployer version released w/ your flush fix fwiw
<zodiak> I don't suppose anyone has a 'devstack style' juju charm for openstack on a single machine ?
<sarnold> zodiak: does this do what you want? http://jujucharms.com/~virtual-maasers/precise/virtual-maas
<zodiak> sarnold, very cool.. certainly looks like the ticket
<zodiak> I am finding the documentation on ubuntu 13.04, grizzly and juju interplay.. urm.. sorta.. "lacking" (no offense meant)
<zodiak> do the juju charms for precise work on raring without troubles ?
<marcoceppi_> zodiak: I can't speak for their compatibility 100%, but ideally the precise charms should work on 13.04 - that being said we recommend deployments (and charms) be run and written against the LTS release of Ubuntu
<marcoceppi_> Which is why you find the majority of charms are written for precise and not raring. As for the client side, you can run juju on 13.04 without issues, just deployments tend to be on the lts
<sarnold> marcoceppi_: it feels to me like it'd be useful to have a juju.ubuntu.com page providing guidance on the versions... something like, "if you're here to just use juju, stick to LTS releases and juju from <source>" and "if you want to make sure you're ready to use juju in the _next_ LTS, use <distro series> and juju from <source>" ...
<marcoceppi_> sarnold: Yeah, that's going to be an interesting problem come 14.04 - the "new docs" should help clarify the whole client series vs deployed series though
<sarnold> marcoceppi_: but I think it'd be nice to have feedback from _some_ users on 13.10 and how well things work out with different charms...
<marcoceppi_> sarnold: how so?
<sarnold> marcoceppi_: .. while still mkaing it clear that production users probably shouldn't be _those_ users.. :)
<zodiak> well, dare I even ask, is the suggested "plan of attack" to stick with folsom or does the openstack juju's use grizzly ?
<zodiak> cause, speaking as an openstack-developer, folsom and horizon was jst a disaster
<zodiak> (specifically the quantum integration or lack thereof, and nova-network was.. ugh)
<zodiak> I believe the juju's still use 2012.xx rather than 2013.xx .. but I am more than open to being wrong :D
<hazmat> zodiak, we have a set of openstack testing charms for later releases of ubuntu, the precise charms can pull from preferred ostack release in cloud archive.
<b1tbkt> anyway to deploy ceph-mds through ceph juju charm? default deployment doesn't seem to include an mds which prevents cephfs from working
<chilicuil> hi there, I'm trying to pass 'null' to a string var in a charm and juju outputs: "Invalid options specification: options.config_file.default: expected string, got None", bug #1101607, do you know any workaround?
<_mup_> Bug #1101607: juju default config string value of 'null' produces an error <juju:New> <https://launchpad.net/bugs/1101607>
<chilicuil> ok, got it, it seems I need to write "" instead, I've updated the report
<hazmat> chilicuil, commented on bug
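chilicuil's workaround (a default of `""` in config.yaml rather than `null`, which trips bug #1101607) pairs naturally with a tiny normalizer in the hook so that an empty string reads as "unset"; the helper name here is made up:

```python
# Normalize the empty-string default back into "no value provided",
# since a string option in config.yaml can't default to null.
def config_or_none(value):
    """Treat an empty-string config default as 'unset'."""
    return value if value else None

# usage in a hook, after reading the option via config-get:
assert config_or_none("") is None
assert config_or_none("/etc/app.conf") == "/etc/app.conf"
```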
<hazmat> b1tbkt_, doesn't look like it, i'd suggest a separate charm like ceph-osd
#juju 2013-05-14
<marcoceppi_> hazmat: can juju-deployer deploy "local" charms?
<jcastro> you guys ready for UDS?
 * marcoceppi_ is pumped
<hazmat> marcoceppi, of course, that's its raison d'être
<marcoceppi> hazmat: So do you just set the "branch" to a file path?
<marcoceppi> Neither of the two deployments.cfg have an example using a local charm
<jcastro> gary_poster: heya
<gary_poster> hey jcastro
<jcastro> mind posting your painpoints on the juju list too?
<gary_poster> jcastro, no will ping later for related discussion
<hazmat> marcoceppi, oh you mean local dir non in a published bzr branch?
<marcoceppi> hazmat: yup!
<hazmat> marcoceppi, if its in bzr you can give a bzr branch path to a local directory
<jcastro> gary_poster: ack
<marcoceppi> hazmat: That's what I inferred, reading through the source, but wanted to make sure before I dove too far down that path
<marcoceppi> thanks!
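Based on hazmat's description, a deployer config entry for a local charm would point `branch` at a local bzr checkout instead of a launchpad URL. This layout is an illustrative guess at the format, with made-up names and paths:

```yaml
# Hypothetical juju-deployer stanza: "branch" given a local bzr
# checkout path rather than lp:charms/...
my-stack:
  series: precise
  services:
    mycharm:
      branch: /home/me/charms/precise/mycharm
```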
<jcastro> weird
<jcastro> marcoceppi: did you move/change the charm school on the fridge calendar?
<marcoceppi> jcastro: I have no idea how to even do that.
<JoseeAntonioR> only people with fridge cal admin rights can do it
<jcastro> ah, just forgot to repeat it
<jcastro> fixed
<jcastro> marcoceppi: this friday btw.
<marcoceppi> jcastro: ack
<jcastro> http://summit.ubuntu.com/uds-1305/meeting/21818/juju-core-development/
<jcastro> this appears to be the only juju session today
<mattyw> jcastro, was the previous charm school recorded?
<jcastro> yessir
<jcastro> https://juju.ubuntu.com/resources/videos/
<jcastro> I'll be adding them there over time
<jcastro> the one from last week is "Getting Started with Juju"
<gary_poster> hey jcastro.  I'd like the gui blog to have a way to highlight posts as being of more general interest.  is the traffic on one of the planets good enough that it is worth setting up a tag import?  Or should I just publicize general-interest posts on list, G+ and so on?
<jcastro> I can syndicate it on juju.u.c but it's not much traffic
<jcastro> usually what I do is just blog it to point to your blog
<jcastro> are you or anyone on your team ubuntu members?
<mattyw> jcastro, thanks very much
<gary_poster> jcastro, no
<jcastro> gary_poster: ok I can point to your blog posts then
<gary_poster> ok thanks jcastro.  I will also put on juju list and G+ for myself
 * jcastro nods
#juju 2013-05-15
<davidbanham> Hi all, interested in juju, but want to clarify something. Does juju provide any mechanism for running multiple units (processes) per computer? The docs read like a new EC2 instance (or equivalent) is spun up for each new unit.
<b_> hi juju folks :)
<b_> i was wondering if there's any tutorial online on getting some juju + linode magic going
<marcoceppi> Just so I don't keep spinning my wheels, there's no way to get a charm's metadata programmatically without first branching the charm?
<SpamapS> marcoceppi: IIRC the charm store used to provide it via a simple GET
<marcoceppi> SpamapS: thanks, I'll poke that code for a bit. See if it's still there
<marcoceppi> SpamapS: looks like it exposes some high level data about the charm, but doesn't actually provide the full metadata for the charm. Thanks for the tip though
<SpamapS> marcoceppi: I used to run a nightly tarball of all the bzr branches ...
<SpamapS> marcoceppi: if you ask IS, they might still have my old crontab.. ;)
<SpamapS> marcoceppi: was quite useful for "wtf I need a full repo now"
<marcoceppi> SpamapS: Yeah, I was thinking about tapping in to the tpaas service we have running for graph testing, since that has a local cache of all the charms, it'd just be nice to have it all in one central place
<tabibito> hi, I was wondering if someone could help me with juju, I'm receiving following error when trying to bootstrap to EC2: ERROR 301 Moved Permanently
<tabibito> Tried recreating the environment a few times
<tabibito> version is 0.7
<marcoceppi> Hi tabibito, what version of juju are you using? (dpkg -l | grep juju)
<marcoceppi> Beat me to it :)
<tabibito> :)
<marcoceppi> tabibito: Are you trying to deploy to a specific region? if so what's the region line look like in the environments file?
<tabibito> currently it's looking like this: region: eu-west-1
<tabibito> but even if I don't use a region, I get the same result
 * marcoceppi tries to bootstrap with 0.7
<tabibito> Extra info: If I do a juju status, I see following error: ERROR Cannot connect to environment: 301 Moved Permanently
<tabibito> Traceback (most recent call last):
<tabibito>   File "/usr/lib/python2.7/dist-packages/juju/providers/common/connect.py", line 43, in run
<tabibito>     client = yield self._internal_connect(share)
<tabibito> Error: 301 Moved Permanently
<marcoceppi> tabibito: I'm not able to replicate with and without region on 0.7 Do you have an EC2_URL environment variable (or something similar) set in your shell?
<tabibito> not that I know of … How can I check?
<marcoceppi> tabibito: `env` but if you don't think you have it set then odds are you don't
<tabibito> no don't see any EC2_URL
<tabibito> strange
<marcoceppi> tabibito: run "juju -v status" and paste the output to a pastebin please
<marcoceppi> err, when you try to bootstrap too, if you haven't gotten a successful bootstrap yet
<tabibito> bootstrap: http://pastebin.com/gTyH8HCg
<tabibito> status: http://pastebin.com/MpDeK3vs
<marcoceppi> tabibito: darn, I was really hoping it would show the URL it was trying to connect to
<tabibito> environment(sanitized):http://pastebin.com/YDNWBdTZ
<tabibito> I know right! … That would be easier to troubleshoot :)
<marcoceppi> tabibito: shot-in-the-dark, what if you used precise as your default series? (Almost all charms are written for precise anyway, and we typically recommend deploying to LTS)
<tabibito> same thing
<tabibito> Tried with precise, quantal ...
<tabibito> I'm using 13.04 now, but I also tried with 12.04… results are the same, so it must be something I do… or :)
<marcoceppi> tabibito: You've got me puzzled. Usually when you get a 301 from AWS it's because it's not looking at the right endpoint URL. I can't exactly replicate, but opening a bug on LP or asking on Ask Ubuntu might get better traction than what I have to offer
<jcastro> marcoceppi: flagbearer charms in 5?
<jcastro> I can start the hangout
<marcoceppi> jcastro: cool, just point me at a URL
 * marcoceppi looks for a brush
<SpamapS> jcastro: oh btw, your adobo made my breakfast amazing today (and btw, no clumping here)
<tabibito> @marcoceppi thanks … I'll keep digging and if all else fails I'll post a bug or go to Ask Ubuntu
<jcastro> https://plus.google.com/hangouts/_/fa6dea509240bfe96361ee99a233312bebed02b0?authuser=0&hl=en
<jcastro> for those who want to join
<jcastro> SpamapS: tah, I think it was a bad bottle on my end
<tabibito> marcoceppi, I found it â¦ needed to enter the EC2 and the S3 uri for the specific region and use HTTPS
<tabibito> duh
<marcoceppi> tabibito: interesting
<tabibito> indeed
<tabibito> exit
<tabibito> bye
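The fix tabibito describes (region-specific EC2 and S3 endpoints over HTTPS) would look roughly like this in a pyjuju environments.yaml; the `ec2-uri`/`s3-uri` values here are illustrative, not copied from the actual working config:

```yaml
environments:
  sample:
    type: ec2
    region: eu-west-1
    # explicit HTTPS endpoints for the region; without these the
    # provider may hit a generic endpoint and get a 301 redirect
    ec2-uri: https://ec2.eu-west-1.amazonaws.com
    s3-uri: https://s3-eu-west-1.amazonaws.com
    default-series: precise
```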
<hazmat> SpamapS, incidentally theres a new txzk for debian inclusion
<SpamapS> hazmat: ty, will upload ASAP
<irossi> Hi Robert, are you there? It's Ian.
<irossi> I've got the go ahead to deploy our Cloud 1.0 on MAAS/Juju
<irossi> But now I'm hitting a major blocker
<jamespage> bbcmicrocomputer, ^^
<bbcmicrocomputer> irossi: hey, let's take this off channel
<dpb1> When using juju-core, destroy-service seems to be not as "reliable" as it was in pyjuju.  I.e., if there is some kind of service error, I can't destroy the service successfully.  The agent state changes to "dying" and then sits there
<dpb1> I'm actually not sure how to recover from this state.
<Makyo> dpb1, maybe similar to #1168154 or #1168145?
<_mup_> Bug #1168154: Destroying a service in error state fails silently <juju-core:Confirmed> <https://launchpad.net/bugs/1168154>
<_mup_> Bug #1168145: Destroying a service before it reaches started or running does not destroy the machine <juju-core:New> <https://launchpad.net/bugs/1168145>
<ubot5`> Launchpad bug 1168154 in juju-core "Destroying a service in error state fails silently" [High,Confirmed]
<ubot5`> Launchpad bug 1168145 in juju-core "Destroying a service before it reaches started or running does not destroy the machine" [Undecided,New]
<Makyo> Oops..
<dpb1> wow, dueling bots
<dpb1> Makyo: checking those, looks similar.
<hazmat> dpb1, juju resolved a few times.. or use juju-deployer -W -T
<hazmat> it works around the issue by resolving the unit
<dpb1> hazmat: yes, looks like resolved frees it up, thx.  Makyo: 1168154 looks like the issue, thx
<sinzui> hi. I am pondering a deploy of charmworld and juju-gui behind a single apache to manage the ssl endpoint. I think I can create a vhost template that defines two reverse proxy relations, because each relation has a service name, such as jc-squidrev
<sinzui> hi charmers. I want to write a unittest for the mongodb charm. I have written an function that should be independent of the juju environment so could be a simple python unit test. Are there any examples of this being done before?
<marcoceppi> sinzui: There's no real examples of unit testing yet. We have an idea for how it should look but nothing solid yet
<sinzui> marcoceppi: hmm, would my test be rejected if I included one with instructions to run it?
<marcoceppi> sinzui: for unit testing the charm? Probably not
<sinzui> okay, I think it is worth trying at least.
<sinzui> wedgwood, ^ any thoughts about 2 reverseproxy relations for a single apache charm deploy
<marcoceppi> sinzui: we're open to see how people want to do this. We were kicking around the idea of having charms stub most of their hooks and put a lot of logic in lib/<service-name> and then tests for that in lib/<service-name>/tests something like that
<marcoceppi> again, that's just a thought, we'd be interested to see how charm authors start tackling this
<wedgwood> sinzui: marcoceppi: that's not quite true. I believe that the haproxy and apache2 charms have unit tests.
<sinzui> I don't see any tests in apache2
 * sinzui pulls haproxy
<sinzui> thank you wedgwood, I see an example in haproxy
<sinzui> mongodb is similar so this helps me a lot
<hazmat> sinzui, i've got some in my aws-* charms re unit tests
<sinzui> hazmat, fab, will I find them under ~hazmat on Lp?
<hazmat> sinzui, yeah.. here's one lp:~hazmat/charms/precise/aws-elb/trunk
<sinzui> I got it. Thanks hazmat
<benji> sinzui: the juju-gui charm has some unit tests (if I gather correctly what you're looking for)
<sidnei> sinzui: https://bazaar.launchpad.net/~sidnei/charms/precise/haproxy/trunk/files/head:/hooks/tests/ it's not merged yet
<sidnei> wedgwood: ^ it's only merged into canonical-losas
<sinzui> thanks benji, sidnei.
<sidnei> hazmat: ping?
<hazmat> sidnei, pong
<sidnei> hazmat: https://pastebin.canonical.com/91061/
<sidnei> hazmat: looks like juju 0.7 failed to build on the ppa for precise
<sidnei> hazmat: the tb above is from bootstrapping with 0.6
<hazmat> sidnei, yeah.. i just noticed mgz moved that ppa off trunk to a branch.. i'll trigger the build
<hazmat> hmm
<hazmat> sidnei, what version of txzookeeper?
<sidnei> hazmat: in the paste
<hazmat> sidnei, it's not in the paste..
<sidnei> ah, txzookeeper sorry
<sidnei> hazmat:   Installed: 0.9.8-0juju53~precise1
<hazmat> argh..
<sidnei> hazmat: actually, i bootstrapped with 0.7-0ubuntu1~0.IS.12.04, but the bootstrap node got 0.6 as that's what's in the juju ppa
<hazmat> sidnei, hmm.. where's that from?
<hazmat> sidnei, i queued some builds for the ~juju/pkgs ppa
<sidnei> hazmat: i assume you mean 0.7: https://pastebin.canonical.com/91063/
<hazmat> sidnei, yeah.. i did
<hazmat> weird.. it built correctly for precise but says error on upload
<sidnei> hazmat: yup, because it has the same version as the previous failed build
<sidnei> need to bump to ~precise2 or something
<hazmat> i'll commit something to the branch
<sidnei> yeah, or that.
<sidnei> yay, progress
<hazmat> i'm still surprised by the error, that aspect of txzk hasn't changed in quite a while
<fcorrea> yo, any of you having issues with pyjuju and local provider? Whatever service I try to deploy never gets past "pending". Logs don't tell much
<sidnei> hazmat: funny you mention that: https://pastebin.canonical.com/91065/
<hazmat> argh
<dpb1> fcorrea: I can try
<dpb1> fcorrea: bootstrap works ok?
<fcorrea> dpb1, cool. Found something similar up on askubuntu.com. Will try disabling the firewall
<hazmat> sidnei, got a minute for a g+?
<fcorrea> dpb1, yep
<dpb1> fcorrea: quantal?
<fcorrea> dpb1, raring
<dpb1> k
<sidnei> hazmat: let me see if it's working today or if i need to reboot *wink*
<fcorrea> dpb1, it gets very quiet after deploying. In the logs I see a "Started service unit mysql/0" for example and that's it...it was working last week though. I guess I should head back to Oakland
<dpb1> fcorrea: what series are you trying to deploy.  What charm?
<fcorrea> dpb1, default series is precise and charm is mysql
<fcorrea> lemme try to create an lxc container and check it works. I could be there maybe
<dpb1> fcorrea: what does lxc list show you?
<fcorrea> lxc-ls correctly shows the container created by juju though
<dpb1> ok
<dpb1> what about lxc-ls --fancy
<dpb1> you should get the ip
<dpb1> which you should be able to ssh into the ubuntu account
<fcorrea> dpb1, yeah it's there: fcorrea-local-mysql-0  RUNNING  10.0.3.87  -     NO
<fcorrea> doing it
<fcorrea> dpb1, mmm....the unit-mysql-0-output.log shows the issue: ImportError: No module named txzookeeper.utils
<dpb1> fcorrea: ok, that is what hazmat is debugging right now
<fcorrea> dpb1, hah! Awesome
<dpb1> I think same here
<dpb1> I guess it's not just quantal. :(
<hazmat> somehow the debian build changed to not include the src
<hazmat> for the ppa txzk
<fcorrea> ic
<hazmat> and it's python.. so no binaries.. it seems quite strange
<fcorrea> well, feeling better now
<hazmat> pypi is seeming pretty awesome right now ;-)
<dpb1> ya, dpkg -L python-txzookeeper is pretty embarrassing. :)
<fcorrea> hazmat, lets use buildout as a package manager ;)
<hazmat> SpamapS, the txzk recipe was just using the embedded packaging?
<SpamapS> hazmat: probably?
<SpamapS> hazmat: uploading 0.9.8 to debian unstable shortly
<hazmat> SpamapS, k, i don't see the recipe do anything different, and the embedded packaging hasn't changed.. but now the package being generated is empty.. which is just odd
<hazmat> SpamapS, cool, thanks
<SpamapS> hazmat: yeah probably needs a kick, not sure why
<sidnei_> SpamapS: i suspect the recipe got changed to not use a nested packaging branch at some point, since the build log for 0.9.7 has --with python2 and the one for 0.9.8 doesn't, and the debian/rules in lp:txzookeeper have not changed.
<SpamapS> quite likely the recipe has been broken for a long time. They tend to break over time in my experience.
<m_3> SpamapS!  /me waves
<SpamapS> m_3: howdy
<SpamapS> hazmat: anyway, 0.9.8-1 is in unstable now.. should make its way through the tubes to saucy by tomorrow
<hazmat> SpamapS, sweet
<hazmat> SpamapS, yeah.. that seems quite likely re the recipe, it hasn't built in a while
<hazmat> er.. been built
<SpamapS> Given how ridiculously simple it is to package.. kind of sad that it broke :-P
<hazmat> SpamapS, agreed
<dpb1> hazmat: can the ppa get changes faster?
<hazmat> dpb1, not sure what you mean
<dpb1> hazmat: sorry, I'm not sure why I typed that.  I meant, can the package be uploaded to the ppa?  (maybe you already have).
<hazmat> dpb1, the alternative is running a non ppa origin
<hazmat> er. removing origin and relying on distro version
<ahasenack> you can probably ask a web-op to kick the ppa and bump its priority
<ahasenack> if a new package is being uploaded, you should probably do that
<Akira1_> anyone else running into the python-txzookeeper thing floating around today?
<hazmat> ahasenack, the problem is the recipe is foobar
<hazmat> Akira1_, yes..
<ahasenack> hazmat: where is it? url
<hazmat> dpb1, ahasenack .. anyone else if you're interested.. and have packaging knowledge.. we're hanging in https://plus.google.com/hangouts/_/a278ee33e829a90be5c6c364a6754726e6b975ee?authuser=0&hl=en
<hazmat> ahasenack, https://code.launchpad.net/~juju/+recipe/txzookeeper
<SpamapS> hazmat: I'll try and join you shortly
<hazmat> SpamapS, that would be awesome.. i'm digging through dh_python2 conversion docs atm
<dpb1> my packaging knowledge is pretty minimal
<Akira1_> hazmat: we've been following your commits so coo
<ahasenack> hazmat: if I type "make" in the txzookeeper directory, I get the "done" target, that is confusing dh_auto_build
<ahasenack> there is an override to get it to use python build in this case, it is finding the Makefile and assuming that's how the thing is built
<hazmat> ahasenack, aha!
<ahasenack> let me find it
<hazmat> ahasenack, i'll just kill the makefile
<ahasenack> that would work too
<ahasenack> I don't remember how recipes handle 3.0 quilt packages, i.e., if they fetch the orig tarball or not
<ahasenack> so maybe you want to kill r55 too, or just try it without the makefile first and see what happens
<hazmat> ahasenack, killing the makefile and it seems to work locally.. recipe builds requeued
<ahasenack> ok
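The failure ahasenack diagnoses (dh_auto_build finding the top-level Makefile and using it instead of setup.py) can also be avoided without deleting the Makefile, by pinning the build system in debian/rules; this is a sketch of that alternative, not the change hazmat actually made:

```makefile
#!/usr/bin/make -f
# Force the distutils build system so a stray top-level Makefile in
# the source tree is never mistaken for the build entry point.
%:
	dh $@ --with python2 --buildsystem=python_distutils
```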
<adam_g> hazmat, ping
<adam_g> oh wait
<hazmat> fubar in progress.. crossed fingers on next recipe build
<mgz> there's also a note on the mailing list about txzookeeper in the ppa,
<mgz> so probably want to respond there when it's built and confirmed fixed
<hazmat> sidnei, ahasenack, adam_g thanks for your help
<hazmat> ppa should be good now
<sidnei_> or as soon as the index gets rebuilt
<Akira1_> working for us too
<Akira1_> was pretty neat as I was planning to demo juju to my project team about 90 minutes ago and we couldn't bootstrap
<Akira1_> I figured it just stopped working cause we were demoing it cause, you know, that is how things work, especially on wednesdays
<hazmat> Akira1_, ouch.. sorry. the root cause seems to have been the makefile, and then some flailing trying to fix that introduced another issue (source/format). it's generally been pretty good, but that particular build recipe hasn't been exercised in a long while.
<Akira1_> yeah, it happens ;)
<Akira1_> I'm loving this stuff otherwise. we've cooked up saltstack integration and I'm hoping to distill 4.5 years of garbage bash deployment scripts down to some minor charms and salt grains
<Akira1_> so cheers even with the hiccups
<jcastro> bah!
<jcastro> I missed him
<jcastro> would love to see some salt stack stuff
<robbiew> jcastro: see the new google plus?
<robbiew> man...not feeling it...but maybe it'll grow on me
<AskUbuntu> juju: deploying lamp charm on ec2 causes instances to terminate | http://askubuntu.com/q/295961
#juju 2013-05-16
<AskUbuntu> Juju and MAAS: ERROR No matching node is available - with Ready nodes | http://askubuntu.com/q/295992
<AskUbuntu> MAAS JUJU still get bad archive mirror | http://askubuntu.com/q/295999
<epic_> I made the mistake of installing raring, that means I cannot utilize juju right?
<epic_> or is there a repo somewhere with charms for raring?
<jamespage> epic_, most charms target precise officially, but will work with raring
<jamespage> also you can use juju to manage a precise environment from raring
<jamespage> i.e. juju client on raring - deployed environment == precise
<epic_> ok
<epic_> but I have a maas cluster of two raring machines, and I wanted to use juju to deploy openstack onto them, is there a way ? :)
<marcoceppi> epic_: You can "force" juju to deploy the precise version of your charm on to raring machines. Set default series in your environments.yaml to raring, then use juju deploy cs:precise/<charm-name>. There's no guarantee every charm will work with raring, but it might be easier than re-provisioning everything :)
<sidnei> hazmat: ping
<jcastro> ~20 minutes until UDS, first session is on Juju Docs!
<jcastro> http://summit.ubuntu.com/uds-1305/meeting/21699/servercloud-s-juju-docs/
<jcastro> arosales: we can probably remove the session for social pages on charms
<jcastro> rick has the assets from design, they just need to implement it, there's really nothing to discuss
<hazmat> sidnei, pong
<sidnei> hazmat: see pvt
<jcastro> https://plus.google.com/hangouts/_/10bbba04970621a9233d57c88c7d469acc185e86?authuser=0&hl=en
<jcastro> marcoceppi: arosales marcoceppi ^^
<marcoceppi> evilnickveitch: ^
<arosales> jcastro, we may still want to have the social pages for Juju session, as there are some additional feedback loops some folks wanted to discuss. May be a quick session though.
<jcastro> ok
<jcastro> arosales: http://youtu.be/E2peuYklMxw
<hazmat> jamespage, vmaas vms vs juju vms group thing starting up if you've got a few.. https://plus.google.com/hangouts/_/90577076c19891cc3206c77d3e16988d1e1f130e?authuser=0&hl=en
<jamespage> in a openstack ha then mongodb sessions right now
<jamespage> hazmat, ^^
<hazmat> ack
<sinzui> jcastro, has luca spoken to your team about calling "reviewed" charms "recommended"
<sinzui> ?
<jcastro> no
<jcastro> wait, don't tell me
<jcastro> we're renaming them again?
<gary_poster> mattyw, fwiw I was on vacation yesterday so could not attend charm helpers session.  Strawman I'd like to discuss with you later is that GUI team tries to merge host.py and our utils stuff to produce merged low-level bits, and we are responsible for getting higher-level charm-support tests to pass with merge.  Requirement is that merged result has a test coverage rating as high or higher than existing host.py, whatever
<gary_poster> that is.  Busy now, but can talk later
<luca> sinzui: jcastro don't do anything yet!
<luca> sinzui: jcastro we are looking into the UX of a better name for it. Recommended was an option but will most probably not be used. We're researching it.
 * jcastro nods
<mattyw> gary_poster, is that meant for me or matt wedgewood?
<gary_poster> mattyw, sorry, matt wedgwood!  bad assumption, apologies
<mattyw> gary_poster, no problem (I'm the cloud-green matt w)
<gary_poster> oh cool, hi mattyw :-)
<wedgwood> gary_poster: I think that's fine. I'm working on hook environment stuff now.
<gary_poster> cool wedgwood, thanks, sounds good
<dpb1> hazmat: hey, the ppa is updated now, all series?
<hazmat> dpb1, yes
<hazmat> dpb1, by all series.. that being raring, quantal, precise
<dpb1> hazmat: ah, I see.  the lucid build is still old
<dpb1> hazmat: is it possible to kick a lucid build too?
<hazmat> dpb1, checking
<hazmat> dpb1, it looks like it's barfing on dh_python2
<hazmat> https://launchpadlibrarian.net/139956755/buildlog.txt.gz
<jamespage> hazmat, you guys finished now?
<hazmat> jamespage, yup
<jamespage> bah
<jamespage> nm
<hazmat> jamespage, more to be discussed, lots of details and directions to be ironed out
<dpb1> :(
<jamespage> hazmat, dpb1: is that really a lucid build?
<hazmat> jamespage, apparently yes.. who knew :-)
<jamespage> I don't think lucid has dh_python2
<dpb1> jamespage: know of a quick way to fix it?
<hazmat> revert back to pycentral/pysupport ..
<hazmat> afaik
<dpb1> k
<jamespage> hazmat, dpb1: openstack packaging used to do this just after lucid
<hazmat> jamespage, support lucid with dh_python2 or use pycentral/pysupport?
<hazmat> i assume the latter.. cause it still works, even if it throws deprecation warnings
<jamespage> it had some hacks to support dh_python2 and pycentral/support
<jamespage> upstream activity backported to lucid until precise arrived
<hazmat> barry ref'd this bug re lucid/dh_python2 https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/788524
<_mup_> Bug #788524: backport dh_python2 to lucid (and maverick if appropriate) <pycentral-deprecation> <OpenStack Core Infrastructure:Fix Released by mordred> <python-defaults (Ubuntu):Confirmed for doko> <https://launchpad.net/bugs/788524>
<dpb1> hazmat: best of both worlds: https://pastebin.canonical.com/91141/
<hazmat> also refs the openstack usage
<dpb1> heh, ya, that is the bug that I was looking at too
<m_3> wedgwood: not sure it needs to be resolved right now... it's pretty easy to change
<dpb1> that patch (minus the stray .deb file) seems to work on my lucid machine
<m_3> wedgwood: oh nm... we wanna get stubs into charms sooner rather than later
<m_3> :)
<wedgwood> m_3: indeed, but also means changing charms
<marcoceppi> wedgwood: So what you're saying is just anything in tests/ gets run regardless of +x ?
<wedgwood> suppose windows criteria could be different than unix
<m_3> I can see value in ordering
<m_3> also having files there that _don't_ get executed
<wedgwood> m_3: upon consideration, +1 to ordering
<m_3> having multiple scenarios
<wedgwood> I like the test manifest idea. "Run these things"
<m_3> I was thinking ordering was more about "failing early" on simple tests
<marcoceppi> m_3: well I can see the "don't want it executed, don't put it in tests/, put it in tests/lib"
<marcoceppi> Or another subdirectory and just not walk trees when testing
<m_3> and less about setup/teardown... that (imo) should happen within each scenario
<marcoceppi> m_3: +1 for sure
<m_3> marcoceppi: yeah, stick to tree level 0 in there
<wedgwood> yep, we agree that order is important
<marcoceppi> A lot of my use cases have very unique setups with different charms and initial configs
<m_3> ok, so then any naming scheme's fine with me
<wedgwood> what about a single command that's expected to handle running tests?
<marcoceppi> m_3: wedgwood: I can drop the implicit .test and just run things in lexi ordering in the tests folder
<marcoceppi> wedgwood: the juju-test command?
<m_3> hell, even worth considering tree level 1... i.e., $CHARM_DIR/tests/<scenario-name>/{01.setup.sh,02.run.sh,03.teardown.sh}
<wedgwood> m_3: bah!
<m_3> wedgwood: what about that single command?
<m_3> wedgwood: ack, it's too noisy
<wedgwood> marcoceppi: I'm thinking mycharm/tests/run_tests
<marcoceppi> m_3: why couldn't that be $CHARM_DIR/tests/scenario.sh ?
<m_3> marcoceppi: excellent point
<marcoceppi> wedgwood: that would be encompassed in the juju-test command though, where it would just run one or more (or all) tests in the tests directory
<m_3> wedgwood: single starting point would work... lets you control your scenarios in python
<m_3> wedgwood: but I guess I like separate script file per scenario
<m_3> either way though really
<wedgwood> m_3: scenarios are still possible, but shouldn't juju-test have a single mode?
<m_3> so tree-level-0, executable, single file per scenario (that can optionally call as much other stuff as it likes)
<marcoceppi> If you wanted control over ordering $CD/tests/00-run-tests then you could mock scenarios you cared about in $CD/tests/scenarios and have the 00-run-tests build an order
<wedgwood> s/mode/target/
<m_3> wedgwood: if I understand correctly, then yeah, each scenario would be a separate stack of services (include bootstrap and destroy-environment)
<m_3> `juju test <charm-name>` should run all of them in order I think
<marcoceppi> m_3: right, buy could could also run juju test charm-name <test-name> to run just one test
<m_3> with an option of calling `juju test <charm-name>/tests/<scenario-file>` explicitly perhaps?
<wedgwood> marcoceppi: good point
<m_3> ah, same page
<marcoceppi> m_3: yeah, and we can make that an explicit path too
<marcoceppi> maybe just juju test <charm-name> -f tests/blah
<marcoceppi> or something
<m_3> <test-name> would have to match a file in $CHARM_DIR/tests otherwise
<m_3> explicit path might be easier to impl
<m_3> but really whatever works
<marcoceppi> m_3: right, so you could do that or possibly just have an additional option to supply a full file path
<wedgwood> to sum up my concern about naming schemes: I'd like to structure my tests in a way that works with my chosen programming language.
<marcoceppi> in the event you're cool and want to have tests outside of the charm
<m_3> hell, I'm happy with just making bits I'm _not_ interested in testing non-executable temporarily
<m_3> (during dev)
<marcoceppi> wedgwood: ack on your concern. I'm just going to have it execute whatever the hell you have in tests directory so you can name them as you see fit.
<m_3> wedgwood: yeah, understand... I'm fine with picking one set of names
<marcoceppi> I'll try to err on the side of flexibility rather than being too opinionated on things like naming and structure
<m_3> marcoceppi: +1
<wedgwood> marcoceppi: I think that's flexible enough. some sharp edges, but probably safe
<marcoceppi> wedgwood: we can dull those edges in the next few weeks, for sure
<marcoceppi> I'd like to just get something out that people can use without too many scrapes
<m_3> fits along with the "flexible framework" yet opinionated helpers/examples
<wedgwood> marcoceppi: and when we tackle windows, we may need a manifest file. there's no way to tell from naming or filesystem bits what is and isn't an executable.
<m_3> yup, or rely on tree depth
<marcoceppi> wedgwood: yeah, I think we'll need to cross that bridge when we get there, but I'll keep that in mind on these first few cuts
<marcoceppi> wedgwood: thanks for the feedback!
<m_3> I need to start using that one more in talks... "flex framework, opinionated charms/helpers"
<wedgwood> marcoceppi: looking forward to seeing your work!
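The conventions the three of them settle on — executable files at the top of tests/, run in lexicographic order, helpers tucked into subdirectories like tests/lib that are never executed directly — could be sketched like this (a hypothetical discovery routine, not the actual juju-test implementation):

```python
import os
import stat
import subprocess

def discover_tests(charm_dir):
    """Return executable regular files at the top of <charm_dir>/tests,
    sorted lexicographically. Subdirectories (e.g. tests/lib) are
    skipped, so helper code placed there is never run directly."""
    tests_dir = os.path.join(charm_dir, "tests")
    found = []
    for name in sorted(os.listdir(tests_dir)):
        path = os.path.join(tests_dir, name)
        mode = os.stat(path).st_mode
        if stat.S_ISREG(mode) and mode & stat.S_IXUSR:
            found.append(path)
    return found

def run_tests(charm_dir):
    """Run each discovered test in order, stopping at the first failure
    (the 'failing early on simple tests' idea from the discussion)."""
    for test in discover_tests(charm_dir):
        result = subprocess.call([test])
        if result != 0:
            return test, result  # first failing test and its exit code
    return None, 0
```

The lexicographic ordering is what makes naming schemes like 01-setup/02-run work without any manifest file; marking a file non-executable temporarily drops it from the run, as m_3 suggests doing during development.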
<marcoceppi> m_3: where should we start storing juju plugins? Given juju-test will likely be the first one
<m_3> yeah, awesome guys
<m_3> marcoceppi: good question man... not really sure... there're pros/cons with each option
<m_3> marcoceppi: we can package `juju-plugins` or `juju-contrib`
<marcoceppi> I'd probably lean to juju-plugins, contrib sounds too...official :)
<m_3> marcoceppi: we can have a juju-plugins project where each plugin has to be pulled separately
<m_3> marcoceppi: so we get packages like `juju-plugin-testing`
<m_3> makes it explicit to the user that this is something _other_ than juju-core
<marcoceppi> m_3: ack
<chilicuil> hello there, I'm working on a monitoring charm (observium) https://code.launchpad.net/~chilicuil/charms/precise/observium/trunk and currently it deploys the server part, but it doesn't enable clients; the machines that are going to be monitored need to install snmpd. I'd like that to be done via some kind of connection between the charms. I suppose it would require modifying the charms that will be clients, is this correct?
<m_3> juju-plugins-<common-subject> for groups of plugins
<marcoceppi> m_3: like juju-plugins-charm :)
<marcoceppi> get all the charm-tools juju plugins
<m_3> marcoceppi: yeah, so I think a plugins project... maybe repo per plugin?... and then we can create package-groups
<m_3> chilicuil: this is a perfect use for "subordinate" services
<marcoceppi> m_3: just started the plugin project, I'll just create a repo for juju-test and have it building in a ppa... maybe the juju/pkgs ppa?
<m_3> chilicuil: you'd create something like 'observium-emitter' that attaches as subordinate to any service
<chilicuil> m_3: I'll look for it at the documentation, thanks!
<m_3> chilicuil: then that subordinate charm adds snmpd and friends
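A subordinate like the 'observium-emitter' m_3 sketches would mark itself subordinate in its metadata.yaml and declare a container-scoped relation; the charm and relation names below are illustrative, not an existing charm:

```yaml
name: observium-emitter
summary: installs snmpd alongside any service to be monitored
subordinate: true
requires:
  general-info:
    interface: juju-info
    scope: container
```

Deployed with `juju deploy observium-emitter` and attached via `juju add-relation observium-emitter <principal-service>`, its install hook could then set up snmpd on the principal's machine.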
<marcoceppi> hum. Do plugins need to be the same license as core?
<m_3> marcoceppi: yeah, same ppa as core imo
<m_3> chilicuil: np
<m_3> marcoceppi: no clue
<m_3> marcoceppi: they're more one-off things... so make them as open as possible I'd think
<marcoceppi> I wish I was a lawyer
<m_3> haha
<m_3> I wouldn't wish that on you
<marcoceppi> haha
<m_3> although it might be helpful in a bar in DC :)
<m_3> marcoceppi: we can add lp:juju-plugins to the juju project group in lp
<marcoceppi> awesome
<m_3> I'll set up the mirror for github
<marcoceppi> m_3: ta!
<m_3> github.com/juju-plugins
<m_3> what's the lp project?
<marcoceppi> lp:i-am-not-a-lawyer
<marcoceppi> jk, lp:juju-plugins
<m_3> :)
<m_3> marcoceppi: I expect at some point there will also be plugins that're bundled with core, but nothing to do for that really
<m_3> should just work
<marcoceppi> m_3: so are we creating directories in a single bzr branch, or a branch per plugin?
<m_3> just don't know
<m_3> how heavyweight is repo per plugin?
<m_3> you clone it and symlink the plugin itself into your ~/bin dir
<m_3> or do you wanna just clone a single repo and then symlink selections
<m_3> we can do both
<marcoceppi> depends on how many plugins you have
<marcoceppi> I think it's a good idea, but it doesn't scale well in github unless you merge all the branches in to one section
<m_3> if they're all in one repo, then it's harder for us to create `juju-plugins-testing` and `juju-plugins-charm` packages
<marcoceppi> lets do different repos for now
<m_3> well, maybe it isn't though
<mattyw> I've got a charm which works under pyjuju - and fails under juju-core when running the relation_changed hooks. it looks like I get "no relation id specified" when running relation-set, does this make any sense?
<m_3> marcoceppi: I'm kinda thinking one repo
 * marcoceppi shrugs
<marcoceppi> just in different dirs?
<m_3> marcoceppi: you know what? it doesn't really matter right now... we can change this pretty easily
<m_3> marcoceppi: so maybe one repo... trunk on lp:juju-plugins... file per plugin (no need for subdirs)
<m_3> marcoceppi: then we can add a packaging branch at any time to package selected bits
<m_3> marcoceppi: with the full expectation that this will probably change
<marcoceppi> ack
<m_3> marcoceppi: I think it's a good idea to treat this as a big bucket... over time an individual plugin might "grow up" and need its own repo
<marcoceppi> m_3: good point
<m_3> marcoceppi: I could see Makefiles and other supporting crap associated with installing a single plugin... at the point it's no longer a single file of script, it gets a separate repo
<hazmat> mattyw, are you using upload-tools
<mattyw> hazmat, no I'm not
<hazmat> mattyw, can i see the charm?
<hazmat> priv msg if nesc
<jcastro> gary_poster: GUI session in about 20 min?
<gary_poster> y jcastro thanks.  on call now.  hangout link?
<jcastro> I haven't created it yet
<jcastro> I will in about 15
<jcastro> just wanted to give you a heads up
<gary_poster> cool thanks
<hazmat> jcastro, is there a separate irc channel then.. #ubuntu-uds-servercloud-2 ?
<hazmat> oh.. nm
<Huron> does anyone speak Spanish
<sinzui> arosales, Your user stories for charm testing rock
<sinzui> they should be added to author profiles
<arosales> sinzui, I can't take credit for that. I think marcoceppi did those in https://blueprints.launchpad.net/ubuntu/+spec/servercloud-s-juju-charm-testing
<arosales> they are good though :-)
<marcoceppi> Holy grammar hell, batman. I should have proofread those after I got off the plane
<sinzui> grammar was never a barrier for developers. My fingers don't believe in plurals and tenses.
<hazmat> marcoceppi, incidentally new deployer impl support for pyjuju is coming
<marcoceppi> hazmat: \o/ thanks for the update
<hazmat> marcoceppi, and deployer integrates what amounts to gowatch (available from the jujuclient api impl)
<marcoceppi> perfect
<hazmat> marcoceppi, what i really want is a way for charm authors/charm farmers to be able to manually kick off a test run on a particular charm or mp, and get notified of results.
<hazmat> i mean its icing.. but its delicious
<hazmat> ;-)
<marcoceppi> hazmat: yeah, you're not the only one asking for this. When local lands it'll be easy enough for charmers (sans ~) to just juju test <charm> <test-name> -e local, but I've been mulling over adding a juju-remote-test command for queueing a test on jenkins. Not sure how abused that will get though
<hazmat> marcoceppi, restricting it to ~charmers initially seems sane.. and helps reviewers
<marcoceppi> hazmat: yeah, would be a good starting point, though I'd favor just having automatic testing on merge requests over one-off test queueing
<hazmat> marcoceppi, true
<marcoceppi> Once we get people writing tests I'd love to start diving in to all that automation deliciousness
<hazmat> marcoceppi, so whats missing to me is why jitsu test is not the correct path forward
<hazmat> ignoring the implementation but the spec itself
<marcoceppi> hazmat: do you have the spec somewhere? I haven't actually read it
<marcoceppi> Though, I don't think the juju-test implementation would be too far off what jitsu test was striving to be
<marcoceppi> hazmat: this in the docs: https://code.launchpad.net/~clint-fewbar/charm-tools/charm-tests-spec/+merge/90232 ?
<hazmat> marcoceppi, its the one in the old docs
<hazmat> marcoceppi, http://jujucharms.com/docs/charm-tests.html
<marcoceppi> hazmat: yeah, so implementation-wise, listening for signals and shelling out to the tests in the tests directory (with better handling of the tests individually) is what juju-test will do
<marcoceppi> The test helper I'm working on just gives test authors much easier ways to inspect a deployed environment to build better detailed tests
#juju 2013-05-17
<hazmat> sidnei, its missing some of the waits for units running stuff, but the python-env branch of the new deployer is working against pyjuju
 * sidnei gives it a shot
<hazmat> sidnei, pushed some fixes for terminate
<hazmat> er reset
<sidnei> hazmat: ok, not using that on the pyjuju one yet, still doing nova delete
<sidnei> hazmat: seems like it worked, except for the waits :)
<hazmat> sidnei, nice
<sidnei> hazmat: https://pastebin.canonical.com/91208/
<hazmat> time for bed then :-)
<sidnei> cheers
<jcastro> marcoceppi: so I did a talk on juju @ a lug last night
<jcastro> and the drupal6 charm is broken
<jcastro> should I file a bug to unpromulgate?
<marcoceppi> jcastro: I'd at least file a bug
<marcoceppi> I'm a little unclear as to the unpromulgate process, does it just happen whenever or does it have to go through orphaned process, etc
<jcastro> no clue
<marcoceppi> Plus, if we unpromulgate it, where do we keep the charm for historical reasons? ~charmers/ will likely cause some oddities with ingestion in the GUI
<jcastro> well in any case
<jcastro> this would make a good test case to see what happens
 * marcoceppi nods
<jcastro> can we kick it back to brandon's namespace?
<marcoceppi> jcastro: I can't, no one but brandon has access to that namespace
<marcoceppi> BUT
<marcoceppi> it looks like he has a copy in his personal branch already so that would "just happen"
<marcoceppi> First audit casualty?
<jcastro> yeah
<jcastro> audit by failing a demo
<jcastro> marcoceppi: I am publishing the new calendar now
<jcastro> for reviews
<hazmat> hmm. that was known broken..
<jcastro> marcoceppi: ttrss looks like it's ready for round 2
<hazmat> i ended up pushing a drupal7 one to my ns
<hazmat> when i was doing a demo.
<marcoceppi> hazmat: yeah, I think it's been general knowledge that it's broken. Just not sure what to do about it until the recent audit sessions
<jcastro> no more mr nice guy
<jcastro> it doesn't work ---> see ya!
<hazmat> agreed
 * marcoceppi warms up the unpromulgater
<jcastro> if anyone has any cycles today for the review queue, that would be <3
<jcastro> marcoceppi: hey so so far Andreas has been asking for more charm-writing content in the charm school
<jcastro> want to concentrate on that this afternoon?
<marcoceppi> jcastro: sure
<jcastro> I sent you instructions for running the ubuntuonair thing
<jcastro> since it'll be you and mims today
<marcoceppi> jcastro: oh yeahh
<marcoceppi> thanks
<freeflyi1g> does juju go version only support constraints like arch, cpu-cores and mem?
<mgz> freeflyi1g: yes, for now
<freeflyi1g> mgz: thx
<mgz> feedback welcome about what you want most
<freeflyi1g> mgz: in python version, maas-name is very useful :)
<wedgwood> hazmat: is there any way to fix a machine agent showing "down"? I've ssh'd into the unit and restarted it, but the status is the same.
<wedgwood> (besides redeploying)
<sidnei> jcastro: are you guys going to be at velocity?
<jcastro> No, we got declined
<hazmat> wedgwood, could you pastebin the machine agent log
<wedgwood> hazmat: it's looking like a zookeeper problem
<hazmat> wedgwood, restarting the machine agent should resolve that
<hazmat> wedgwood, connectivity?
<wedgwood> hazmat: well, the zookeeper keeps dying
<hazmat> wedgwood, g+?
<wedgwood> hazmat: sure
<hazmat> i landed some fixes to do better back off on retry, but its only in the ppa as of two days ago.
 * hazmat refills on coffee
<jcastro> ahasenack: hey do you have any specific things you'd like covered in the charm school? Or just general charm authorship stuff?
<ahasenack> jcastro: hm
<ahasenack> jcastro: "best practices" for interface creation I guess, and simple examples of set and get relations, with emphasis on the fact that relations might not be established yet so it should noop
<ahasenack> jcastro: or maybe for this first one just explain the hooks
<gnuoy> jamespage, does java set the default maxheap size based on the ram of the system it's running on or is it set when the jre is compiled?
<jamespage> gnuoy, it's never clever - there is a jre default and then whatever the application specifies
<jamespage> cassandra does some auto-sizing
<jamespage> others don't
<gnuoy> jamespage, lovely, thanks
<marcoceppi> I was poking around the help for juju-core 1.10, in what version (if any) can you specify an alternate .juju/environments.yaml (or different .juju "home")? I couldn't find it in any of the help topics for juju-core
<sidnei> marcoceppi: JUJU_HOME iirc
<marcoceppi> sidnei: yup, thanks!
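A hedged sketch of sidnei's JUJU_HOME tip: juju-core reads an alternate config directory from the JUJU_HOME environment variable instead of ~/.juju. The environment name and yaml stanza below are made up for illustration.

```shell
# Point juju-core at an alternate config dir instead of ~/.juju.
# The environments.yaml contents here are a made-up example.
export JUJU_HOME=$(mktemp -d)
cat > "$JUJU_HOME/environments.yaml" <<'EOF'
environments:
  scratch-ec2:
    type: ec2
EOF
# juju commands in this shell now read $JUJU_HOME/environments.yaml, e.g.:
#   juju status -e scratch-ec2
echo "using $JUJU_HOME"
```

Handy for keeping a throwaway environments file around for testing without touching your real ~/.juju.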
<avoine> is there a snippet somewhere on how to do a wait loop for apt-get in case another charm is installing?
 * avoine thinks every charm should use that
<mattgriffin> it's like 2:30 in the morning and you're working on charms?
<avoine> here it's noon
<mattgriffin> oops… wrong channel :)
<mattgriffin> avoine: :)
<marcoceppi> avoine: There is talk of using aptdaemon to resolve this problem. Not sure if there are any snippets though
<avoine> ok
<avoine> I'll check it
<marcoceppi> s/is/was/
<jamespage> avoine, marcoceppi: so the latest python juju (and I believe juju-core) only allows serial hook execution in the same container
<marcoceppi> That might actually not be the right package, but there's supposedly a way to queue packages for installation so you don't collide with apt-get in the parent
<jamespage> thus avoiding a principal and subordinate charm trying to do conflicting operations at the same time
<marcoceppi> jamespage: that's super helpful to know
<jamespage> marcoceppi, hazmat fixed that for us during the HA work on OpenStack as we hit that issue *a lot*
<jamespage> (around January I think)
<marcoceppi> excellent. I should write more subordinate charms
<hazmat> that also made it into juju core (trunk atm afaicr)
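A minimal sketch of the kind of apt wait loop avoine asks about above, assuming the stock dpkg/apt lock file paths. Serialized hook execution (jamespage's point) removes the principal/subordinate race, but a manual apt run or a background upgrade can still hold the lock; the install command is left commented with a hypothetical package name.

```shell
# Poll the dpkg/apt lock files until no other process holds them.
# fuser exits non-zero when nothing has the files open.
wait_for_apt() {
    while fuser /var/lib/dpkg/lock /var/lib/apt/lists/lock >/dev/null 2>&1; do
        echo "another apt/dpkg operation is running; waiting 5s"
        sleep 5
    done
}

wait_for_apt
# now safe(r) to run, e.g.:
#   apt-get install -y mysql-server   # hypothetical package
echo "apt lock free"
```

This is polling, not a real queue; the aptdaemon approach marcoceppi mentions would be the more robust route.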
<marcoceppi> hazmat: any idea when the ppa will be updated with a more recent juju-core release?
<hazmat> marcoceppi, when there's a new release
<hazmat> marcoceppi, there are several ppas associated to juju-core
<marcoceppi> I thought core was up to 1.14 or something (where 1.10 is currently in the ppa)
<hazmat> 1.11
<hazmat> is trunk
<hazmat> mramm2, when's the next core release?
<mramm2> I think next friday
<mramm2> would have been this friday but Dave is traveling
<mramm2> I also think we should move to earlier in the week in case there is something serious that requires a followup release -- don't want to have to do that over the weekend!
<marcoceppi> For those waiting for the Ubuntu On Air, we'll be starting in a few mins
<marcoceppi> The Ubuntu on air page isn't quite working for us, so you can follow along here: http://www.youtube.com/watch?v=yRcqSjOGweo&feature=youtu.be
<marcoceppi> If you have any questions please feel free to ping me with your questions!
<marcoceppi> Questions?
<paraglade> marcoceppi: how about talking a bit about implicit relations
<paraglade> and scope
<marcoceppi> paraglade: queued up!
<paraglade> marcoceppi: this might be something actually to cover when you get into subordinates
<marcoceppi> paraglade: probably, which we'll probably spend a whole 'nother session on
<paraglade> :)
<paraglade> cool thanks!
<arosales> m_3, marcoceppi thanks for the charm school.
<marcoceppi> m_3: Thanks for running us through charms!
<m_3> arosales: sure
<m_3> thanks peeps for tuning in!
<dpb1> m_3: Hey -- Could you review this one when you get a chance?  https://code.launchpad.net/~davidpbritton/charms/precise/landscape-client/add-landscape-relation/+merge/161497
<marcoceppi> Oh boy, where is charm.log in go-juju deployments?
<marcoceppi> /var/lib/juju/units/<unit-name>/charm.log isn't quite there anymore :)
<m_3> marcoceppi: look in /var/log
<marcoceppi> m_3: ah, /var/log/juju/unit-name.log perfect
<marcoceppi> that will make archiving logs in juju-core a hell of a lot easier
<m_3> dpb1: I'll try today, but most likely Monday
<dpb1> m_3: monday is fine.
<dpb1> m_3: just a gentle reminder. :)
<dpb1> (now that sprint season is done for a while)
<m_3> gotcha
<m_3> actually at a conference next week
<m_3> _then_ the season's over for a bit
<marcoceppi> I don't know if it's just me, and my expectations, but I feel like "the cloud" has been noticeably more responsive since using the juju-core version
<jhujhiti> is this the right place to look for help with the openstack charms?
<sarnold> it's not -wrong-, anyway :) but it is late in the day..
<jhujhiti> well, it's worth a shot
<jhujhiti> i'm trying to help someone i work with get this openstack deployment straightened out. he's used juju charms to deploy mysql/rabbitmq/keystone/glance/swift/etc in HA. but it seems to have left the default database for each of the openstack services set to sqlite
<jhujhiti> there's a relation in juju, so i'm not sure what's been done wrong
<jhujhiti> and there are too many moving parts for me to just dive in and fix it, having never set it up myself
<jhujhiti> using glance as an example, i read through the 'juju debug-log' as i removed and re-added the relation to mysql. glance says something to the effect of 'database settings changed, will try again' but it never does
#juju 2013-05-18
<marcoceppi> jhujhiti: There's a merge coming to the keystone charm that allows users to migrate from sqlite to mysql
<marcoceppi> jhujhiti: https://code.launchpad.net/~nick-moffitt/charms/precise/keystone/migrate-sqlite-to-mysql/+merge/149334
<jhujhiti> it didn't deploy it right in the first place :/
<jhujhiti> but i'm starting over, we'll see how that goes
<marcoceppi> jhujhiti: The OpenStack charmers are quite receptive to feedback, they just don't typically patrol the chat during the weekend. So if you have any feedback for them I'm sure they'd want to know your pain points, etc
<marcoceppi> Actually, any feedback you have from the process would be helpful :)
<jhujhiti> my pain point right now is that we're trying to use ancient hardware that i don't have remote management to. and wake-on-lan doesn't appear to be working :(
<marcoceppi> jhujhiti: Wish I could help you there :\
<jhujhiti> meh, i'm not walking to the datacenter at 10 PM, i'll have to pick this up tomorrow
<jcastro> marcoceppi: heya, where's the youtube of the charm school?
<marcoceppi> jcastro: hey, I was slightly confused by your instructions. So it recorded on my personal account
<jcastro> that's ok, that's what you're supposed to do
<jcastro> I didn't want to create a juju channel with separate creds, etc.
<marcoceppi> http://www.youtube.com/watch?v=yRcqSjOGweo
<jcastro> ok I'll add it to the website, etc on Monday
<jcastro> someone was asking me directly for it
<marcoceppi> jcastro: tl;dw we should have a part two from this charm school
<jcastro> 2 weeks!
<marcoceppi> hopefully it'll line up with "writing a charm WITH TESTS" if testing is ready by then
#juju 2013-05-19
<jhujhiti> hmm, "ERROR vip is not a valid configuration option." during mysql deploy. using the example YAML config
<jhujhiti> ah here we go, a real, honest-to-god error that i don't think is my fault:
<jhujhiti> 2013-05-18 22:45:18,596 unit:mysql-hacluster/1: unit.hook.api INFO: Missing required principle configuration: ['corosync_bindnetaddr', 'corosync_mcastport']
<jhujhiti> 2013-05-18 22:45:18,933 unit:mysql-hacluster/1: unit.hook.api INFO: Unable to configure corosync right now, bailing
<jhujhiti> but corosync.conf on the boxes has bindnetaddr 127.0.0.01 and mcastport 5405
#juju 2014-05-12
<Sebas_> hey! somebody there?
<Sebas_> jose or lazyPower take a look at this video at the time 1:15 http://hemingway.softwarelivre.org/fisl15/high/41f/sala41f-high-201405070913.ogv
<Sebas_> I talk about juju and Drupal
<jose> Sebas_: Cool! I'll take a look :)
<Sebas_> jose \o/
<Sebas_> actually I was unexpectedly called to the stage hehe
<jose> :P
<Sebas_> so I didn't prepare anything, but it was alright hehe
<jose> is it in 1m15s or 1h15m?
<Sebas_> 1h16m
<Sebas_> jose: that was in the FISL 15
<jose> is that this year?
<Sebas_> yes! actually last wednesday
<jose> awesome!
<Sebas_> yeah! \o/
<Sebas_> I talked with all kinds of people about juju
<Sebas_> for example redhat people from openshift
<jose> tell them to switch to Ubuntu!
<Sebas_> hehe they were really shocked about what juju can do
<Sebas_> haha yeah!
<Sebas_> sadly ubuntu didn't appear at the event
<jose> maybe next year?
<Sebas_> debian, redhat, opensuse, fedora and others
<Sebas_> yes of course
<Sebas_> I'm thinking of talking about juju next year
<Sebas_> also talked with some guys from openstack community
<Sebas_> they didn't know about openstack bundle, and the maas integration
<Sebas_> they are using foreman
<Sebas_> and puppet
<jose> I assume they do now?
<Sebas_> yes!!
<Sebas_> but I haven't used juju to deploy OpenStack yet, because I want to deploy it only with lxc containers
<jose> oh well
<Sebas_> i only have 1 dedicated server, and i really don't want to deploy openstack in VMs
<jose> well, you can use the EC2 free tier
<Sebas_> free you say?
<sarnold> for tiny and slow machines :)
<jose> http://aws.amazon.com/free
<sarnold> but wonderful for testing things
<Sebas_> ohh nice
<Sebas_> yeah!
<jose> I do all my charm development/testing on EC2's free tier :)
<jose> hi sarnold :)
<Sebas_> ahh ok
<sarnold> hey jose :)
<Sebas_> yeah but I was trying to avoid kvm's hehe
<jose> it's not kvm
<jose> or well, I'm not sure
<sarnold> probably xen
<sarnold> the best part is you mostly don't care :)
<jose> yeah
<Sebas_> yes probably xen
<Sebas_> haha
<Sebas_> ¬¬
<jose> you just use it
<jose> or well, at least I do
<sarnold> *nod*
<Sebas_> have you tried openstack with juju ?
<Sebas_> jose or sarnold hmm let me rephrase that, have you already deployed openstack with juju?
<sarnold> Sebas_: not me, I don't have enough machines to make use of openstack, heh
<Sebas_> sarnold: same problem here
<Sebas_> i was trying some openstack all-in-one
<Sebas_> but i don't know if there is something like that in ubuntu
<Sebas_> like RDO from redhat has
<sarnold> ahhh :)
<jose> I don't use OpenStack :)
<Sebas_> ohh, jose, you use other thing?
<jose> yes, I use EC2
<Sebas_> ahhh ok
<sarnold> Sebas_: hrm, in the jujucharms.com there's a bundle for deploying openstack on a machine with nested KVM
<sarnold> Sebas_: but I don't know the manage.jujucharms.com website well enough to find that bundle again
<jose> let me try and do that for you
<Sebas_> ohhh thanks!
<sarnold> I've done nested KVM before when testing MAAS systems; it's a bit slow, but it got the job done..
<sarnold> but nothing you'd want to use long-term
<jose> sarnold: who created that bundle?
<jose> is it http://manage.jujucharms.com/bundle/~james-page/openstack-on-openstack/openstack ?
<sarnold> jose: that's the one! ah. it's for testing on clouds that already exist. :)
<Sebas_> hmmm im getting a look
<Sebas_> "The Ubuntu Server Team use this bundle for testing OpenStack-on-OpenStack."
<Sebas_> hehe nice
<Sebas_> the thing is to use openstack to manage the vm's with lxc
<Sebas_> like the juju local does
<zchander> Good morning (GMT+2 over here…). Is anyone around who can provide some additional information about juju-gui together with a HAProxy based gateway?
<rick_h_> zchander: never tried it but what's up?
<zchander> I am (re)trying my MaaS cluster, but had to reinstall everything. I would like to access juju-gui from within our school network, but our MaaS is in its own network.
<zchander> I have a separate server, with two network adapters, one connected to our school network and one to our MaaS network. I would like to use the SSL-enabled environment (I had the non-secure option working)
<rick_h_> zchander: ok, so you could haproxy the 80 but not the 443?
<zchander> rick_h_: Yes. using port 80 has worked (and before).
<rick_h_> zchander: hmm, looks like we do send the port as 443 http://bazaar.launchpad.net/~juju-gui-charmers/charms/precise/juju-gui/trunk/view/head:/hooks/web-relation-joined#L29 if secure is on
<zchander> My problem/issue is with HAproxy. I cannot seem to get a proper connection from my client (using SSL) to juju-gui (behind HAproxy)
<rick_h_> zchander: looking at haproxy it seems it'll only pick up the port it's told if it has a service_name http://bazaar.launchpad.net/~charmers/charms/precise/haproxy/trunk/view/head:/hooks/hooks.py#L823
<rick_h_> zchander: oh hmm, yea I'm not sure how the ssl pass through would work there.
<rick_h_> at that point haproxy is almost playing a man-in-the-middle attack against the client.
<rick_h_> hmm, everything I'm seeing is having you setup haproxy to be the ssl endpoint
<zchander> I am also trying it with Apache2 as the reverse proxy, but here I get some I/O error issues. Might be related…
<rick_h_> zchander: yea, looking for docs/notes I did run across http://comments.gmane.org/gmane.comp.web.haproxy/15689 which seems to imply that it should be possible
<rick_h_> zchander: the charm just might not support that complex a config ootb
<zchander> The HAproxy server isn't part of our juju environment. It is a separate Ubuntu-based server with 2 nics
<rick_h_> zchander: oh ok, then yea. I think it should be possible just a matter of getting the right ssl handling config into the haproxy server I think.
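A hedged sketch of the SSL pass-through config rick_h_ is pointing toward: in "mode tcp" haproxy forwards the raw TLS stream, so the juju-gui unit stays the SSL endpoint rather than haproxy terminating it. The backend address is hypothetical, and the config is written to a temp file here purely for illustration (a real setup would edit /etc/haproxy/haproxy.cfg).

```shell
# Write a minimal tcp-mode pass-through stanza (backend IP is made up).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
frontend gui-https
    bind *:443
    mode tcp
    default_backend juju-gui

backend juju-gui
    mode tcp
    server gui-0 192.168.10.5:443 check
EOF
echo "wrote sketch config to $cfg"
```

Because the stream stays encrypted end to end, haproxy can't do HTTP-level routing or header rewriting in this mode; that's the trade-off versus making haproxy the SSL endpoint.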
<rbasak> How do I generate a release tarball from the bzr branch (specifically 1.18)? I don't see any instructions anywhere, and the two are quite obviously different.
<marcoceppi> rbasak: ask sinzui
<sinzui> Be aware the answer might not make sense.
 * sinzui still needs more coffee
<rbasak> sinzui: I want to test the latest 1.18 branch against packaging in Utopic
<rbasak> (since there's no release of the last few commits yet)
<rbasak> So I tried to generate an orig tarball based on the bzr branch, which failed, and then I noticed that they aren't the same.
<sinzui> rbasak, that is because devel/network broke juju devel and stable :)
<rbasak> sinzui: yeah it's the fix for bug 1314686 I'm trying to verify
<_mup_> Bug #1314686: Please add support for utopic <packaging> <juju-core:Fix Released by wallyworld> <juju-core 1.18:Fix Committed by wallyworld> <juju-core (Ubuntu):Triaged> <https://launchpad.net/bugs/1314686>
<sinzui> rbasak, CI is catching up to devel at this hour. Next hour I will force a rebuild of stable if a new rev doesn't automatically trigger a build.
<sinzui> rbasak, If you cannot wait a few hours, then you can build it yourself with lp:juju-release-tools
<rbasak> sinzui: OK, thanks. lp:juju-release-tools sounds like what I needed. I can wait though, but if I do, where do I get the orig tarball from after that? The orig tarball from the stable PPA?
<sinzui> rbasak, http://bazaar.launchpad.net/~juju-qa/juju-release-tools/trunk/view/head:/README.md
<sinzui> rbasak, make-release-tarball.bash -1 lp:juju-core/1.18
<rbasak> sinzui: ah, great. Thanks!
<sinzui> rbasak, make-package-with-tarball.bash stable ./juju-core_1.18.4.tar.gz \
<sinzui>         'Full Name <user@example.com>'
<sinzui> ^ that can be installed
<rbasak> sinzui: thanks - I got a release tarball now I think. Do you accept MPs for juju-release-tools? I have a trivial fix, but it looks like it has only one revision fed from somewhere else.
<sinzui> rbasak, We will accept fixes thanks
<sinzui> rbasak, was the '-1' a problem? I often use the actual revno instead of an index
<rbasak> sinzui: https://code.launchpad.net/~racb/juju-release-tools/hg/+merge/219188
<rbasak> sinzui: ah, I missed the -1. I read the docs and looked up the revno.
<sinzui> Thank you rbasak. I will get that merged in a few minutes
<rbasak> BTW, it gave me juju-core_1.18.4.tar.gz which doesn't quite seem right as there are more commits on top.
<rbasak> "git describe" does something sensible here, by appending a suffix describing the extra commits (in git's case, the number of commits and the HEAD commit hash).
<rbasak> Maybe that's appropriate here too?
<rbasak> Or maybe just a "~bzr{revno}" if the tag doesn't exactly match.
<rbasak> Actually "+bzr{revno}" would probably be more accurate.
* lazyPower changed the topic of #juju to:  Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewer: lazyPower
<Sebas_> lazyPower: thanks for the answer :)
<lazyPower> Sebas_: o/  anytime
<lazyPower> i'm puzzled why your bug isn't showing up in the rev queue
<lazyPower> glad i saw the mail or your bug would have gone unnoticed.
<Sebas_> lazyPower: take a look at the description i made in the bug report https://bugs.launchpad.net/charms/+bug/1315428
<_mup_> Bug #1315428: Review needed: Drupal Charm <drupal> <Juju Charms Collection:New> <https://launchpad.net/bugs/1315428>
<Sebas_> lazyPower: there are some of the questions
<lazyPower> http://manage.jujucharms.com/tools/review-queue -- feel free to check ~ 15 minutes after you've opened a bug, it ingests every 15 minutes (so max wait is 30 mins) - and as you see its not in there.
<Sebas_> lazyPower good to know that the mail helped hehe
<lazyPower> indeed!
<Sebas_> lazyPower nice, I saw it yesterday, but didn't see my issue hehe
<mbruzek> Hey lazyPower there is more than one drupal charm out there.  I remember reviewing one recently.  What do we do with multiple requests for the same charm?
<marcoceppi> Sebas_: lazyPower there's no linked branch
<lazyPower> marcoceppi: ah, so the fact it's in github is the blocker for it getting into the queue. ok.
<lazyPower> mbruzek: it'll require collaboration between the two charm authors i'd say. Drupal and ElasticSearch are both subject to this at the moment.
<marcoceppi> mbruzek: none are in the store yet
<lazyPower> mbruzek: that, or maybe the best charm wins? (i really don't know, this is a first for me)
<lazyPower> Ideally we'd want both authors working together toward the same goal, building a high quality charm that implements best practices from both branches. But that's my 2 cents.
<mbruzek> Sebas_, for reference the other drupal charm is here: https://bugs.launchpad.net/charms/+bug/1290636
<_mup_> Bug #1290636: New drupal charm submission. <Juju Charms Collection:Incomplete by dweaver> <https://launchpad.net/bugs/1290636>
<Sebas_> mbruzek: ooh i see, opening here...
<mbruzek> Sebas_, It uses apache2, and yours uses nginx, I would say that is pretty different.
<Sebas_> mbruzek: it uses nginx with perusio configurations
<mbruzek> Sebas_, but see if there is a way the authors could agree on something.
<Sebas_> mbruzek: yes we can do a merge
<mbruzek> Either that or race his charm to the store...
<Sebas_> hehe
<lazyPower> Sebas_: its ok to be opinionated in your charm, we encourage that. With that being said, extending one or the other - based on a user config (use_apache: type: boolean, default: false) would make it really nice...
<lazyPower> to auto-tune the configuration to your given preference in httpd
<Sebas_> mbruzek: there are other differences too
<Sebas_> there are other things like, importing an existing project, compass install, etc...
<Sebas_> the thing is this charm is good practices oriented
<Sebas_> a drupal boilerplate is used
<lazyPower> dweaver: ping
<lazyPower> Sebas_: you two should talk :)
<Sebas_> lazyPower: yeah definitely
<Tug> Hi everyone
<mbruzek> Hi Tug
<Tug> I'm having an error with debug-hooks
<Tug> $ juju debug-hooks mongos/0
<Tug> can't create socket: Permission denied
<Tug> any idea ?
<Sebas_> lazyPower: my questions about scaling are in the description of the issue, do you think thats enough?
<lazyPower> Sebas_: occupied atm, can i get back to you?
<Sebas_> lazyPower: to schedule our school charm :)
<Sebas_> yes!! of course!
<Sebas_> :D
<Sebas_> lazyPower: thanksss
<lazyPower> If you're not around when i circle back i'll reply to the thread.
<lazyPower> thanks for your patience Sebas_
<Sebas_> lazyPower :D
<dweaver> lazyPower, pong
<lazyPower> dweaver: Sebas_ is the other community member working on a drupal charm. You guys should coordinate, as there's awesome work in both charms.
<Sebas_> dweaver: hey man!
<dweaver> lazyPower, Yes, I saw there was another submission.  Hi Sebas_
<Sebas_> dweaver: we should work together on this charm, what you think?
<Sebas_> dweaver: if you can take a look at the charm to see what is doing
<dweaver> Sebas_, I'd be happy to combine our resources.
<Sebas_> but my goal is to deliver a best practices and enterprise charm ready
<Sebas_> dweaver: \o/
<Tug> it was working fine until now :'(. Please, do you guys have any tips to start debugging the issue ?
<Sebas_> take a look at the charm later please :)
<mbruzek> Tug I haven't seen that before.  Can you run:  juju ssh mongos/0 ?
<Sebas_> dweaver: we need to put apache option as webserver in the other charm
<Tug> mbruzek, yes I can
<dweaver> Sebas_, That's a great goal and I'm happy to help.  Time is the problem for me, I don't have much spare, but I should be able to review the charm later this week and let you know what I think so far and some suggestions on how we can combine them.
<mbruzek> Tug, and you connect OK?  It looks like debug hooks failed on creating a connection.
<Sebas_> dweaver: that would be great!! thanks :)
<Tug> mbruzek, yes it connects fine. Then I can run $ sudo tmux attach-session -t mongos/0 and I also have can't create socket: Permission denied
<lazyPower> Tug: looks like something detached you from the session and there's possibly a permissions problem. I ran into this before, but didn't think much of it - it wasn't production and i destroyed/redeployed
<lazyPower> here's a bug that shows what i found: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=597335
<Tug> thx lazyPower, my error is even weirder as I'm running it with sudo
<lazyPower> ah, thats no fun :(
<Tug> not sure which process I could kill/restart...
<Tug> I have 3 jujud
<mbruzek> Tug, I believe lazyPower means to destroy-environment and restart this
<mbruzek> juju destroy-environment
<mbruzek> Tug, you should not need to kill individual processes
<Tug> ok mbruzek, I'll try to destroy the service first I think
<Tug> maybe even remove juju from the machine and add-machine later
<mbruzek> Tug what environment are you using?
<Tug> amazon
<Tug> machines were created manually
<mbruzek> Tug, I see
<marcoceppi> hey stub got a second?
<mbruzek> Is the 'HOME' environment variable not available in a hook environment?
<mbruzek> in python I mean?
<mbruzek> I am writing a charm in python, os.path.join(os.environ['HOME'], 'Liberty') that line fails.
<mbruzek> But when I ssh to the charm 'HOME' is set for both ubuntu and root user.
<marcoceppi> mbruzek: it's not set
<marcoceppi> not in a hook at least
<mbruzek> marcoceppi, ah I see.  Is there a good reason for not setting HOME ?
<lazyPower> marcoceppi: since you're working with trusty testing, i've validated a MP works against a charm in series trusty - should i go ahead and push it to trusty or wait for you to do it?
<marcoceppi> mbruzek: probably, I mean what would HOME be?
<mbruzek> I can just use a tmp directory or something.
<lazyPower> meaning - it was targeted at precise, but works on trusty.
<marcoceppi> lazyPower: link?
<mbruzek> marcoceppi, I would expect HOME == /root
<marcoceppi> mbruzek: I would expect HOME to be $CHARM_DIR
<lazyPower> http://leankit.hostmar.co/lp:charms/elasticsearch/215696
<mbruzek> You are after all the root user
<marcoceppi> just because you're root doesn't mean you want things to end up in the HOME directory
<marcoceppi> it makes people think about what they're doing, since services installed by charms should not run as root
<mbruzek> That is fine, I was just trying to get a scratch area to do some work in, I thought HOME would be the place to do that
<mbruzek> I can do it in temp.
<mbruzek> or tmp
<marcoceppi> mbruzek: tmp is a better place
<marcoceppi> or $CHARM_DIR
<marcoceppi> lazyPower: there's no elasticsearch charm in trusty
<lazyPower> marcoceppi: i'm aware, thats why i asked.
<lazyPower> this works on trusty
<marcoceppi> lazyPower: there are no tests though
<mbruzek> What would happen if a bash charm did "cd"  what directory would pwd return?
<lazyPower> marcoceppi: there are when you pull elasticsearch and merge this in.
<marcoceppi> lazyPower: ah, cool I'll keep that in mind
<marcoceppi> mbruzek: probably either cwd or /
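A hedged sketch of the workaround mbruzek lands on: since HOME is unset in the hook environment, take a throwaway scratch directory (or use $CHARM_DIR, which juju does set inside real hooks) instead of relying on ~.

```shell
# HOME is not set when a hook runs, so don't touch ~; use a scratch
# directory instead. (In a real hook $CHARM_DIR is another option.)
scratch=$(mktemp -d)
echo "downloading and unpacking under $scratch"
# ... fetch / unpack / build here ...
rm -rf "$scratch"   # leave nothing behind when the hook exits
```

The same applies to python hooks: os.environ['HOME'] raises KeyError, but tempfile.mkdtemp() or os.environ['CHARM_DIR'] works.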
<marcoceppi> lazyPower: if you want to give it the +1 and merge precise
<marcoceppi> I'll do the promulgation to trusty
<lazyPower> marcoceppi: already merged precise. pending your validation of trusty.
<marcoceppi> lazyPower: ack, ty
<Sebas_> i'm trying to deploy openstack all-in-one with juju in one dedicated server, so I'm going to try to deploy some charms that must be out of an lxc container
<Sebas_> like the compute-node
<marcoceppi> Sebas_: you really need two nodes to do that
<marcoceppi> one for juju bootstrap (and all other services in LXC container)
<marcoceppi> and a second one for nova-compute
<Sebas_> marcoceppi: second dedicated server?
<Sebas_> marcoceppi: i was thinking something like all the os charms in lxc, but! the compute-node (with lxc libvirt type) in the 0 machine
<marcoceppi> right, everything can go in LXC except nova-compute
<Sebas_> marcoceppi: i'm tripping? hehe
<Sebas_> right
<marcoceppi> which I guess you /could/ put on node -
<marcoceppi> 0*
<marcoceppi> but it just seems like it'd do better on its own machine
<Sebas_> marcoceppi: \o/
<marcoceppi> you also can't use ceph with this setup
<Sebas_> of course, but i have only 1 machine hehe
<Sebas_> marcoceppi: instead of swift ?
<marcoceppi> well you couldn't have any ceph backed anything
<Sebas_> marcoceppi: well, i'm going to try that approach adapting the openstack bundle
<marcoceppi> Sebas_: good luck!
<Sebas_> marcoceppi: thank you!!!
<qhartman> If I destroy a service, should I expect juju to remove the associated packages from the machine it was deployed on?
<qhartman> also, if I have all of my charm config in a file, and I run "juju set --config=config_file.conf the_charm" should I expect the_charm to get the updated config from there?
<hatch> qhartman no it won't remove the associated packages, you'll have to account for that in the charm if you don't want to tear down the machine/container and create a new one (recommended)
<hatch> qhartman when the config is updated the config-changed hook will be executed; from there, you do with the data what you wish
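A hedged sketch of hatch's point: changing options (including via `juju set --config <file>`) fires the charm's config-changed hook, which reads the new values with the config-get hook tool. The "port" option is hypothetical, and the stub function stands in for the real hook tool when run outside a juju unit.

```shell
# config-changed hook sketch. config-get is a juju hook tool available
# on PATH inside the hook environment; stub it for illustration here.
if ! command -v config-get >/dev/null 2>&1; then
    config-get() { echo 8080; }     # illustration-only stub
fi

port=$(config-get port)
echo "config-changed: reconfiguring service on port $port"
# a real hook would rewrite its config file and restart the service here
```

The hook fires on every config change; it's up to the charm to compare against its previous state and act only when something it cares about actually changed.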
<Sebas_> what's the best series for deploying local juju?
<Sebas_> trusty ?
<Sebas_> or precise ?
<Sebas_> somebody? :P
<Sebas_> i'm asking because i'm planning to deploy a nova-compute-node charm into the 0 machine
<cory_fu> jose: Did you see my comment on the tracks charm?
<jose> cory_fu: yep! I just came back from university
<cory_fu> Oh, ok.  :)
<cory_fu> It's so close!  :)
<jose> are you sure I should do -lt?
<cory_fu> Yeah.  Apparently, < does lexicographical ordering instead of numerical ordering
<jose> oh, ok
<jose> I should change that everywhere, right?
<cory_fu> Well, you could also use double parentheses instead of double brackets
<cory_fu> http://www.tldp.org/LDP/abs/html/comparison-ops.html
<cory_fu> I think everywhere you're comparing to a number (i.e., port)
<jose> easier to do -lt :P
<cory_fu> Yeah.  :)
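A quick demo of cory_fu's point, with made-up values: inside `[[ ]]`, `<` compares strings lexicographically, so "9" sorts after "10"; `-lt` (and `(( ))` arithmetic) compare numerically, which is what a port comparison wants.

```shell
#!/bin/bash
# String comparison: "9" > "10" because '9' sorts after '1'.
if [[ 9 < 10 ]]; then echo "[[ < ]]: 9 before 10"; else echo "[[ < ]]: 9 after 10"; fi

# Numeric comparisons get it right:
[ 9 -lt 10 ] && echo "-lt: 9 less than 10"
(( 9 < 10 )) && echo "(( )): 9 less than 10"
```

The first line prints "[[ < ]]: 9 after 10", which is exactly the bug lexicographical ordering causes in a port check.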
<jose> cory_fu: pushed!
<cory_fu> Awesome!  Let me take a look
<jose> cool
<cory_fu> mbruzek: Here's an example of my super simple upstart script (writer) for Apache Allura: http://bazaar.launchpad.net/~johnsca/charms/precise/apache-allura/refactoring-with-tests/view/head:/scripts/write-service
<mbruzek> Thanks cory_fu
<lazyPower> jose: there's a problem with owncloud on default deploy. I've got a recent revision of the tests here - bazaar.launchpad.net/~lazypower/charms/precise/owncloud/refactor_amulet_tests/
<lazyPower> that should be the last issue to resolve and its g2g for your dependent branch merge
<lazyPower> s/owncloud on default deploy/ the port configuration code on default deploy/
<cory_fu> I wrote that write_service script but it would be easier to have a files directory with the .conf files already created and just copy them into place with install or charmhelpers.core.host.write_file
<jose> lazyPower: ack, taking a look
<cory_fu> mbruzek: ^
<mbruzek> cory_fu, yeah good idea.
<Sebas_> yikes!! juju-gui is not deploying, WARNING juju.worker.uniter.charm git_deployer.go:200 no current staging repo
<Sebas_> with the precise series
<jose> lazyPower: what's the problem with the owncloud charm?
<jose> I cannot seem to find it
<lazyPower> ports aren't exposed. Run the test with --set-e and --timeout 15m
<lazyPower> then inspect the environment after the test runs.
<jose> ok, trying now
<lazyPower> jose: charm test -e amazon --set-e --timeout 20m 100-deploy.py -o test
<lazyPower> that way you've got a) logs being piped after the run, b) an inspectable environment, and c) only running the test in question, not waiting for the setup script to run.
<whit> hazmat, bcsaller, setup completes, but hangs on the login
<whit> this is aws single node
<jose> (my amazon environment is named ec2)
<bcsaller> whit: did you bootstrap with the proper constraints
<Sebas_> wow! juju-gui is not deploying any more :(
<Sebas_> http://pastebin.com/PT716tDT
<marcoceppi> Sebas_: this is a juju-core issue not a juju-gui issue
<marcoceppi> Sebas_: 1.18.2 ?
<Sebas_> let me check
<Sebas_> marcoceppi: 1.18.3.1
<Sebas_> marcoceppi: do you think I must report a bug?
<marcoceppi> this has been fixed in 1.19
<marcoceppi> Sebas_: they removed the git requirement all together
<Sebas_> oohh
<Sebas_> marcoceppi: 1.19 is stable?
<marcoceppi> Sebas_: no, 1.ODD are devel releases, 1.EVEN are stables
<marcoceppi> Sebas_: but it's been a known shortcoming for a while in juju-core, using git as a state server for charms
<Sebas_> ooohh marcoceppi i get it
<Sebas_> marcoceppi: do you know a workaround for this?
<jose> marcoceppi: did you get my comment about the wp mp?
<marcoceppi> jose: no, link?
<marcoceppi> Sebas_: for now just ssh in to the server and install git, then deploy again
<Sebas_> hehe marcoceppi i was just going to do that
<Sebas_> thanks :D
<Sebas_> marcoceppi++
<jose> marcoceppi: https://code.launchpad.net/~jose/charms/precise/wordpress/fix-1309980/+merge/216568
<marcoceppi> jose: oh, yeah, as long as it's still in the review queue it'll get looked at
<marcoceppi> (it's still in the review queue)
<jose> I know, just wanted to let you know
<Sebas_> humm thats strange "Could not resolve 'archive.ubuntu.com'"
<jose> Sebas_: juju resolved unit/# --retry
<Sebas_> jose: will try
<Sebas_> jose: the thing is it isn't in a error state
<jose> oh, where is that error?
<Sebas_> *an error state
<Sebas_> jose: i can't access the internet from the machine
<Sebas_> so apt isn't finding the repository
<marcoceppi> Sebas_: so there's your problem
<Sebas_> marcoceppi: yep
<marcoceppi> Sebas_: I wonder why git wasn't installed (it should be installed when a machine is provisioned)
<marcoceppi> Sebas_: is this maas?
<Sebas_> marcoceppi: it's juju-local
<Sebas_> just that
<marcoceppi> Sebas_: ah, can you run this?
<Sebas_> in a trusty
<marcoceppi> apt-config | grep -ri "proxy"
<marcoceppi> inside the unit
<Sebas_> marcoceppi: ok
<marcoceppi> sudo apt-config dump | grep -ri "proxy"
<marcoceppi> sorry, let me give you the real command
<Sebas_> .profile:[ -f "$HOME/.juju-proxy" ] && . "$HOME/.juju-proxy"
<marcoceppi> Sebas_: can you pastebin the output of sudo apt-config dump ?
<Sebas_> marcoceppi: of course
<Sebas_> marcoceppi: http://pastebin.com/j81bVttQ
<marcoceppi> Sebas_: is that on your machine or the node?
<Sebas_> unit
<Sebas_> machine
<Sebas_> marcoceppi: ubuntu@ubuntu-local-machine-2:~$ sudo apt-config dump
<Sebas_> marcoceppi: why, is it weird? hehe
<marcoceppi> well, I don't see any proxy lines in there
<marcoceppi> so that's odd
<Sebas_> its a fresh install of trusty and juju
<marcoceppi> you should just be able to apt-get update/upgrade without issue
<Sebas_> marcoceppi: yep, that's the first time that's happened
<Sebas_> marcoceppi: well, i think i'm gonna try to reinstall everything again
<Sebas_> well, i tried with another version, 1.18.1, and it didn't work either, so the proxy isn't working
<Sebas_> i don't know why :(
<lazyPower> jose: i'm about to EOD
<lazyPower> how's the owncloud discovery going?
<jose> lazyPower: as I said, I didn't find out anything obvious, so if you could point me in the right direction I'd appreciate it :)
#juju 2014-05-13
<vila> hi there, is this the right channel to ask about issues accessing az3.clouds.archive.ubuntu.com from a cloud instance ? (As in wget hangs at HTTP request sent, awaiting response)
<marcoceppi> vila: this is as good as any
<marcoceppi> vila: you'll want to probably talk to utlemming but he isn't on yet
<vila> marcoceppi: great, thanks for the pointer, this is a weird issue, I can reach archive.ubuntu.com and even some files on az3 but apt-get source is failing
<vila> ftr:
<vila> wget http://archive.ubuntu.com/ubuntu/pool/main/a/a52dec/a52dec_0.7.4.orig.tar.gz works
<vila> wget http://az3.clouds.archive.ubuntu.com/ubuntu/pool/main/libp/libpng/libpng_1.2.46-3ubuntu4.dsc hangs
<lazyPower> is there a proxy defined in /etc/apt/apt.conf.d?
<vila> crap wrong test, lazyPower let me check
<vila> but there shouldn't be any
<lazyPower> i ask because my maas provisioned machines occasionally have issues due to that. I haven't tracked down why its an issue but commenting out the apt proxy seems to fix me up.
<vila> grep proxy returns no matches in /etc/apt/apt.conf.d
<lazyPower> ok. it was worth checking :)
<vila> indeed, thanks for that ;)
<vila> wget http://az3.clouds.archive.ubuntu.com/ubuntu/dists/precise/Release works
<vila> so the network is correct. Right ? Right ??
<vila> :)
<vila> wget http://az3.clouds.archive.ubuntu.com/ubuntu/pool/ hangs, that seems to be the starting point
<marcoceppi> vila: well, wget on an empty directory might be expected results
<marcoceppi> vila: what are you trying to do?
<vila> wget http://az3.clouds.archive.ubuntu.com/ubuntu/ works though
<vila> marcoceppi: apt-get source libpng
<vila> from apt-get source libpng --print-uris I isolated the above
<marcoceppi> ah
<vila> marcoceppi: note that wget http://az3.clouds.archive.ubuntu.com/ubuntu/ works
<vila> so it could be that this one is special cased but... I'd be more inclined to think something is borked on the servers
<marcoceppi> right
<marcoceppi> vila: yeah, let me find someone
<vila> if I wait long enough, it ends up with ERROR 504: Gateway Time-out.
<vila> for all failing urls
<gnuoy> marcoceppi, hi, I'm writing my first set of amulet tests and was wondering if there are any examples of interrogating the relation sentry that you know of ?
<marcoceppi> gnuoy: we have a few charms that exist with amulet tests
<marcoceppi> gnuoy: memcache is one example, lazyPower do you have any others?
<gnuoy> marcoceppi, I'll take a look at memcache, thanks. (I'm really enjoying working with amulet btw)
<marcoceppi> gnuoy: I'm glad to hear that! There's a 1.5.1 with a few more patches landing today, but if you haven't hit the "charm not in source control" 1.5.0 should have what you need
<marcoceppi> gnuoy: anything amulet doesn't do that you need, feel free to let me know!
<lazyPower> gnuoy, marcoceppi: mongodb, owncloud, apache allura (not merged yet), tomcat
<lazyPower> to name a few
<gnuoy> lazyPower, thanks, much appreciated
 * lazyPower hattips
<vila> lazyPower: https embeds an AI ?
<lazyPower> vila: wat
<vila> lazyPower: you said hAttIps, I read that as AI in https, was I wrong ?
<vila> :-D
<marcoceppi> vila: http://i.imgur.com/cEnYB7n.jpg
<vila> hehe
<lazyPower> vila: hat tip ;)
<vila> lazyPower: ;)
<vila> lazyPower: oh ! http get its own ai too ?? Great !
<dypsilon> Hi everyone, I'm just starting with juju and have a couple of basic questions. The first one would be: is it possible to deploy several services on one unit? So for example I would have one Digital Ocean virtual server and I want to install mongodb and node.js.
<tvansteenburgh> dypsilon: yes, see examples in `juju help deploy`
<dypsilon> tvansteenburgh, thank you
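The co-location tvansteenburgh points at is done with the `--to` placement flag in juju 1.x. The charm names below are illustrative, and a stub stands in for the real juju client so the sketch runs anywhere:

```shell
# juju 1.x placement: deploy one service normally, then target its
# machine for the second service with --to. A stub replaces the
# real client so this sketch is runnable standalone.
juju() { echo "would run: juju $*"; }

juju deploy mongodb           # gets its own machine, e.g. machine 1
juju deploy nodejs --to 1     # co-located onto machine 1
```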
<didrocks> hey
<lazyPower> o/ didrocks
<didrocks> I have a small question, on jcastro's advice, I'm following https://juju.ubuntu.com/docs/howto-node.html
<didrocks> I was wondering about the node-app charm to set the right repo
<didrocks> it's pointing at juju deploy --config myapp.yaml node-app myapp
<didrocks> with the yaml being:
<didrocks> sample-node:
<didrocks>   repo: https://github.com/yourapplication
<didrocks> however, in the charm itself (http://manage.jujucharms.com/charms/precise/node-app), I see that the config is using app_url
<didrocks> I'm probably missing one transformation step :)
<lazyPower> didrocks: you discovered a bug in the docs, good catch.
<lazyPower> didrocks: would you mind filing a bug? https://github.com/juju/docs
<didrocks> lazyPower: ah nice, can do, for sure, so the name should be mapping the charm config?
<jose> didrocks, lazyPower: I can fix it right now
<didrocks> jose: still want the bug opened?
<jose> ask lazyPower!
<lazyPower> didrocks: when you say name should be mapping the charm config, what do you mean?
<didrocks> lazyPower: options in the yaml config like "repo" should be the one in the charm config options
<jcastro> the config is missing on the charm page
<jcastro> I think the docs are referring to a config option that either doesn't exist, or used to exist and bitrotted
<lazyPower> well, looking at the docs, this is really confusing, a) there is no repo config option, it's app_url
<didrocks> yeah, seems to be app_url
<lazyPower> and the name at the top of that config example should reflect the name of the charm
<lazyPower> eg, if you deploy node-app charm as "hoobadooba", it should read: hoobadooba: app_url: http://myawesomegithub.git
<didrocks> oh, so not "sample-node" but "node-app"?
<lazyPower> right
<didrocks> ah ok :)
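Putting the exchange together, the corrected docs example would read something like this (assuming the charm is deployed under the service name myapp; the repo URL is the docs' placeholder):

```yaml
# myapp.yaml -- the top-level key must match the service name used
# on the deploy command line, and node-app's option is app_url, not repo.
myapp:
  app_url: https://github.com/yourapplication
```

deployed with `juju deploy --config myapp.yaml node-app myapp`.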
<lazyPower> it needs some TLC - i'm busy with the Q at the moment but if you file a bug i'll make sure to circle back or another community member can play whack-a-bug with it
<didrocks> sure
<lazyPower> :)
<didrocks> opening now with the IRC context
<lazyPower> jose: ^ there's my response. <3 ty for offering to take point on it.
<jose> no prob! :)
<jose> negronjl: btw, there's still an MP in the branch
<negronjl> jose: sorry about that ... give me the link and I'll review as soon as I can
<jose> negronjl: https://code.launchpad.net/~jose/charms/precise/seafile/readme-cache-departed/+merge/217170
<didrocks> lazyPower: jose: https://github.com/juju/docs/issues/94
<jose> cool, thanks for reporting!
<didrocks> yw!
<lazyPower> +1
<negronjl> jose: https://code.launchpad.net/~jose/charms/precise/seafile/readme-cache-departed/+merge/217170 Approved and Merged
<jose> negronjl: cool, thank you!
<jose> tvansteenburgh: hey! the seafile charm has been fixed now :)
<tvansteenburgh> jose: ack, i'll take a look now!
<jose> thank you!
<tvansteenburgh> jose: link to the seafile MP?
<jose> tvansteenburgh: https://bugs.launchpad.net/charms/+bug/1078575
<_mup_> Bug #1078575: Charm needed : Seafile <Juju Charms Collection:Fix Committed by jose> <https://launchpad.net/bugs/1078575>
<tvansteenburgh> jose: thanks
<jose> np
<Tug> there is something I don't understand in juju charms, where is the hook sequence defined ?
<Tug> I guessed there would be a default sequence like: install > config-changed > start when deploying
<Tug> but there are custom events also (like replica_set_relation_joined in mongodb charm) and I don't see what triggers them
<lazyPower> Tug: The relationship hooks fire in sequence depending on when the relationship is defined
<lazyPower> Now, i'm not sure about anything beyond that. there may be some sorting algorithm, but the relationships fire as follows
<lazyPower> joined -> changed -> departed -> broken
<lazyPower> Tug: so the idea is, the replica set hooks only fire when a replica-set relationship is created/destroyed.
<Tug> lazyPower, I see
<Tug> lazyPower, and the name replica-set, is it the peers property in metadata.yaml ?
<lazyPower> Tug: correct. its a peer relationship.
<Tug> lazyPower, I see, and peer relationships are triggered by add-unit I guess ?
<Tug> (or remove-unit)
<lazyPower> Tug: correct. Have a look at the MongoDB cluster bundle. it's a series of the same charm, with different relationship types among them
<lazyPower> https://jujucharms.com/sidebar/search/?text=mongodb
<lazyPower> that should help conceptualize what's going on implicitly with the mongodb charm, as it's fairly complex out of the box with all the capabilities. A single charm can define a config server, a mongos instance, and 3 replicated shards.
<lazyPower> in that specific example :)
<lazyPower> all through the power and magic of config+relationships.
<Tug> lazyPower, it's clear, thanks :)
<lazyPower> happy to help :)
 * lazyPower wanders off for lunch
<Tug> is there a way to rewrite a config option from inside a hook?
<Sebas_> hey!
<Sebas_> hi people :)
<Tug> A "config-set" would be useful no ?
<Tug> Hi
<niemeyer> Tug: That's "juju set"
<Tug> niemeyer, juju set is like export right ?
<niemeyer> Tug: Hm?
<jose> jcastro: to confirm, we're having troubleshooting I on fri at 19 UTC
<Tug> niemeyer, I'll have a look thx
<jcastro> yeah
<jcastro> let me post
<jose> cool, thanks
<niemeyer> Tug: There's no unit-side tool for setting the configuration
<niemeyer> Tug: If that's what you're looking for
<niemeyer> Tug: Configuration settings are global for all units of a given service, and they are read-only to the unit
<niemeyer> Tug: Which is why there's no config-set analogous to config-get
<niemeyer> Tug: juju set is how you change the settings, from the client side
<Sebas_> has anyone tested a fresh install of an ubuntu trusty with juju-local ?
<Tug> niemeyer, ok
<lazyPower> Sebas_: every day.
<Sebas_> i did 3 times now
<lazyPower> Sebas_: what appears to be the side effect you're seeing? machines not spinning up?
<Sebas_> lazyPower: I did it 3 times yesterday and today, and the containers don't have the lxcbr0 interface
<Tug> niemeyer, juju set will do the job just fine for me ;)
<lazyPower> Sebas_: is it present on your machine? Thats part of the juju-local package.
<Sebas_> lazyPower: the problem is that I don't have the bridge into the containers any more
<Sebas_> it's there on the local machine, but not in the container
<jose> jcastro: no juju troubleshooting I?
<lazyPower> hmmm
<Sebas_> and i don't have any clue what can be that
<jcastro> jose, no that's after this one, we had to cancel this one due to technical problems
<jose> I know
<lazyPower> Sebas_: well lxcbr0 is a bridged pseudo device, which should provide the eth0 in the container unless I'm mistaken.
<Sebas_> hey jcastro! :)
<jose> jcastro: can you confirm with me the date for Juju Troubleshooting I?
<Sebas_> yes!
<jose> jcastro: and speakers?
<Sebas_> maybe it's the kernel version, i don't know what to try :(
<lazyPower> Sebas_: I'm not sure what would have changed. My LXC containers still have a proper bridged ethernet device.
<lazyPower> have you modified anything in /etc/lxc?
<Sebas_> nothing
<Sebas_> i'm just formatting and reinstalling the machine every time
<Sebas_> its a dedicated server
<jcastro> evilnickveitch, I'd like to document juju ssh/scp, where should I put it?
<Sebas_> i thought the problem was that it installs with the repositories pointed at a company proxy
<lazyPower> Sebas_: Ping in #juju-dev, if you get no response, open a bug and mail the list about it. Thats my suggested workflow :(
<Sebas_> but, I replaced it before anything else
<lazyPower> i wish i had better info to give you though... seems... crazy that it just tanked for you.
<Sebas_> thanks lazyPower
<Sebas_> :)
<jcastro> evilnickveitch, same question but for series
<jcastro> jose, lazyPower and me
<jose> ack
<jcastro> jose, we can pick one
<jose> jcastro: I had Troubleshooting II on 6/6
<lazyPower> jcastro: I'm doing Troubleshooting I as well? cool.
<lazyPower> this'll be fun :D
<jcastro> jose, let's make that troubleshooting I
<jcastro> and make troubleshooting II the week after
<jcastro> sound good?
<jose> same time, same speakers?
<jose> sounds good to me
<jcastro> 6/13
<jcastro> troubleshooting will be the entire team
<jcastro> you can just put me down as speaker
<jcastro> I'll emcee
<jcastro> I'll announce those now, get them out of the way
<jose> all set now
<jose> ubuntuonair.com/calendar has them
<jcastro> after that I want to do local provider
<jcastro> but I'll need to sync with thumper on that
<jose> just let me know and it'll all be set up for you
<jcastro> sinzui, got a minute to talk docs?
<lazyPower> hey sarnold
<sarnold> hey lazyPower :)
<lazyPower> do you have 5 minutes for a quick q&a? I have an idea i want to bounce off of you
<sarnold> lazyPower: security team is doing a blueprint mumble right now, maybe in a bit..
<lazyPower> ack. can you ping me? I'm in a meeting until 3 - but this is hot off the press of my nugget
<lazyPower> if i validate this idea, i'd like to draft up a spec.
<sarnold> oo :)
<Sebas__> lazyPower: so the problem is the route
<sinzui> jcastro, I do have time to talk about docs.
<jcastro> sinzui, hey so a bunch of the things you mention in the release notes
<Sebas__> lazyPower: because when i ping an ip it works, but not domains
<jcastro> IMO we should also have those in the organized docs
<lazyPower> interesting
<jcastro> have you and nick thought about where some of those features live long term?
<Sebas__> so must be something related to gateway or dns resolving
<jcastro> for example, you document series
<jcastro> but like, it's not in the structured docs, only the release notes
<sinzui> jcastro, everything I write in the release notes becomes howtos or amendments to the stable docs. That is what we did with the release of 1.18
<jcastro> sinzui, ok, I see some of them are now sections in the docs, but for example I don't see where series is documented outside of the release notes
<jcastro> sinzui, this isn't a criticism btw, I was just wondering for discoverability reasons
<sinzui> jcastro, that is because we didn't change anything about series. really we didn't
<sinzui> that is why CI has worked for 7 months
<jcastro> right, but before I didn't have to care about series
<jcastro> but now that 14.04 is out I have to care about series
<sinzui> we just blogged about how we use default-series
<sinzui> but local deployments are different, and that is documented
<jcastro> ok
<sinzui> jcastro, https://juju.ubuntu.com/docs/charms-deploying.html
<jcastro> juju set-env "default-series=trusty"
<jcastro> aha!
<jcastro> I had missed that
<jcastro> sinzui, ok so really, there's nothing left to do wrt. docs for 1.18?
<sinzui> The juju devs have not agreed to my proposal to always include default-series with juju init, so we get extra support duties
<sinzui> no
<jcastro> do you have any nagging things you need fixed that I can work on?
<sinzui> I have 4 tasks for 1.12.
<sinzui> 1.20
<jcastro> I have a card to ensure everything from 1.18 that is important be reviewed
<sarnold> lazyPower: okay, done :)
<sinzui> jcastro, check the orange cards in the backlog  https://canonical.leankit.com/Boards/View/14028616#workflow-view
<sinzui> jcastro We will write the official docs when we know the devs are just fixing bugs in the new features
<sinzui> jcastro, marcoceppi We are getting close to letting juju-ci do the docs. We have a script, and we have lp creds now. I hope to get docs operational by Monday
<marcoceppi> \o/ huzzah
<jcastro> I <3 you
<evilnickveitch> jcastro, i think scp/ssh needs a new page
<evilnickveitch> jcastro - as for default-series, it may be best to find all the undocumented config options. I'm sure there are more
<evilnickveitch> then we could have an @additional config section
<sebas5384> i'm _sEBAs_, lazyPower and jose, i just changed my nickname to sebas5384 :)
<sebas5384> lazyPower i found out the problem with the dns resolving issue
<lazyPower> awesome !
<lazyPower> what was the prognosis?
<sebas5384> my dedicated server didn't have the nameservers configured hehe
<sebas5384> xD
<jcastro> blink blink
<sebas5384> but! the juju-gui still has the problem with git
<jcastro> that's hadoop 2.2 in the review queue?!
<jose> looks like
<lazyPower> jcastro: you know it
<jose> uh, it's installing hadoop from a gzip in the charm
<marcoceppi> jose: that's fine
<jose> oh well
<lazyPower> jose: its what's called a fat charm
<lazyPower> works in offline environments
<jose> well, I didn't think of that
<asanjar> hi there
<lazyPower> asanjar: there's some talk about your work
<lazyPower> i believe jcastro was taken by awe that it was in the queue already
<asanjar> jcastro: is always taken away by my handiworks
<jcastro> I am a fan!
<asanjar> lol
<sebas5384> jcastro: fat charm ?
<sebas5384> the gzip is IN the charm ?
<jcastro> yes
<sebas5384> Oo
<jcastro> so you have like a /files or /payload directory
<jcastro> in environments where the cloud doesn't have access to the outside world
<sebas5384> yeah, but I would use it only for template files
<sebas5384> like configurations files, etc...
<sebas5384> hmmm jcastro get it...
<jcastro> it's not just charms, so like if you were doing an offline deployment, you need mirrors, charm mirrors, etc.
<jcastro> there are many places where ubuntu server/cloud as a whole falls over when there's no connection to the internet, so we're fixing that
<sebas5384> hmmm yeah, but in that case, you use mirrors, of course
<lazyPower> sebas5384: depends on your network policy. Some are *that* locked down
<lazyPower> and we as charm authors should cater to the lowest common denominator to ensure we have robust services. There are several charms that clone github - it's illogical to assume you will run a github mirror on your network.
<lazyPower> you could include a snapshot point release with the charm and reduce the overhead of running that mirror by adding a few extra lines of code to check for net connectivity, if offline, deploy from payload, otherwise clone from github for -HEAD
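The fallback logic lazyPower sketches can be written as a small helper. This is a hedged sketch, not an existing charm's code: the payload path, destination, and clone URL are all made up for illustration.

```shell
#!/bin/bash
# Fat-charm install pattern: prefer the snapshot payload shipped
# inside the charm; only reach for the network when no payload is
# bundled.
install_app() {
    payload="$1"; dest="$2"
    if [ -f "$payload" ]; then
        # offline-friendly path: unpack the bundled snapshot
        mkdir -p "$dest"
        tar -xzf "$payload" -C "$dest"
        echo "installed from payload"
    elif wget -q --spider http://github.com 2>/dev/null; then
        # online and no payload: track upstream HEAD instead
        git clone https://github.com/example/myapp.git "$dest"
        echo "installed from git HEAD"
    else
        echo "offline and no payload bundled; cannot install" >&2
        return 1
    fi
}

# A charm's install hook would call something like:
# install_app "$CHARM_DIR/files/myapp.tar.gz" /opt/myapp
```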
<sebas5384> lazyPower: yeah i know some organizations that like to do that ¬¬
<sebas5384> lazyPower: now i get it, we have to attend all the enterprise cases
<lazyPower> *most
<lazyPower> ;)
<lazyPower> who knows sebas5384, the startups we empower today, are the enterprises of tomorrow.
<lazyPower> s/are/could be/
<sebas5384> exactly!!
<marcoceppi> sebas5384: jcastro lazyPower there's work in juju-core to make asset management (payloads, etc) way easier. For now this is the workaround
<lazyPower> marcoceppi: that blob delivery thing right?
<lazyPower> i got like 1% of what that was all about.
<marcoceppi> lazyPower: yeah resources
<lazyPower> but it makes sense.
<marcoceppi> lazyPower: think gitsubmodules for charms
<sebas5384> but it isn't like a binary closed thing, right?
<marcoceppi> but not as crazy
<lazyPower> marcoceppi: http://cdn.memegenerator.net/instances/500x/45167208.jpg
<sebas5384> hahaha
<marcoceppi> sebas5384: it's just a way to map resources a charm needs and an asset repository. So if your deployment is behind a firewall you can sync the assets you need to your cloud deployment making "offline" deployments easy
<sebas5384> lazyPower: i love this one http://i.imgur.com/3CS31.jpg
<sebas5384> nice marcoceppi
<sebas5384> :)
<sebas5384> talking about things, is there some talk comparing juju vs openshift (since both are devops tools)?
<sebas5384> i talked with the openshift community guys here at FISL 15 the other day
<lazyPower> sebas5384: as announced today at openstack summit, we have some new stuff coming down the pipeline for running a PAAS
<sebas5384> oohhh nice
<sebas5384> lazyPower: can i know what?
<sebas5384> hehe
<lazyPower> https://www.youtube.com/watch?v=YsYdIJrJRLQ
<lazyPower> yep its public knowledge now
<sebas5384> because i started a project of a charm factory
<sebas5384> ohhh nice!
<sebas5384> i didn't see that one
<lazyPower> it was just aired about 4 hours ago
<sebas5384> mark is homeless now? hehe
<sebas5384> niceeeee
<sebas5384> lazyPower: thank you sooo much
<lazyPower> very welcome.
<lazyPower> hattip.gif
#juju 2014-05-14
<Tug> when adding a relation to 2 services, events "joined" and "changed" are not synchronized between the 2 services. For instance my hooks execute in that order :
<Tug> service1-relation-joined > service1-relation-changed > service2-relation-joined > service2-relation-changed, but what I was expecting would be to have both "joined" hooks completed before running "changed" hooks.
<Tug> the issue here is that "joined" hooks execute 'relation-set' commands so we have missing entries in the relation list when running "service1-relation-changed"
<Tug> I'll try to ask you guys this during the day ;)
<tvansteenburgh> Tug: this is covered in the docs here: https://juju.ubuntu.com/docs/authors-relations-in-depth.html
<tvansteenburgh> "In one specific kind of hook, this is easy to deal with. A relation-changed hook can always exit without error when the current remote unit is missing data, because the hook is guaranteed to be run again when that data changes -- and, assuming the remote unit is running a charm that agrees on how to implement the interface, the data will change and the hook will be run again."
<tvansteenburgh> so in your relation-changed hook, do a relation-get, and if the value is empty (not there), just exit 0, knowing that your hook will be run again when the data /is/ there
<Tug> thx tvansteenburgh, yeah I skipped some parts of the docs
<Tug> ;)
<tvansteenburgh> Tug: no worries
<Tug> well I did that (actually the mongodb charm does that) but I think it fails
<Tug> I mean it is not run again
<tvansteenburgh> O_o
<Tug> the mongos/0 unit is in error state 'hook failed: "mongos-cfg-relation-changed"'
<Tug> running resolved -r indeed works
<tvansteenburgh> Tug: yeah ok, if a hook fails all events are paused until the failure is resolved
<tvansteenburgh> do you know what caused the failure?
<tvansteenburgh> ok
<Tug> yeah the relation-set
<Tug> I mean get
<Tug> mongos_relation_changed: relation data not ready.
<Tug> mongos_relation_changed returns: False
<Tug> ERROR juju.worker.uniter uniter.go:490 hook failed: exit status 1
<tvansteenburgh> Tug: if you pastebin the code i can try to help
<Tug> yep
<tvansteenburgh> but in general, don't use `relation-get` in a config-changed hook
<tvansteenburgh> b/c you can't be sure that the relation is established
<tvansteenburgh> only use it in relation hooks
<Tug> http://bazaar.launchpad.net/~dekervit/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py
<Tug> http://bazaar.launchpad.net/~dekervit/charms/precise/mongodb/trunk/view/head:/hooks/hooks.py#L1329
<Tug> I did not change too much the original script
<Tug> but what we see here is that configsvr_relation_joined() can happen after mongos_relation_changed()
<Tug> thus causing relation_gets to return None
<Tug> line 1337 (^^)
<tvansteenburgh> Tug: yep, that is a normal scenario
<sarnold> would a correct fix be to add a new line after 1338 that sets retVal = True  ?
<tvansteenburgh> sarnold: yes
<sarnold> \o/
<Tug> nice :)
<Tug> really ?
<tvansteenburgh> Tug: yep
<Tug> I have to try !
<tvansteenburgh> Tug: it's not an error for relation data to not be ready. if that is the case you must return True (exit 0) so that juju will continue processing events, and eventually run your hook again when it does have the data
<Tug> (exit 1 you mean ?)
<Tug> alright, I'm running the fix :)
<tvansteenburgh> Tug: no, exit 0
<tvansteenburgh> :P
<tvansteenburgh> exit 0 = success, exit > 0 = fail
<Tug> really ? I always type exit 1 with debug-hook to say it ended correctly
<tvansteenburgh> see lines 1660 - 1663
<Tug> ok I was wrong then, good to know :)
<Tug> ah yes
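The pattern tvansteenburgh describes fits in a few lines. This is a generic sketch rather than the mongodb charm's actual hook: the `hostname` relation key is hypothetical, and since `relation-get` only exists inside a hook context, a stub keeps the sketch runnable standalone.

```shell
#!/bin/bash
# relation-changed must treat "remote data not there yet" as
# success (exit 0): juju re-runs the hook when the remote unit
# sets its data. Exiting non-zero puts the unit in an error state
# and pauses all further events.

if ! command -v relation-get >/dev/null 2>&1; then
    relation-get() { echo ""; }   # stub for standalone runs
fi

relation_changed() {
    remote_host=$(relation-get hostname)   # hypothetical key
    if [ -z "$remote_host" ]; then
        # Remote unit hasn't published its data yet: succeed and
        # wait to be re-run, instead of erroring out.
        echo "relation data not ready; deferring"
        return 0
    fi
    echo "configuring against $remote_host"
}

relation_changed
```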
<tvansteenburgh> Tug: glad to hear you've discovered the joy of debug-hooks though!
<Tug> yeah thanks to lazyPow3r a few weeks ago
<lazyPower> tvansteenburgh: how do i get root path system independently? i thought os.path.abspath() would return it...
<Tug> btw, I just spent a few days improving the mongodb charm, it might help the community later ;)
<lazyPower> but its including the CWD
<lazyPower> which i do not want.
<tvansteenburgh> lazyPower: i don't understand what you want it to return
<tvansteenburgh> Tug: that's awesome!
<lazyPower> tvansteenburgh: if i say os.path.join('foo','bar') using abs path, i want '/foo/bar' to be the return path.
<tvansteenburgh> lazyPower: yeah, you'd have to os.path.join('/', 'foo', 'bar')
<lazyPower> that kind of defeats the idea of using os.path.join though...
<tvansteenburgh> there must be a better way, that's not portable
<tvansteenburgh> yeah
<tvansteenburgh> well do abspath on foo first
<lazyPower> http://paste.ubuntu.com/7460299/
<lazyPower> nope.xls
<Tug> thanks tvansteenburgh, sarnold it worked
<lazyPower> i think abspath always returns from CWD
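One portable alternative to the hacks discussed in the thread: `os.path.abspath(os.sep)` yields the root of the current filesystem ('/' on POSIX, the current drive's root on Windows) without dragging in the CWD, since abspath only prepends the CWD for relative paths.

```shell
# Ask Python for the filesystem root portably; os.sep is already
# absolute, so abspath does not prepend the working directory.
root=$(python3 -c "import os; print(os.path.abspath(os.sep))")
echo "$root"   # → / on POSIX
```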
<jose> lazyPower: now that you're on review, I've got a charm that got a +1 from a charm-contributor and a non-reviewing charmer
<tvansteenburgh> Tug: great!
<lazyPower> jose: offduty :P
<lazyPower> i've been on duty even when not duty'ing
 * lazyPower takes a vacation to work on other projects
<sarnold> Tug: nice :)
 * jose flips table
<lazyPower> bwahahahaha
 * lazyPower dangles carrots in front of jose
 * tvansteenburgh laughs and points
<lazyPower> jose: which charm? i'll look at it tomorrow
<lazyPower> i'm assuming seafile?
<jose> you got it right
<lazyPower> i figured, i saw extra work in that one this week
<jose> looks like you've read your emails
<lazyPower> butofcourse
<lazyPower> half hour in the morning, half hour at the end of the day
<tvansteenburgh> lazyPower: i still don't get what you're trying to achieve. are you making a path to a dir that doesn't exist?
<lazyPower> tvansteenburgh: yep
<lazyPower> i want to make /vagrant/charms/precise
<lazyPower> which will not exist by default
<lazyPower> i mean, i could get cheap, and just make it on the host... but that's not a fair assumption to make. the script should drive all actions and make assertions.
<jose> sudo mkdir /vagrant
<lazyPower> this is the equivalent of a nose test for testing the juju vagrant image.
<tvansteenburgh> http://stackoverflow.com/questions/12041525/a-system-independent-way-using-python-to-get-the-root-directory-drive-on-which-p
<tvansteenburgh> i guess that's the heart of what you want
<lazyPower> tvansteenburgh: look at the second answer
<lazyPower> :P
 * lazyPower edited that like, 5 minutes ago
<tvansteenburgh> hey, nice
<lazyPower> and its such a hack. you're finding the path to the interpreter, which may or may not be correct on windows.
<lazyPower> what if vagrant mounts on C:\\ but the python interpreter is on d:\\
<tvansteenburgh> your answer isn't a hack
<lazyPower> its not solid though. i just want the root of the current filesystem.
<lazyPower> thats the only safe assumption i'm willing to give with windows
<lazyPower> i guess this works :|
 * lazyPower resigns to using his own hack
<lazyPower> a hack, using hacks, to produce hacky scripts
 * lazyPower hackety hack hacks
<lazyPower> tvansteenburgh: thanks though, appreciate the extra braincells @ the problem.
 * tvansteenburgh was not much help
 * tvansteenburgh wanders off to eat pie, waving as he goes
<sarnold> I thought on windows you just shoved everything into C:\windows\system32\ and called it a day? :)
<lazyPower> sarnold: duh
<lazyPower> ;)
<sarnold> :)
<lazyPower> only real lusers put stuff elsewhere
<lazyPower> making it easy to remove
<sarnold> or backup
<sarnold> hahaha
<lazyPower> i mean, software is so great on that platform who wants to remove it?!
<lazyPower> hey sarnold
<sarnold> evening lazyPower :)
<lazyPower> http://askubuntu.com/questions/465544/what-is-the-reason-that-i-see-cron-session-opening-and-closing-every-hour-in-va <- this is a good question. Why DOES this happen?
<sarnold> lazyPower: heh that is a decent question :)
<lazyPower> oh man, did i stump you?
<sarnold> heh, no, I'm just saying that for a beginner it'll be utterly impenetrable with no clue where to go for finding the answer :)
<sarnold> his or her guess is utterly adorable :)
<lazyPower> dang
<lazyPower> i keep hoping i'll find an area of grey knowledge and stump you, its become quite the fun game to play.
<lazyPower> better know the day it happens i'm pooping the cork on the champagne.
<lazyPower> s/pooping/popping
<lazyPower> what a typo wow
<lazyPower> context... it is everything.
<sarnold> haha, look closer to home -- I know nearly nothing about Go. I spent two hours just trying to figure out how to do cscope-like things in it that didn't involve "Step 1: install a Java Servlet Container"
<sarnold> lol
<lazyPower> but, go isn't home here
<lazyPower> i work with pretty much everything *but* go
<sarnold> well, I guess if you're just using the API of juju, it wouldn't be your regular stomping grounds either..
<lazyPower> until there's a go charm in the store.
<sarnold> hehe
<lazyPower> then i'll be like "yo dawg"
<lazyPower> "whats up with this go code?"
<sarnold> and I'll guess my way through it :)
<sarnold> the answer there isn't too bad, but if a better one isn't posted when I'm done with dinner, I'll write a -good- one :) hehe
<lazyPower> looking forward to it
<lifeless> mkdir: cannot create directory '/var/run/rabbitmq': Permission denied
<lifeless> /usr/lib/rabbitmq/bin/rabbitmq-server: 80: /usr/lib/rabbitmq/bin/rabbitmq-server: cannot create /var/run/rabbitmq/pid: Directory nonexistent
<lifeless> bah, echannel
<lifeless> sorry
<cruisibesares> hey guys im testing out juju and maas. So far things are pretty awesome. I also have an aws cluster and i have set that up with a different name in my environments file. Do i need to run an instance of juju for each environment with juju bootstrap or is there a way that I can have one gui/juju server that will manage both providers? If you need one juju node per provider can it be colocated with maas? I found this http://askubuntu.com/questions/181880/does-each-juju-environment-specified-require-its-own-master-node but im wondering if anything has changed at this point in juju's development. I think that it would be really nice to be able to join all my clouds with one orchestration tool. If it's not there now, is it on the roadmap, or am i missing something fundamental?
<Cuy> Hi! I'm looking into Juju for service orchestration and the planned deployment of an OpenStack cloud (possibly in combination with Saltstack). Could anyone tell me how resource hungry Juju is? As in: How many servers can I realistically expect to steer from one master? And is there some infrastructure size you would consider a hard limit of Juju's capabilities?
<Cuy> Sorry for asking here, but I didn't find any information on these topics anywhere on the net (and I've been looking into service orchestration and configuration management for a few weeks now ;) )
<gnuoy> Is it possible to write amulet tests that interrogate the relation sentry for charms which are not yet in the charm store ? The reason I ask is that I'm getting a "request failed with: 404" and it seems to be trying to query  https://manage.jujucharms.com/api/3/charm/...
<lazyPower> gnuoy: you can specify a launchpad branch to deploy from.
<lazyPower> are you using amulet 1.5?
<gnuoy> I am
<lazyPower> bueno, that *should* work.
<gnuoy> branch: lp:~blah
<lazyPower> if it doesn't let me know and i'll ping the parties working on amulet.
<lazyPower> that behaviour should have been triaged in 1.4.x of amulet
<gnuoy> lazyPower, thanks, I'll give it a whirl
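For reference, the branch override lazyPower mentions looks roughly like this in an amulet test; this is a hedged sketch (it assumes amulet >= 1.5 and a bootstrapped environment, and the branch and service names are placeholders):

```python
# Hedged sketch: requires amulet and a bootstrapped juju environment.
import amulet

d = amulet.Deployment(series='precise')
# deploy from a launchpad branch instead of the charm store
d.add('mycharm', charm='lp:~someone/charms/precise/mycharm/trunk')
d.setup(timeout=900)
unit = d.sentry.unit['mycharm/0']
```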
<cruisibesares> Hey all, i asked this question last night but im guessing everyone was asleep. I have a physical infrastructure that im running with maas and a cloud in aws. I would like to manage both of these providers with one juju server. I've looked over all the questions concerning juju and maas on ask ubuntu and i found this http://askubuntu.com/questions/181880/does-each-juju-environment-specified-require-its-own-master-node which seems to suggest that you need one juju machine per environment. Im wondering if that is still true or if what i want to do is possible using manual provisioning
<cruisibesares> i think my main concern is that juju isn't ha and i would like to keep my single point failure on aws
<cruisibesares> it has a higher uptime guarantee and more tools
<cruisibesares> that and the point of the physical hardware im deploying is to be cheap and expendable
<lazyPower> cruisibesares: we don't have cross environment relationships yet. You can do it with the manual provider, but as the provider name implies, there's manual effort involved in it.
<cruisibesares> lazyPower: thanks
<lazyPower> Its on the roadmap, but i don't have an ETA for X-environment implementation. So at best, you'll want to do 2 bootstrap nodes, one for your maas cluster one for aws, or go manual and do enlistment manually.
<lazyPower> cruisibesares: sorry i don't have a better answer for you than 'its coming' :(
<cruisibesares> ok great thats really helpful i will consider both of those options thanks so much
<cruisibesares> no thats totally fine i get that there is a lot on the roadmap
<lazyPower> Yep, features galore are coming in this iteration. I think we have it up for the iteration after this - but that's uncertain. It's still mid to high priority though.
<lazyPower> are you on the mailing list? You'll get notices of what lands in the changelog on the mailing list.
<cruisibesares> so now i will just have to gamble on which setup will have cleanest update path
<cruisibesares> no im not on the mailing list yet
<cruisibesares> thats on the main page for juju?
<lazyPower> https://lists.ubuntu.com/mailman/listinfo/juju
<lazyPower> you may want to ping the list with your question for other community members that have taken one route vs another - and get feedback. Kind of a straw poll so to speak - and see if their experiences align with your goals.
<lazyPower> I'm running a manual provider setup between DO and Softlayer that has been pretty solid.
<lazyPower> but I'm a minority in that aspect.
<lazyPower> and working with a true bootstrap node in the environment that does enlistment for me - is sorely missed. It's not *that* big of a deal to manually enlist but if i were scaling > 10 machines, i'd want this all to be automated for me.
<cruisibesares> great idea
<lazyPower> Given that you're running with maas and aws - both of which have full providers, it would probably be best to go that route and put some glue around it.
<lazyPower> depending on your scaling future :)
<cruisibesares> alright well its good to know people are doing it manually
<cruisibesares> alright cool i'll try and send something to the mailing list soon
<cruisibesares> thanks for your ideas and help
<cruisibesares> will give me something fun to play with while i wait :)
<lazyPower> anytime
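The two-bootstrap setup lazyPower describes maps to two entries in ~/.juju/environments.yaml; the fragment below is illustrative only (server addresses, keys, and environment names are placeholders):

```yaml
# Hypothetical environments.yaml: one environment (and one bootstrap
# node) per provider, until cross-environment relations land.
environments:
  maas:
    type: maas
    maas-server: 'http://192.168.1.2/MAAS/'
    maas-oauth: '<MAAS-API-KEY>'
  amazon:
    type: ec2
    region: us-east-1
    # access-key / secret-key, or credentials from the environment
```

Each environment is bootstrapped and managed independently (`juju bootstrap -e maas`, `juju bootstrap -e amazon`).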
<gnuoy> lazyPower, fwiw I've raised Bug#1319437 for the amulet issue
<_mup_> Bug #1319437: Amulet breaks when inspecting the relation-sentry regarding charms not in charmstore <Amulet:New> <https://launchpad.net/bugs/1319437>
<lazyPower> thanks gnuoy! i'll poke the fellas and let them know
<gnuoy> np, thanks for the help
<lazyPower> gnuoy: from the creator's mouth - <marcoceppi> it's doing the right thing, it assumes OH YOU'RE NOT A LOCALCHARM, LETS USE THE API HERP DERP
<lazyPower> so you found a valid corner case, it'll get addressed soon.
<gnuoy> lazyPower, it breaks in the same way if the charm is in a local directory rather than lp
<lazyPower> gnuoy: should land in 1.5.1
<gnuoy> kk
<marcoceppi> yay, being quoted verbatim from another channel
<pindonga> hi, anyone around to help me with running juju locally? I'm running on trusty, but after an upgrade path from saucy
<pindonga> I can't get juju to create the machines with the local provider properly
<pindonga> keep getting: WARNING juju.worker.instanceupdater updater.go:231 cannot get instance info for instance "": no instances found
<pindonga> I had lxc already set up previously, so I assume this must be an issue with lxc (mis)configuration
<cjohnston> wallyworld_: I'm a little confused.. is bug #1306537 fixed in 1.18.3 in the PPA?
<_mup_> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host <deploy> <local-provider> <lxc> <juju-core:Fix Released by wallyworld> <juju-core 1.18:Fix Released by wallyworld> <juju-quickstart:Fix Released by frankban> <juju-quickstart (Ubuntu):New> <juju-quickstart (Ubuntu Trusty):New> <https://launchpad.net/bugs/1306537>
<lazyPower> sinzui: looks like apparmor is going to be a troublemaker this time around - https://bugs.launchpad.net/ubuntu/+source/apparmor/+bug/1296384
<_mup_> Bug #1296384: LXC apparmor profile broken w/recent trusty update <amd64> <apport-bug> <trusty> <AppArmor:Confirmed> <apparmor (Ubuntu):Triaged> <https://launchpad.net/bugs/1296384>
<sinzui> lazyPower, looks like Ubuntu is still ignoring bug 1305280
<_mup_> Bug #1305280: apparmor get_cgroup fails when creating lxc with juju local provider <apparmor> <armhf> <local-provider> <lxc> <packaging> <regression> <juju-core:Invalid> <apparmor (Ubuntu):Confirmed> <https://launchpad.net/bugs/1305280>
<avoine> pindonga: can you tell me what is your version of lxc + juju-core?
<pindonga> avoine, lxc==1.0.3-0ubuntu3 , juju-core==1.18.1-0ubuntu1
<cjohnston> pindonga: I wonder if you're having bug #1306537 too
<_mup_> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host <deploy> <local-provider> <lxc> <juju-core:Fix Released by wallyworld> <juju-core 1.18:Fix Released by wallyworld> <juju-quickstart:Fix Released by frankban> <juju-quickstart (Ubuntu):New> <juju-quickstart (Ubuntu Trusty):New> <https://launchpad.net/bugs/1306537>
<avoine> pindonga: what lxc-ls gives you?
<pindonga> exit 0 (no output)
<pindonga> cjohnston, it's possible, I'm trying to deploy a charm for testing with default-series: trusty, and it looks like something is happening (taking it's time though)
<pindonga> and it just failed with: (error: error executing "lxc-start": command get_cgroup failed
<pindonga>       to receive response)
<cjohnston> there's also bug #1317197 but I'm not sure that'd be it if you're getting a cgroup issue
<_mup_> Bug #1317197: juju deployed services to lxc containers stuck in pending <oil> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1317197>
<pindonga> thx cjohnston the title looks promising, will take a deeper look after lunch
<cjohnston> :-)
<_mup_> Bug #1319474 was filed: Juju uses hard-coded regions <pyjuju:New> <https://launchpad.net/bugs/1319474>
<cjohnston> lazyPower: I have an ansible question for you if you have a moment
<_mup_> Bug #1319475 was filed: Juju should support new signing format <pyjuju:New> <https://launchpad.net/bugs/1319475>
<didrocks> jcastro: https://github.com/juju/docs/pull/97 mind having a look?
<jcastro> on it
<jcastro> don't know how I missed that, thanks!
<lazyPower> cjohnston: shoot - sorry about the delay was away from my desk.
<didrocks> yw man ;)
<cjohnston> lazyPower: np..
<lazyPower> sinzui: tyhicks in #ubuntu-server has confirmed the apparmor bug is duplicated by one that was supposed to be fixed in prerelease.
<lazyPower> cjohnston: but fire when ready :)
<cjohnston> lazyPower: I have http://paste.ubuntu.com/7463732/ ... however, it's failing that 'user' isn't found.. I added the when because it was failing for the same reason when I didn't have the when..
<cjohnston> I'm wondering if there is something else I should be doing, or maybe if I should add some sort of repeat type something?
<cjohnston> It seems like I'm just not getting back the relation data quick enough for ansible
<lazyPower> cjohnston: 'user' doesn't seem like it would expand. Ansible playbooks expand jinja2 variable syntax as of 1.6
<lazyPower> so wouldn't it be {{user}}
<lazyPower> they deprecated the older style of variable notation using $'s, and i think what you're seeing is related. but i'm not positive.
<cjohnston> I tried that.. but I got an error.. let me try it again and see if I can figure the error
<lazyPower> capture the error for me and i'll take a look.
<cjohnston> ack
<cjohnston> lazyPower: http://paste.ubuntu.com/7463825/
<lazyPower> cjohnston: looking, 1 sec
<lazyPower> cjohnston: where's your line defining user?
<lazyPower> there should be a register play in your playbook that defines the username.
<lazyPower> or is it a config option on the charm?
<cjohnston> lazyPower: AIUI charmhelpers automatically makes everything in the /etc/ansible/host_vars/localhost available.. is that not the case?
<avoine> cjohnston: have you tried without all the quotes single and double?
<cjohnston> avoine: based on http://docs.ansible.com/playbooks_conditionals.html#applying-when-to-roles-and-includes when: "'reticulating splines' in output" <-- is how I was operating
<cjohnston> Is that not correct?
<avoine> I think when you use a variable name instead of plain text you don't need the single quotes
<cjohnston> "{{ user }} in current_relation" <-- is what I just tried that gave the traceback I pasted
<lazyPower> cjohnston: thats correct
<avoine> I would try: user in current_relation
<cjohnston> lazyPower: which part are you saying is correct
<lazyPower> That the ansible helpers make all config values global keys in the playbook.
<lazyPower> i didn't know if that was coming from a config value or from your play
<lazyPower> cjohnston: that's strange that it's undefined though...
<lazyPower> i'm not sure what to recommend here.
<lazyPower> syntax looks fine
<balloons> wwitzel3, ah-hah, I've found you after all. heh. Glad I didn't bike, would have been a wet ride
<cjohnston> lazyPower: well.. avoine's suggestion didn't cause an error
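A sketch of the working form of the play being debugged above, assuming (as discussed) that `user` and `current_relation` are made available from /etc/ansible/host_vars/localhost by the charm-helpers ansible support; task and variable names are illustrative:

```yaml
# Hypothetical fragment. Note that inside `when:` expressions,
# variables are referenced bare -- no {{ }} and no quoting needed.
- name: create the relation user
  user: name={{ user }}
  when: user is defined and user in current_relation
```

The `{{ user }}` form is for templated fields; `when:` is already a Jinja2 expression, which is why avoine's unquoted `user in current_relation` did not error.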
<tvansteenburgh> balloons: o/ (Tim from lunch)
<balloons> o/
<onezero> Trying to deploy using the "manual" environment via which my bootstrap node will be in an already existing "juju-server" machine.  How do I change the port.  I see bootstrap-host: however there is no "bootstrap-port:" and appending the port at the end of the hostname doesn't work.  Any ideas?
<marcoceppi> onezero: it's not a good idea to bootstrap the same server twice
<onezero> Why?
<onezero> I'm using a docker container.
<marcoceppi> onezero: because, juju isn't designed to host more than one bootstrap on a single machine
<marcoceppi> you can deploy services to the bootstrap node, but a single node can't have more than one bootstrap running on it
<marcoceppi> onezero: I think there might be a communication issue between bootstrapping and deploying
<onezero> In my case a vm is entirely dedicated as a juju server and that's the only function it serves.  It seems like a waste to create a whole new vm for each additional environment...
<marcoceppi> onezero: well, you can deploy services to that VM, but that one VM also controls any and all other VMs you wish to spin up in that environment
<marcoceppi> it's the orchestration service for that environment
<onezero> So basically, if I want to orchestrate multiple environments... I need multiple actual servers.
<onezero> One juju server per environment
<marcoceppi> onezero: you need at least one server per environment
<marcoceppi> an environment can have 1+ servers which can have n+ charms deployed on to it (either using the whole machine or using containerization kvm, lxc, etc)
<marcoceppi> on to it, being on to any of the servers in the environment
<marcoceppi> with manual provider you can enlist an additional machine by running juju add-machine <user>@<machine>
<marcoceppi> which will make it available in that environment
<onezero> I just wanted to keep the orchestration (ie juju bootstrapped node) separated from the actual servers that are being orchestrated.  Basically I have a maas cluster which is my first environment for which I bootstrapped a single vm to serve only that singular purpose of juju orchestration.  Next I have some servers NOT part of maas that I would like to manually deploy charms to... but in order to do that I created a new manual environment in my yaml config.  I
<marcoceppi> onezero: yeah, so you want something like cross environment relations, which is on the roadmap but probably won't happen in the next few months
<marcoceppi> onezero: you can create a KVM or LXC on the maas bootstrap node
<marcoceppi> and use that as your manual provider bootstrap node
<marcoceppi> the main problem is there's no isolation, so juju-db will stomp all over itself
<marcoceppi> but if you put the bootstrap node in a container inside an existing bootstrap node, there's no real collision
<onezero> marco... THANKS a bunch.  Glad to get confirmation of that.
<rick_h_> mbruzek: around?
<mbruzek> Yes
<rick_h_> hey, see PM
<l1l> Ok folks, could use some help..
<l1l> This is a error I get when trying to view a juju status -e maas
<l1l> ERROR state/api: websocket.Dial wss://tngek.maas:17070/: dial tcp: lookup tngek.maas: no such host
<jose> l1l: afaik that's because bootstrap hasn't finished
<l1l> jose; The machine is already booted, and I can ssh into it.
<jose> l1l: bootstrap does some additional things as it needs some tools to be the master :)
<jose> did your 'juju bootstrap' finish?
<l1l> Yes, it finished with no errors and the node booted up. However, when I try to check the juju status I get that error repeating
<jose> hmm, that's weird
<jose> maybe someone else would be able to help
<l1l> I have googled till blue in the face and have found similar bugs, but they see a "connection refused" instead of the "no such host". It's apparently related to DNS.
<l1l> Thanks though!
<jose> np :)
<andreas__> l1l: add "nameserver <maas-ip>" to the top of your /etc/resolv.conf temporarily
<ahasenack> there might be a way to tell your local resolver to only use that DNS for the .maas domain
<ahasenack> dnsmasq has the --server option which seems to suit well, but I haven't used it
<l1l> ahasenack; Adding that to the resolv.conf fixes it. So time to point the finger at maas-dns ?
<ahasenack> l1l: no
<ahasenack> l1l: maas is controlling that zone, so you need to use it for dns when talking to machines in that zone
<ahasenack> it's as simple as that
<ahasenack> now, that change you just made should be temporary, because that means all your other name resolution queries will go to maas, even the ones that have nothing to do with maas
<ahasenack> like google.com, gmail, etc
<l1l> Yea, wonder how I can make maas just use that zone.
<ahasenack> see man dnsmasq, look for the -S option
<ahasenack> and I think you can add a similar option to /etc/dnsmasq.conf, but I haven't tried
<ahasenack> would be a way to tell your local resolver, assuming you are on ubuntu and using dnsmasq (the default), to only use the maas dns for resolving names in the .maas domain
<ahasenack> that would be on your machine, where you are issuing juju commands, btw
<ahasenack> the maas nodes are already using the right dns
<l1l> odd, I don't have that dnsmasq.conf
<ahasenack> I think ubuntu works with snippets in dnsmasq.d
<ahasenack> there might be ubuntu specific documentation about this
<ahasenack> I also see a /etc/dnsmasq.d-available/
<l1l> hmm, I don't see anything todo with dnsmasq in the /etc dir
<ahasenack> I'm on trusty, if that makes a difference
<ahasenack> oh, and on a desktop
<ahasenack> $ dpkg -S /etc/dnsmasq.d
<ahasenack> network-manager: /etc/dnsmasq.d
<ahasenack> you might not have network-manager
<l1l> does a server install not get the network-manager?
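The per-domain forwarding ahasenack describes uses dnsmasq's `server=/domain/address` form; a snippet like the one below (path and IP are examples, the network-manager path applies to desktop installs) forwards only .maas lookups to the MAAS DNS:

```
# Hypothetical snippet, e.g. /etc/NetworkManager/dnsmasq.d/maas.conf
# Equivalent to `dnsmasq -S /maas/192.168.1.2`: send queries for
# *.maas to the MAAS DNS server, leave all other lookups untouched.
server=/maas/192.168.1.2
```

On a server install without network-manager's dnsmasq, the same line can go in a dnsmasq instance's own configuration, or the MAAS DNS can be listed in resolv.conf as discussed above (at the cost of routing all queries through it).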
#juju 2014-05-15
<zchander> Good morning to all. Can someone explain to me how the version number of the charm is calculated? When I do a bzr checkout for a charm I get (e.g.) a revision file with a number 7, but the charm shows "<charm>-13"
<lazyPower> zchander: while not a full explanation to your question, sinzui did a writeup over this http://curtis.hovey.name/2013/06/26/managing-juju-charm-versions/
<lazyPower> So, effectively there are different revisions depending on context. A revision that you see in the charm store, is how many releases have been pushed to the store. Each time I merge a charm proposal, it increments the rev no in the store by 1
<lazyPower> every time you deploy a charm locally, it increments in your environment by 1 for each deployment
<lazyPower> so if i deployed elasticsearch-1, and tweaked something then deployed again, it would be elasticsearch-2
<zchander> lazyPower: Thanks for the info.
<lazyPower> np. it can be confusing at first - but it's pretty straightforward once you get that there are 3 possible revisions to reference.
<zchander> lazyPower: I am busy with ownCloud again (got my MaaS up and running again..)
<zchander> Is it possible to clean the juju charm cache?
<lazyPower> not that i'm aware of. I believe you'll want to upgrade the charm.
<lazyPower> i mean, its possible, but not in a sane and clean manner.
<zchander> I am trying to deploy my (updated) test version of ownCloud, but it won't pick up the change(s) I made, e.g. in my config.yaml
<zchander> lazyPower: What happens if I rm-ed /var/lib/juju/charmcache/cs_<charmname> from my juju node?
<lazyPower> you'll kill kittens
<zchander> Oops…
<lazyPower> did you try juju upgrade-charm <charmname>?
<lazyPower> that should be all you need to do to have it increment revision to whats deployed +1
<zchander> In my initial config.yaml I made an error for the source, e.g. cloud:precise-updates:havana. But this should be cloud:precise-updates/havana (obvious, huh ;) )
<zchander> But in my current devel version I changed this, and it isn't picked up
<lazyPower> is a hook in an error state?
<lazyPower> or with an active debug-hooks session open?
<lazyPower> those two instances will prevent an upgrade from landing, as its waiting in line to execute.
<lazyPower> hook execution in juju is serial. You can queue many operations that will stack and resolve in FIFO fashion.
<zchander> I just did a juju upgrade-charm but my new config.yaml isn't picked up
<lazyPower> did you deploy from local or from teh charm store?
<zchander> lazyPower: Local
<lazyPower> strange
<lazyPower> zchander: i don't know :( that should work. Aside from tearing down i'm out of ideas this early in the morning.
<zchander> lazyPower: np. I'll continue my quest :D In the worst case, I'll tear down my juju and rebuild it
<zchander> lazyPower: Is there documentation for charm-helpers? Seems I cannot find any (or my Google skills are getting worse)
<lazyPower> just a few charm schools - there are no official documents to speak of
<lazyPower> its on the todo list to get them documented, but it hasn't happened yet.
<lazyPower> most of the code in charm helpers is pretty straightforward though, well enough to glean what it's doing.
<zchander> lazyPower: I am trying to add a new source (like cs:precise-updates/havana) to my charm, but it seems the ordinary add-apt-repository doesn't understand this kind of URL. It seems the charm-helpers does have it, but I need/search the correct syntax
<lazyPower> cs:precise isn't a valid ppa url.
<lazyPower> ppa:precise-updates/havana?
<lazyPower> zchander: here's how dosaboy did it in the MySQL charm w/ charm helpers fetch: https://code.launchpad.net/~hopem/charms/precise/mysql/lp1281752/+merge/209312
<zchander> lazyPower: I know it isn't a valid ppa url, but I want(ed) to keep it a bit like e.g. the Ceph charm does it
<zchander> Iâll have a look at the MySQL charm
<lazyPower> it's using a helper from charm_helpers in the fetch module.
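The fetch helper being discussed looks roughly like this in hook code; this is a hedged sketch (it assumes charm-helpers is synced into the charm, and the package name is illustrative), showing that `add_source` understands the cloud:<series>-updates/<release> notation plain add-apt-repository rejects:

```python
# Hedged sketch: requires charm-helpers vendored into the charm.
from charmhelpers.fetch import add_source, apt_update, apt_install

add_source('cloud:precise-updates/havana')  # Ubuntu Cloud Archive pocket
apt_update(fatal=True)
apt_install(['ceph-common'], fatal=True)    # example package
```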
<didrocks> jcastro: hey man! not sure you noticed another small fix: https://github.com/juju/docs/pull/98
<marcoceppi> didrocks: thanks for the fix!
<didrocks> marcoceppi: yw!
<didrocks> thanks for looking at it :)
<roadmr> hello, I tried to juju deploy wordpress using default-series: trusty and I get "ERROR charm not found: cs:trusty/wordpress". Is there really no charm for wordpress on trusty, or do I need to update something for this to work?
<roadmr> btw I'm just curious, I don't need this for production or anything, just thought it was odd and worth pointing out
<lazyPower> roadmr: it has not been pushed to trusty yet
<roadmr> lazyPower: thanks! well that takes care of my question :)
<lazyPower> roadmr: http://manage.jujucharms.com/charms/trusty - are all our current trusty charms. There's an audit effort going on to ensure we're promoting charms of good quality to the trusty series. We appreciate your patience while we continue the effort.
<roadmr> lazyPower: sure thing, no rush from me, and thanks for your work on this
<jcastro> hey sinzui
<jcastro> cjohnston and I are seeing https://bugs.launchpad.net/juju-core/+bug/1306537 still
<_mup_> Bug #1306537: LXC local provider fails to provision precise instances from a trusty host <deploy> <local-provider> <lxc> <juju-core:Fix Released by wallyworld> <juju-core 1.18:Fix Released by wallyworld> <juju-quickstart:Fix Released by frankban> <juju-quickstart (Ubuntu):New> <juju-quickstart (Ubuntu Trusty):New> <https://launchpad.net/bugs/1306537>
<jcastro> what's the procedure for reopening, I just didn't want to toggle it without talking to you first
<sinzui> jcastro, Lets open a new bug citing the old bug targeted to 1.18.4. The issue may not be the same since it was tested to be fixed
<lazyPower> jcastro: whats your output from dmesg?
<lazyPower> i want to validate i'm not nuts if at all possible.
<jose> negronjl: https://code.launchpad.net/~jose/charms/precise/seafile/add-set--eux/+merge/219736 for you to check
<cjohnston> jcastro: did you open a new bug by chance?
<jcastro> no
<cjohnston> jcastro: bug #1319947 if you want to confirm it please :-)
<_mup_> Bug #1319947: LXC local provider fails to provision precise instances from a trusty host - take 2 <juju-core:New> <https://launchpad.net/bugs/1319947>
<l1l> Can anyone help with this error. I get it during the bootstrap: DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i
<sinzui> charmers https://code.launchpad.net/~juju-qa/charms/precise/jenkins-slave/trunk/+merge/219749
<marcoceppi> sinzui: will take a peek in a min
<negronjl> jose: https://code.launchpad.net/~jose/charms/precise/seafile/add-set--eux/+merge/219736 ... Approved and Merged.  Thank you sir :)
<jose> negronjl: thanks to you :)
<jose> another MP coming soon to fix a little bug
<jcastro> http://askubuntu.com/questions/465508/juju-bootstrap-debug-failed-to-connect-https-streams-canonical-com
<jcastro> any ideas here?
<jose> jcastro: sometimes it happens with the archives that I hit in a second when they're down or something, but if it's an SSL error contact IS?
#juju 2014-05-16
<tvansteenburgh> bac: you around?
<rick_h_> tvansteenburgh: what's up? Anything I can help with?
<tvansteenburgh> nah, not really. i just discovered that he broke something in charm-tools :P
<rick_h_> doh
<rick_h_> :)
<tvansteenburgh> but i'm just gonna fix it since he's not around
<tvansteenburgh> no worries
<rick_h_> cool thanks
<_mup_> Bug #1320135 was filed: No way to configure the archive mirror <pyjuju:New> <https://launchpad.net/bugs/1320135>
<zchander> Anyone around who can help me a little with a local devel version of a charm which isn't picked up by juju (??)
<davecheney> zchander: i can (try to ) help
<zchander> davecheney: The problem I got, is that any changes I make to my (devel) local config.yaml isn't picked up when I deploy the charm (using the command juju deploy --repository=/home/madmin/charms/devel local:owncloud --to=6)
<zchander> I just got a reply through the list. Seems the problem was/is with the versions I got in my repo (both named owncloud in the metadata.yaml). I am testing this now
<zchander> It seems that was the problem…
<zchander> But the test/deploy is still running
<marcoceppi> zchander: that is probably the problem. Juju should throw an ambiguous error when more than one version of the charm is loaded via metadata.yaml
<zchander> Got it working… The reason I mentioned before (two different versions of the same charm, though differently named (folders))
<zchander> Also got the Ceph relation working again, including MySQL ;)
 * zchander is going to get some teachers happy
<marcoceppi> \o/
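The ambiguity zchander hit looks like this: two directories under the same series whose metadata.yaml both declare the same charm name. The layout below is illustrative (directory names are made up):

```
charms/devel/            # the --repository root
└── precise/
    ├── owncloud/        # metadata.yaml: name: owncloud
    └── owncloud-old/    # metadata.yaml also says name: owncloud
                         # <-- ambiguous; juju may pick either one
```

`local:owncloud` resolves by the `name` field in metadata.yaml, not the folder name, so the second copy must be renamed or moved out of the repository.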
<zchander> marcoceppi: Is any information available to format e.g. metadata.yaml and config.yaml? I am trying to tidy the docs…
<marcoceppi> zchander: what do you mean to format?
<zchander> marcoceppi: I want to add some newlines, but it won't render properly…
<marcoceppi> zchander: can you paste what you've got?
<zchander> http://paste.ubuntu.com/7472808/
<zchander> Also for the config.yaml, I would like to make proper formatted descriptions
<lazyPower> hmm... seems a bit over the top being placed in the description like that.
<zchander> lazyPower: I know… ;) But it is for my own testing (and this way I get to know the proper way to create the documentation)
<lazyPower> zchander: something like that is better served in the README. The description is more for the front facing display in the charm store.
<zchander> Iâll put it in the README.
<zchander> What is a charm with a proper metadata I can refer to?
<lazyPower> zchander: Postgres
<lazyPower> its got a really well formed charm structure, metadata, etc.
<zchander> Iâll have a look at it, tonight
<jcastro> lazyPower, aws-specific things are ok in charms, we prefer they be labelled clearly
<jcastro> so like "aws-loadbalancer" for example would be ok
<lazyPower> jcastro: thing is, it's a violation of the tenets of the charm store. so it's not "ok" in the charm store's current guidelines.
<jcastro> what we don't want is something that claims to be generic to have cloud specific bits, so like "haproxy"
<lazyPower> and this is very much cloud specific to aws, openstack - as it won't work on azure / maas
<jcastro> it's a should
<jcastro> not a must
<lazyPower> hmmm
<jcastro> charms "should make use of apparmor" too, but we don't reject if they don't
<lazyPower> marcoceppi: i followed your lead here, i need backup. am I being too picky?
<jcastro> we should certainly clear it up
<lazyPower> jcastro: i'm not opposed to the idea that i'm making a mistake, but i'd like clarification on whether i'm nacking a charm that would otherwise be included with lower ratings because of it.
<jcastro> yeah
<marcoceppi> lazyPower: tldr?
<lazyPower> marcoceppi: charm uses aws/openstack metadata ip hard coded in the charm
<jcastro> marcoceppi, cloud specific charm features
<lazyPower> i nacked it
<lazyPower> is it ok to ack with lower rating, or nack it and say "fix it"?
<jcastro> https://bugs.launchpad.net/charms/+bug/1259630
<_mup_> Bug #1259630: add storage subordinate charm <landscape> <Juju Charms Collection:Incomplete by davidpbritton> <https://launchpad.net/bugs/1259630>
<jcastro> I only noticed because I saw Kapil's comment scroll by
<lazyPower> jcastro: the problem I have with acking provider specific charms is its not really apparent that this charm would only work on those 2 substrates. so some guy deploys this on his private maas cloud, and assumes this setup will work
<marcoceppi> lazyPower: well, that fails charm proof
<lazyPower> marcoceppi: beg pardon? all i get is it fails stop/start hook
<jcastro> lazyPower, right, so in this case I think it's a naming issue
<lazyPower> which is safe to ignore, but it's also got this hooks.d thing
<lazyPower> er storageprovider.d
<jcastro> so like, this is really "s3-storage", not "storage"
 * marcoceppi remembers why he punted on this review now
<marcoceppi> jcastro: not even, more like aws|openstack-block-storage
<lazyPower> jcastro: ^
<jcastro> sure
<lazyPower> i was just typing that out.
<jcastro> either way, it's not generic "storage" at this time
<marcoceppi> jcastro: right, the only way to do that is in core
<marcoceppi> and that's, as Kapil mentioned, months away
<marcoceppi> lazyPower: well, charm testing will reveal that this doesn't work on X Y and Z cloud
<jcastro> right
<marcoceppi> if that was hooked up to the charm store
<jcastro> I think if we describe it properly, people will realize that
<marcoceppi> if the readme doesn't CLEARLY call out that it only works on X clouds, that's the only issue
<jcastro> right
<jcastro> Generic storage charm subordinate.  Intended to aid in making charms
<jcastro>     easier to interface with external storage solutions without having
<jcastro>     to speak and understand each type.  Presents a single mount point on
<jcastro>     the unit, and communicates that back to your service through the data
<jcastro>     relation.
<marcoceppi> I think "cloud-storage" might be a more apt name
<marcoceppi> as maas and local aren't a real cloud, though that does cut azure out
<jcastro> right, with clear limitations described in the description and readme
<marcoceppi> storage, I agree, is too all encompassing, unless they plan on adding azure, joyent, etc in future releases of the charm
<jcastro> yep
<marcoceppi> actually, maybe storage is okay
 * marcoceppi reviews the charm
<jcastro> if you read that description I just pasted it makes it sound all encompassing
<marcoceppi> jcastro: right, so it has this storagebroker.d directory in hooks, that allows you to select different broker methods
<jcastro> should be something like "This subordinate charm uses the AWS metadata server, which currently is supported by X, Y, and Z. Local and A, B, C are currently not supported"
<marcoceppi> there's a block-storage-broker, which is the aws/openstack stuff. then local and nfs
<marcoceppi> which would work in a cloud agnostic environment
<jcastro> what does it default to?
<marcoceppi> I'm inclined to rescind some of what I said earlier, this is an interesting attempt at making a plugin based charm
<marcoceppi> local
 * marcoceppi goes to find what the heck that means
<lazyPower> well thing is
<jcastro> hey so out of the box it'll work on every substrate?
<lazyPower> the storage broker is whats supposed to be doing the communication with the aws storage services
<marcoceppi> asciishrug.tiff
<lazyPower> the rest of that data comes in over the wire
<lazyPower> i'm wondering if the storage sub didn't get the update
<marcoceppi> lazyPower jcastro I'd nack this solely on the fact they don't seem to document each of the providers
<marcoceppi> a charm this monumental in undertaking needs more than 53 lines of readme
<marcoceppi> I'm 5 mins in to poking and I still can't figure out what's going on
<jcastro> marcoceppi, I'm not forming an opinion on the charm itself, doing a normal review of that is fine
<jcastro> I think we should just clarify what we mean by cloud-specific features in policy
<marcoceppi> I thought this was a nack or ack
<marcoceppi> jcastro: ah
<marcoceppi> well, we currently don't have any cloud-specific charms in the store
<marcoceppi> all of Kapil's AWS specific stuff is still personal namespaced
<jcastro> sure
<marcoceppi> so there's no precedent for this yet
<jcastro> so like, I think they should be fine, as long as it's obvious
<marcoceppi> and we'd be potentially making one with this charm
<jcastro> so if I do "aws-blueshift-workload-thing", that should be fine
<marcoceppi> I'm inclined to agree, esp. with testing being the true revealer of "does this work on X substrate"
<jcastro> right
<marcoceppi> <cloud>-name works great when it's for one cloud
<marcoceppi> this is potentially n clouds where n is < substrates
<jcastro> ok, I'll propose to list.
<marcoceppi> jcastro: yeah, lets move the discussion there
<jcastro> the policy bullet is somewhat ambiguous
<lazyPower> i put it in the review
<jcastro> lazyPower, I'll also post the new trusty vagrant juju boxes
<lazyPower> the verbatim bullet
<jcastro> but let's stick to precise for today
<lazyPower> jcastro: did those get pushed?
<lazyPower> awesome!
<jcastro> yeah yesterday
<lazyPower> i have the precise box up right now
<lazyPower> i'm going to release a github repo with the workflow items in it for people to clone and quickstart
<jcastro> lazyPower, link to my mailing list post for your review
<lazyPower> ack
<lazyPower> oh nice you mailed the list about it yesterday. hi5 jcastro
<sebas5384> good morning :)
<jcastro> hmm
<lazyPower> Morning sebas5384
<jcastro> marcoceppi, any idea why https://code.launchpad.net/~jose/charms/precise/owncloud/port-change+repo+ssl-support/+merge/215527 is not in the review queue?
<sebas5384> ben is working in the vagrant box for juju with trusty
<lazyPower> jcastro: it's listed as needs fixing by mbruzek
<marcoceppi> jcastro: it's not assigned to charmers
<lazyPower> if it needs to be in the rev q, it needs the "request a review" button clicked
<sebas5384> someone knows if there is a repository of the vagrantfile?
<jcastro> marcoceppi, oh I see, charmers is merely subscribed
<jcastro> that kind of sucks doesn't it?
<marcoceppi> jcastro: mbruzek gave the wrong canned message for the re-review
<sebas5384> because I have one, and i would like to help :)
<marcoceppi> jcastro: yes
<jcastro> sebas5384, you want to talk to lazyPower
<sebas5384> nice!
<jcastro> sebas5384, I was just about to post the address to the trusty boxes
<lazyPower> sebas5384: the vagrantfile is very barebones. the vagrant image itself has a cloudinit package that sets up the environment.
<mbruzek> marcoceppi, Did I?  we talking about the owncloud charm?
<jcastro> yeah owncloud ssl
<mbruzek> marcoceppi, I see now, sorry about that
<mbruzek> marcoceppi, So if a MP has tests and they don't pass it is OK to ack it?
<jcastro> we should shove them in juju/vagrant on github or something
<sebas5384> lazyPower: today the box uses external python scripts
<sebas5384> having to install bzr and clone some scripts
<marcoceppi> mbruzek: only if the tests failed before the mp
<jcastro> mbruzek, he updated the branches after from what I can see
<mbruzek> ack
<jcastro> no idea if it passes tests now though
<jcastro> because marco hasn't fixed the world yet.
<jcastro> come on marco, I want test results on the internet!
<sebas5384> lazyPower: so that means there is code from other places being used in the box, and that makes collaboration difficult :)
<lazyPower> sebas5384: you're hitting me at literally the first corner i've turned with this process. up until now, the vagrant building process has lived with our cpc team
<sebas5384> lazyPower: but hey, it's just an idea
<lazyPower> sebas5384: so i dont have an answer for you yet
<lazyPower> i'm gonna need you to calm down :P
<lazyPower> CURB YOUR ENTHUSIASM SEBAS!!
<lazyPower> and i'll follow up with you when i have a better picture of whats where and how i can expose this to community contributors
<marcoceppi> sebas5384: DON'T LISTEN TO LAZYPOWER, UNCURB THE ENTHUSIASM
<marcoceppi> :D
<lazyPower> marcoceppi: i'll push you :|
<sebas5384> lazyPower: i don't understand what you mean, i'm not hitting you at all hehe
<jcastro> just push it somewhere dude
<jcastro> actually
<jcastro> let's do this
<lazyPower> i think you guys are missing what i'm saying
<jcastro> let's do that after the charm school
<lazyPower> and a) i'm already working on a github repo to go WITH the charm school today
<sebas5384> lazyPower: calm down :) take a big breath :)
<lazyPower> b) the vagrant file doesn't control much at present, just which box to use
<jcastro> sebas5384, http://cloud-images.ubuntu.com/vagrant/trusty/current/
<sebas5384> jcastro: nice!
<jcastro> mbruzek, tedg is being a squeaky wheel wrt to ssl support, so if you've got time to review that that would be <3
 * tedg squeaks
<mbruzek> on it.
<tedg> mbruzek, Thank you!
<lazyPower> mbruzek: i re-released the tests, it's close but still doesn't pass when run through the test harness.
<lazyPower> and they have not been merged yet afaik
<mbruzek> lazyPower, They were your tests?
<lazyPower> mbruzek: according to the bzr history yes - and they passed at one point.
<lazyPower> which is why i started refactoring, because they do not pass now
<lazyPower> either the test infrastructure has changed, or we encountered a 'this shouldnt work but does' situation.
<jcastro> Incoming! https://github.com/juju/docs/pull/102
<lazyPower> jcastro: 1 comment, otherwise LGTM
<jcastro> fixed and pushed
<mbruzek> jose are you out there
<marcoceppi> lazyPower: sorry man, sniped that merge
<roadmr> you sniper, you
<dpb1> lazyPower: hey, updated the storage readme.  thanks for the review, let me know what you think.
<lazyPower> dpb1: pulling it up now, thanks for the quick turn around
<dpb1> lazyPower: np! write back on the review, if you have more feedback.  I'll be AFK for today.
<lazyPower> dpb1: ack. Should be good though. The hooks looked good when i looked through it, the big thing was the conversation that started with the metadata url
<lazyPower> dpb1: perfect.
<sebas5384> does someone have a guide on how to use ansible in a juju charm?
<sebas5384> i saw that I need to download some tools for the charm to work
<sebas5384> https://github.com/absoludity/charm-bootstrap-ansible
<whit> sebas5384, I used this charm as an example w/ ansible
<sebas5384> whit: didn't know! what charm?
<whit> woops -> http://bazaar.launchpad.net/~michael.nelson/charms/precise/elasticsearch/trunk/view/head:/hooks/hooks.py
<whit> sebas5384, charmhelpers grabs ansible and installs it on the remote machine
<whit> ala  charmhelpers.contrib.ansible.install_ansible_support
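<lazyPower> the pattern whit is pointing at (charmhelpers.contrib.ansible installs ansible on the unit, then hooks run a playbook locally with a tag per hook) roughly boils down to this sketch — function and file names here are illustrative, not charmhelpers' actual API:

```python
import subprocess

def playbook_command(playbook, tags):
    """Build the ansible-playbook invocation a hook would run on the unit.

    '-c local' runs against the unit itself -- no ssh inventory needed.
    """
    return ["ansible-playbook", "-c", "local", playbook,
            "--tags", ",".join(tags)]

def run_hook(hook_name, playbook="playbook.yaml"):
    # one playbook serves every hook: each juju hook name maps to an
    # ansible tag, so tasks tagged 'install' run for the install hook, etc.
    subprocess.check_call(playbook_command(playbook, [hook_name]))
```

<lazyPower> the nice bit is your hooks shrink to one line each and all the real logic lives in idempotent ansible tasks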
<sebas5384> whit: exactly!
<sebas5384> so i have to install charm-tools before
<marcoceppi> sebas5384: charm-helpers needs to be embedded in the charm
<sebas5384> whit: elasticsearch, sweet!
<marcoceppi> helpers and tools are two distinctly different things
<sebas5384> gotcha marcoceppi, because helpers are things that help the charm, and tools are for the charmer
<sebas5384> or something like that hehe
<marcoceppi> sebas5384: exactly that
<marcoceppi> :D
<sebas5384> :D
<sebas5384> i'm starting to develop the drupal charm in ansible :)
<whit> ah nice
<sebas5384> yeah! I loved ansible
<sebas5384> and that would be to help me to prove a concept
<sebas5384> then start develop the charm factory :)
<whit> sebas5384, ansible is great for this sort of stuff because it's fairly concise and makes idempotency easy
<sebas5384> whit: exactly!!
<sebas5384> and reusable roles are great to extend a charm
<sebas5384> but something I was thinking about
<sebas5384> ops, im back :P
<sebas5384> let's say i have a drupal charm, and for now it covers what the project needs
<sebas5384> but
<sebas5384> now we have a new library dependency, and other things like that
<sebas5384> so now, i have to "clone" the charm and call it "nameOfTheProject"
<sebas5384> but thats not nice, because now i have duplicated code, etc...
<sebas5384> and now i have to maintain 2 codebases
<sebas5384> so, thats why i'm moving toward ansible: then i can just update the ansible role
<sebas5384> marcoceppi: what do you say, whats your advice to that case?
<sebas5384> when i need to "extend" a charm to meet the project's needs
<sebas5384> :)
<marcoceppi> sebas5384: you can use a subordinate charm
<sebas5384> marcoceppi: so i would have a subordinate charm called with the name of the project, something like that?
<marcoceppi> sebas5384: well you can have a "drupal-project" subordinate, that would just install the themes, plugins, files, etc in the right place, create it's own apache configuration, etc
<sebas5384> hummm
<sebas5384> marcoceppi: interesting
<sebas5384> but do I have to deploy it --to the same machine, or do subordinate charms all run alongside the related charm?
<sebas5384> https://juju.ubuntu.com/docs/authors-subordinate-services.html reading...
<marcoceppi> sebas5384: subordinates don't use --to, they only exist to co-exist on a service
<sebas5384> oohhh
<sebas5384> marcoceppi: gotcha
<sebas5384> marcoceppi: thanks! I will look at it :)
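<marcoceppi> sebas5384: for reference, what makes a charm subordinate is its metadata.yaml: `subordinate: true` plus at least one container-scoped relation. a hypothetical sketch for the "drupal-project" idea (the name and relation are made up for illustration):

```yaml
name: drupal-project
summary: project-specific themes, plugins and config for a drupal service
subordinate: true
requires:
  host:                 # hypothetical relation name
    interface: juju-info
    scope: container    # units deploy into the principal's container
```

<marcoceppi> the container-scoped relation is what tells juju to place a unit of the subordinate next to each unit of the service you relate it to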
<cff> "Juju Charm School Vagrant Workflow" starting in 10 minutes https://www.youtube.com/watch?v=qLNPn2rQynM
<jcastro> jose, ping
<sebas5384> yeahh!! \o/
<jcastro> 3 minute warning!
<jcastro> https://www.youtube.com/watch?v=qLNPn2rQynM
<marcoceppi> CHARM SCHOOL! TUBULAR!
<jose> jcastro what's up
<jcastro> nm I figured it out!
<jose> np
<cory_fu> Did the video stop for anyone else?
<tvansteenburgh> no
<marcoceppi> cory_fu: not i
<cory_fu> nm, back
<sysdrum> seems fine for me
<tvansteenburgh> i called youtube and asked them to stop it for cory_fu only
<cory_fu> Also, is there any way in ubuntu to keep a video full screen on one monitor while typing in the second monitor?
<CodePulsar> :-)
<cory_fu> tvansteenburgh: Jerk.  :)
<CodePulsar> cory_fu: yes, there should be a way
<jose> sorry I couldnt help, I'm still trapped at university
<marcoceppi> lazyPower: it'd be cool if vagrant init just did that for you, re box_url
<sysdrum> is it working like docker?
<marcoceppi> sysdrum: similar, vagrant uses different backends (VirtualBox, kvm, vmware) so it's not limited to just linux since docker seems to be only LXC
<jcastro> https://github.com/chuckbutler/juju-vagrant-charmschool
<sebas5384> yeah i'm waiting, default: Warning: Connection timeout. Retrying...
<tvansteenburgh> jcastro: question: does it take this long to boot the vm everytime I do 'vagrant up' or just the first time it's provisioned?
<sebas5384> tvansteenburgh: no, thats only the first time
<sebas5384> tvansteenburgh: after that you can use just 'vagrant suspend' for example
<sebas5384> but, here, gives me a timeout exceeded to boot
<sebas5384> so im going to try again
<jcastro>  mine took a bit Retrying with the remote connection
<jcastro> but eventually continued
<marcoceppi> lazyPower: mediawiki is spelled wrong
<sysdrum> QUESTION: Would I be able to manage it from a remote windows clients if I spin it up on another windows host?
<jcastro> http://www.vagrantup.com/blog/feature-preview-vagrant-1-5-share.html
<sysdrum> Do I have the ability to bridge the network? under VBox?
<marcoceppi> \o/ thanks jcastro and lazyPower!
<sysdrum> was about to ask that.
<sebas538_> sweet!! jcastro and lazyPower :)
<sebas538_> jcastro: i remember that before these new vbox they were failing to connect after halting the vbox
<sebas538_> so i assume this was fixed?
<sebas5384> thanks!! \o/
<sysdrum> Thanks for answering my questions.
<lazyPower> sebas5384: yeah that's fixed last i checked.
<lazyPower> sebas5384: if you have any further issues with it make sure you open a bug against it or ping me and i'll do the due diligence
<sebas5384> lazyPower: ohh great! nice work men :)
<sebas5384> *man
<lazyPower> o/
<sebas5384> thanks!
<tvansteenburgh> lazyPower, jacstro: good stuff, thanks guys!
<tvansteenburgh> jcastro even
<jcastro> omg
<jcastro> hey guys
<jcastro> http://itty-bitty-cat-4232.vagrantshare.com/
<jcastro> `vagrant share` is totally awesome
<jcastro> I did that, logged into vagrantcloud via the CLI
<jcastro> and then it generated that URL
<sebas5384> yeah! was introduced in the new version
<sebas5384> its awesome!
<lazyPower> sebas5384: So wrt your questions this morning about the Vagrantfile and what not
<lazyPower> did that github repository give you what you were looking for? or was there more to that statement that we haven't covered?
<jose> mbruzek: lazypower is working on the tests
<mbruzek> jose yeah I talked with him on that.
<jose> cool :)
<jose> also, I meant to set 443, 443 is https
<lazyPower> jose: what i've pushed is where the tests are.
<lazyPower> i dont have any additional cycles to devote to those atm
<jose> lazyPower: so tests are done for now?
<lazyPower> jose: whats there is what i got, mang. :| they work when run directly but fail when run via charm test
<mbruzek> jose, lazyPower:  Since the tests did not run before the MP I can't nack it for that.
<jose> mbruzek: so we need my branch merged and subsequently Charles' branch
<mbruzek> The owncloud needs a small fix that I found, I did not reject on the tests.
<jose> it's stacked on mine afaik
<mbruzek> jose, did you fix the 80 problem?
<jose> I am branching to fix it now
<jose> literally *just* turned on my PC :)
<lazyPower> mbruzek: jose - https://code.launchpad.net/~lazypower/charms/precise/owncloud/refactor_amulet_tests
<lazyPower> jose: i can propose this for merging into your branch and you can pick up the torch from there
<lazyPower> sound good?
<jose> lazyPower: sounds good to me, thank you
<lazyPower> np - sorry i dont have a 100% golden package for you, my week kind of blew up, and next week is looking just as hectic
<jose> it's good :)
<lazyPower> https://code.launchpad.net/~lazypower/charms/precise/owncloud/refactor_amulet_tests/+merge/219903
<jose> ack, thanks!
#juju 2014-05-17
<jose> arosales: the news team is waiting for you to publish minutes on the wiki
#juju 2014-05-18
<nottrobin> is there a standard place to store information about relations?
<nottrobin> inside juju instances?
<nottrobin> I'm writing a charm that requires mongodb
<nottrobin> and I have a hook called "mongodb-relation-joined"
<nottrobin> inside that hook, I need to store the hostname of the joined environment so my application can retrieve it to connect to it
<nottrobin> how should I do that?
<Tug> nottrobin, I think what you are looking for are the `relation-set` and `relation-get` commands
<nottrobin> Tug: I can use relation-set and relation-get in my hook to retrieve / store the details of the relation, but what about my application? It can't be expected to know it's running in a juju environment. But it still needs to know what hostname to look for mongodb on
<Tug> nottrobin, I guess you either have to start your app from the charm or modify a config file
<Tug> or set environment variables
<nottrobin> Tug: right. I'm trying to make the charm as generic as possible - IE have very little knowledge of the app's structure
<nottrobin> Tug: yeah I was thinking of setting environment variable
<nottrobin> Tug: but that seems to be impossible using Python
<Tug> nottrobin, you can do bash commands in Python
<bodie_> does anyone know whether it's possible to use coreos fleet as a deploy target with juju somehow?  can I write an extension provider if that option doesn't yet exist?
<nottrobin> Tug: but you can't set environment variables
<nottrobin> Tug: http://stackoverflow.com/questions/1506010/how-to-use-export-with-python-on-linux
<Tug> noodles775, https://docs.python.org/2/library/os.html#os.environ
<Tug> srry
<Tug> nottrobin, https://docs.python.org/2/library/os.html#os.environ
<nottrobin> setting an environment variable would be my perfect solution but I can't work out how to do it
<nottrobin> Tug: right, but os.putenv fails
<nottrobin> you can happily set the local os.environ dictionary, but it won't actually update the environment
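<Tug> nottrobin, right — a quick sketch of why: os.environ edits are visible to this process and anything it spawns, but never to the parent shell (names here are just for illustration):

```python
import os
import subprocess
import sys

def child_sees(name, value):
    """Set an env var, then ask a freshly spawned child what it sees."""
    os.environ[name] = value  # visible to children spawned from here on
    out = subprocess.check_output(
        [sys.executable, "-c",
         "import os; print(os.environ.get(%r, ''))" % name])
    return out.decode().strip()
```

<Tug> so a hook CAN pass vars down to a service it starts itself — it just can't reach back up and change the environment of a process that's already running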
<Tug> I personally don't like environment variables, it's not a clean way to pass information (imo)
<nottrobin> it's perfect for information about the environment
<Tug> yeah for a program
<nottrobin> right
<Tug> like DEBUG=true start_my_prog
<Tug> out of curiosity, what type of application is it ?
<nottrobin> wsgi
<nottrobin> the one I'm testing it with is django, but it could be flask or werkzeug
<Tug> mm I see, have you looked at the python-django charm to see how they did it ?
<nottrobin> good idea, I'll have a look
<Tug> look like they are modifying an upstart script
<Tug> did look for long
<Tug> * I did not look for long
<nottrobin> Tug: where did you see that?
<nottrobin> Tug: I can see it writes the DB settings to <app_dir>/juju_settings/20-engine-%(engine_name)s.py
<nottrobin> Tug: but I can't see how that file then gets used
<nottrobin> Tug: yeah the settings get read by some lines injected at the bottom of settings.py
<nottrobin> Tug: a very django-specific solution
<hazmat`> bodie_, not really re coreos.. not entirely sure how that would work, you could run full os ubuntu containers inside of a coreos machine, and then basically register them as machines for a juju deployment.. i've got some simple client side plugins that do api machine provisioning for digital ocean, etc. if you're going down that road.. alternatively getting charms that directly do docker (for image based deploys), else it's just the container as a machine.
<hazmat`> nottrobin, reading backlog, but the typical way to address this issue is to have a standard way to configure your application, that the hooks can inject/render the relation config into
<hazmat`> Tug, just about every paas uses env variables to pass config in... (heroku, dotcloud, cloudfoundry, openshift) etc.
<hazmat`> its effectively the lowest common denominator across language runtimes.. but agreed that it has caveats (runtime manipulation etc)
<hazmat`> nottrobin, ie. your app reads config from file, and hooks write that file, and can either restart the process, or send signal if they change it.
<hazmat`> nottrobin, which wsgi framework out of curiosity?  pyramid and django both make this easy... flask requires some mucking about to get the plumbing right (re config from file) imo.
<nottrobin> hazmat`: the projects I want to use it with at the moment are flask and django
<nottrobin> hazmat`: that's why environment variables are perfect
 * hazmat` nods
<nottrobin> hazmat`: if I can just say "the mongodb hostname will be in the environment variables "MONGODB_HOST"
<nottrobin> hazmat`: than writing your app to make use of that is simple
<nottrobin> *then
<hazmat`> nottrobin, so i'd suggest an upstart file which sources an environment file for each then.. a lot of packages do this.. ie source /etc/default/$package_name in a pre-exec upstart script
<hazmat`> though you might want a different location than that.. but then the hooks just write to the env variable file, and restart the service after modifying the config
<nottrobin> hazmat`: right so charmhelpers has a helper method which saves "export ..." lines into a scriptrc file inside the charm directory
<nottrobin> hazmat`: but I couldn't work out how that file then gets sourced
<nottrobin> hazmat`: do you know of a link that could explain how to configure upstart scripts to me?
<nottrobin> hazmat`: I'm relatively new to this charming stuff
<lazyPower> nottrobin: upstart has some really complete documentation. http://upstart.ubuntu.com/cookbook/
<hazmat`> nottrobin, so re upstart looking at the docs (cookbook) or extant ones in packages is pretty good. it might be easier to just have a wrapper script that execs  the actual startup after sourcing your env variable file.
<lazyPower> dont let that page intimidate you, in practice you usually only  need ~ 5 or 6 lines to build an upstart job that will do what you want it to do.
<lazyPower> o/ hazmat`
<nottrobin> hazmat`, lazyPower: thanks I'll get reading
 * hazmat` heads out to the airport
<lazyPower> nottrobin: here's a good example upstart script for you: https://github.com/chuckbutler/starbound-charm/blob/master/contrib/starbound-universe.conf
<lazyPower> the only thing i read in the feedback above it doesn't exhibit is the environment variables. which are set with an 'env' keyword.
<nottrobin> lazyPower: presumably it could source a file which did the actual exporting of variables
<lazyPower> right
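<lazyPower> nottrobin: something along these lines — a minimal sketch, all service names and paths are illustrative, not from a real charm:

```
# /etc/init/myapp.conf -- hypothetical upstart job
description "myapp wsgi service"
start on runlevel [2345]
stop on runlevel [!2345]
respawn

script
    # hooks write export lines (e.g. MONGODB_HOST=...) into this file,
    # then restart the service so the new values take effect
    . /etc/default/myapp
    exec /usr/bin/gunicorn myapp.wsgi
end script
```

<lazyPower> the script stanza runs under sh, so sourcing the env file with `.` just works, and anything exported there is visible to the exec'd process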
<cruisibesares> hey juju guys. I just joined your mailing list the other day as im trying out this technology for a proof of concept. I see that there has been a substantial amount of work on the vagrant provider recently which is great. I personally would like to use maas with virtualbox so it will mirror my final deploy as closely as possible. I found this blog post http://marcoceppi.com/2012/05/juju-maas-virtualbox/ and got that all set up and running.
<cruisibesares> I got annoyed about having to power up all the boxes by hand during commissioning so i built a little listener in go that basically just listens for the wake on lan packet, looks through the virtualbox boxes on the virtualization host, and uses the command line to power the right box on. So far it works like a charm (no pun intended).
<cruisibesares> the only thing that is still kinda annoying me is that i have to use the gui to set the power type of each server to wol. This is more of a maas issue but is there a way that i can configure maas so that it will always use the wol power type by default?
<lazyPower> cruisibesares: you can use KVM in lieu of VirtualBox with the virsh provider to achieve non-manual powerup/powerdown
<lazyPower> I'm using that on a server i have here in my house with great success.
<lazyPower> i'm not really familiar with Virtualbox's WOL capabilities, unfortunately
<cruisibesares> yeah it doesn't have any by default
<cruisibesares> i just built a service that listens on port 9 and then calls the api
<cruisibesares> the wol protocol is easy enough and i wanted to keep it on my mac
<cruisibesares> that all works now somehow haha
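<lazyPower> cruisibesares: for anyone following along, the parsing side of a listener like that is tiny — a magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times. a sketch (not cruisibesares' actual go code, and in python for brevity):

```python
def parse_magic_packet(data):
    """Return the target MAC from a Wake-on-LAN magic packet, or None.

    Format: 6 bytes of 0xFF, then the 6-byte target MAC repeated 16
    times (102 bytes total; an optional password suffix is ignored).
    """
    if len(data) < 102 or data[:6] != b"\xff" * 6:
        return None
    mac = data[6:12]
    if data[6:102] != mac * 16:
        return None
    return ":".join("%02x" % b for b in mac)
```

<lazyPower> from there it's just matching that MAC against the NICs of your registered VMs and shelling out to VBoxManage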
<cruisibesares> only hiccup that i have at this point is that after i run this: maas maas nodes accept-all
<cruisibesares> they all have power type set to
<cruisibesares> --------
<cruisibesares> if this makes no sense at all i can write it up a little better provide some examples and push it up to the maling list
<cruisibesares> oops sorry network dropped out for a second
<lazyPower> cruisibesares: it's an interesting use case, and I think others would benefit from your experience. If you have the time to post to the list, i'd love to read up on your specific testing implementation.
<cruisibesares> alright i
<cruisibesares> will fix up the read me and post what i have soon
<cruisibesares> i think that im going to need to tweak the enlist_userdata file but i would like to avoid messing around with files included in the package
#juju 2015-05-11
<thumper> stub: can you ping me when you are around, I have a (hopefully simple) postgresql question for you
<noodles775> Can someone help me find more info about an issue I'm having: (Very) Approximately 50% of deploys I'm testing with the local provider, result in 7 of 8 machines all coming up successfully, but 1 machine remaining pending/allocating. If I check the machine log, that one machine is unable to connect to the api apparently because the juju-generated CA is rejected: http://paste.ubuntu.com/11071952/
<noodles775> I'll create a bug after lunch and destroy the env again, but if there's anything else I can put on the bug report while it's still up, that'd be great to know.
<thumper> noodles775: hmm... sounds bad
<thumper> yes, bug please
<noodles775> thumper: done (jfyi) https://bugs.launchpad.net/juju-core/+bug/1453644
<mup> Bug #1453644: One unit remains pending with local provider <juju-core:New> <https://launchpad.net/bugs/1453644>
<thumper> noodles775: ta
<schkovich> good morning, is charm-helper-sh package abandoned?! if yes does anyone know why?
<lazyPower> schkovich: it's deprecated in favor of the `charmhelpers` package, which exposes a lot of the functionality through the 'ch' command
<schkovich> thanks lazyPower :)
<lazyPower> schkovich: however, the `ch` command is largely undocumented at this point, but marcoceppi may have published them while I wasn't looking.
<lazyPower> them being, updated docs on how to use it :)
<schkovich> hmhhh... but it is python package. am i on the wrong page?
<schkovich> lazyPower: I can find only projects and packages with dash eg charm-helpers and all of those are python and not bash helpers/libraries
<schkovich> is bash deprecated in favour of python ;)
<gnuoy> jamespage, ooi are you using juju 1.24 ? I seem to be seeing juju-deployer explosions with it
<jamespage> gnuoy, not yet
<gnuoy> ah, ok.
<gnuoy> mgz, do you use juju-deployer in any of your juju testing?
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/neutron-api/kilo-dvr-stable/+merge/258750
<gnuoy> jamespage, ta
<mgz> gnuoy: yup
<mgz> gnuoy: eg http://reports.vapour.ws/releases/2627/job/aws-deployer-bundle/attempt/314
<gnuoy> mgz, Bug #1453760
<mup> Bug #1453760: "u'Error': u'status for key "<unit>" not found" before adding relations when using juju 1.24 <juju-deployer:New> <https://launchpad.net/bugs/1453760>
<mgz> gnuoy: could well be a juju bug, not clear what's triggered it though
<gnuoy> mgz, is the # in the unit name suspicious ?
<gnuoy>  {   u'Error': u'status for key "u#keystone/0" not found',
<mgz> yeah, it *looks* like an internal tag has leaked out somehow
<mgz> but I'm not sure I'm reading that correctly
<mgz> gnuoy: see bug 1437266 for comparison
<mup> Bug #1437266: Bootstrap node occasionally panicing with "not a valid unit name" <deploy> <destroy-machine> <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1437266>
<gnuoy> mgz, ah, i think mine is probably a dupe of that one
<mgz> well, not as written, but seems likely the cause is the same
<gnuoy> mgz, would it be possible to get Bug #1434458 triaged ? It's also killing random deployments
<mup> Bug #1434458: juju-deployer address,port split not working with ipv6 <python-jujuclient:New> <https://launchpad.net/bugs/1434458>
<mgz> gnuoy: who have we got actually working on jujuclient?
<mgz> gnuoy: in other words, should you or I just fix this?
<mgz> seems like bumping the bug around won't do anything
<gnuoy> mgz, oh, I assumed juju-core handled it.
<gnuoy> mgz, I can prep a branch
<mgz> I will review it
<catbus11> Hi, I tried to deploy landscape-dense-maas bundle by dragging and dropping it to the canvas, it gives "An error occurred while deploying the bundle: <urlopen error [Errno -2] Name or service not known>". Anyone know where in my configuration might be wrong?
<rick_h_> catbus11: atm a colocated bundle (like the landscape-dense) doesn't work but we'll have releases of the juju gui and juju quickstart that do supported the colocation in the bundle this week.
<catbus11> rick_h_: What does colocated bundle mean?
<catbus11> ah, services deployed in the same host? I guess?
<rick_h_> catbus11: the services are colocated on the same machine (using lxc containers or the like)
<catbus11> rick_h_: thanks.
<gnuoy> wallyworld, if you're about have an environment with Bug #1451283
<mup> Bug #1451283: deployer sometimes fails with a unit status not found error <blocker> <ci> <intermittent-failure> <regression> <juju-core:In Progress by wallyworld> <juju-core 1.24:In Progress by wallyworld> <https://launchpad.net/bugs/1451283>
<gnuoy> s/have/I have/
<jcastro> hey evilnickveitch
<evilnickveitch> jcastro, hey yourself
<jcastro> on the getting started page
<jcastro> I think we should get rid of the links to each provider, since they're already on the sidebar
<jcastro> it just gives us 2 places to update each time there is a new provider
<evilnickveitch> yes, that's a fair point
<jcastro> I'll send a PR after I send one for dreamhost
<gnuoy> mgz, https://code.launchpad.net/~gnuoy/python-jujuclient/ipv6/+merge/258770
<mgz> gnuoy: looking
<gnuoy> mgz, I haven't had much joy with test_jujuclient.py. I've done a manual deploy with ipv4 and ipv6 but that's the extent of my testing
<mgz> what kind of not joy?
<mgz> hm, ipaddress module is 3.3 or later
<gnuoy> mgz, I tried it against the openstack provider and got failures=5, errors=2 and I don't believe they're related to my branch. I assume it makes some ec2 assumptions
<mgz> ah I see
<mgz> gnuoy: what I'd do in cases like that,
<mgz> is write a "split port" function and just add unit tests for that
<mgz> not worry about making the whole suite green on my machine
<gnuoy> mgz, ok, can do
<mgz> like, I'm pretty sure your use of re.search is wrong
<mgz> and the following line is just .strip("[]") spelt funny
<mgz> gnuoy: like, would your unit test pass with the input "[2001:db8::1]:80"
<gnuoy> mgz, yes I understand.   I believe it would but I'll write the unit test to prove
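The approach mgz describes can be sketched as a small helper (the name here is hypothetical, not from python-jujuclient): split on the *last* colon, then strip the IPv6 brackets instead of reaching for re.search.

```python
def split_host_port(addr):
    """Split "host:port" into (host, port), handling bracketed IPv6.

    rpartition splits on the last colon, so the colons inside an
    IPv6 address are untouched; the brackets are then just stripped,
    which is mgz's point that the regex line is .strip("[]") spelt funny.
    """
    host, _, port = addr.rpartition(":")
    return host.strip("[]"), int(port)

# The exact test case mgz asks about, plus a plain IPv4 one:
assert split_host_port("[2001:db8::1]:80") == ("2001:db8::1", 80)
assert split_host_port("10.0.0.1:17070") == ("10.0.0.1", 17070)
```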
<gnuoy> mgz, mp updated
<mgz> gnuoy: ta, looks fine, do you want python nits?
<gnuoy> mgz, yes pls
<mgz> gnuoy: done
<gnuoy> mgz, very much appreciated, thank you
<marcoceppi> lazyPower: it's `chlp`
<lazyPower> ah - right.
<lazyPower> 2am though - i'm not surprised I got it wrong.
<mgz> thanks for just fixing. we'll need to find out what the release process is too - looks from the log like dpb1 did one
<dpb1> andreas and I have the latest builds in a PPA.  I don't have access to push to pypi, that is just hazmat (from what I know)
<dpb1> Same for python-jujuclient btw.  I run both on my machine all the time.
<dpb1> https://launchpad.net/~ahasenack/+archive/ubuntu/python-jujuclient, https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
<mgz> dpb1: thanks!
<hazmat> dpb1: several people have pypi access
<hazmat> dpb1: tvansteenburgh and frankban  for example, let me know if you want it as well
<dpb1> hazmat: tvansteenburgh is enough, IMO
 * tvansteenburgh winces
<gnuoy> mgz, nits crushed
<gnuoy> hazmat, I believe I've addressed the nits, any chance you could land my branch ?
<hazmat> gnuoy: i can't atm (on corporate network) i can merge it later this evening, if tvansteenburgh doesn't beat me to it.
<gnuoy> hazmat, thanks, that'd be great.
<tvansteenburgh> gnuoy, hazmat: merged
<gnuoy> tvansteenburgh, thanks!
<iamfuzz> any JuJu AWS experts about?  Trying to add a custom endpoint (for eucalyptus) by modifying aws.go, but when running juju bootstrap, I get the error:  ERROR index file has no data for cloud {emea-az-1 http://109.104.120.3:8773/services/compute} not found
<iamfuzz> what other files do I need to add the endpoint to to test this?
<stub> aisrael: You pinged me last week
<aisrael> stub: Hey. I was testing the postgresql charm at the time, but then you moved it to work-in-progress.
<stub> aisrael: Yeah, I'm not sure I can get the tests running sanely without much more restructuring. I'm thinking of doing that as part of a major rewrite now that I have cool new juju features to work with.
<mwenning> lazyPower, ping
<lazyPower> mwenning: pong
<aisrael> stub: ack. If you need eyes on it, let me know.
<stub> aisrael: Maybe I can get some of them back. I'll be looking at it again this week.
<mwenning> Just got an email from Jean-Francois Joly, is everything set up for them?  any roadblocks?
<lazyPower> mwenning: they're at the point they need review from the ~openstack team
<lazyPower> ~openstack-charmers
<mwenning> lazyPower, who's that
<lazyPower> https://launchpad.net/~openstack-charmers
<lazyPower> oi *facepalm* i saw your reply and didn't verify...
<thumper> kirkland: ping
#juju 2015-05-12
<andreabedini> anyone help having trouble with the charm cs:trusty/mysql-25 ?
<andreabedini> lol, anyone *else*
<andreabedini> hook start fails and I can't get it to work
<andreabedini> haha, I got it, it's running out of memory ...
<lazyPower> andreabedini: which provider are you using? i'm assuming the local provider?
<andreabedini> correct, jujubox vagrant image
<lazyPower> yeah, this is a known bug - sorry you found out about it the hard way
<lazyPower> juju set mysql dataset-size=20%
<lazyPower> or 128M
<lazyPower> either value will work - but that's the fix for the local provider. It's documented in the readme
<andreabedini> no problem, main difficulty was troubleshooting
<gnuoy> wallyworld, I've attached logs from a deploy with --debug . I'll leave the environment up for the next few hours just in case you can think of anything else that would be helpful
<wallyworld> gnuoy: awesome, ty. i have a pull request up to add extra debugging also, which will come in beta2 on thursday. i must admit i'm quite perplexed at this stage as what i'm seeing doesn't make sense
<wallyworld> gnuoy: i'm off to soccer in a bit but will be back later when i'll look at the logs in detail
<jamespage> dosaboy, wolsen, gnuoy, coreycb: hey - I've switched over the quantum-gateway charm in /next to be called neutron-gateway
<jamespage> next.yaml being updated now
<gnuoy> kk
<jamespage> all outstanding merge proposals rejected with an appropriate comment
<dosaboy> jamespage: slick
<Baqar> Any openstack charm developers here?
<Baqar> About to pull out my hair with writing a unit_test for my charm ..
<tvansteenburgh> hazmat: dpb and i would like to do a hangout with you sometime to get a walkthrough on deployer and jujuclient releases, to make sure we actually know all the steps. let me know if/when you'd have time for something like that
<rick_h_> tvansteenburgh: frankban has been helping copying builds into the juju stable PPA. We generally work to get them into our team PPA and QA them with the gui charm and such and once they're ok we copy them to the juju stable ppa
<rick_h_> tvansteenburgh: so not at the start, but fyi towards the end
<tvansteenburgh> rick_h_: ack, thanks
<hazmat> tvansteenburgh: sure, its basically full ci smoke test, push to pypi, ping non daily ppa owners, and notify server team re new package for dev distro.
<hazmat> jcastro: tvansteenburgh: do you know if anyone is coming to gluecon next week?
<jcastro> not afaik
<tvansteenburgh> hazmat: cool thanks, what's a good day/time?
<hazmat> tvansteenburgh: how's the 26th work for you?
<hazmat> i'm basically slammed or traveling till then
<tvansteenburgh> hazmat: works for me
<hazmat> tvansteenburgh: anytime works, i'll be on pacific tz that day
<tvansteenburgh> hazmat: ok, i'll set something up, thanks!
<firl> anyone able to help me figure out why/where my charms for vivid are? I installed and bootstrapped a 15.04 environment on juju 1.23.2
<rick_h_> firl: what's up? which charms are you looking for? You set the series to vivid pre-bootstrap?
<firl> default-series: vivid
<firl> I am ultimately trying to install openstack kilo via charms
<rick_h_> firl: ok, I don't think the openstack charms are released for vivid. At least none in the charm store https://api.jujucharms.com/v4/search?series=vivid
<firl> Thatâs what I figured :( just figured I would ask here to double check
<rick_h_> firl: because charms tend to run production we usually encourage/check things against LTS
<firl> I completely understand
<rick_h_> firl: and in between folks can download/checkout the charms and use them locally on the in-between releases
<firl> yeah, I am just trying to figure out one of the easier ways to automate install Openstack Kilo
<rick_h_> but with the systemd change I'll bet a few charms need some love before running smoothly on vivid. That's just me guessing though
<rick_h_> firl: if you change the series to trusty you should be good
<firl> yeah, it looks like trusty is the way to go with me just changing the origin to openstack-origin: cloud:trusty-kilo
<rick_h_> firl: you might find http://summit.ubuntu.com/uos-1505/meeting/22426/openstack-kilo-updates-and-roadmap/ interesting from the recent online summit as well
<firl> nice, thanks rick_h_ I appreciate it
<marcoceppi> firl: you can run kilo on trusty, we backport all releases of Openstack to the previous Ubuntu LTS :)
<firl> marcoceppi: I am just trying to find the best way to get Kilo, the autopilot didn't seem to install kilo, so it looks like configuring charms one by one with kilo as the origin is the best bet
<marcoceppi> firl: yes, but you can take the openstack bundle and just edit it once for all the charms
<marcoceppi> instead of doing it by hand
<rick_h_> dpb1: ^ any word on autopilot/kilo? I didn't check out the session if that's on the way or there already?
<marcoceppi> I think the plan is that autopilot will be LTS/LTS (Ubuntu/OpenStack) but I'm not 100% on that
<rick_h_> marcoceppi: ah, gotcha
<marcoceppi> firl: https://jujucharms.com/openstack you can grab either the base bundle or another one, and just set the openstack-origin option for each charm in that bundle's YAML file
<firl> marcoceppi: oh yeah? I will have to try that, I was having issues with my juju-gui and deciding which services were where.
<marcoceppi> then use juju quickstart to deploy the bundle
<firl> I will try that now
<firl> via command line or gui
<rick_h_> firl: let me know if you've got juju gui issues.
<firl> will try it now, probably a 30 minute turn around on my MAAS environment
<marcoceppi> firl: either should work, you're just going to have to modify the "bundle.yaml" file for that deployment from "cloud:trusty-juno" to kilo, etc
<firl> rick_h_: I am trying to see where to specify the version via the juju-gui and I think I might be missing it, it looks as though i might have to deploy it first
<rick_h_> firl: from the bundle?
<firl> yes
<rick_h_> firl: ah, wait a couple of days? :)
<firl> haha ok, command line it is :)
<rick_h_> firl: the guys are finishing a new feature in the GUI to make bundles 'uncommitted' when you drop them on the canvas so you can change config/etc before you hit deploy
<rick_h_> firl: it's going through QA and some last updates today and hopefully out later this week if you care to try it out :)
<firl> thatâd be aweseome, because i have some machines that can run services but canât run emulation
<rick_h_> firl: yea, we're excited to get it out to allow you to take a large bundle and maybe tweak it down to size, or change up the services before you hit go
<firl> marcoceppi: rick_h_: I am having trouble finding documentation on how to change the origin for an entire bundle to deploy via the command line. Do I just clone: http://bazaar.launchpad.net/~charmers/charms/bundles/openstack-base/bundle/view/head:/bundles.yaml and then change each reference to openstack-origin and deploy a local bundle?
<rick_h_> firl: yes, unfortunately that's exactly what marcoceppi was saying. To get the bundle file and edit it and then deploy it once the file is changed
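The edit marcoceppi and rick_h_ are describing can be sketched in Python, assuming the bundle file has already been parsed (e.g. with yaml.safe_load) into a dict shaped like bundles.yaml; the deployment and service names below are illustrative, not the real openstack-base contents.

```python
def set_openstack_origin(bundle, origin="cloud:trusty-kilo"):
    """Point every service in a parsed bundles.yaml at a new origin.

    `bundle` maps deployment names to {"services": {...}}; each
    service's openstack-origin option is set in place, creating the
    options dict if the service didn't have one.
    """
    for deployment in bundle.values():
        for service in deployment.get("services", {}).values():
            service.setdefault("options", {})["openstack-origin"] = origin
    return bundle

# Illustrative fragment of an openstack-base style bundle:
bundle = {
    "openstack-base": {
        "services": {
            "nova-cloud-controller": {"options": {"openstack-origin": "cloud:trusty-juno"}},
            "glance": {},
        }
    }
}
set_openstack_origin(bundle)
```

After writing the modified dict back out to a file, you would deploy it locally with juju-quickstart as suggested above.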
<firl> So, configure maas, bootstrap a node, add the other machines to the juju environment, checkout bundle, change release, deploy bundle?
<hatch> firl: you'll want to let juju handle spinning up the machines the bundle will deploy to
<hatch> bundles can't deploy to existing machines
<firl> hatch: so should I bootstrap to a node that won't be used inside the bundle essentially?
<hatch> correct
<firl> understood ( destroying my environment again hah )
<hatch> haha sorry
<firl> no worries, all a learning process
<firl> while waiting for the bundle to deploy from juju-gui should I be seeing "agent-state-info: 'cannot run instances: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No available node matches constraints'" from a bunch of agents?
<lazyPower> firl: no, that means maas is having trouble fulfilling the request
<firl> had that fear
<lazyPower> ensure you have machines in the maas pool that meet the constraints passed, such as mem= cpu=
<lazyPower> that or change the constraints to fit your available pool of instances
<firl> lazyPower: I am just trying to install the openstack-base and I thought it only needed 4 nodes
<firl> but it tries to create 17 states essentially
<lazyPower> ah yeah
<firl> so I am confused lol
<lazyPower> each service has num_units that indicates how many machines to provision, and none of this is colocated
<firl> do i need to edit the bundle file manually?
<lazyPower> firl: there's a lot going on in here that will cause you trouble if you only have 4 machines available to you
<lazyPower> juju-quickstart doesn't understand colocation unless you explicitly call --to machine# (and this is discouraged - but can be done)
<firl> gotcha
<firl> gotcha, so I am better off manually building it out using the gui or auto pilot?
<lazyPower> the gui supports placement, but the bundle then becomes interesting - as you have to manually place them in the machine-view of the gui
<lazyPower> unfortunately i know very little about autopilot, i believe autopilot still assumes a certain number of machines.
<lazyPower> our opstack charmer team has EOD'd by now
<lazyPower> *openstack
<firl> gotcha
<lazyPower> but what I suggest is if you've got the time - to circle back and ping beisner, or gnuoy about what your options are with 4 machines aside from doing it manually
<lazyPower> i know the manual route will work - but its a bit more complicated that way, and untested in terms of guaranteed deployment. The formation that's in the bundle is what we've got under CI and is known to work
<firl> yeah, I am just doing this at home with only 9 physical machines
<lazyPower> 9 machines? :)
<lazyPower> sounds like my house
<firl> hah, at least just in the rack
<firl> I was using RDO, to do the deployment before, figured I would learn something new
<hatch> firl: if you're familiar with openstack you can drag and drop the appropriate services using the GUI
<hatch> bundle modifications in the gui are coming very soon
#juju 2015-05-13
<firl> hatch: yeah I am comfortable with it. I am trying to see for my co workers what might be easier for them to work with for node expansion / deployment etc
<hatch> ahh - well using the gui and bundles is definitely the way to go...later this week ;)
<firl> Any way to track that status so I can try it later?
<lazyPower> yeah hatch
<lazyPower> whats with these new toys i haven't heard about yet?
<lazyPower> you holdin out on us?
<hatch> firl you can follow our blog http://blog.jujugui.org/ or follow me on twitter @fromanegg
<hatch> lazyPower: we travel in different circles now ;)
<lazyPower> thats low hatch :|
<hatch> lol
<firl> hatch: thanks, is there a dev version I can try now by any chance?
<hatch> firl: there is, but deployment is a few steps because you have to run the dev version of the charm and of the actual gui source.
<hatch> If you're up for some bzr/git fun I can outline the steps
<firl> sure
<firl> I am bzr and git compatible
<hatch> haha ok give me a few
<firl> do I need a custom juju source client also?
<firl> or can i bootstrap
<hatch> nope, you can use whatever version of juju you have installed
<hatch> (assuming it's > 1.2)
<firl> yeah
<hatch> firl: https://gist.github.com/hatched/2dc93eddbcdc9a2c9974
<firl> hatch: ty, do I need to do anything special to have the untethered bundle so to speak?
<hatch> firl: so with this branch you'll be able to drag/import a bundle and it'll be 'uncommitted'
<hatch> so you can play around with it
<hatch> delete services etc
<lazyPower> that, is awesome news
<firl> sweet, can i choose the placement as well on the machines?
<hatch> there are known bugs that we're squashing just FYI :)
<firl> I completely understand, I work in software dev
<hatch> firl: yeah you'll need to 'destroy' the unit from the placed unit, and then add another unit and place that one
<firl> I think I understand enough
<hatch> the UX for that is a little funky but we're working on it
<firl> so I should do juju add-machine
<firl> first?
<firl> so I can âpreâ place each service
<hatch> nope the GUI will take care of all of that from the Machine View
<firl> will it spawn the 17 machine states again?
<hatch> nope - it won't do anything until you hit 'commit'
<hatch> well.. 'commit' then 'confirm'
<firl> ok I will have to putz around
<hatch> yeah - I'll be doing some docs on how to do it for release
<firl> yeah when I was trying the trusty bundle it was trying to provision 17 machines through my MaaS
<hatch> yeah - it does that now as a default
<hatch> which bundle?
<firl> openstack-base
<firl> https://demo.jujucharms.com/?deploy-target=bundle/openstack-base-34
<firl> 17 items in the sandbox
<hatch> firl: ok that bundle doesn't have machine placement details so when you drag it to the canvas, after a bit you'll see a bunch of blue bordered icons showing up
<hatch> once it's done switch to the machine view where you'll see a list of unplaced units on the left
<firl> ok
<hatch> create the machines you want in the gui and place them as you like
<firl> so I need to juju add-machine for each node then right?
<hatch> nope when you create a machine in the GUI it'll handle doing all that in the background for you
<firl> ah! nice ok
<hatch> you shouldn't need to touch the CLI
<firl> ok just for the bzr branch and deploying of gui
<hatch> right
<firl> understood, thanks for sharing the info I appreciate it
<hatch> because you're using dev version of both :)
<firl> apparently it is a large trunk hah
<hatch> yeah it's huge
<hatch> sorry I should have mentioned that
<firl> haha
<hatch> firl: I have to run but I'll check back in a bit later to see if you run into any issues
<hatch> firl: having any luck?
<firl> hatch: JUST got back
<firl> about to deploy since bzr finished
<hatch> great, well I'll be around now if you have any q's
<firl> cool, âerror: no service name specifiedâ when I do âjuju set juju-gui-source=develop"
<firl> and in the order of your gist it's after the deploy
<hatch> woops
<firl> ( not sure if it needs to be before )
<hatch> juju set juju-gui juju-gui-source=develop
<hatch> updated gist
<firl> rgr
<firl> then expose and run with it?
<hatch> yep - it'll take a bit as it needs to download the source and build it
<firl> download it from my local juju CLI client right?
<hatch> nope it actually downloads the latest source from github
<hatch> then builds it
<firl> so the local repo just holds the info of where to point to then?
<hatch> the gui has a charm and the 'app'
<hatch> we ship a built version of the app with the charm, but because you're using the bleeding edge you have to download the source and build it
<hatch> shouldn't take more than 5min
<hatch> most of that will be cloning the repo
<firl> gotcha
<firl> looks like it is rolling
<hatch> do you know how to get the admin password for the GUI?
<firl> nope
<firl> but I did just notice that it is a nodejs app
<firl> I assumed it would be the same as my environments.yml
<hatch>  head ~/.juju/environments/<whatever your environment is>.yaml
<firl> gotcha, after itâs built and going I take it
<hatch> there will be a line in there called 'password' or the like
<hatch> yep
<firl> yeah, it picked it up from the environment, or looks that way
<hatch> firl: feel free to file any bugs you find https://bugs.launchpad.net/juju-gui :)
<firl> thanks, not sure I would know what a bug is yet vs what I am just using wrong hah
<firl> UI just came up
<firl> So even though it says "This bundle will be deployed to your provider immediately" it won't?
<firl> hatch
<rick_h_> firl: hah, good feedback. That text will have to be updated.
<hatch> good catch
<hatch> rick_h_: I'll add that to the list....lol
<firl> lol i can open a bug i suppose
<hatch> it's ok I got it :)
<firl> :)
<firl> nice they are all blue
<hatch> great - now switch to the machine view
<hatch> the Machine tab at the top
<firl> yeah
<hatch> and you'll have a list of all the units on the left
<firl> so I should âadd machineâ from that view
<hatch> yup
<firl> and not worry about the containers on the right
<firl> ?
<hatch> once you create the machine you'll want to click on it
<hatch> then drag any units onto the 'Bare Metal' container shown on the right
<firl> just have it mimic my MaaS
<hatch> you can also create lxc/kvm containers in that right column if you so choose
<firl> âbare metalâ container on the right = âRoot Containerâ ?
<hatch> oh...yeah
<firl> rgr
<hatch> forgot we renamed that :)
<firl> yeah it threw me for a loop the first time
<firl> when I tried deploying to a non KVM-able machine
<firl> lets see how it rips
<firl> hatch: well, it provisioned the machines, it did not add any of the services to the root container though
<firl> and it lost all of itâs relations
<rick_h_> firl: give it time, if it fails it should come back with an error
<rick_h_> firl: otherwise it might take a bit to bring up the machines/get the charms/come up
<firl> I can check the juju logs, but juju status looks like it's done
<rick_h_> firl: :/ ok
<lazyPower> firl: w/ 17 machines, in the config that ships with the bundle it takes 15 minutes to come up
<firl> ok
<lazyPower> firl: check juju debug-log -x machine-0 (this filters out some of the stateserver noise)
<lazyPower> the relationships come up after the agent state reaches started
<lazyPower> and there's a ton of relationships in there.
<firl> ( from the juju-gui node )?
<lazyPower> you should be able to run that command from your workstation
<lazyPower> all those commands route through the Juju-API
<rick_h_> lazyPower: purdy (aside from the docker icon, which is a fix that we applied in search results but missed here) https://jujucharms.com/u/lazypower
<firl> yeah, it didnât work thatâs the only reason I asked
<lazyPower> rick_h_: docker icon?
<lazyPower> wat did i miss
<rick_h_> yea, the docker icon wrecks things
<lazyPower> is it docker or etcd?
<lazyPower> i updated the bundle - but the icon didn't update
<lazyPower> argh fingerssss, cooperate
<rick_h_> lazyPower: oh sorry, coreos I think
<lazyPower> yeah, its etcd then
<rick_h_> sorry yea etcd
<rick_h_> anyway, aside from that quite a nice profile page. You need to setup a gravatar
<lazyPower> ah i know what happened here - i'm pointing at hazmats old charm that needs a new icon
<lazyPower>     charm: cs:~hazmat/trusty/etcd-6
<lazyPower> i'll get a fix for that pushed shortly
<rick_h_> ah, gotcha
<firl> Ok, so every agent-state is 'started', all the 'services' are installed but not put on any of the machines, and there has been no log activity on any of the nodes for over 10 minutes
<rick_h_> firl: hmm, it sounds like the services were added but not actually deployed. hatch ^
<rick_h_> firl: we'll have to check into it and see if we can dupe it and if there's something off in the commands sent to juju with the new uncommitted work
<firl> anything I can do to help / log a bug for it?
<rick_h_> sorry it's not fully baked yet but hopefully shows where we're headed.
<firl> grab the logs / recreate etch
<firl> etc*
<firl> I'm not disappointed, it's awesome
<rick_h_> we'd need to pull a ton of info. I think it'll be easier for us to just do a manual deployment that has manual location stuff in it.
<rick_h_> it should duplicate, if we can't we'll be in touch :)
<rick_h_> cool, glad you like it
<firl> hah cool, I will mess around a bit with it, see if I can get a work around going
<firl> since it installed them just not deployed
<lazyPower> rick_h_: next ingest that should look a lot cleaner
<rick_h_> firl: hmm, you can try to reload the gui, check machine view, and see if it thinks anything is on those machines
<rick_h_> firl: it'll reload the data juju has
<firl> Already did that, it will allow me to add new units
<rick_h_> firl: and if nothing's there then it failed to tell juju to add those services to those machines
<firl> but the new HW MaaS machines show up
<rick_h_> firl: if it did not, then you can go to the scale up UX "Add units"
<rick_h_> and put them on
<rick_h_> the machines and commit a second round and hopefully that'll work out
<firl> yeah I was thinking about wiping the environment, then committing the machines first
<firl> waiting for them to be 'started' then applying the bundle
<rick_h_> can't do that
<firl> oh ok
<rick_h_> a bundle cannot reference machines in the environment already since there's no way for you to map your desire there
<firl> gotcha
<rick_h_> bundles are always just 'additive' vs a merge
<firl> makes sense
<rick_h_> best bet would be to wipe it and go to juju-quickstart with the bundle file you've got there.
<firl> I figured with the new gui, I could just drag and drop to existing machines
<rick_h_> firl: heh, not yet.
<rick_h_> firl: you'd have to drag each thing into place and that's a complex UI to get right.
<rick_h_> each service/unit
<firl> I will try the âunitâ additive
<firl> I donât understand it, but always fun to learn
<firl> looks to be working
<firl> rick_h_ after the units have started, should I see relations in the gui if juju status shows relations to clusters?
<rick_h_> firl: yes, you should see relation lines if there are relations in the juju status
<rick_h_> firl: if you don't please take a screenshot and the output of juju status (if you can share it, sanitize it) and file a bug please
<firl> glad you said that, I was about to clear the buffer
<rick_h_> firl: going to call it a night here but thanks for being a tester for us and checking things out. Really appreciate it.
<firl> not a problem
<firl> have a good night
<rick_h_> firl: if you hit/need anything feel free to send me a note rick.harding@canonical.com
<firl> thanks mate
<thumper> stub: ping
<stub> thumper: pong. Sorry I missed you yesterday - didn't notice my dead VPN
<thumper> stub: that's fine
<thumper> stub: the default trusty postgresql charm, what do I need to configure to get point in time backup/restore?
<stub> thumper: wal_e_storage_uri and the appropriate credential config items (wabs_*, os_*, aws_* depending on your cloud)
<thumper> hmm... ok
<thumper> I should really get around to setting that up at some stage
<thumper> I remembered one question I ha
<thumper> d
<thumper> I grabbed a backup from my server
<thumper> and wanted to restore it on my laptop version
<thumper> it has different db owner and database name
<stub> yes. that story is very poor. If you explicitly specify the database and roles options in the relation, recovery (or what you are doing) is easier but still sucks somewhat.
<stub> One of many things I want to address in the great Leadership Refactoring/Rewrite.
<stub> thumper: With what you have now, you need to manually go in and rename the database to the generated database name and fix ownership of tables etc. to match the generated username
<stub> If you had explicitly specified the database name and ensured all your tables etc. had relevant permissions granted to the roles you specify, then this isn't a problem.
<thumper> hmm.. fair enough
<thumper> stub: where does the automated daily backup cron live?
<stub> fair enough, but it is a horrible user experience. Security vs. usability as usual, but I'd like it to be better
<stub> thumper: backup_dir config item
<stub>  var/lib/postgresq/backups by default
<stub> ql
<thumper> not the files, but the cron to run it
<stub> oh, umm...
<thumper> what I'm wanting to do is to take a backup now before I upgrade my app
<thumper> just for peace of mind
<thumper> the daily one is about 6 hours old
<stub>  /etc/cron.d/postgresql
<stub> There was someone working on actions for this, but it didn't go so well
<thumper> hmm...
<thumper> haha...
<thumper> yeah
<thumper> I did find a way to stream the file off...
<thumper> juju run -e kawaw-prod --machine=0 'cd /var/lib/postgresql/backups && sudo tar -c kawaw-site.20150423.dump' | tar -x
<stub> actions can't return a stream IIRC, just stuff the backups somewhere and return the location for the user to retrieve.
<thumper> what is the param '7' to the backup script?
<stub> retention probably
<thumper> stub: exactly, what I plan on doing is having the action return the command to get the file off :)
<thumper> ah
<thumper> so with no args it backs up all databases?
<stub> I'd like a stream. stuffing things into temporary files sucks when you are dealing with terabytes
<stub> but I guess you use pitr for any of those systems anyway
 * thumper nods
<thumper> once we have storage working well, I'll talk to you about migrating my db off the local instance
<stub> thumper: It works well with the storage broker setup if you want to do things that way. Complex, but seems solid enough.
<thumper> yeah... wanted to skip that though
<mattyw> thumper, got a moment for a minor distraction?
<thumper> mattyw: depends
<thumper> mattyw: I'm not working
<mattyw> thumper, any idea if there's a simple way to get a units private address from the juju command?
<mattyw> thumper, juju run unit-get private-address could work - but doesn't seem to
<thumper> juju run would need to be run in the context of a unit
<thumper> let me try that
<thumper> $ juju run --unit postgresql/0 'unit-get private-address'
<thumper> 10.218.169.141
<thumper> mattyw: ^^
<mattyw> thumper, perfect
<apuimedo> jamespage: thanks for the reviews
<apuimedo> Should I re-target the keystone patch then
<apuimedo> Sorry that I didn't reply to that yet, I was a bit swamped in meetings
<jamespage> apuimedo, I've sorted and merged the keystone changes
<apuimedo> ah, cool :-)
<apuimedo> thanks for that ;-)
<jamespage> apuimedo, do you have a neutron-api branch up for review?
<apuimedo> neutron-api and neutron-agents-midonet I want to re-check today
<apuimedo> for some reason the sql connection data was not properly written
<jamespage> apuimedo, hmm - what's neutron-agents-midonet?
<apuimedo> badly rendered from the template
<apuimedo> that's more or less like the quantum gateway plugin
<apuimedo> that does the metadata agents and dhcp agents stuff
<apuimedo> but as a subordinate charm to neutron-api
<apuimedo> (and without gateway, since we need different things for that)
<jamespage> apuimedo, so it provides:
<jamespage> - nova-api-metadata
<jamespage> - neutron-dhcp-agent
<jamespage> - neutron-metadata-agent
<jamespage> that's pretty much exactly what the 'nsx' config option does on the neutron-gateway charm
<jamespage> apuimedo, pushing those functions onto neutron-api means it can't be deployed into a container
<jamespage> whereas we categorically know that neutron-gateway can never be container deployed
<apuimedo> doesn't 'nsx' config option also give you the gateway functionality?
<jamespage> apuimedo, no
<jamespage> just some 'network nodes services'
<jamespage> apuimedo, that charm may be a little misnamed
<apuimedo> 'a little' :P
<apuimedo> I read quite a bit of it
<jamespage> apuimedo, yes - plugin=nsx makes it do what you want
<apuimedo> but I guess not enough
<apuimedo> I have to remember why I ruled it out
<apuimedo> but it is likely that I thought that it was hardcoded to provide gateway
<jamespage> apuimedo, ack
<apuimedo> which is something that we need to provide on separate charm that uses midonet-host-agent as a subordinate
<jamespage> apuimedo, so in this case you would use neutron-gateway with midonet-host-agent I think
<jamespage> that's how we use it for NSX
<jamespage> nsx-transport-node gets deployed with the neutron-gateway and nova-compute
<apuimedo> nsx-transport-node then must be like our midonet-host-agent
<jamespage> apuimedo, do you have a deployment bundle yet?
<jamespage> apuimedo, I think so yes - for nsx it installs the right openvswitch version and then registers the edge back into the NSX controller
<apuimedo> no, I have a file openstack.cfg with all the configs
<apuimedo> and another with the relations
<apuimedo> the problem for having a bundle
<apuimedo> is that neutron-api requires at installation time a package which it does not have available
<apuimedo> python-midonetclient
<jamespage> apuimedo, this is a biggish problem
<apuimedo> yes
<apuimedo> I wish the neutron-api charm had an additional repos config option
<jamespage> apuimedo, does it actually need it? we set the config option for plugin to midonet
<jamespage> and then the charm should know what todo re extra repos
<jamespage> that's acceptable
<apuimedo> jamespage: you mean that I add a patch to neutron-api that when midonet is selected it installs the repo?
<jamespage> yes
<apuimedo> ok, let's look at that
<apuimedo> https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet_stable_midonet_backport_v2
<apuimedo> here was my last change
<apuimedo> I'll check it out
<apuimedo> and let's see where we could add it
<jamespage> apuimedo, proposed that so we can see the diff
<jamespage> https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet_stable_midonet_backport_v2/+merge/258978
<jamespage> apuimedo, ok - so first comment
<jamespage> apuimedo, midonet and neutron will both be users under the service admin tenant
<apuimedo> yes, that's the case
<jamespage> apuimedo, so you don't need to provide the midonet access credentials via config
<jamespage> apuimedo, as neutron-api already has access
<jamespage> apuimedo, and the ip and port should be done via a relation, not via config
<jamespage> apuimedo, juju add-relation neutron-api midonet-api
<apuimedo> that was my original plan
<apuimedo> but I think somebody here discouraged me from doing so
<apuimedo> I don't remember exactly the details
<jamespage> apuimedo, hmm - not sure who - if you remember who it was I'll go berate them
<apuimedo> which relation would it be providing?
<jamespage> apuimedo, neutron-api consumes midonet-api right?
 * jamespage looks
<apuimedo> it does
<jamespage> apuimedo, midonet-api
<jamespage> provides:
<jamespage>   midonet-api:
<jamespage>     interface: midonet
<jamespage> add a relation to neutron-api that consumes that
<jamespage> requires:
<jamespage>     midonet-api:
<jamespage>       interface: midonet
<jamespage> it's optional for when you deploy neutron with midonet - that's fine
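A hedged sketch of what the consuming side of that midonet-api relation might do with the advertised endpoint. The `host`/`port` key names and the helper itself are assumptions for illustration, not the real interface contract:

```python
# Sketch: neutron-api consuming midonet-api relation data.
# relation_settings stands in for what relation-get would return;
# the "host"/"port" keys are assumed names, not the actual interface.

def midonet_api_url(relation_settings):
    """Return the MidoNet API endpoint advertised over the relation."""
    host = relation_settings.get("host")
    port = relation_settings.get("port", 8080)
    if not host:
        return None  # relation not yet complete; wait for the next hook
    return "http://{}:{}/midonet-api".format(host, port)
```

Returning `None` when the data is incomplete mirrors the usual charm pattern of deferring until the relation re-fires with full settings.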
<apuimedo> jamespage: that's exactly what I was told I couldn't do for now :P
 * jamespage sighs
<apuimedo> I remember now
<jamespage> apuimedo, this is exactly what juju is really great at doing
<apuimedo> because it makes so much more sense
<apuimedo> exactly, you saw that I even use relationships to change between upstream and downstream versions
<apuimedo> I want relations to provide for everything
<jamespage> apuimedo, I guess the only case is where midonet is pre-installed somewhere already
<jamespage> apuimedo, yeah I'm not sold on midonet-repository
<jamespage> it's config, not a charm ;-)
<apuimedo> but it allows me to config once
<jamespage> gnuoy, ^^ what that you?
<apuimedo> change everywhere
<jamespage> apuimedo, hrm are you sure?
<jamespage> apuimedo, config can change
<apuimedo> if you change the repo it should change for all the midonet relations
<jamespage> apuimedo, I understand conceptually what its doing
<jamespage> apuimedo, but I think it's a sledgehammer to crack a nut
<apuimedo> you'd rather have the configuration be specified for each charm like you do with cloud-origin, is that right?
<jamespage> apuimedo, setting the same config on two deployed services is not that hard
<jamespage> apuimedo, yes
<apuimedo> mmm
<apuimedo> I'll think about it
<jamespage> apuimedo, its even easier with overrides in a bundle
<apuimedo> yes, in the bundle case it's much simpler
<apuimedo> I agree with that
<jamespage> apuimedo, ok - so how about this
<jamespage> apuimedo, provide those configuration options on midonet-api
<jamespage> apuimedo, and then pass them down to midonet-host-agent?
<apuimedo> you mean for the repo stuff?
<jamespage> that way you guarantee that all deployed units are using the same config
<jamespage> apuimedo, yes
<jamespage> there is a relation between midonet-api and midonet-host-agent right?
<apuimedo> yes
<apuimedo> it's used for setting the tunneling
<jamespage> apuimedo, so make that data part of the 'midonet-host' interface type
<jamespage> midonet-api -> midonet-host-agent
<jamespage> apuimedo, does that make sense?
<jamespage> it avoids the need for the extra charm, and still gives you the single point of control on setting source software config options
<apuimedo> It makes sense
<apuimedo> I want to fix neutron-api + quantum-gateway(with the change you proposed above) first
<apuimedo> I'd like to have that today
<jamespage> apuimedo, ok but target your changes at the neutron-gateway next branch
<jamespage> apuimedo, we've renamed that charm
<apuimedo> jamespage: thank God for that ;-)
<jamespage> apuimedo, I can't guarantee re-review today
<jamespage> have a lot to do for next week still
<apuimedo> will neutron-gateway-next work well with the current stable neutron-api?
<apuimedo> how stable are the neutron-api-next and neutron-gateway-next
<apuimedo> cause I want to use that for deployments soonish
<apuimedo> or should I target them and then do backports?
<apuimedo> jamespage: ^^
<jamespage> apuimedo, they are pretty stable
<jamespage> apuimedo, we operate a next-first policy
<jamespage> changes land there first, and can then be backported to stable
<apuimedo> that means that I should test it first with keystone-next as well
<apuimedo> then backport, right?
<apuimedo> jamespage: https://code.launchpad.net/~landscape/charms/trusty/neutron-api-next/trunk and the others first, then?
<jamespage> apuimedo, all of the branches are under ~openstack-charmers
<jamespage> not sure where that one came from
<apuimedo> that's the link from "view code" of the juju charm store
<apuimedo> jamespage: which of these branches for the resync to neutron-api-next https://code.launchpad.net/~openstack-charmers/charm-helpers
<jamespage> apuimedo, next branches don't appear in the charm store
<apuimedo> jamespage: so I should move the patches to charm-helpers and neutron api to the next branches now
<apuimedo> add the new relation
<jamespage> yes
<apuimedo> between neutron-api and midonet-api
<apuimedo> then backport them
<gnuoy> Mmike, if you're about I'd be grateful for any feedback on https://code.launchpad.net/~gnuoy/charm-helpers/no-create-if-none/+merge/258982
<gnuoy> and
<gnuoy>  https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/1454317/+merge/258981
<gnuoy> they relate to Bug #1454317
<mup> Bug #1454317: sstpassword often set to wrong value in cluster and ha relations <percona-cluster (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1454317>
<Mmike> gnuoy: ack, will take a look in a moment
<gnuoy> ta
<Mmike> gnuoy: one more thing, prolly needs a bug to be opened - the debian.cnf file, which has the password for the debian-sys-maint user, differs across units (in a multi-unit deploy). It's not a super-issue as the percona-xtradb-server package doesn't use it, but sometimes maintenance tools use that account (for instance, mysqltuner)
<Mmike> hm, this might actually be a percona-cluster-xtradb-server package bug, and not a charm bug
<gnuoy> Mmike, ok, I'll let you raise the bug if you think it's valid.
<Mmike> Ack - it is a percona bug, but we might create a workaround in charm... as I'm not sure how percona could fix this easily...
<Mmike> gnuoy: these look ok for me, just waiting for the deploy on ctsstack to finish before +1
<gnuoy> jamespage, do you think it's ok for me to review and merge lp:~le-charmers/charm-helpers/leadership-election ? I didn't actually contribute any code to that mp
<lazyPower> rick_h_: hey man, new profile page really pops :) It also highlighted i pushed to the wrong branch lastnight *doh*
<rick_h_> lazyPower: :)
<apuimedo> jamespage: could I add the repository from a config('midonet-repo') in hooks/neutron_api_utils.py:determine_packages() ?
<apuimedo> I'd do it just before the loop if config('neutron-plugin') == 'midonet'
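A sketch of the conditional apuimedo describes: only add the MidoNet repo when the midonet plugin is selected. The `midonet-repo` option name and helper are assumptions; the real charm would feed the result to something like charmhelpers' `add_source` before the package loop in `determine_packages()`:

```python
# Sketch: extra apt sources needed for the selected neutron plugin.
# config stands in for the charm's config() dict; "midonet-repo" is a
# hypothetical option name, and actually adding the source would be
# done via charmhelpers (e.g. add_source) in the real hook code.

def extra_sources(config):
    """Return apt sources to add before installing plugin packages."""
    sources = []
    if config.get("neutron-plugin") == "midonet" and config.get("midonet-repo"):
        sources.append(config["midonet-repo"])
    return sources
```

Keeping the check next to package determination means python-midonetclient becomes installable at install time without a bundle-wide workaround.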
<hatch> firl: hey did you get it working for you last night?
<firl> hatch: I ran into a possible bug
<firl> where once the units were associated with machines
<firl> when I hit commit, the services got installed to juju, but the units didn't get allocated and the relationships were not persisted to juju
<hatch> hmm that's very odd, were there any errors in the browser console that you saw?
<hatch> firl: from your bug it looks like they got units
<firl> I manually re associated
<hatch> ohh ok
<firl> so I think the bug I made, specifically, isn't valid
<firl> I think I was misreading the associations in the juju status
<hatch> ahh ok ok
<hatch> I'm quite interested in the issue where it didn't place the units
<firl> want me to replicate and give you remote access so you can see it?
<hatch> well the issues would be shown in the browser console if a notification wasn't generated
<firl> I can give a vnc shell or a port forward and let you do what I did
<hatch> so you could `juju set juju-gui juju-gui-console-enabled=true juju-gui-debug=true`
<firl> sure
<hatch> and then those errors 'should' be surfaced
<firl> give me a sec
<hatch> thanks!
<firl> np
<firl> takes about 15 minutes because of bare metal deploys
<hatch> no rush whenever you have time
<firl> also, changes made in juju via the console stopped being propagated into the juju gui the way the stable release shows them
<hatch> hmmm
<hatch> I wonder if trying a smaller bundle wouldn't be a bad idea
<hatch> something like the django bundle
<lazyPower> ...or docker-core...
<lazyPower> ;)
<hatch> I've been using that one for testing here so I know it works
<hatch> but requires the machine placements
<lazyPower> nice mix of subordinate + services there, representations of all the major concepts
<hatch> or that!
<firl> so django or full on openstack?
<hatch> firl: well I'm just thinkin that it might be better to try a smaller bundle to rule out any other issues
<firl> sure
<hatch> here is the docker one lazyPower was talking about https://jujucharms.com/u/lazypower/docker-core/4
<firl> again, no 'juju add-machine xyz.maas' first?
<firl> build it out all via the ui
<hatch> nope
<firl> ok
<hatch> yup
<firl> my MaaS is running on vivid as well, in case that makes any difference
<hatch> hmm it shouldn't - at least as far as I know
<firl> hatch: looks like the smaller bundle is working better than the large one
<hatch> ok so that's good, placements appear to be working as expected?
<firl> yeah it got further than the openstack one
<firl> relations persisted etc
<firl> let me try across 2 nodes
<hatch> lazyPower: https://jujucharms.com/u/lazypower/docker-core/4 one of your code examples isn't highlighted as code fyi
<lazyPower> hatch: bug plz <3
<firl> after deleting the services, I had to restart the web ui to be able to add another bundle
<hatch> hmm, what was it doing that requires you to do that?
<firl> I dragged the bundle to the canvas, didn't do it
<firl> clicked add manually, didn't do it
<firl> checked console saw some errors, restarted, repeated manually adding and it worked
<firl> I also recorded it
<hatch> ahh you have the errors? great
<firl> once docker is done deploying via 2 I will upload a video for ya
<firl> see if it's a user issue
<firl> hatch: https://vimeo.com/user29582213/review/127726021/09e7d5b4cd
<hatch> firl: just otp right now, will look in a few
<hatch> firl: thanks for that video - it's definitely not your fault, there is a bug there somewhere
<firl> cool
<hatch> the bundles were able to deploy properly? The smaller ones that is
<firl> yeah
<firl> did you want me to open a bug for it?
<hatch> it's ok I got it
<firl> I just finished recording a longer video showing the full issue
<hatch> oh that would be awesome
<firl> of how to replicate
<firl> kk
<hatch> if you wanted to file the bug so you get notified that would be ok
<firl> Either way, just trying to help
<hatch> I suppose you have a better idea of how you created it :)
<firl> sure
<firl> I will try the openstack bundle again to see
<firl> https://bugs.launchpad.net/juju-gui/+bug/1454750
<mup> Bug #1454750: bundle to canvas sometimes doesn't work <juju-gui:New> <https://launchpad.net/bugs/1454750>
<hatch> thanks!
<firl> no problem
<firl> well it might work this time
<firl> don't know if the code changed between yesterday night and today
<hatch> it hasn't
<hatch> at least, nothing beyond tests
<hatch> if it works....then yay?
<hatch> :)
<firl> haha yeah, I will try one more time to see if it is anything I might have done differently
<firl> is there a way to mass select and delete services?
<hatch> unfortunately no
<lazyPower> firl: if you have juju-deployer installed:  juju-deployer -T
<lazyPower> that will reset you back down to just the state server, you will need to re-deploy the gui
<firl> that's awesome
<lazyPower> man, actions + new status stuff in 1.24 is really nice
<lazyPower> hattip @jujucore for this
<hatch> lazyPower: yeah it's fun
<hatch> I'm going to add the actions to the gui charm and to the ghost charm at some point here
<hatch> surfacing those statuses will be so cool
<lazyPower> yeah
<lazyPower> i'm working on a benchmark suite for a charm i prototyped lastnight, this is silly nice
<lazyPower> i see some instances where my status set in config-changed are pretty bummer, and require extra logic to surface the proper message - but hey - this is 100% better than "installing","started"
<hatch> oo a benchmarking suite as a subordinate would be awesome
<hatch> firl: the first bug was closed (as discussed) and the second will be fixed before release
<firl> very cool
<firl> do you want me to just keep filing them? hah just found another where if a service on destroy has an issue, the UI leaves it "blue" but it shows up in juju status and I have to "re-destroy"
<hatch> yes please :) That would be very helpful
<firl> ok, replicated my issue
<firl> before I refresh the GUI is there anything you want me to do?
<firl> hatch
<hatch> those console errors, it would be nice if you could expand them to get a stack trace
<hatch> little red arrow to the left of the error message
<firl> cool
<firl> already did
<firl> file a bug for this, or did you want to see the video first?
<hatch> might as well file the bug
<hatch> I'm sure it's a real bug :)
<firl> hah
<hatch> and even if it isn't - if you're able to cause the failure that's an issue that needs to be fixed
<hatch> :)
<hatch> (but I'm sure it's a bug)
<firl> hah thanks
<firl> Anything else you want me to do with the environment before I blow it away ( going to do a landscape install )
<hatch> nope let it go!
<firl> very cool
<hatch> firl: just in case you didn't know there is a gui specific channel at #juju-gui :)
<firl> hatch: thanks, yeah when I first joined I didn't know which channel my issue was in, but you seemed to care more about the gui so I obliged
<hatch> there is also #juju-dev for development of juju core
<mattrae> using the latest version of juju-deployer, i appear to be hitting this bug.. juju deployer exits with 'watcher was stopped' at different points. https://bugs.launchpad.net/juju-deployer/+bug/1284690
<mup> Bug #1284690: all watcher api regression <hs-arm64> <juju-deployer:Fix Released> <python-jujuclient:Fix Released> <python-jujuclient (Ubuntu):Fix Released> <python-jujuclient (Ubuntu Trusty):Fix Released> <https://launchpad.net/bugs/1284690>
<mattrae> anyone know what may be causing the 'watcher was stopped' error with juju-deployer?
<mattrae> the comments seem to indicate that this error happens when the state server loses its connection to mongo
<bdx> charmers, openstack-charmers: From what I can gather, the openstack service charms deployed on trusty using the "openstack-origin: cloud:trusty-kilo" param are not performing an "apt update" before the openstack services get installed, and thus kilo packages are not installed until "apt update" and "apt upgrade" are run manually on each respective node for which a charm is deployed.
<mattrae> here's the exact output i'm seeing http://pastebin.com/DCtJpwRY
<bdx> charmers, openstack-charmers: Is this expected^^???
<lm_> hi
<mbruzek> Does anyone here know a charm that takes a payload URL and a hashsum as a configuration parameter?  (and also happens to be written in bash)?
#juju 2015-05-14
<rick_h_> lazyPower: hey there
<lazyPower> o/
<lazyPower> rick_h_: pong
<rick_h_> lazyPower: ty email away :)
<lazyPower> jrwren: ping - just got a big MP from foli w/ nagios bits targeted at my namespace charm of logstash
<blahdeblah> Any charmers still awake?  aisrael tells me that as long as "python tests/20-mytest" runs, then any test code should work.  But 20-mytest relies on python modules elsewhere in the charm tree; what's the correct method for including it in the python module path?
<marcoceppi> blahdeblah: still around?
<blahdeblah> marcoceppi: Sort of; past EOD & dealing with a broken machine at the moment
<marcoceppi> you can import sys and patch the pythonpath at the top of the test file if you're unable to import it by normal means
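A sketch of the sys.path patch marcoceppi suggests, placed at the top of the test file. The `hooks/` subdirectory is an assumption about the charm tree layout:

```python
# Sketch: make charm-tree modules importable from a test under tests/.
# The "hooks" subdirectory is an assumed layout; adjust to wherever
# the charm keeps its python modules.
import os
import sys

def add_charm_dir_to_path(test_file, subdir="hooks"):
    """Prepend <charm>/<subdir> (relative to the test file) to sys.path."""
    charm_subdir = os.path.join(
        os.path.dirname(os.path.abspath(test_file)), "..", subdir)
    sys.path.insert(0, charm_subdir)
    return charm_subdir

# At the top of tests/20-mytest one would call:
# add_charm_dir_to_path(__file__)
```

After this, `python tests/20-mytest` can import the charm's own modules without installing anything.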
<blahdeblah> marcoceppi: Yeah - found that in the latest unit_tests template and used that in my unit_tests.
<blahdeblah> Do we have any documentation about the preferred best practice for unit tests at present?
<marcoceppi> ah, I figured 7 hours of searching must have yielded something, sorry we couldn't get you an answer sooner
<marcoceppi> blahdeblah: not entirely, no
<blahdeblah> np - I know I'm on the wrong side of the planet. :-)
<marcoceppi> has anyone run into an issue with local provider and 1.24?
<aisrael> marcoceppi: The only problem I had was it wasn't a clean upgrade. I had to destroy and bootstrap my env
<aisrael> (1.24-beta1)
<aisrael> blahdeblah: Glad you found a solution!
<aisrael> The upgrade from 1.24-beta1 to beta2 went smoothly
<aisrael> And we have service-status and workload-status!
<lazyPower> woot
<jose> 'ello guys, having some troubles over here
<jose> `juju bootstrap --constraints "cpu-power=0 mem=512M"` is trying to launch a t2.micro instead of a t1.micro
<jose> any idea why?
<jrwren> "The t1.micro is a previous generation instance and it has been replaced by the t2.micro"
<jrwren> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html
<jose> jrwren: I've continued using t1.micro instances because they suit me for charming, has the core team decided to deprecate them?
<jose> t1.micros can still be launched from the aws console and everything
<stokachu> sinzui: have you guys noticed issues with juju trying to create vivid containers?
 * sinzui gets bug
<sinzui> stokachu, bug 1442308 is our nemesis
<mup> Bug #1442308: Juju cannot create vivid containers <ci> <local-provider> <lxc> <ubuntu-engineering> <vivid> <juju-core:Triaged by cherylj> <juju-core 1.24:Triaged by cherylj> <https://launchpad.net/bugs/1442308>
<stokachu> sinzui: thanks!
<sinzui> stokachu, in the details of the bug, I think we see that the container logic is using upstart :(
 * sinzui high fives Core for getting every branch blessed
<sinzui> hi ses
<sinzui> I can discuss compatibility tests now
<lukasa> Quick question: what's the incantation for deploying the /next branch of a charm?
<lukasa> Or do I have to check it out in bzr and deploy it locally?
<marcoceppi> lukasa: the latter
<lukasa> Ok. =)
#juju 2015-05-15
<gnuoy> jamespage, I'm guessing you don't have anytime today but if you do I'd love to get https://code.launchpad.net/~gnuoy/charm-helpers/current-dc-leader/+merge/259123 reviewed
<jamespage> gnuoy, aside for a constant for 'DC' lgtm
<apuimedo> jamespage: ping
<jamespage> apuimedo, hello
<apuimedo> jamespage: I am making the changes to neutron-api so that it installs the right repos and it uses the neutron credentials from identityservicecontext ;-)
<jamespage> apuimedo, +1
<tvansteenburgh> stub: you are ready for the new trusty/cassandra charm to enter the charm store correct? double-checking before i do it
<elurkki> Is it possible to configure juju to use specific maas zone for auto selected nodes ?
<lazyPower> maas tagging should give you that
<lazyPower> constraints="tag=zone"
<lazyPower> as an example
<lazyPower> there may be another tag that's more specific to the zoning integration...
<lazyPower> natefinch: ericsnow ^ any ideas on this? I'm pretty sure that tags are the only maas specific constraint supported...
<natefinch> hmm
<natefinch> you can do juju deploy some_charm --to zone=foo
<natefinch> There's no way to set up a default zone right now, though
<lazyPower> perfect, thanks natefinch
<lazyPower> elurkki: ^
<elurkki> lazyPower: Ahaa, ok. Thanks a lot
<elurkki> Will try this. Currently juju bootstrap errors on mongodb. Wrestling with that first
<lazyPower> elurkki: what's the stacktrace?
<elurkki> The last error I get is : "ERROR juju.cmd supercommand.go:430 cannot initiate replica set: cannot get replica set status: can't get local.system.replset config from self or any seed (EMPTYCONFIG)"
<elurkki> Not sure if this is from stacktrace
<lazyPower> i just recently encountered that
<lazyPower> my problem was that it used the public interface, which was firewalled and unable to initiate itself. let me fish up the bug i commented on
<elurkki> Not sure if this is related to the fact that DNS cannot resolve the node name
<lazyPower> this was however in an AWS VPC
<lazyPower> yeah, if it's using the nodename that will be it
<elurkki> lazyPower: I am totally newbie with maas & juju (second day here)
<elurkki> lazyPower: Not sure even if I have to manually add the DNS entry for DHCP reserved node
<lazyPower> if you're using maas you shouldn't have to
<lazyPower> it should be communicating with your region-server, and getting dns info w/ resolvable config back from that
<elurkki> lazyPower: That is what I thought. I am using Maas yep
<lazyPower> unless you're using external dns
<elurkki> MaaS DNS is used
<lazyPower> ok, yeah it should all be peachy
<elurkki> lazyPower: Ok. I will check system logs for possible problems
<lazyPower> https://bugs.launchpad.net/juju-core/+bug/1340663
<mup> Bug #1340663: bootstrap issue with replicasets on 1.20.1 with VM on MAAS provider <bootstrap> <landscape> <maas-provider> <mongodb> <juju-core:Invalid> <https://launchpad.net/bugs/1340663>
<elurkki> lazyPower: Ok, that looks a bit different than mine here. I will compare these a bit. Thanks a lot for pointer
<lazyPower> np
<Argon]> it feels like I'm asking a pretty stupid question, but I just don't get it. where does juju search for jujud when bootstrapping with --upload-tools?
<perrito666> Argon]: your path
<Argon]> perrito666: $PATH? and there it's searching for the appropriate tgz? hm, then I have to double check it again, the files are in my path
<perrito666> Argon]: i believe it looks for jujud bin in your path
<perrito666> in my case /home/hduran/gocode/bin/jujud
<marcoceppi> in mine it's /usr/lib/juju-1.24-beta2/bin/jujud
<perrito666> so I just go install github.com/juju/juju/... and then run the upgrade
<perrito666> marcoceppi: showoff
<marcoceppi> ;)
<marcoceppi> perrito666: you're the one running from source, I'm just using the stable/devel ppas
<Argon]> :/ both, jujud and the tarball are in my path but "no matching tools available"
<perrito666> marcoceppi: there is no good reason for using upload tools if you are not running from source
<perrito666> Argon]: what command are you running?
<marcoceppi> perrito666: true, I'm just saying
<Argon]> juju bootstrap --environment amazon --constraints "cpu-power=10 cpu-cores=1 mem=768M" --upload-tools
<Argon]> and I loaded an appropriate jujud from http://streams.canonical.com/juju/tools/devel
<Argon]> (this is supposed to be a test, going to use a different jujud later on)
<perrito666> throw a --debug in there and paste the result in a pastebin
<Argon]> oh there's debug? I wondered why -v doesn't work
<perrito666> there is --show-logs and --debug which are in turn like -v and -vv
<Argon]> https://gist.github.com/Argon-/a74263ba5a9309a0a5b4
<Argon]> I should run it without --upload-tools to see what's "new", I guess
<perrito666> Argon]: it is not working actually
<perrito666> you should see 2015-05-15 22:23:12 DEBUG juju.environs.sync sync.go:304 Building tools
<perrito666> try having constraints as the last param
<Argon]> I don't want to build them, though. just pass an already built binary. at least that's how I understood it: https://github.com/juju/juju/blob/master/cmd/juju/upgradejuju.go#L56
<Argon]> ok
<perrito666> Argon]: I think build refers to the targz
<perrito666> I know for a fact that what I just ran did not actually build it or it would have failed since my code does not compile right now
<Argon]> nope, no "Building tools", errors out with "no matching tools available" even without --constraints
<Argon]> (the debug output is exactly the same)
<perrito666> :|
#juju 2015-05-16
<elurkki> When I deploy e.g. Mediawiki, apache2 is listening on ipv6 but not on any ipv4 ports. When trying to browse to the public-address I cannot get a connection to the service. What have I done wrong? Thanks a lot
<elurkki> Doh, iptables was blocking juju-br0.
#juju 2015-05-17
<stub> tvansteenburgh: Yes
<med_> marcoceppi, business class? Dude, you rock!
<marcoceppi> med_: yeah, lucked out
<med_> you must be really racking up the miles.
 * med_ is in the cheap seats along with those that moo.
<med_> arosales said you were up there.
<med_> marcoceppi, order some champagne. We need champagne in the back....
 * med_ is MOSTLY kidding.
<arosales> med_, ha
<marcoceppi> med_: I'll see what I can do
<med_> +1
 * med_ sees a call button go on and a FA take an order and crosses his fingers
 * med_ crossed his fingers, not the FA
<marcoceppi> med_: what seat are you in?
<med_> marcoceppi, I'm in 23C
<med_> cheapseats
<marcoceppi> med_: whoa, the nose bleeds
<med_> yep
<med_> though some from my team are in the high 30s
<med_> marcoceppi, you rock. All my economy friends are jealous.
<med_> and since I don't actually drink, I gave it to one of my coworkers (who's moving away right after Vancouver). I owe you one in YVR.
<marcoceppi> med_: no worries man
#juju 2016-05-16
<lazyPower> cholcombe - FYI - for this version of the revq, bugs must be in a new, or fix-committed state in order to be ingested. otherwise they get ignored. The "in-progress" status of that bug would have actively prevented it from being ingested.
<lazyPower> i went ahead and ftfy
<neiljerram> lazyPower, hi, good morning.
<neiljerram> Regarding the problem with resource-get hanging... I just hit that; https://bugs.launchpad.net/juju-core/+bug/1577415/comments/2
<mup> Bug #1577415: resource-get hangs when trying to deploy a charm with resource from the store <juju-release-support> <resources> <juju-core:Triaged> <https://launchpad.net/bugs/1577415>
<lazyPower> neiljerram - yep. one of two things will fix that
<neiljerram> Is there a straightforward workaround, given that I'm deploying from a bundle?
<lazyPower> supply the ETCD bins (that make target is handy for grabbing them), or deploy it as a local charm so it hits the xenial fallback
<neiljerram> Do bundles in Juju 2 have a syntax for local charms?
<lazyPower> yep, instead of defining cs:foo/bar - give it a fullpath on disk
<neiljerram> A few days ago, IIRC, I had a bundle with "branch:" settings, and it was complaining...
<neiljerram> But perhaps you're saying that charm: <path> will work.
<neiljerram> ... yes, it appears so.
<neiljerram> lazyPower, Now with the local charm, juju status says "Missing Resource: see README".  Does that mean that the fallback hasn't worked?
<lazyPower> neiljerram - can you confirm the series is xenial and not trusty?
<neiljerram> Ah no, it appears the machine is trusty.  I must be missing something in my bundle.
<neiljerram> Is it because the charm's own metadata.yaml has trusty before xenial?
<lazyPower> neiljerram - give me a sec and i'll publish a revision of the charm that short circuts the resource-get calls
<lazyPower> in the interest of making it less painful while we sort out the store
<neiljerram> well if you like - here I just deleted the trusty decl from my local, and it looks like it's now deploying on xenial, so should work.
<lazyPower> \o/
<neiljerram> lazyPower, my etcd install has completed now
<lazyPower> neiljerram - juju ssh into that etcd node and poke it :)  your calico node should have barfed trying to talk to it, as it's TLS encrypted and it doesn't have client certificates yet
<neiljerram> even with -6?
<neiljerram> (I thought the TLS work came after that version)
<lazyPower> -7 should be the TLS bits
<lazyPower> -6 is just xenial support
<neiljerram> I'm using just -6 for now - focussing on the Xenial support first (plus other things for the relevant contract) first.
<lazyPower> ack
<neiljerram> FYI, http://pastebin.com/UnxWHXrd  I think that all indicates that it's not expecting https or TLS communication...
<lazyPower> yep, that's all the old port schema, and no TLS
<jamespage> icey, beisner: minor niggle as a result of the switchout of ceph-mon from ceph:
<jamespage> https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1577519
<mup> Bug #1577519: ceph-radosgw "Initialization timeout, failed to initialize" <landscape> <ceph-radosgw (Juju Charms Collection):New> <https://launchpad.net/bugs/1577519>
<jamespage> basically rgw gives up if it can't complete init in 5 mins - if no OSD's appear, then we hit that...
<jamespage> fginther, ^^
<jamespage> they do appear - just a bit later I guess
<icey> jamespage: I'm not certain that it is, I may try to reproduce with the ceph charm
<icey> jamespage: in essence, I suspect that would happen with the ceph charm deploying no ceph-osds
<jamespage> icey, quite correct
<jamespage> icey, its fairly easy to reproduce - just deploy ceph-mon with ceph-radosgw with no ceph-osd...
<icey> in essence, maybe we should hold off on allowing attachment until we have OSDs in the quorum jamespage
<icey> jamespage: I mean repro on the ceph charm since it's not specifically a ceph-mon+ceph-osd issue but rather an issue of allowing the related connections to get setup before we have OSDs
<jamespage> icey, like you say, this has always existed, but the switch to ceph-mon in containers triggers this more frequently in deployments; ceph was typically placed on hardware so had its own osd's
<jamespage> fginther, hey - think we have a root cause for the radosgw init failures...
<fginther> jamespage, nice!
<fginther> jamespage, sorry if you sent me a bunch of questions, been having network/IRC issues
<jamespage> fginther, its actually a bug that's always existed, but until the switch to ceph-mon, you would have been unlikely to see it
<jamespage> fginther, I suspect that ceph-osd is not the first charm down onto the physical machines right?
<jamespage> i.e. it gets done after nova-compute
<fginther> jamespage, let me check
<jamespage> fginther, well whatever the order, the time between ceph-radosgw starting up, and the first osd's joining the cluster is > 5 mins
<jamespage> fginther, as icey and I were discussing, the right fix is probably not to give out the keys to radosgw until we know there are some OSD's in the cluster...
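A sketch of that guard: the mon side only answers the radosgw key request once OSDs have joined, so radosgw never starts its 5-minute init timer against an empty cluster. Function and key names are illustrative, not the actual ceph-mon charm code:

```python
# Sketch: withhold the radosgw key until the cluster has OSDs.
# osd_count stands in for querying the cluster (e.g. parsing
# `ceph osd stat`); the response shape is an assumed example.

def radosgw_broker_response(osd_count, key):
    """Answer the radosgw key request only once OSDs exist."""
    if osd_count < 1:
        return None  # defer; the relation re-fires when OSDs join
    return {"radosgw_key": key}
```

Returning `None` relies on the usual charm behaviour that relation hooks run again when the cluster state changes, at which point the key is handed out.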
<fginther> jamespage, let me know if you have something to test. I've been hitting this frequently on one of my maas clusters
<fginther> jamespage, would this be a change to the charm or the service itself?
<icey> fginther: it's a change to the charm
<neiljerram> lazyPower, hi again - just hit a couple further issues with the relation between etcd-6 and other charms that incorporate an etcd proxy
<neiljerram> lazyPower, one is that apparently a proxy needs the peering URL for its ETCD_INITIAL_CLUSTER config; not the client URL.  In other words, typically, :2380 instead of :2379.
<lazyPower> neiljerram - ah i didn't notice it was using the management port
 * lazyPower snaps
<lazyPower> so much for deprecating an interface
<neiljerram> lazyPower, the second is that the connection string, that gets passed across the relation, is missing the cluster name; in other words it appears to be something like "http://172.18.19.20:2379", whereas what a proxy needs is "etcd0=http://172.18.19.20:2380"
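A sketch of the difference neiljerram describes: a proxy wants named peer URLs on the management port (2380), not a bare client URL on 2379. The unit naming scheme is illustrative; 2379/2380 are the etcd defaults:

```python
# Sketch: build an ETCD_INITIAL_CLUSTER value from peer addresses.
# A proxy needs "name=http://ip:2380" entries (peering port), not the
# bare "http://ip:2379" client URL. Unit names here are illustrative.

def initial_cluster_string(peers, peer_port=2380):
    """Map {unit_name: ip} to a comma-separated initial-cluster string."""
    return ",".join(
        "{}=http://{}:{}".format(name, ip, peer_port)
        for name, ip in sorted(peers.items())
    )
```

This is the string an etcd-proxy interface would need to pass over the relation instead of the client connection string.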
<neiljerram> lazyPower, yes, I'm afraid so. :-)
<lazyPower> neiljerram - think you're up to contributing the interface?
<neiljerram> I can hack around these things for now, but firstly was just wondering if I'd missed something.
<lazyPower> nope, you've caught me red handed at breaking you :(
<lazyPower> i would have gotten away with it too had it not been for you pesky kids and your dog!
<neiljerram> lazyPower, hehe
<neiljerram> lazyPower, so what would be involved in putting back a distinct proxy relation?
<lazyPower> neiljerram - it's a departure from what's familiar - https://github.com/juju-solutions/interface-etcd  -- basically modify this to be the etcd-proxy interface, and we'll need to feed it the management initial-cluster-string
<lazyPower> when i say modify, i mean clone / use as a template.  it'll be a brand new interface, but a majority of what needs to be there is there.
<lazyPower> nix the peers interface and go for a strict provides/requires
<lazyPower> and some slight tweaks to the layer to implement the relationship (metadata, supporting code in reactive/etcd.py to provide the relation data)
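For the "strict provides/requires" shape lazyPower mentions, the metadata.yaml declarations might look roughly like this (relation names are illustrative, not taken from the actual interface repo):

```yaml
# Hypothetical snippets, one per charm.
# In the etcd charm (the providing side):
provides:
  proxy:
    interface: etcd-proxy
# In the consuming charm (the proxy side):
requires:
  etcd:
    interface: etcd-proxy
```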
<neiljerram> lazyPower, OK, thanks - let's leave the matter here for now.  I'll review implications for my current project and follow up with you by email.
<lazyPower> sounds good neiljerram. sorry about the inconvenience :/   i wish i would have caught that sooner
<cholcombe> how do i pass --storage data=ebs,10G to amulet for testing?
<lazyPower> cholcombe - i dont think we have storage support in amulet, but i may be wrong
<cholcombe> lazyPower, yeah.  I see you can pass constraints but that won't help me
<lazyPower> iirc there's a bug to track that, but its under the umbrella of 2.0 feature compatibility
<cholcombe> lazyPower, yeah i found the bug
<cholcombe> lazyPower, ok so my charmhelpers promotion of gluster is prob on hold then until i can get storage support in amulet.
<lazyPower> cholcombe - feel up to contributing storage support? (its python, not rust)
<cholcombe> i might look into just hacking this in myself but i don't know if it'll get accepted
<cholcombe> lol
<lazyPower> :D sorry, obvious troll is obvious
<cholcombe> ;)
<beisner> howdy cholcombe, i think storage support will also be dependent on the provider used to run the test.  ie., i don't think the juju openstack provider understands storage yet.
<cholcombe> interesting
<lazyPower> ah right
<beisner> seems like currently a maas provider only kinda thing i think?
<lazyPower> https://jujucharms.com/docs/devel/charms-storage
<lazyPower> beisner - all supported substrates are listed here
<beisner> looks like it's more feature-ful than i recall :-)
<lazyPower> \o/  wooo features
<lazyPower> we like features
<beisner> https://review.openstack.org/#/q/topic:switch-to-bundletester
<beisner> ^ jamespage, wolsen, thedac
<beisner> 3 oscharms switched to run bundletester+amulet purely from venv via tox.  thedac reviewed & +1d, he and i are looking for add'l reviews before unleashing the sweep of updates.
<beisner> I'd suggest starting with the README_TESTS.md to see the new workflow.  The test bot is already wired to use this new method by priority, but fall back to the legacy stuff if tox func tests are not present.
<beisner> Can you have a sweep through, call out anything that's not clear?  tia.
<beisner> dosaboy tinwood too ^
<wolsen> beisner: sure thing, I'll queue it up for the day
<arosales> very likely pilot error, but are folks able to deploy a bundle on beta7?
<arosales> I can bootstrap, but once I deploy a bundle I get error on each machine
<tinwood> dosaboy, will take a look tomorrow.
<wolsen> beisner ^^
<beisner> thx tinwood wolsen much appreciated
<beisner> i'll let those set and chill, switching gears to charm upload/publish automation
<aisrael> tvansteenburgh: any known issues with bundletester and this error? ssl.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)
<beisner> tinwood, wolsen, dosaboy - fwiw, if you're bored ;-)  we still have "workload status vs init systems" raciness @ master for at least keystone, and i suspect others.  see thedac's keystone review.  he fixed one race path, but there is still something off.
<beisner> jamespage, fyi ^
<beisner> ref:  https://review.openstack.org/#/c/316195/
<beisner> bug 1581171
<mup> Bug #1581171: pause/resume failing (workload status races) <uosci> <cinder (Juju Charms Collection):Fix Committed by thedac> <glance (Juju Charms Collection):Fix Committed by thedac> <keystone (Juju Charms Collection):In Progress by thedac> <https://launchpad.net/bugs/1581171>
<lazyPower> arosales - is it an error at the machine level? (i assume yes)
<arosales> lazyPower: correct, error at the machine level
<lazyPower> arosales - and if thats the case, have you tried juju retry-provisioning? (this wont help in all cases... eg: out of resources)
 * arosales trying west-2 now
<arosales> was on east1
<lazyPower> arosales - are you on the CDP credentials?
<arosales> just wanted to see if others had been successful here, I suspect so.
<arosales> lazyPower: no
<lazyPower> yep, i've been in/out of AWS/GCE today without issue
<lazyPower> ok just checking, as thats getting a bit crowded lately
<arosales> west-2 now at least in pending
<arosales> instead of straight error
<arosales> lazyPower: ok thanks for the feedback
<lazyPower> well, thats progress
 * lazyPower is bootstrapping in us-east-1 rn
<marcoceppi> magicaltrout: ping
<tvansteenburgh> aisrael: you need the latest python-jujuclient
<tvansteenburgh> aisrael: install from pypi or ppa:tvansteenburgh/ppa
<tvansteenburgh> aisrael: ditto for juju-deployer
<aisrael> tvansteenburgh: ta!
<Brochacho> hey all, been trying to set up ceph-osd and ceph-mon but get a 'hook failed: "mon-relation-changed" for ceph-mon:osd ' message for ceph-osd. AFAIK I've got correct parameters in my YAML file. Output and config available here:
<Brochacho> https://gist.github.com/0X1A/fd19baf8e8db7586e8d5e753b54589fc Any ideas?
<tvansteenburgh> rick_h_: you around?
<tinwood> beisner, okay, something for me to look at tomorrow too :)
<beisner> tinwood, muchos
<cholcombe> Brochacho, hey there
<cholcombe> Brochacho, looks like you're using juju2, is that correct?
<Brochacho> cholcombe: Hey Chris, yes I'm using 2.0-beta6
<cholcombe> Brochacho, icey was just saying that he deploys on xenial with juju2 and it's fine.  hmm..
<cholcombe> Brochacho, can you do a juju debug-log --include ceph-osd/0
<cholcombe> i'd like to try and figure out what failed
<aisrael> arosales: any luck with those instances? I had one machine allocate fine but the next two failed
<cholcombe> Brochacho, is your filesystem ext4 on those osds?  That would give this error
<icey> Brochacho: can you jump into a hangout for a minute?
<arosales> aisrael: finally got a good deploy in us-west-2
<arosales> aisrael: thanks for checking
<aisrael> arosales: ack, thanks. I think us-east is my problem, too.
<Brochacho> cholcombe: Yes, one second. I respun the OSDs since they locked in error. Seems that when they error I can't use remove-unit
<Brochacho> icey: Yes, I can
<lazyPower> omg, there's a brochacho here
<beisner> ha!
<lazyPower> sorry for the absolutely do nothing ping... but you're my most abused slang word to refer to friends... just ask my co-workers
<lazyPower> i feel somehow validated...
<lazyPower> mbruzek hey, check this out. today is full of random occurrences of awesome
<mbruzek> what?
<lazyPower> theres a bro-chacho here... like... proper noun and everything
<lazyPower> its the little things...really :)
<icey> arosales: how did you switch to us-west?
<icey> oh, controller/region
<arosales> icey: 'juju set-default-region aws us-west-2'
<icey> thanks arosales
<arosales> icey: np
<bdx> hows it going everyone? Has anyone successfully bootstrapped lxd on a non lxdbr0 interface?
<bdx> ....like a bridge bridged to an external network
<bdx> non host only
#juju 2016-05-17
<marcoceppi> bdx: yes
<bdx> marcoceppi: can you let me in on the secret?
<jamespage> fginther, dpb1_: cs:~james-page/xenial/ceph-mon-gated-0
<jamespage> testing myself atm
<lazyPower> arosales icey  - also just fyi, you dont need to set defaults. juju bootstrap my-amazon-controller aws/us-west-1 would work equally as well.
<Rajith> Hi,   I am facing issue while installing mariadb from charm store, install hook error is displayed in status and logs have 2016-05-17 10:42:46 INFO config-changed gpg: requesting key 1BB943DB from hkp server keyserver.ubuntu.com 2016-05-17 10:43:07 INFO config-changed gpgkeys: key CBCB082A1BB943DB can't be retrieved
<lazyPower> Rajith - ah, seems like they may have updated their key. Their dev is west coast based, so there's a bit of a delay before i can get confirmation. Do you mind filing a bug against the charm? https://bugs.launchpad.net/charms/+source/mariadb
<Rajith> sure will raise a bug
<lazyPower> thanks Rajith
<icey> lazyPower: that's what I ended up doing
<lazyPower> right on.
<magicaltrout> I appear to have opened a can of monitoring worms
<lazyPower> magicaltrout you knew exactly what you were doing :) And i'm happy you did
<magicaltrout> hehe
<magicaltrout> its interesting to get all the different perspectives
<lazyPower> you started the discussion, you can lay claim to the fact its gone further than expected, but dont be sad/upset at all. its a brilliant convo
<fginther> jamespage, thanks, are you also porting that to trusty?
<jamespage> fginther, yes but I was testing on xenial
<fginther> jamespage, thanks
<shewless_> Hello. I'm trying to bootstrap lxd on 16.04 and I'm getting this error: "ERROR cannot find network interface "lxdbr0": route ip+net: no such network interface". I tried adding the bridge using "sudo dpkg-reconfigure -p medium lxd"
<shewless_> any ideas?
<lazyPower> neiljerram - which charm from calico relates to etcd? (sorry i should have looked this up) - i have a patch re-introducing the proxy relation
<lazyPower> shewless_ - ah that looks familiar. did you install a prior beta of juju?
<shewless_> lazyPower: I just did a "sudo apt-get install juju"
<shewless_> should I have added some ppa first or something?
<lazyPower> shewless_ - installation instructions for the current beta is here: https://jujucharms.com/docs/devel/getting-started
<shewless_> thanks I'll have a look
<lazyPower> np, dont hesitate to ping back if you get stuck
<jamespage> fginther, do you need that published to trusty for testing as well?
<neiljerram> lazyPower, it's neutron-calico and neutron-api
<fginther> jamespage, yes that would be great. I could start testing that today for trusty
<lazyPower> neiljerram - yikes thats a big dep chain :D
<neiljerram> lazyPower, for which one?
<lazyPower> neutron - full stop
<neiljerram> lazyPower, unfortunately yes.
<lazyPower> that comes with an openstack dependency. for validation of this branch i think i'll hack together a simple charm consuming the interface and go from there
<lazyPower> all we need is a mirror of the initial-cluster-string right?
<jamespage> fginther, cs:~james-page/xenial/ceph-mon-bug1577519
<jamespage> no not that one
<jamespage> one sec
<jamespage> fginther, cs:~james-page/trusty/ceph-mon-bug1577519
<neiljerram> lazyPower, that sounds sensible to me.  Yes, a proxy just needs a valid value for ETCD_INITIAL_CLUSTER.
<lazyPower> neiljerram - here's the interface code - i'm validating the reactive bits before pushing up - https://github.com/chuckbutler/interface-etcd-proxy
<neiljerram> lazyPower, It would actually be quite nice to have a charm for an etcd proxy - perhaps just a different flavour of the normal etcd charm, if that would make sense?
<lazyPower> would that be a subordinate then? as it needs to co-locate with another service right?
<neiljerram> lazyPower, Then I could just install that alongside neutron-calico and neutron-api, instead of putting etcd stuff into those charms.
<neiljerram> lazyPower, I'm not sure I completely understand, but I believe that "subordinate" is not exactly the same concept as "co-located"?
<neiljerram> lazyPower, For example, my current bundle uses decls like "to: [ bird ]" to indicate co-location.  But I think that's different from being subordinate, isn't it?
<lazyPower> well, yes and no. Subordinates will automatically deploy to every unit of the charm, occupying the same "containment" space as that charm. So if it's in lxd, both will be installed in the same lxd container. Or if it's on a bare metal server, both will be installed to that bare metal. It's effectively like using to, but different constructs. You can deploy a single unit of a different charm using the --to directive, and it has no notion of the supporting service
<lazyPower> whereas subordinates gain some notion of another service that it is dependent/interacting with over that relation, and it also replicates to every unit of the charm.
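The distinction lazyPower draws can be sketched in a hypothetical bundle fragment (charm names and revisions are placeholders):

```yaml
# Illustrative only. A subordinate declares no num_units and attaches to
# every unit of the charm it is related to:
services:
  neutron-calico:
    charm: cs:trusty/neutron-calico
    num_units: 2
  etcd-proxy:
    charm: cs:trusty/etcd-proxy   # hypothetical subordinate charm
relations:
  - ["neutron-calico", "etcd-proxy"]
# versus "to:"-style co-location of a normal charm, which places a single
# unit alongside another service with no notion of a relationship:
#   bird:
#     charm: cs:trusty/bird
#     to: [neutron-calico]
```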
<neiljerram> lazyPower, ah OK, it sounds like subordinate is the right concept, then.
<neiljerram> lazyPower, so if there was an etcd proxy charm, neutron-calico could say that it should have a subordinate etcd proxy; and similarly neutron-api.  Is that right?
<neiljerram> lazyPower, or would it be the bundle that said that?
<lazyPower> yeah, you'll still need to add the relationship bits to both charms, and whatever supporting data-pass that needs to happen
<neiljerram> lazyPower, we could almost get away without any explicit relation at all ... with the proxy users just assuming http://127.0.0.1:2379.  But that would be a slight hack.
<lazyPower> yep, deploy it without the sub and watch it break.
<lazyPower> with no clear indicator as to why :/
<neiljerram> lazyPower, OK, happy to defer to your experience on that point!
<lazyPower> using the sub approach you can use status messaging to tell the user its incomplete
<lazyPower> status_set('waiting', "Waiting on missing relation: etcd-proxy")   which is pretty clear whats happened, and not a fail.
<neiljerram> lazyPower, Yes, on reflection I agree that it absolutely makes sense to have an explicit relation; independently of whether any config data is passed across it.
<neiljerram> lazyPower, BTW - in case not already obvious to you - it would be super-easy to test an etcd-proxy charm: just deploy 1 etcd and 2 etcd-proxies; do etcdctl set /xxx yyy on one proxy, and etcdctl get /xxx on the other.
<lazyPower> nice. Thanks for the tip
<bdx> hello everyone. Does anyone know of a legitimate path around this -> https://github.com/juju/juju/issues/5411 ?
<bdx> I've been battling it for a few days now, thought I would see whats up :-)
<lazyPower> bdx - i cant say for certain but i think you're running into some assumptions the lxd provider is making on your behalf.
<lazyPower> admittedly i haven't tried the lxd provider with a modified networking profile by default
<lazyPower> I'm curious if it will work if you leave the default networking in-tact, and add your bridged network as a second nic on the container
<bdx> lazyPower: yea, that works fine
<lazyPower> bdx paydirt! thats a legitimate path around it
<bdx> ha - I'm paying thats for sure
<lazyPower> its a dirty hack, and i dont know what thats doing in terms of public-address/private-address
<lazyPower> but theory states, so long as you're not binding to an interface, it should work for remote-access to your lxd resources on your laptop so you can share w/ a co-worker
<lazyPower> ymmv
<bdx> try to bootstrap lxd with the default profile set to use a bridge on an external interface ... then tell me your theory :-)
<lazyPower> bdx - i mean *adding* that bridge on an external interface
<lazyPower> not replacing
<bdx> lazyPower: yeah ...
<lazyPower> same result?
<bdx> lazyPower: yeah
<lazyPower> > [12:11:01] bdx:	lazyPower: yea, that works fine
<lazyPower> i took that as "it works"
<lazyPower> neiljerram - https://github.com/juju-solutions/layer-etcd/pull/13
<bdx> oh .... so the problem is that you have to set the default lxd profile to use the bridge on the external interface too
<bdx> lazyPower: bc the lxd controller gets created with the default profile
<lazyPower> bdx something is getting lost in translation here. I'll TAL and see if i can beat it into submission later on
<catbus1> Hi, what's the developer tool to discover the interfaces of other charms?
<lazyPower> catbus1 - http://interfaces.juju.solutions
<bdx> awesome, thx!
<lazyPower> neiljerram - I'll push this up to my namespace. I based this off the Xenial branch so you can test w/o tls to start with as there's time there. This interface doesn't support the certificate exchange yet, we'll gate that all through at the same time when the tls-branch lands
<catbus1> lazyPower: let's say I am developing a subordinate charm that will need to work with an OpenStack charm, how do I find out the interface names and attributes of the openstack charms? I remember from the first charmer summit that there is a tool developed to discover that and make updates easier to manage.
<lazyPower> catbus1 - ah that information is listed in the charm store itself, right hand side at the top of the charms listing. eg: https://jujucharms.com/neutron-api/  -- click "show more"
<lazyPower> you'll get relation-name:interface
<catbus1> I see that. I remember seeing a command line way of finding it out.
<lazyPower> there's also charm show cs:xenial/neutron-api
<catbus1> the interfaces are also defined in metadata.yaml. I remember it allows you to pull in the charms you are going to work with and show the interfaces. maybe it's charm compose.
<catbus1> I think charm show is what I am looking for. Thank you!
<lazyPower> np :)
<Garyx> Hey, is it a known issue with maas 2.0 + juju 2.0 that ssh keys saved in the maas gui are not being deployed to nodes?
<Garyx> Using juju 2.0 beta 7
<lazyPower> neiljerram - and its up at cs:~lazypower/etcd-8   https://jujucharms.com/u/lazypower/etcd
<Garyx> Everything else seems to be working fine, but can't log in with my key pair
<lazyPower> Garyx - a skim through the bug tracker didn't surface anything that looked relevant. Thats certainly bug-worthy, as i know maas 2.0 + juju 2.0 has been a focus recently for the core team.  Any associated logs, steps to reproduce would be helpful   https://bugs.launchpad.net/juju-core/+filebug
<Garyx> lazypower Thanks, haven't found anything in the logs yet that could help. Was wondering if anyone had seen the same before filing a bug.
<Garyx> lazypower before I log a new bug the default user created is still ubuntu right?
<lazyPower> Yep
<Garyx> okies cannot see anything on my end then :/
<jamespage> beisner, thedac: I think this will do the trick, but I've not tested yet...
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/fixup-service-running/+merge/294942
<beisner> thedac, tldr;  SRU bug 1273462 caused bug 1581171.  bug 1582813 raised to ID it as a regression.    thx jamespage
<mup> Bug #1273462: Users can mistakenly run init.d scripts and cause problems if an equivalent upstart job already exists <init> <sts> <trusty> <upstart> <verification-done> <lsb (Ubuntu):Fix Released> <upstart (Ubuntu):Fix Released by xnox> <lsb (Ubuntu Trusty):Fix Released by zhhuabj> <upstart (Ubuntu
<mup> Trusty):Won't Fix by xnox> <lsb (Ubuntu Utopic):Fix Released> <upstart (Ubuntu Utopic):Fix Released by xnox> <upstart (Debian):Fix Released> <https://launchpad.net/bugs/1273462>
<mup> Bug #1581171: pause/resume failing (workload status races) <landscape> <maintenance-mode> <uosci> <ceilometer-agent (Juju Charms Collection):In Progress by 1chb1n> <cinder (Juju Charms Collection):Fix Committed by thedac> <glance (Juju Charms Collection):Fix Committed by thedac> <keystone (Juju
<mup> Charms Collection):In Progress by thedac> <https://launchpad.net/bugs/1581171>
<mup> Bug #1582813: service --status-all always reports upstart managed daemons as running <amd64> <apport-bug> <ec2-images> <regression-update> <trusty> <lsb (Ubuntu):Confirmed> <https://launchpad.net/bugs/1582813>
<mpjetta> Iâm trying to setup a new MAAS 2.0 setup with juju 2.0 and running into an issue. I can add the hardware nodes just fine and I see them PXE boot and cloud-init and whatnot when commisioning. However when I go to actually bootstrap the juju CLI to the maas, the nodes PXE boot once, shutdown and then just hang at "Booting local disk. WARN: No MBR magic, treating disk as raw. Booting..."
<mpjetta> any ideas?
<bdx> mpjetta: format your disks
<bdx> mpjetta: gpt + fat32
<bdx> should do the trick
<mpjetta> thanks, trying that now
<thedac> beisner: jamespage. Ah that makes more sense. I'll try and take a look today
<beisner> thedac, first - happy sprinting! :-)   2nd, i've got that sync'd into a couple of charms and testing where it was known failing w/out.
<thedac> great
<beisner> thedac, but plz do review that c-h change if you've got cycles
<magicaltrout> hmmmmmm!
<magicaltrout> maybe I should enhance my DC/OS mesos talk and head over to MesosCon
<magicaltrout> http://events.linuxfoundation.org/events/mesoscon-europe/program
<magicaltrout> http://events.linuxfoundation.org/events/linuxcon-europe
<magicaltrout> and linuxcon \o/
<lazyPower> magicaltrout - indeed you should!
<marcoceppi> magicaltrout: hey, you've got a bunch of instances running for a while in cdp, do you need them still?
<magicaltrout> juju status says I've got 1 failed node
<magicaltrout> anything open to me marcoceppi tear it down
<marcoceppi> magicaltrout: ack, there were leaks in earlier betas, will purge
<anops> Hi
<anops> I am unsure if juju is the right thing for me. I mainly do research and each project requires a custom and usually complex environment with a lot of dependencies. It's very important however to communicate, share and reproduce my environment
<anops> with fellow researchers and to eventually deploy it to a server
<anops> how can I create a custom juju app receipe/image/solution or however you call it? Do you think juju is the right thing for me? I run these projects on my local laptop.
<magicaltrout> anops: depends, if the charms you require are available or you're willing to build it can be very helpful in that regard
<anops> I would most likely need to build it, I work on machine-learning and web-development projects
<magicaltrout> you can build a redeployable system that would work the same on your laptop or in the cloud, or on someone else's laptop assuming they're running the same OS
<magicaltrout> a charm is a single service, you're basically describing a bundle
<anops> how complicated is it to build a juju app-solution... eh charm?
<magicaltrout> https://jujucharms.com/openstack-base/bundle/42 bit like that ;)
<magicaltrout> depends what the apps are and what they have to do, can you give me a clue?
<anops> For example a topological analysis platform for data, anomaly detection via a deep learning framework, or just classic data-analysis. On the other hand there are isolated little projects involving haskell, java, c, c++ etc., each most preferably in a container to keep dependencies isolated from the host system
<magicaltrout> okay well Juju has a new concept where you can use LXD on your local system
<magicaltrout> so instead of building docker images for each one, you use a vanilla Xenial/Trusty etc image
<magicaltrout> and install your stuff on it when its being spun up.
<magicaltrout> so you have encapsulation of your server within an LXD image
<magicaltrout> but without having messed around creating a bunch of docker images and pushing them up to dockerhub etc
<magicaltrout> at that point the charms understand the concept of relations
<magicaltrout> so if you join various deep learning components together, they can configure themselves and get themselves up and running without users configuring them directly
<magicaltrout> which is what those lines depict in a bundle
<magicaltrout> relationships between different charms
<anops> That sounds great, but wouldn't that make it extremely complicated to get running? Especially when you have to work on a hard research project and don't have too much time
<magicaltrout> dunno, thats a choice you have to make :)
<anops> don't want to sound lazy, but sometimes you just need to get work done :)
<anops> magicaltrout: do you have an example?
<magicaltrout> example of what?
<magicaltrout> a charm?
<magicaltrout> http://bazaar.launchpad.net/~spicule/charms/trusty/pentahodataintegration/trunk/view/head:/reactive/pdi.py thats the code that maintains my pentaho data integration charm
<magicaltrout> just controls the installation, which services are started etc
<anops> So juju is essentially something like ansible/salt/chef?
<magicaltrout> 165 lines of code so i can do "juju deploy pentahodataintegration" on my local server, AWS cloud or anywhere else that takes my fancy is pretty good bang for your buck i'd have thought
<magicaltrout> yerp
<magicaltrout> sorta :)
<magicaltrout> chef etc are generally lower level
<anops> I would get  my hands dirty with that, but where is the part that creates "the container" to run your pentaho instance?
<magicaltrout> i just run 'juju deploy ...' and if my local juju setup is configured to do LXD it will build the container and deploy it on the fly
<magicaltrout> which takes the best part of about 15 seconds
<magicaltrout> https://demo.jujucharms.com/ or you can use the gui
<magicaltrout> which does the same thing
<anops> hmm, ok but I don't deploy unless it works. Does it on-the-fly create containers locally and remove them when there's a bug in the python code?
<anops> It would be harder to manage if I have 20 containers and need to find out which one was the working one..
<magicaltrout> in my code above you can see me setting states
<magicaltrout>   status_set('maintenance', 'PDI Installed')
<magicaltrout>     set_state('pdi.installed')
<magicaltrout> stuff like that
<magicaltrout> so your charm would detect a failed bootstrap and set the state to failed
<anops> does that result in a zfs snapshot or a state in a state-machine where you can rollback to?
<anops> ah
<magicaltrout> the containers themselves will allow you to take zfs snapshots if you so desire
<magicaltrout> if a machine enters a failed state you can tear it down, login and fix it and mark it resolved
<magicaltrout> whatever floats your boat
<anops> Is the best way to start creating a charm setting up server 16.04 on my laptop?
<magicaltrout> yeah
<magicaltrout> marcoceppi: whats the best way to get charm tools? :0
<magicaltrout> apt-get install charm-tools i believe anops
<anops> is it possible to create a container with a personal desktop that runs on tty1? I'd like to keep the core, personal and work separated so that snapshots are more granular
<magicaltrout> then charm-create <charmname>
<anops> also more secure
<magicaltrout> https://jujucharms.com/u/kwmonroe/ubuntu-devenv/trusty/2
<magicaltrout> something like that i suspect you want
<magicaltrout> thats not GUI based but the idea is similar
<magicaltrout> https://jujucharms.com/ubuntu/0 or that
<anops> magicaltrout: oh you mean I should put the ubuntu-desktop into a charm too? I thought about lxc..
<magicaltrout> you can do it direct in lxc
<magicaltrout> lxc launch xenial
<magicaltrout> or something like that
<magicaltrout> lxc launch trusty
<magicaltrout> depends what you want to bootstrap
<magicaltrout> you might have some interesting issues with gfx though, although I did some ssh tunelling successfully
<magicaltrout> https://github.com/ustuehler/lxc-desktop or something like that
<magicaltrout> https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-containers/
<anops> cool, exactly what I need
<magicaltrout> alright i need to run anops, you'll probably find some life in this channel if you need more questions answered, if not, I'll be back in about 9 hours ;)
<anops> magicaltrout: have a nice day/night thank you very much for your great help!
<magicaltrout> no problem
<anops> I'm just looking into sandstorm.io for the web-app part, maybe that's the way to share these and juju for the heavy stuff like machine-learning
<jhobbs> is there a way to use a "charm: " option in a bundle to get the latest version of the charm for the series being used in the deployment (default-series)?
<jhobbs> like "charm: cs:nova-compute"
<jhobbs> if my default-series is trusty i want that to get the latest trusty nova-compute charm
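For reference, the bundle shape jhobbs is describing looks roughly like this (illustrative; whether an unpinned cs: URL resolves against the bundle's default series is exactly the open question):

```yaml
# Hypothetical bundle fragment with an unpinned charm URL.
series: trusty
services:
  nova-compute:
    charm: cs:nova-compute   # no series or revision pinned
    num_units: 1
```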
<blahdeblah> Quick Q: is there an overview showing which versions of juju2 are in stable/proposed/devel?  Or an easy way to obtain this information without having the repositories enabled on my machine?
<anops> blahdeblah: https://launchpad.net/juju-core maybe
<anops> or https://jujucharms.com/docs/stable/reference-releases
<blahdeblah> anops: Looked at the first one - couldn't see any reference to the ppas there, although I may need more coffee
<anops> the milestone timeline graphic
<blahdeblah> That doesn't appear to have anything to do with which version is in which ppa...
<blahdeblah> Looks like comparing https://launchpad.net/~juju/+archive/ubuntu/stable https://launchpad.net/~juju/+archive/ubuntu/proposed & https://launchpad.net/~juju/+archive/ubuntu/devel is easiest, but only the last has juju 2 packages...
#juju 2016-05-18
<marcoceppi> magicaltrout: yeah, apt get install charm-tools; then you can do `charm create` (sans the -) ;)
<jamespage> dosaboy, can you cast your eye over https://code.launchpad.net/~james-page/charm-helpers/fixup-service-running/+merge/294942
<dosaboy> jamespage: sure
<jamespage> dosaboy, ta - I was looking for the testing that beisner did yesterday - but he must have done it offline I think
<dosaboy> jamespage: i just realised that since your fix for 1580320 is ceph-osd only and some amulet tests use ceph for osds it won't apply so tests will still fail
<dosaboy> so i have a tmp workaround
<jamespage> dosaboy, I can apply to ceph as well
<dosaboy> jamespage: that would be the easy fix ;)
<dosaboy> or we could get the amulet tests using ceph-mon+ceph-osd
<dosaboy> but that means more resources for each test
<dosaboy> test run that is
<dosaboy> jamespage: i'm working around it with https://review.openstack.org/#/c/314063/10..11/tests/basic_deployment.py for now
<jamespage> dosaboy, https://review.openstack.org/#/c/317910/
<jamespage> dosaboy, beisner: raised https://bugs.launchpad.net/juju-core/+bug/1583109
<mup> Bug #1583109: error: private-address not set <juju-core:New> <https://launchpad.net/bugs/1583109>
<gennadiy> hi everyone, is it possible to deploy custom openstack image from juju charm?
<gennadiy> we have a few snapshots and we want to use them in our charms
<gennadiy> seems it's not 'true' juju way
<jamespage> beisner, shall we land the ch fix for service_running and then generate the master branch re-syncs?
<jamespage> that would exercise this bug well
<beisner> jamespage, sounds good.  just confirming that the trace log gets enabled as planned on 1 short run now.
<jamespage> beisner, okies
<jamespage> going to eat lunch
<simonklb_> anyone familiar with the sync-watch plugin? for some reason it's causing two updates each time I re-build my charm - any idea why?
<jamespage> beisner, ok going to raise reviews for resyncs right now
<beisner> jamespage, merging your c-h thing now
<jamespage> beisner, already done
<beisner> jamespage, oh you beat me to it
<jamespage> just pushed it
<beisner> ha, good deal
<jamespage> dosaboy, what's your bugref for the IPv6 thing?
<beisner> jamespage, skip ceilometer-agent.  i've got a review that i'll just resync and append.
<dosaboy> jamespage: there's this one for the charm bug - https://bugs.launchpad.net/charm-helpers/+bug/1581598
<mup> Bug #1581598: ipv6 enabled charms don't understand mngtmpaddr flag <ipv6> <openstack> <sts> <Charm Helpers:In Progress by hopem> <ceilometer (Juju Charms Collection):In Progress by hopem> <ceph (Juju Charms Collection):In Progress by hopem> <ceph-mon (Juju Charms Collection):New> <ceph-osd (Juju
<mup> Charms Collection):In Progress by hopem> <ceph-radosgw (Juju Charms Collection):New for hopem> <cinder (Juju Charms Collection):In Progress by hopem> <glance (Juju Charms Collection):In Progress by hopem> <heat (Juju Charms Collection):In Progress by hopem> <keystone (Juju Charms Collection):In
<mup> Progress by hopem> <neutron-api (Juju Charms Collection):In Progress by hopem> <nova-cloud-controller (Juju Charms Collection):In Progress by hopem> <nova-compute (Juju Charms Collection):In Progress by hopem> <openstack-dashboard (Juju Charms Collection):In Progress by hopem> <percona-cluster
<mup> (Juju Charms Collection):In Progress by hopem> <rabbitmq-server (Juju Charms Collection):In Progress by hopem> <swift-proxy (Juju Charms Collection):In Progress by hopem> <swift-storage (Juju Charms Collection):In Progress by hopem> <https://launchpad.net/bugs/1581598>
<jamespage> dosaboy, beisner: OK re-running now with amended commit message - I'd not raised any reviews just yet...
<dosaboy> jamespage: once it all lands i can kick off an ipv6 run
<jamespage> beisner, dosaboy: ok here they come
<jamespage> https://review.openstack.org/#/q/status:open+branch:master+topic:bug/1581171
<mup> Bug #1581171: pause/resume failing (workload status races) <landscape> <maintenance-mode> <uosci> <ceilometer-agent (Juju Charms Collection):In Progress by 1chb1n> <cinder (Juju Charms Collection):Fix Committed by thedac> <glance (Juju Charms Collection):Fix Committed by thedac> <keystone (Juju
<mup> Charms Collection):In Progress by thedac> <https://launchpad.net/bugs/1581171>
<kjackal> Hey lazyPower, do you have a minute? It's about an exception I am getting when I stop filebeats.
<lazyPower> sure, whats up?
<kjackal> https://pastebin.canonical.com/156708/
<kjackal> Let me also tell you what i am deploying
<lazyPower> I've seen this before kjackal
<kjackal> https://pastebin.canonical.com/156709/
<kjackal> Do we know what it is?
<lazyPower> unit-filebeat-0[15988]: 2016-05-18 13:06:11 INFO unit.filebeat/0.stop logger.go:40 subprocess.CalledProcessError: Command '['relation-list', '--format=json', '-r', 'elasticsearch:5']' returned non-zero exit status 2 <--  its trying to call relation-list on a relation thats been removed right?
<lazyPower> i only got this once, and was unable to reproduce it
<lazyPower> is the unit still alive?
<kjackal> yes
<lazyPower> paydirt. is the ES unit still alive?
<kjackal> yes, the entire deployment is online
<lazyPower> hmm.. when i encountered this, the beat subordinate did not have an active relation, and the scope.unit relation interfaces were still trying to contact a disconnected elasticsearch unit
<lazyPower> perhaps thats not the problem here
<kjackal> waitup
<lazyPower> which makes me think i should go peek at the elasticsearch interface to make sure i'm removing its .available state
<kjackal> how do you check if you have active relations?
<lazyPower> either via juju status, the gui, or by running the relation-* hookenv cmds on the unit
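An illustrative transcript (not verified against a live unit; the relation name and id are taken from the traceback above) of what checking active relations via the hook tools might look like:

```shell
juju run --unit filebeat/0 'relation-ids elasticsearch'
# a healthy relation prints its id, e.g.:  elasticsearch:5
juju run --unit filebeat/0 'relation-list -r elasticsearch:5'
# lists the remote units, e.g.:  elasticsearch/0
# if the relation has been removed, relation-list exits non-zero --
# the CalledProcessError shown in the paste above
```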
<lazyPower> kjackal i think i found the root of the issue though
<lazyPower> https://github.com/juju-solutions/interface-elasticsearch/blob/master/requires.py#L32
<lazyPower> reactive is doing exactly what i told it to do. the .available state is still set,  (notice i only remove .connected) - so its hitting its cached data
<lazyPower> and trying to reference that relationship because its still marked as active (i think this all true, cory may mythbust me)
<kjackal> :)
<lazyPower> kjackal - has this been happening consistently?
<lazyPower> or is this the one-off that got hung up?
<lazyPower> s/the/a
<kjackal> it happened in two separate deployments
<kjackal> and i can reproduce it with juju resolved --retry
<lazyPower> ok let me grab a coffee and i'll push a fixed publish to my namespace
<lazyPower> if it works out, we'll bump the store revision
<kjackal> sounds good
<kjackal> so we need something like @when (elasticsearch available AND connected) call cache_elasticsearch_data(...)
<kjackal> let me try that
<lazyPower> kjackal - thing is, .available presumes .connected
<lazyPower> as .available is only ever set when we've gotten the connection string data we expect
<lazyPower> i think the problem lies in not removing that .available state on that conversation object
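To make the failure mode concrete, here is a minimal, self-contained simulation in plain Python (hypothetical names; the real code manages charms.reactive states on conversation objects) of why removing only `.connected` on departure leaves a stale `.available` handler firing:

```python
# Simulated reactive state set for the elasticsearch interface.
states = set()

def joined():
    states.add('elasticsearch.connected')

def data_received():
    # '.available' is only ever set once connection-string data has arrived
    states.add('elasticsearch.available')

def departed_buggy():
    # only '.connected' is removed -- '.available' survives the departure
    states.discard('elasticsearch.connected')

def departed_fixed():
    states.discard('elasticsearch.connected')
    states.discard('elasticsearch.available')  # also drop availability

def should_run_available_handler():
    # stands in for a handler gated on @when('elasticsearch.available')
    return 'elasticsearch.available' in states

joined(); data_received(); departed_buggy()
print(should_run_available_handler())   # True: stale handler still fires

states.clear()
joined(); data_received(); departed_fixed()
print(should_run_available_handler())   # False: handler correctly skipped
```

With the buggy departure, the handler still fires against cached relation data and crashes calling relation-list on a relation that no longer exists; dropping `.available` as well skips it.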
<lazyPower> kjackal https://github.com/juju-solutions/interface-elasticsearch/pull/6
<kjackal> let me try to test this... it will take some time...
<kjackal> Ha! I believe the charm is killed now!
<lazyPower> yeah?
<kjackal> Yes, it seems to work!
<kjackal> Good job lazyPower!!! Not so lazy!!!
<kjackal> :)
<lazyPower> :) Thanks
<kjackal> So, do you have any eta on when this will be available?
<kjackal> Do you want me to merge it?
<lazyPower> url: cs:~lazypower/filebeat-0
<lazyPower> give that a go in place of the store url if you dont mind. i'd like an a-z test before we merge this assuming its fixed
<kjackal> let me do a full deployment with that charm
<kjackal> These easy fixes are too good to be true! But I am optimistic!
<lazyPower> :)
<lazyPower> I am as well
<lazyPower> i'm curious why this wasn't rooted out i the bundle tests
<lazyPower> *in
<beisner> jamespage, dosaboy, thedac, tinwood - fyi - i'll be doing some readme-only change/reviews and landing them to iterate a bit on gerrit change-merged stream advertisements and triggers wrt getting charm upload jobs set up in the chain.  just no-op noise, safe to ignore.
<kjackal> lazyPower could I have also a url for topbeat? The fix is in the beats_base and affects all beats charms
<lazyPower> sure, 1 sec and i'll rebuild that as well
<kjackal> thank you
<lazyPower> url: cs:~lazypower/topbeat-0
<gahan> I'd like to update /etc/neutron/dhcp_agent.ini using juju (as it is maintained by juju). can someone enlighten me how to achieve this through the cli or gui?
<cory_fu> I forget, does quickstart work with bundles that reference local: charms?
<lazyPower> beisner - i know we've exposed config points for cinder like this, where you can modify config with a sub. Is this also true with neutron that you're aware of? (re: gahan's question above)
<lazyPower> cory_fu negative
<cory_fu> Blast and damn
<lazyPower> thats why deployer was so long-lived
<lazyPower> that and its wrapped by our testing tooling
<beisner> gahan, curious, what specifically are you needing to mod?
<gahan> beisner: I heard if I modify the dhcp_domain field in the mentioned .ini file, it will change the value of the "search" field in /etc/resolv.conf for all new instances in openstack
<beisner> gahan, indeed, in liberty and prior you could do that there (ref:  http://docs.openstack.org/liberty/config-reference/content/section_neutron-dhcp_agent.ini.html) ... but it was deprecated and removed @mitaka.  i'm not sure where the equivalent is off hand.
<gahan> beisner, I'm still using liberty :) thanks
<beisner> gahan, typically, any adjustments to those confs must be done via charm config values.  if you modify the file by hand, your changes will be overwritten when the conf is re-rendered from templates by the charm, which could happen at any time.
<gahan> it doesn't seem like this particular one is exposed or I'm looking in wrong place
<beisner> gahan, we'd need to add a charm config option, but also figure out what it means for mitaka and later.  i'd imagine it can be set by some other approach there.
<beisner> gahan, right.  we don't expose 1:1 all conf options as that would be massive.  we typically encapsulate sane common defaults with knobs and levers to tune things to common needs.
<beisner> ah, dns_domain in neutron.conf for >= mitaka it appears
<beisner> so this *could* be added to the charm so long as the conf template rendering logic is set up to plop the config in the right file depending on release version (we do that for other things, so that framework is already there).
<beisner> it'd have to go into the next/dev (master) charm first, which would then be released at either 16.07 or 16.10.
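For clarity, the two locations discussed would look roughly like this (values illustrative; option names as given in the conversation above). Per beisner's caveat, such values should be set via charm config where possible, since hand edits are overwritten on the next template render:

```ini
# /etc/neutron/dhcp_agent.ini -- liberty and earlier (option removed in mitaka)
[DEFAULT]
dhcp_domain = example.internal

# /etc/neutron/neutron.conf -- mitaka and later
[DEFAULT]
dns_domain = example.internal
```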
<lazyPower> hmm. i haven't validated this assertion yet, but do we know when using 1.25 if i set environment constraints (eg; tags=lazypower) and then i have additional tags defined in the bundle if those are additive (eg: tags=lazypower,bundle-constraint)   or if the bundle constraints override the environment constraints
<lazyPower> beisner i think you have experience in this ^
<beisner> lazyPower, hmm i've only ever used constraints at bootstrap and deploy time (ie. in the bundle)
<beisner> lazyPower, i would think the constraint in the bundle would win though
<lazyPower> yeah i was afraid of that :/
<lazyPower> i'm trying to come up with a good workflow to test these submissions, without stepping all over oil's toes and without modifying the bundle
<lazyPower> i guess i'm caught in a corner where i have to mod the bundle. *snaps*
<beisner> lazyPower, sec...
<jamespage> beisner, I think we should put this resync through on a smoke only
<beisner> jamespage, ack
<beisner> jamespage, maybe kick a few fulls to try to trap this juju core bug?
<beisner> i kicked 1
<jamespage> sure
<jamespage> but I'd not block +2 landing on that basis
<jamespage> infra queue must be long.... atm
<jamespage> UOSCI is besting it hands down...
<lazyPower>  i think they *are* merging the tag constraints, at least thats the behavior i'm seeing with this deploy. #TIL
<beisner> jamespage, she's a hopping atm for sure
<beisner> lazyPower, we use this to strip or insert constraints on the fly in osci:  http://bazaar.launchpad.net/~uosci/ubuntu-openstack-ci/trunk/view/head:/tools/bundle_constrainer.py#L18
<beisner> lazyPower, i've got a WIP refactor of that into our git namespace with other features, but that one is functional.
<lazyPower> that would be excellent if i were generating this bundle
<beisner> lazyPower, pull it down, pipe it through that, get 0 constraints ;-)
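The linked bundle_constrainer idea can be sketched in a few lines. This is a hypothetical, stdlib-only illustration (the real uosci tool reads the bundle YAML; here the bundle is shown already parsed into a dict):

```python
def rewrite_constraints(bundle, override=None):
    """Return a copy of a parsed bundle with per-service constraints
    stripped, or replaced by `override` if one is given."""
    out = dict(bundle)
    services = {}
    for name, svc in bundle.get('services', {}).items():
        # drop any existing constraints key
        svc = {k: v for k, v in svc.items() if k != 'constraints'}
        if override is not None:
            svc['constraints'] = override
        services[name] = svc
    out['services'] = services
    return out

bundle = {
    'services': {
        'neutron-api-plumgrid': {
            'charm': 'cs:~plumgrid-team/trusty/neutron-api-plumgrid',
            'constraints': 'tags=nedge',
        },
    },
}

stripped = rewrite_constraints(bundle)                       # 0 constraints
retagged = rewrite_constraints(bundle, override='tags=lazypower')
```

Piping a proposed bundle through something like this avoids hand-editing it when you need different tags for your own test environment.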
<lazyPower> i think thats a big part of the disconnect here, is i'm testing an artifact of their submission, not really using the tooling OS/OIL uses
<beisner> oh, so they have a bundle that they're proposing with abnormal constraints?
<lazyPower> yeah
<lazyPower> tags=openstack, or tags=nedge
<cory_fu> arosales, kwmonroe, kjackal: Quick review of README updates for Bigtop PR: https://github.com/juju-solutions/bigtop/pull/2
<lazyPower> we kind of need that for placement however
<lazyPower> these charms have very strict hardware requirements; if a charm detects it is on a sub-par node, it panics, sets its status to blocked, and refuses to continue
<beisner> lazyPower, good thing actually.  much better than mysteriously failing
<lazyPower> yeah i'd rather it punch me in the face with a reason, than punch me in the face and tell me to go troll the logs :)
<beisner> jamespage, yah the zuul queue is large atm
<beisner> we have 42 of the 235 jobs queued up ;-)
<beisner> jamespage, bahh.  ceph-mon tox ini has site_packages True, which is bad.  it failed upstream CI, but passes ours b/c we of course have that installed.
<beisner> i pulled ceph-mon master, changed to sitepackages = False locally, and we fail identically to http://logs.openstack.org/57/318057/1/check/gate-charm-ceph-mon-python27/1ab631a/console.html
<beisner> we need to make a pass and flip those False on any charms that have that True
<jamespage> beisner, is apt in pypi?
<beisner> jamespage, not sure if it is.  but it seems to me that should be mocked out anyway
<lazyPower> jamespage sure is https://pypi.python.org/pypi/apt/
<beisner> jamespage, i'd suspect this passed before infra started re-paving hosts in the p->t upgrade efforts.
<beisner> or something along those lines
<jamespage> beisner, new virtualenv disables this by default
<beisner> either way, site packages pollute the test and will give differing results on different hosts
<beisner> jamespage, yes, but if it's explicitly set True in tox.ini, it'll still use site packages
<jamespage> yup
<beisner> we might be wise to inject False in case someone fires up with an older tox or virtualenv
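The change being proposed amounts to a one-line tox.ini setting, sketched here with otherwise illustrative contents:

```ini
[tox]
envlist = py27

[testenv]
# Force isolated virtualenvs: with sitepackages = True, tests can pass or
# fail depending on what happens to be installed on the host (e.g. python-apt).
sitepackages = False
deps = -r{toxinidir}/test-requirements.txt
```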
<arosales> cory_fu: thanks, taking a look at the readme now . . .
<jcastro> lazyPower: out of curiosity, have you tested the beats charms on xenial? wondering why they're trusty-only
<lazyPower> jcastro - at the time of writing that layer their archive did not have xenial packages available
<lazyPower> now that xenial is out, i think that may have changed. simple enough to add a series and kick off a bundle test. /me will do that later today
<jcastro> oh ok, so blocking on upstream packages
<lazyPower> yeah however i'm not certain thats still the case
<lutostag> where would I file a bug for the charm store?
<lutostag> ah there's even a link at the bottom of the page... duh ;)
<lazyPower> lutostag - we hid it down there in hopes it would be helpful :D
<gennadiy> hi everybody, seems i have some issues with charm publishing. as an example, i pushed the mesos-dns charm more than 1hr ago, but it hasn't appeared in the store yet.
<gennadiy> source code - https://code.launchpad.net/~dataart.telco/charms/trusty/mesos-dns/trunk
<gennadiy> is it possible to check what is wrong with it?
<lazyPower> gennadiy - launchpad ingestion was broken a couple weeks ago and has been unable to be restored. we're relying on the new charm publishing features of the store to continue progress
<lazyPower> gennadiy see: https://jujucharms.com/docs/devel/authors-charm-store
<lazyPower> gennadiy - bright side of the inconvenience is your publishing becomes instantaneous vs the 30/60 minute ingest waiting period.
<gennadiy> thank you @lazyPower i will review new mechanism
<jamespage> beisner, updated https://review.openstack.org/#/c/318057/ with mocking for apt
<jamespage> waiting for check to confirm that's OK
<jamespage> beisner, stable syncs
<jamespage> https://review.openstack.org/#/q/status:open+branch:stable/16.04+topic:bug/1581171
<mup> Bug #1581171: pause/resume failing (workload status races) <landscape> <maintenance-mode> <uosci> <ceilometer-agent (Juju Charms Collection):Fix Committed by 1chb1n> <cinder (Juju Charms Collection):Fix Committed by thedac> <glance (Juju Charms Collection):Fix Committed by thedac> <keystone (Juju
<mup> Charms Collection):In Progress by thedac> <https://launchpad.net/bugs/1581171>
<beisner> jamespage, ack thx
<gennadiy> @lazyPower - is it possible to remove charm?
<lazyPower> no removal, but you can remove the permissions from the charm so only you can see it
<lazyPower> charm grant --help
<gennadiy> ok
<gennadiy> another question: can we deploy specific openstack snapshot from charm?
<lazyPower> not sure what you mean
<gennadiy> @lazyPower - now we have snapshot/image of openstack machine and we would like to deploy it from juju.
<gennadiy> i know that it's not 'true' way. but maybe it's possible
<lazyPower> ah, i dont think we have any support for that, as we assume the charms to probe/setup as required. i'm not going to say its not possible, but i dont know thats a supported deployment method as of yet
<lazyPower> there has been talk of imaging support for providers that can support it, but i have no idea where that is on the roadmap if its a next cycle thing or cycle after next.
<gennadiy> @lazyPower - one more question: i have account https://launchpad.net/~dataart.telco but email login is tads2015dataart@gmail.com so when i make charm login it will not allow to push to cs:~dataart.telco/sipp
<gennadiy> as i understand it calculates name by primary email. am i right?
<lazyPower> launchpad name, or launchpad group actually
<lazyPower> its still backed by the mechanisms (groups, acls, etc) inherited by ubuntu SSO
<gennadiy> hm. "charm push . cs:~dataart.telco/sipp" returns error "unauthorized: access denied for user "tads2015dataart" but https://launchpad.net/~dataart.telco it's me
<gennadiy> a few mouth ago i renamed account from tads2015dataart to dataart.telco
<gennadiy> *month
<lazyPower> jrwren - any known issues with accounts that have changed their handle and the charm-store's backend bits?
<jrwren> lazyPower: yes, if the account name changed since they first logged into www.jujucharms.com
<lazyPower> gennadiy - does this sound like the situation? ^   did you log into the charm-store (jujucharms.com website) prior to renaming your launchpad account?
<gennadiy> also i can't login to jujucharm too
<gennadiy> yes
<gennadiy> yes i logged before rename
<lazyPower> jrwren - anything we can do to help gennadiy? And for future reference, if i encounter a user with similar issues, where should i direct them to file bugs/etc. so we can triage accordingly?
<gennadiy> does it fix issue if i remove account and create new with correct name?
<lazyPower> ideally we should be able to fix this... i'm not very familiar with that side of the codebase however, so i'm bugging jay for details :)
<Prabakaran> Hi Matt/Kevin, I was reviewing the merge proposal which you have given for ibm-java. It looks good to me, but I have a small doubt about the default package name and SHA value for JRE.
<Prabakaran> As per this merge proposal http://bazaar.launchpad.net/~kwmonroe/charms/trusty/ibm-java/may-2016/view/head:/reactive/ibm-java default package name and SHA values are used only for SDK in the reactive/ibm-java file (Between Line number 14 to 25).
<Prabakaran> How are the default package name and SHA values handled for JRE if they are not provided by the user, since the defaults only cover the SDK?
<jrwren> lazyPower, gennadiy i'm browsing our docs to see if we have notes on this issue. iirc, we did the same thing for aisrael
<Prabakaran> Kindly advise on this. Also, I was not able to deploy this merge proposal because of the if loop at line number 65. After accepting this merge proposal I will correct that line of code.
<mbruzek> Prabakaran: What is your concern with the ibm-java merge proposal?
<mbruzek>  Prabakaran: If you can fix it I think that is great. Let me check the code. Why is there a loop
<Prabakaran> we have a different package for JRE and SDK. My doubt is regarding these lines of code http://pastebin.ubuntu.com/16498220/ in the reactive file. How will the default package work for ibm-jre?
<Prabakaran> thats no problem. i will fix it.
<jrwren> gennadiy: can you file a bug with your account rename details at https://github.com/CanonicalLtd/jujucharms.com/issues ?
<Prabakaran> As per my understanding, this new proposal code will install the SDK primarily, and if the user wants to install the JRE he obviously has to feed the package name and its SHA through the juju set command. Is my understanding correct?
<mbruzek> Prabakaran: that sounds good to me.
<bbaqar> hey guys .. we can push revisions in charms using bzr right? i can see the revisions on launchpad but not is charm show cs....
<kwmonroe> yup Prabakaran - you got it.  the user gets to choose what installer to use (sdk or jre) by setting the installer filename
<kwmonroe> Prabakaran: so there is no need to set the default for a jre (in fact, there's no need to set any default installer or sha, since we say the installer filename and sha are required to be set in the config)
<mbruzek> bbaqar: You can push to your own namespace, but the ones in ~charmers need to be reviewed
<bbaqar> mbruzek: well i have a private team space where i can push charms ..
<bbaqar> i still cant get new revisions in
<lazyPower> bbaqar - launchpad ingest has been broken for about 2 weeks, we sent a notice to the mailing list
<lazyPower> bbaqar see: https://jujucharms.com/docs/devel/authors-charm-store
<mbruzek> bbaqar: Yes the automatic ingest is broken, but you can push using the document that lazyPower linked ^
<lazyPower> this will get you ramped up on the new charm push commands which will eliminate your need to wait 20/30 minutes for ingest. Make sure you fully read that document and grok the new ACL structure for charms. by default, they are only visible to you as the uploader, and go into an unpublished channel
<bbaqar> lazyPower: i believe i have done all this .. i ll go over the document once again
<lazyPower> bbaqar so just to confirm, charm publish . ~plumgrid-ons/series/mycharm does not work for you?
<mbruzek> notice the dot for current directory.
<lazyPower> i paraphrased the namespace, feel free to sub with the proper group string
<bbaqar> lazypower: i ran the exact same thing: charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid .. but the problem is it did not have the last two revisions that i pushed in the last week
<lazyPower> bbaqar - did you publish those charms you pushed?
<mbruzek> bbaqar: Did you publish those charms?
<lazyPower> by default they land in an unpublished channel. You have to both push, and publish against the appropriate channel (Stable/devel accordingly)
<Prabakaran> I am deploying and checking this ibm-java now.. i will accept this merge proposal. Thanks matt and kevin :)
<bbaqar> lazypower: yup .. charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-18 --channel stable    problem is i want 20th rev
<mbruzek> bbaquar: charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-20
<bbaqar> mbruzek: no matching charm or bundle for cs:~plumgrid-team/trusty/neutron-api-plumgrid-20
<lazyPower> bbaqar - i'm showing that -18 is head of what you have published. verified with `charm list -u plumgrid-team`  and additionally charm show cs:~plumgrid-team/trusty/neutron-api-plumgrid  does not show a -20 revision as available.
<lazyPower> bbaqar so where is the "20'th revision" coming from?
<lazyPower> is that the BZR id?
<mbruzek> bbaqar: Yeah I only see 18 revisions in the listing. Perhaps the permissions are not allowing us to see 19 and 20?
<bbaqar> mbrukzek, lazypower: http://bazaar.launchpad.net/~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk/changes/20?start_revid=20
<lazyPower> mbruzek - i'm pretty sure evne if its in unpublished, it will show up in the revision-info key
<lazyPower> bbaqar ah thats where the disconnect is coming from. those revision id's do not match whats in bzr, at all
<mbruzek> ah
<bbaqar> its there on launchpad
<lazyPower> its completely disconnected from VCS
<bbaqar> So am i doing something wrong here?
<mbruzek> bbaqar: Yeah as we mentioned the automatic launchpad ingest is broken, you have to manually push each revision, and then publish the ones you want to see.  The upside is no waiting 30 minutes for the automatic process, and the downside is that you have more manual work.
<lazyPower> bbaqar if you did a charm push, it will tell you what revision the charm store incremented to.
<mbruzek> So checkout the latest from bzr to your computer, then push it to the charm store.
<lazyPower> ^
<mbruzek> * Where more manual work is 2 additional "charm" commands.
<bbaqar> I love the fact that we dont have to wait 30 mins and i am willing to run as many commands it takes .. just trying to figure out what they are exactly .. so this is what i am going to do now
<mbruzek> bbaqar: the document that lazyPower linked you to outlines the steps
<bbaqar> okay so this is what i am going to do 1) bzr branch lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk 2) bzr ci -m "commit message" --unchanged 3) bzr push lp:~plumgrid-team/charms/trusty/neutron-api-plumgrid/trunk 4) charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid 5) charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid --channel stable
<bbaqar> mbruzek i did run that .. let me try this
<lazyPower> bbaqar nope
<lazyPower> bbaqar you *have* to specify the revision output in the charm push command to that charm publish command
<bbaqar> ohhhh h
<bbaqar> i get it ..
<lazyPower> which should increment to -19, as -18 is head
<bbaqar> wait let me try that
<gennadiy> @lazyPower - new publish charm tool - very cool. it shows errors in bundle! very useful
<lazyPower> gennadiy really happy its a positive impact on your experience :)
<lazyPower> jrwren you're getting praise and not targeted sir ^ <3
<gennadiy> one more improvement: add flag --grant-everyone to publish command
<mbruzek> gennadiy: and it lets *you* control what charms/bundles are in the store, no more automated ingestion
<lazyPower> gennadiy file a bug over here: https://github.com/juju/charm
<mbruzek> gennadiy: you can do that with the charm set acl command
<bbaqar> lazyPower: so is this right: charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid-19
<bbaqar> lazypower but it says error: charm or bundle id "cs:~plumgrid-team/trusty/neutron-api-plumgrid-21" is not allowed a revision
<lazyPower> bbaqar nope push doesn't need the revno
<mbruzek> gennadiy:  sorry the `charm grant cs:~kirk/foo everyone` command.
<lazyPower> only publish does; push is creating the entity, publish is registering it.
<lazyPower> bbaqar - so what you're looking to do is this:
<lazyPower> charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid
<lazyPower> which it should come back with some output like:
<lazyPower> url: cs:~plumgrid-team/trusty/neutron-api-plumgrid-19
<lazyPower> channel: unpublished
<lazyPower> Then you move into the publish phase if its ready for that:
<lazyPower> charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-19 --acl read everyone
<lazyPower> it will default to publishing into the stable channel. if this is a devel release, please target the channel appropriately
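Collecting the full flow from this thread into one place (commands as given above; the -19 revision number is illustrative output from the store, unrelated to bzr revnos):

```shell
charm push . cs:~plumgrid-team/trusty/neutron-api-plumgrid
#   url: cs:~plumgrid-team/trusty/neutron-api-plumgrid-19
#   channel: unpublished
charm publish cs:~plumgrid-team/trusty/neutron-api-plumgrid-19 --channel stable
charm grant cs:~plumgrid-team/trusty/neutron-api-plumgrid everyone
```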
<bbaqar> It returns the same revno url: cs:~plumgrid-team/trusty/neutron-api-plumgrid-18
<lazyPower> so, this is a super cool feature of push
<lazyPower> you dont have a change from what you pushed into the -18 revision; the charm command does store *small bits of metadata about the VCS tree*
<lazyPower> all that info is shown when you charm show cs:~plumgrid-team/trusty/neutron-api-plumgrid  --- pipe that into less and evaluate the output to see the revision it read in from the bzr metadata. It's not reading from, and is disconnected from, the VCS, but if that metadata is present, we will use it to our advantage and keep you from revving a charm with no changes
<lazyPower> bbaqar - one way you can validate is to `charm pull  cs:~plumgrid-team/trusty/neutron-api-plumgrid-18`  to a temporary location and then dir-diff that against what you have in your bzr archive. You should see its 1:1 copy
<lazyPower> also i think its worth mentioning that some of these nuances will be much easier to see once we have the new review queue launched, which offers diffing between revisions and is layer aware. no ETA on when that will be available though, other than we are actively cycling towards resolution.
<bbaqar> i understand .. yup you are right . .i pulled the charm .. and did a diff between the two directories .. its the same .. but one more thing ... i should be able to deploy the charm using juju deploy cs:~plumgrid-team/trusty/neutron-api-plumgrid-18
<lazyPower> That is correct, so long as you granted read access to the published charm to the user you are logged in as that's trying to deploy it, or gave read permissions to everyone
<mbruzek> bbaqar: It will depend on the permissions granted, but yes you should be able to
<bbaqar> i got it .. thanks guys ..
<bbaqar> thanks for patiently explaining it
<mbruzek> bbaqar: No problem please raise issues (link above) if you feel it can be improved.  If the documentation was not clear you can open an issue for that too
<lazyPower> np! if there's any verbiage we could explain better in those docs, i'd love to get a bug report from anyone struggling with the new publish workflow
<lazyPower> mbruzek we have finished assimilation, we are the same person
<lazyPower> mbruzek who are you?
<bbaqar> mbruzek: now that it is working .. this is awesome
 * mbruzek whoami
<lazyPower> > Pizza
<mbruzek> https://github.com/juju/docs/issues/new
<lazyPower> :O it all makes sense now
<mbruzek> So my other self (lazypower) gave you a link to create issues with the charm command, and the documentation link is there if you feel it could have been more clear (which I have a feeling it could have been)
<lazyPower> we are the juju singularity
<bbaqar> lazyPower: I ll read it once again and raise an issue if i think we should update the doc
<bbaqar> Once again .. guys this is cool .. i feel like i have control now ..
<lazyPower> \o/ thats awesome
<lazyPower> bbaqar glad it had a positive impact on your experience :)
<Prabakaran> Hi Matt/Kevin, I have tested ibm-java and i am able to deploy successfully. I have merged those changes into the stream. And also i have updated the charm store with the updated code https://jujucharms.com/u/ibmcharmers/ibm-java/trusty/7 . I hope it will be approved soon. Thanks for your help and support :)
<mbruzek> Prabakaran: OK we will give it another look
<Prabakaran> Thanks matt
<mbruzek> Prabakaran: did you also build the charm and update the bzr merge proposal
<Prabakaran> i did charm build with the latest source code and pushed using charm push command
<Prabakaran> mbruzek, All ok? is there anything that needs to be done from my end?...
<mbruzek> Prabakaran: can you give me link to the merge proposal ?
<Prabakaran> from which branch to which branch?
<Prabakaran> Source Code Repo : https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-java/source  This charm has been pushed into the charm store. Below is the link  Charm store link : https://jujucharms.com/u/ibmcharmers/ibm-java/trusty/7
<mbruzek> Prabakaran: The Juju charm store and bzr are different things.
<mbruzek> To be in the review queue you need to create a bug with a merge proposal
<mbruzek> You had one, let me find it
<Prabakaran> https://bugs.launchpad.net/charms/+bug/1477067
<mup> Bug #1477067: New Charm: IBM Java SDK <Juju Charms Collection:Fix Committed> <https://launchpad.net/bugs/1477067>
<mbruzek> Prabakaran: Yep that is what I needed.
<mbruzek> Prabakaran: my mistake, this one was a bug since there is no charm to merge with, so I incorrectly used "merge proposal"
<Prabakaran> thats no problem
<Prabakaran> nothing else from my end right..
<mbruzek> no
<Prabakaran> its time for me to sleep...if anything is required please email me.. thanks again for your great support :)
<kjackal> cory_fu: Do you have a minute?
<cory_fu> Sure.  We're in bigdata-daily
<cholcombe> interfaces.juju.solutions doesn't seem to understand lp links
<cholcombe> the Apache WSGI base layer repo link is dead
<lazyPower> cholcombe - err thats just a databag
<lazyPower> do you mean the builder?
<lazyPower> cholcombe - here's the list of what we test with the builder: https://github.com/juju/charm-tools/blob/master/tests/test_fetchers.py#L49
<cholcombe> lazyPower, i'm not sure.
<lazyPower> sorry, i mean the fetcher
<cholcombe> lazyPower, i see
<cholcombe> well then maybe interfaces.juju.solutions is having an issue resolving lp links
<cholcombe> lazyPower, i was just trying to click the repo link to figure out what the apache wsgi layer requires
<lazyPower> cholcombe OH
<cholcombe> firefox just gives me an: I don't understand this link error
<cholcombe> :D you get it now haha
<lazyPower> you mean with the browser
<lazyPower> yeahhhhhh
<cholcombe> yep
<lazyPower> this is a fail on our planning part
<lazyPower> can you file a bug against the repo?
<cholcombe> sure if i could find it lol
<lazyPower> this may still live in bens namespace
<cholcombe> i can't find it under jacek's lp
<lazyPower> https://github.com/bcsaller/juju-interface <- is what i meant
<cholcombe> oh nvm i found it
<cholcombe> https://code.launchpad.net/~jacekn/charms/+source/apache-wsgi/+git/apache-wsgi is the link
<lazyPower> this is the codebase for the interfaces webservice
<lazyPower> we'll need to add support to resolve those links
<cholcombe> lazyPower, everyone else seems to just be putting in the link to the summary page
 * lazyPower shrugs
<lazyPower> i use github yo
<cholcombe> lazyPower, i updated it
<lazyPower> but that works equally as well
<cholcombe> it would take me 10x as long to write a bug report haha
<cholcombe> i still think it's too difficult to find out what reactive layers and interfaces require of the person using them.  You have to dig
<mpjetta> random question: I'm using juju 2.0-beta6 and MAAS 2.0 beta3. I'm still having to use 'export JUJU_DEV_FEATURE_FLAGS=maas2', is that correct or am I doing something wrong?
<lazyPower> mpjetta - i dont think that came out from behind the feature flag until beta7
<lazyPower> I'm pretty sure thats covered in the release notes in /topic however
<mpjetta> ok thanks
#juju 2016-05-19
<cameron_C> Hi, how do I deploy a service to lxc: auto-select the machine, or add a new machine with lxc?
<gnuoy> jamespage, morning, we need to define an interface for the peer relation for the openstack api charms. Each Openstack API charm uniquely names the interface atm (eg keystone-ha, nova-ha, neutron-api-ha etc). I propose that we create an openstack-ha interface and have new charms use that. Sound ok?
<jamespage> gnuoy, well so long as the interface uses the same data semantics, I think its ok to use a single type
<jamespage> gnuoy, so we might endup with charms that have two peer relations - one general, one specific
<jamespage> but leader storage should avoid most of that
<jamespage> gnuoy, can you take a peek at https://review.openstack.org/#/c/317910/
<jamespage> its the peer to one that's already landed in ceph-mon
<jamespage> gnuoy, I also have a few stragglers from a general charm-helpers resync across master and stable charms yesterday
<jamespage> I'll fix those up this morning
<jamespage> gnuoy, hey - could you nudge https://review.openstack.org/#/c/318220/
<jamespage> beisner and I agreed not to do full rechecks for the syncs...
<gnuoy> jamespage, done...
<jamespage> ta
<jamespage> gnuoy, governance review ressurected and companion email sent to openstack-dev!
<jamespage> here we come ....
<jamespage> that was worth an extra dot
<jamespage> gnuoy: https://review.openstack.org/#/c/318231/ and https://review.openstack.org/#/c/318221/ are good to go
<jamespage> beisner, I had to create some official charm branches on launchpad to get the stable amulet tests to pass...
<beisner> jamespage, oh yah, i suspect for the newbies, mon, lxd, et al?
<beisner> jamespage, this --> https://review.openstack.org/#/q/topic:switch-to-bundletester   should allow us to use the version of amulet that contains the git branch fix.
<beisner> ie. be able to resurrect that change review you took a stab at a while back
<beisner> welcome back, gnuoy :)
<beisner> jamespage, ceph-osd + odl-controller stable sync passed, ready to land?
<jamespage> beisner, yes
<beisner> gnuoy got 'em
<gnuoy> I did!
<kjackal> Hey lazyPower
<kjackal> Quick question
<kjackal> where does the kibana charm live?
<kjackal> its source?
<jamespage> gnuoy, beisner: https://review.openstack.org/#/c/317910/
<gnuoy> done
<beisner> jamespage, curious - are ceph-mon or -osd affected similarly?
<beisner> asked, with my 'keep the cephs in alignment' hat on
<jamespage> beisner, erm for which?
<jamespage> beisner, oh -mon already fixed; ceph is the sync
<jamespage> then I'll stable branch cherry pick them both
<jamespage> beisner, no - -osd was affected - not -mon
<jamespage> that's the other one...
<beisner> well look, there it is :-)
<jamespage> beisner, gnuoy: https://review.openstack.org/#/c/317313/
<jamespage> that should sort out the deployment race for the landscape team
<lazyPower> kjackal - i based my branch off of the upstream charmer repo - https://code.launchpad.net/~charmers/charms/trusty/kibana/trunk
<kjackal> lazyPower, I have a fix for the Kibana test that is failing
<kjackal> should I submit the fix upstream? Is it well maintained?
<kjackal> is it maintained by us?
<kjackal> never checked
<jcastro> hey lazyPower
<jcastro> are you on the latest beta?
<lazyPower> i am
<jcastro> do you have any issues with it getting stuck on creating units with "pending"?
<lazyPower> not with the clouds, i haven't been using LXD
<lazyPower> so ymmv
<tvansteenburgh> i'm on lxd, no probs
<jcastro> huh.
<lazyPower> kjackal well... yeah :)
<lazyPower> but i wouldn't say its well maintained. its a long lived charm. As i'm sure you can tell poking around in there its long in the tooth
<lazyPower> prior to this last round of updates, i think icey upgraded it to kibana4, so its got a few hands looking after it
<jcastro> tvansteenburgh: do you leave your deployments running for a long period of time?
<jcastro> seems for me it only acts up when I leave it running overnight
<magicaltrout> don't answer! its a trap!
<tvansteenburgh> jcastro: yeah, overnight quite a bit
<lazyPower> kjackal - with these pr's are you still seeing the failure wrt the relations?
<kjackal> Nope! Everything is GREEN :)
<lazyPower> Solid, thanks for the contributions
<lazyPower> I'm not real excited about the sleeps... but i see why its a pain. I had intermittent failures during test runs myself because of that
<lazyPower> we kind of need a more deterministic way to sniff that out
<jcastro> anyone know where actions went in 2.0?
<jcastro> list-actions and juju actions are missing now
<lazyPower> http://paste.ubuntu.com/16506582/
<lazyPower> i call shenanigans
<magicaltrout> bugg@tomsdevbox:~$ juju list-actions joshua-decoder
<magicaltrout> No actions defined for joshua-decoder
<magicaltrout> does something for me
<jcastro> http://paste.ubuntu.com/16506594/
<jcastro> what's happening to me
<magicaltrout> indeed jorge! thats a very prudent question
<tvansteenburgh> well are you gonna specify a service name??
<magicaltrout> i am on beta4 so they may have gone away
<magicaltrout> ah yeah lol
<kjackal> Ok, lazyPower! The patch for the failing Kibana test is here: https://bugs.launchpad.net/charms/+source/kibana/+bug/1576706
<mup> Bug #1576706: Tests are failure prone <kibana (Juju Charms Collection):New> <https://launchpad.net/bugs/1576706>
<jcastro> oh, lol
<lazyPower> kjackal on it
<lazyPower> jcastro - i love how it told you what was wrong xD
<tvansteenburgh> never underestimate the power of reading
<jcastro> in my defense I was following instructions for a charm written for 1.25
<jcastro> $ juju action do spark/0 smoke-test
<jcastro> ERROR unrecognized command: juju action
<tvansteenburgh> defense rejected
<tvansteenburgh> run-action
 * jcastro nods
<lazyPower> ooo tim swingin the gavel of readme/2.0-translation justice
<tvansteenburgh> Juju Judge Judy
<jcastro> and to get the results of an action?
<jcastro> it's not in the devel docs
<tvansteenburgh> show-action-output
<tvansteenburgh> show-action-status for just the status
<jcastro> man, I am really missing tab completion now
<tvansteenburgh> mine still works
<jcastro> I am failing at computers today it seems
<jcastro> cory_fu: hey so I got bigdata-dev/apache-processing-spark running, the readme says it's a standalone cluster, but the status for spark says it's waiting for a relation to hadoop plugin
<aisrael> tvansteenburgh: how long should CI test results stay around? I'm seeing runs less than 2 weeks old go 404 on me: https://code.launchpad.net/~timkuhlman/charms/trusty/rsyslog-forwarder-ha/nrpe/+merge/292987
<tvansteenburgh> aisrael: not unusual, it only keeps the latest 300 iirc. the new revq caches the results forever
<cory_fu> kjackal: Is that status message a known issue with that bundle?  ^
<aisrael> tvansteenburgh: ack. I will make an offering of redbull and daycare to the revq gods in that case.
<tvansteenburgh> aisrael: those gods prefer beer and pipe tobacco i think
<cory_fu> kwmonroe: I fixed the charm references in the bigtop repo's tests, built and published the charms, and updated the bundle in the bigtop repo & store.  So, both the PR and store are updated.
<Shruthima> Hi All, I am developing the IBM-HTTP Server charm on top of the IBM-IM layer. Can we have a separate license for the HTTP server, or do we need to use only the one license that we get from the ibm-base layer?
<Shruthima> Actually we are facing an issue when we use a common license, i.e. when we use this state in the reactive code of the HTTP server: @when ('ibm-http-server.installed') @when_not ('ibm-base.license.accepted'), and the below states in the reactive code of ibm-im: @when ('ibm-im.installed') @when_not ('ibm-base.license.accepted')
<Shruthima> it picks the ibm-im state first at random and then ibm-http, which should not happen because IBM Installation Manager must not be uninstalled until the products installed through IM are uninstalled.
<Shruthima> Could you please suggest on the same?
<tvansteenburgh> Shruthima: if the conditions for multiple handlers are true, you can't know which will run first
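tvansteenburgh's point can be shown with a toy model of reactive-style dispatch (this is an illustrative sketch, not the actual charms.reactive implementation; all names are hypothetical): when several handlers' preconditions are satisfied at once, the framework gives no ordering guarantee between them.

```python
def eligible(handlers, states):
    """Return handlers whose @when sets are all present in `states`
    and whose @when_not sets are all absent -- the reactive framework
    may run these in any order."""
    out = []
    for name, (when, when_not) in handlers.items():
        if when <= states and not (when_not & states):
            out.append(name)
    return out

# The two handlers from the conversation above, modelled as
# (when, when_not) set pairs:
handlers = {
    # @when('ibm-http-server.installed') @when_not('ibm-base.license.accepted')
    "handle_http_server": ({"ibm-http-server.installed"},
                           {"ibm-base.license.accepted"}),
    # @when('ibm-im.installed') @when_not('ibm-base.license.accepted')
    "handle_im": ({"ibm-im.installed"},
                  {"ibm-base.license.accepted"}),
}

states = {"ibm-http-server.installed", "ibm-im.installed"}
ready = eligible(handlers, states)
# Both handlers are eligible simultaneously, so the charm cannot rely
# on handle_http_server running before handle_im.
```

The fix therefore has to be in the state design (e.g. having one handler gate the other via an extra state), not in hoping for a particular dispatch order.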
<Shruthima> ya that is the issue ,can we use seperate license for http server ?
<tvansteenburgh> Shruthima: i'm not sure that is the right solution
<kjackal> jcastro, I am looking at the bundle issue
<Shruthima> tvansteenburgh: ok thanks
<kjackal> how urgent is it?
<jcastro> kjackal: average I guess? I am blogging about it
<jcastro> but don't expect to publish it until closer to juju 2.0
<kjackal> jcastro, thanks
<dweaver> Anyone know how to recover juju state server when it thinks it is upgrading after a reboot?
<dweaver> Would upgrading the tools to a later version and starting the jujud service solve it, or is there somewhere in mongo that needs to be tweaked?
<Shruthima> Hi kwmonroe/mbruzek , I have sent an email regarding the issue with common license.. could you please suggest on the same....!!
<kwmonroe> hi Shruthima, the thought with a single config variable for the license in the ibm-base layer is that setting this to "True" would indicate that the user accepts whatever terms and conditions the charm requires -- whether it's one license or multiple licenses.
<kwmonroe> Shruthima: so yes, the http server can have a separate license.  in the http server README, you would simply say "ibm http server requires the acceptance of license X, Y, and Z.  if you agree, indicate your acceptance of these terms by entering "juju set <charm> license_accepted=True"'
<kwmonroe> Shruthima: i'll follow up to your email and we can decide if more thought is necessary to handle multiple licenses in your layered charms.
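A config.yaml fragment for the single acceptance flag kwmonroe describes might look like this (the option name `license_accepted` is taken from the conversation; the rest is an assumed sketch, not the actual ibm-base layer):

```yaml
options:
  license_accepted:
    type: boolean
    default: false
    description: |
      Set to true to indicate acceptance of all licenses this charm
      requires, whether that is one license or several. List the
      specific licenses covered in the charm README.
```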
<Shruthima> kwmonroe: oh ok thanks :)
<kwmonroe> np
<lazyPower> dweaver ah thats a tricky one. I've only seen fixes to "unstick" an upgrade
<lazyPower> not when it thinks it's upgrading erroneously
<dweaver> lazyPower, OK, that's not good news. It thinks it should be upgrading, and this was only after a reboot of the state server, not an upgrade request. I am prepared to modify the mongo database or fiddle with the installed versions myself. However, if this is not recoverable at all, then this is going to be a show stopper. And this is likely to be published, as this is a case study.
<lazyPower> dweaver - dont abandon hope yet, we can likely get a core dev to lend a hand. I'm not even sure where to begin debugging other than starting with a bug and collecting the controller logs.
<lazyPower> dweaver - which version of juju is this? 1.25.5 i assume?
<dweaver> lazyPower, OK, I'll raise a proper bug with lots of log information, it's actually 1.25.0, which means I can force an upgrade manually, if I need to.
<lazyPower> can i get a hot review on this? I need to land it so i can rebase on top of it and prep/trim/rebase the tls branch on top of it. https://github.com/juju-solutions/layer-etcd/pull/13
<lazyPower> mbruzek ^
<xilet> So, I am new to working with juju. It looks like the cs branch of ~openstack-charmers-next/xenial/percona-cluster is out of date with the current branch, so mysql deployments are failing due to this bug https://bugs.launchpad.net/charms/+source/percona-cluster/+bug/1571789. I was curious a.) if this is something that should be reported somewhere, and b.) is there a way to force juju to use the updated branch in the meantime?
<mup> Bug #1571789: install hook failing on Xenial with unmet dependency on mysql-client <uosci> <percona-cluster (Juju Charms Collection):Fix Released by 1chb1n> <https://launchpad.net/bugs/1571789>
<beisner> xilet, fyi https://jujucharms.com/u/openstack-charmers/percona-cluster/xenial   (cs:~openstack-charmers/xenial/percona-cluster-0)  does contain the fix
<beisner> fyi, the -next charms are dev/possibly-bloody versions
<xilet> Good to know, stupid question, any idea why ubuntu 16.04 LTS ships with those as the defaults?
<stokachu> beisner: xilet, there is the latest openstack package in xenial-proposed that fixes this
<stokachu> as for the next question it was because those stable charms were released after xenial GA
<beisner> stokachu, excellent.   thanks for the clarification.   so, a change is in flight to stabilize it in xenial main.
<stokachu> beisner: yea soon as someone marks verification-done on the sru
<stokachu> beisner: xilet, https://bugs.launchpad.net/ubuntu/+source/openstack/+bug/1576412
<mup> Bug #1576412: package does not use released charms <verification-needed> <openstack (Ubuntu):Fix Released by adam-stokes> <openstack (Ubuntu Xenial):Fix Committed> <https://launchpad.net/bugs/1576412>
<xilet> stokachu: thanks!
<stokachu> though i was just made aware of some issues with ceph-osd which I think are being worked out now
<stokachu> but that wont require an update to openstack package
<lazyPower> dweaver - just a follow up - many of our team members are at a planning sprint and thus responses have been latent. I'm trying to track someone down to lend a hand
<lazyPower> it may take some time however, if you're willing to be patient and hand off the bug once its filed i'll be happy to run it up the pole
<geetha> @kevin/matt: Hi!
<mbruzek> hello
<geetha> we need a clarification on the state names that are set by an interface. For example: Websphere is connecting to DB2, and the metadata.yaml file in the Websphere charm has a 'requires' definition with relation name 'db', so in the reactive script we are using the state name 'db.available'. But you suggested using the state name 'ibm-was-base.db.available'; if we give the relation name as 'ibm-was-base.db' in metadata.yaml, it gives me an error.
<xilet> Should "juju upgrade-charm --force-units  mysql --switch=cs:~openstack-charmers/xenial/percona-cluster-0" pull the correct version of mysql? I am still getting the error though juju status is showing the new repo.  (Not sure if I have the correct syntax)
<mbruzek> geetha: In this case "db" is the relation name, that has nothing to do with state names.
<geetha> If I use the same relation name, i.e. 'db', in both (WAS and DB2) metadata.yaml files, it doesn't give me any conflicts, and the state name set by the interface is 'db.available' in both charms (without a layer name prefix). Is that ok?
<mbruzek> geetha: kwmonroe and I recommended using the layer prefix (dot) state name.
<dweaver> lazyPower, yes, willing to be patient, thanks.  I'll ping you with a bug ID once it is submitted with all the info.  Will be most appreciative thanks.
<mbruzek> geetha: I think we have a disconnect here. We were not recommending renaming the relation name.
<mbruzek> and I don't think relation names can have dots in there.
<mbruzek> dot/period etc
<mbruzek> geetha: can you give me a code example? Perhaps I don't understand your question
<lazyPower> An interface name is a string that must only contain characters a-z and -, and neither start nor end with -. It's the single determiner of compatibility between charms; and it carries with it nothing more than a mutual promise that the provider and requirer somehow know the communication protocol implied by the name.
<mbruzek> pastebin or something
<lazyPower> mbruzek - thats what i found here
<lazyPower> mbruzek and typically they are decorated with @hook('{relation_name}-joined')
<lazyPower> that hook decorator string is run through some helpful templating functions in reactive that expand that relation_name to whatever you have defined in metadata
<lazyPower> so in this example, it should probably be declared in metadata like so
<lazyPower> requires:   db2-database: ibm-db2
<lazyPower> where db2-database would be the relationship name
<lazyPower> so the state would become @when('db2-database.available')
<lazyPower> geetha ^ does that help?
<lazyPower> and i totally biffed on that @hook decorator code... its not *that* simple, there's a bit more to it, and it declares the interface inline like so: @hook('{provides:db2}-relation-joined') --- sorry for the typo. i was going from memory :)
<geetha> then it will be 'db2-database.available'. there is no <layer name > prefix right?
<lazyPower> geetha - right, its all predicated by what you name it in the metadata
<geetha> it will take what ever relation name we give in metadata.yaml file.
<lazyPower> the <layer name> prefix is a convention we use in the layers, when setting states so we can avoid collision with other layers. Similar care should be taken when defining relationship names
<lazyPower> geetha - here's a super simple interface class that illustrates the code i was typing out above https://github.com/juju-solutions/interface-consul/blob/master/provides.py#L11.  Notice the use of that same template macro, the '{relation_name}' - that expands and controls the states you will subscribe to in your layer.
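Putting lazyPower's example together, the metadata.yaml fragment and the resulting state name look like this (a sketch using the names from the conversation above):

```yaml
# Consumer charm metadata.yaml: 'db2-database' is the relation name
# chosen by the charm author; 'ibm-db2' is the interface protocol.
requires:
  db2-database:
    interface: ibm-db2
```

An interface layer built on the `{relation_name}` template would then set states such as `db2-database.available`, which the charm subscribes to with `@when('db2-database.available')` — the state prefix comes from the relation name in metadata, not from any layer name.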
<gennadiy> hi guys
<beisner> dimitern, jamespage - "install error: private-address not set" artifact and traceback on bug 1583109
<mup> Bug #1583109: error: private-address/public-address not set (1.25.5) <sts> <juju-core:New> <https://launchpad.net/bugs/1583109>
<geetha> Then we can give 'ibm-was-base-database' as a relation name in Websphere side right?
<gennadiy> i can't bootstrap juju for openstack environment
<jamespage> beisner, \o/
<gennadiy> we have deployed openstack and would like to use juju to deploy software to it
<geetha> then it will be @when 'ibm-was-base-database.available'
<dimitern> beisner: you mean you got repro + trace logging? \o/ indeed!
<beisner> dimitern, yessir
<gennadiy> but now i have got error  ERROR juju.cmd supercommand.go:429 cannot set initial environ constraints: index file has no data for cloud {regionOne http://10.9.8.21:5000/v2.0/} not found
<dimitern> beisner: awesome! tyvm, will have a look shortly
<gennadiy> it happens when it setups tools on machine 0
<gennadiy> i run it with: juju bootstrap -v --upload-tools --metadata-source /home/juju/.juju --debug --show-log
<lazyPower> gennadiy is this juju 1.25?
<beisner> machine-0.log is 11+MB and I'd like to turn trace off now if that gives you what you're after dimitern
<dimitern> beisner: can you point me to the log?
<dimitern> (too many logs there..)
<gennadiy> yes, it's 1.25
<gennadiy> do i need to reinstall?
<lazyPower> i'm not certain, i've never seen that error before
<dimitern> beisner: ah, got it - 0-var-log.tar.bz2
<gennadiy> our openstack has own regionname - regionOne
<beisner> bingo dimitern
<gennadiy> so we need to use custom --metadata-source
<lazyPower> gennadiy  https://bugs.launchpad.net/juju-core/+bug/1567763
<mup> Bug #1567763: bootstrapping private openstack, with --metadata-source fails when instance-type constraint is specified <bootstrap> <constraints> <simplestreams> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1567763>
<gennadiy> i can't bootstrap env
<lazyPower> right i get that, it seems like theres some streams you'll have to setup because of it but i'm not positive
<lazyPower> but that bug looks applicable
<gennadiy> but i don't use "--constraints"
<dimitern> beisner: great, the log has everything I need (and suspected as the cause) - feel free to reset the logging-config to lower level
<beisner> dimitern, ack tyvm
<gennadiy> i used this tutorial - https://blog.felipe-alfaro.com/2014/04/29/bootstraping-juju-on-top-of-an-openstack-private-cloud/
<gennadiy> i added image to openstack and generated metadata
<jcastro> bdx: ping
<dweaver> lazyPower, The bug is submitted here: https://bugs.launchpad.net/juju-core/+bug/1583683 log files are uploaded to the bug.
<mup> Bug #1583683: juju thinks it is upgrading after a reboot <juju-core:New> <https://launchpad.net/bugs/1583683>
<lazyPower> dweaver - thanks for the bug. I'll send this over to the appropriate parties. You should get updates via email as the bug gets triaged/worked. That'll be our focal point for resolution/updates.
<xilet> Hrmm cs:~openstack-charmers/xenial/percona-cluster-0  has the same mysql-client issue
<gennadiy> why does the remote bootstrapped machine not use the provided image-metadata-url?
<gennadiy> seems it tries to use https://streams.canonical.com/juju/images/releases/streams/v1/index2.json
<beisner> hi xilet - i just did a juju bootstrap ... then juju deploy cs:~openstack-charmers/xenial/percona-cluster-0 to check/confirm sanity of that charm and it looks good from that:  http://pastebin.ubuntu.com/16510376/    are you sure that charm is what is deployed?
<gennadiy> i see in the log - " skipping index "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson" because of missing information: index file has no data for cloud {regionOne http://10.9.8.21:5000/v2.0/} not found" but i have provided own url - http://192.168.177.51:8888/
<xilet>  beisner checking,
<bdx> hows it going everyone? Have vlan tenant networks on a lxd openstack been verified as something that should work? Has anyone got their feet wet in this area yet, or is it just me?
<magicaltrout> lazyPower: you'll know the answer to this
<magicaltrout> if I use a layer like https://github.com/juju-solutions/layer-hadoop-client
<magicaltrout> do I need to define the provides/requires stuff in metadata.yaml ?
<magicaltrout> or does layers just sort that stuff out
<lazyPower> it sorts it out as the metadata.yaml is already there
<lazyPower> and it declares the interfaces in its layer.yaml
<lazyPower> so it should be a single line include
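The "single line include" lazyPower mentions is the layer.yaml entry; a sketch for a charm building on hadoop-client (the `layer:basic` line is the usual base and is assumed here):

```yaml
# layer.yaml of the consuming charm: hadoop-client already ships its
# own metadata.yaml and declares its interfaces in its layer.yaml, so
# including the layer is all that's needed.
includes:
  - 'layer:basic'
  - 'layer:hadoop-client'
```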
<magicaltrout> ta
<magicaltrout> ooh yeah so it does
<magicaltrout> funky stuff
<Dirler> Hi All, if I have kvm machines listed in 'virsh list' but not listed in 'juju status' for a kvm environment, can I add such machines to the juju kvm environment?
<Dirler> p.s. such machines can exist in some scenarios: deployed using libvirt api directly, was exist before juju installation and so on.
<penguinRaider> Hi, I am using juju to deploy ceph using the command juju deploy  cs:trusty/ceph -n 3 --config=ceph.yaml with just fsid and mon-secret in the yaml file. But when I try to do the same with cs:xenial/ceph it fails with this trace http://paste.ubuntu.com/16513817/. Am I doing something wrong here?
<gennadiy_> hi. does juju support images-metadata-url from the openstack env config?
<gennadiy_> i provided it but juju uses the standard ubuntu cloud url
<gennadiy_> if we provide --metadata-source for the bootstrap command it creates the zero machine in openstack but throws an error during software installation (agent setup)
<bdx> jcastro: sup
<gennadiy> also how do I use "juju metadata generate-image" in juju2? it requires a model, but the model doesn't exist because the controller is not bootstrapped
#juju 2016-05-20
<ryebot> I keep getting "ERROR invalid entity name or password" when I try to run `juju debug-log`, not sure how to get rid of it - any tips?
<ryebot> I've tried a juju logout/back in and rebooting, but no luck.
<ryebot> this is on juju2
<ryebot> also looked in the /var/log/juju logs in machine-0; nothing there either
<ryebot> making a new controller fixed it
<tx> Hey guys, I'm trying to bootstrap openstack from a fresh ubuntu 16.04 LTS install, it has been on the initializing model stage for about an hour now
<Dirler> Hi All. If I have kvm machines listed in 'virsh list' but not listed in 'juju status' for a kvm environment, can I add such machines to the juju kvm environment?
<jamespage> gnuoy, morning
<jamespage> can you take a look at https://review.openstack.org/#/c/318641/
<jamespage> and
<jamespage> https://review.openstack.org/#/c/318612/
<gnuoy> sure
<jamespage> gnuoy, and https://review.openstack.org/#/c/319138/ :-)
<jamespage> gnuoy, https://review.openstack.org/#/c/319172/ please (stable counterpart to the one before)
<jamespage> gnuoy, ta
<gahan> how do I force cancellation of an activity in landscape? 'Add hardware' has been hanging on 'Add juju machine...' for 24 hours now.
<magicaltrout> is it acceptable to crackout the beers when you are at work with the cricket on the TV and everyone in the stadium is drinking?
<ryebot> juju2 - how do I remove a service that has a unit in an "agent is lost, sorry!" state?
<tvansteenburgh> ryebot: you could try remove-machine
<ryebot> tvansteenburgh: excellent, thanks, I'll give that a shot
<jamespage> gnuoy, https://review.openstack.org/#/c/318611/ and then I'll push master and stable branches to the charm store...
<lazyPower> kjackal - if you have the spare cycles, this is a pretty small review https://github.com/juju-solutions/layer-filebeat/pull/8
<kjackal> I will do it in a moment lazyPower
<lazyPower> ta
<kjackal> jcastro ?
<kjackal> jcastro, I hear we have an issue with the pluin?
<jcastro> kjackal: yeah!
<jcastro> let me refire up the bundle, give me a moment
<jcastro> https://jujucharms.com/u/bigdata-dev/apache-processing-spark/
<jcastro> this is the bundle I am trying
<jcastro> hmm, issues with the store today?
<kjackal> yes, it seems so...
<kjackal> in any case, which bundle are you deploying, jcastro? Can you show me the .yaml you are deploying?
<jcastro> hah no, I was deploying right from the store
<jcastro> yikes
<kjackal> Ah, that might be an issue, we haven't published the charms yet, so the referenced ones might not be working
<kjackal> jcastro, I guess you are deploying this: https://jujucharms.com/u/bigdata-dev/apache-processing-spark/bundle/1
<jcastro> yep
<jcastro> I needed a bundle that used beats so I could write about it, and cory recommended this one
<jcastro> but I'm not wedded to any specific bundle, I just need to be able to deploy a workload, the post itself is about how to use beats to get metrics from a workload
<kjackal> so the bundle.yaml there is supposed to use the production charm: cs:trusty/apache-spark
<kjackal> This yaml is the one we want https://api.jujucharms.com/charmstore/v5/~bigdata-dev/bundle/apache-processing-spark-1/archive/bundle-dev.yaml
<jcastro> yep, I see that
<jcastro> oh ok, so deploy the -dev bundle instead is what you're saying?
<kjackal> Yes, this should work
<kjackal> I wonder if there is a way to specify the yaml within the bundle when you reference the bundle from the store?
<jcastro> not yet
<jcastro> but that's fine
<jcastro> though next time you guys publish please fill out the metadata url's so I can click through to the code, etc.
<jcastro> right now "view code" is broken, etc. on the charm store page
<jcastro> oh, nevermind, you did fill it out
<jcastro> it's just "Home" and not "view code" on the page.
<kjackal> lazyPower, the patch looks good. Merged!
<lazyPower> ta kjackal!
<suchvenu> Hello
<suchvenu> I have a query regarding naming the relation in  ibm-db2 charm. The relation name is "db2" defined in metadatafile as
<suchvenu> provides:  db:   interface: db2
<suchvenu> sorry the relation name is "db"
<jcastro> lazyPower: I think I found a bug in ~containers/kibana
<jcastro> unit-kibana-0: 2016-05-20 14:55:26 INFO unit.kibana/0.install logger.go:40 groupadd: group 'kibana' already exists
<jcastro> that causes the install hook to fail
<suchvenu> Is this name ok ? Or should it be renamed to some other name as db2-db or so ?
<lazyPower> ah yeah
<lazyPower> there's a patch to fix this, its known
<jcastro> ack, thanks
<jcastro> is this the promulgated charm or is that somewhere else?
<lazyPower> negative, the promulgated charm is missing the dashboard action
<jcastro> hmm, it sucks that the page doesn't make it obvious which is the promulgated charm
<jcastro> other than looking at the URL I mean
<lazyPower> when we were riffing a couple weeks ago, i've delayed pushing the updates until i can get the config option to deploy *with* a dashboard
<lazyPower> and still need to trim the fat on the demonstration boards
<suchvenu> I see the same relation name used by mysql charm. So will it cause confusion, when both are used in any bundle ?
<lazyPower> suchvenu - the relation-name is arbitrary. The interface is the important part of that equation.
<suchvenu> when the states of the interface are used in the reactive layer
<suchvenu> the states from the db2 interface would be like db.available, db.ready etc
<lazyPower> well, interestingly enough - do you use both mysql as well as db2 in the same charm? and is this a common deployment formation?
<lazyPower> sorry, model formation
<suchvenu> i am not using both in any of the charm as of now
<suchvenu> I just told an example
<lazyPower> jcastro - i'm in a lull with other things i have in flight, i can spend some time on that today. Do you mind being my primary stakeholder? I think i can get you an updated charm pushed @ containers just before standup
<jcastro> yeah no worries
<lazyPower> suchvenu - ok. yeah. you should be fine. The onus of using those states and how its named is up to the charm author consuming DB2
<jcastro> I can't blog about beats without a dashboard, heh
<lazyPower> ok, i'll fold in that patch and get you a revised dashboard
<suchvenu> does the relation name need to be unique so that the states also remain unique ?
<jcastro> what's the tldr difference between this kibana and the promulgated one?
<lazyPower> the dashboard loader action and test updates so it actually passes and flex's the deployment
<suchvenu> http://pastebin.ubuntu.com/16522846/
<suchvenu> a decorator like this is fine?
<jcastro> https://github.com/CanonicalLtd/jujucharms.com/issues/274
<lazyPower> suchvenu - That looks fine, but without context its hard to say
<lazyPower> suchvenu - you can mitigate any name collisions by setting the mysql database relation to something other than db when you add it to metadata, like "mysql-db" or "mdb" or "notibmdb2"
<suchvenu> I am thinking of the naming convention for the relation name. I have used 'db' as the relation name; if some other database (other than db2) also uses the same relation name, will it cause any issues?
<suchvenu> you mean to say the consumer charm of db2 or mysql can create a specific relation name like db2-db or mysql-db, and so it will not have issues?
<lazyPower> correct
<suchvenu> ok
<suchvenu> Thanks lazyPower
<lazyPower> relation-names are arbitrary. As the charm author consuming the interface, you define what that nomenclature is, and are responsible for the associated states.
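A consumer charm avoiding the collision suchvenu worries about would simply give each database its own relation name (an illustrative metadata.yaml sketch; the relation names are examples from the conversation):

```yaml
# Distinct relation names keep the reactive states distinct too:
# 'db2-db.available' vs 'mysql-db.available'.
requires:
  db2-db:
    interface: db2
  mysql-db:
    interface: mysql
```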
<lazyPower> np happy to help suchvenu
<suchvenu> :)
<cory_fu> kwmonroe, kjackal: I submit https://github.com/juju-solutions/layer-apache-bigtop-base/pull/7 and https://github.com/juju-solutions/layer-hadoop-datanode/pull/1 for your consideration
<cory_fu> kwmonroe: Also, looks like we don't have to mess around with the bzr-owner hack any more: https://github.com/CanonicalLtd/jujucharms.com/issues/245#event-666466300
<kwmonroe> woohoo!  nice cory_fu
<kwmonroe> cory_fu: for the dn PR, do you feel ok by returning from start_datanode with the status set as "starting datanode"?
<kwmonroe> i'm talking about if/when start_datanode hits the timeout and returns, of course
<cory_fu> Oh, no, I suppose not
<cory_fu> Good catch
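kwmonroe's catch — don't leave a "starting datanode" status in place when the start loop times out — can be sketched as a small poll-with-deadline helper (names and structure are hypothetical, not the actual hadoop-datanode charm code):

```python
import time

def wait_for_start(is_running, timeout=300, poll=1.0, clock=time.monotonic):
    """Poll until the service reports running, or give up after `timeout`
    seconds. Returns a (started, status_message) pair so the caller can
    set a truthful workload status on timeout instead of leaving the
    transient 'starting' message behind."""
    deadline = clock() + timeout
    while clock() < deadline:
        if is_running():
            return True, "ready"
        time.sleep(poll)
    return False, "datanode failed to start within {}s".format(timeout)
```

The caller would translate the returned message into `status_set('blocked', msg)` (or the layer's status helper) so the operator sees why the unit is stuck.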
<magicaltrout> random question big data folk: https://github.com/juju-solutions/layer-hadoop-client doesn't actually provide me with a hadoop executable
<magicaltrout> I need my charm to do something like: hadoop jar $THRAX/bin/thrax.jar
<cory_fu> kwmonroe: Updated.  Though, I should hope that the status handling in the slave charm would actually override it
<cory_fu> magicaltrout: Right.  Maybe we need to update the README.  The way that is intended to work is that you use layer:hadoop-client as a base layer, and then you connect it to a hadoop-plugin charm (e.g., https://jujucharms.com/hadoop-plugin/ or the older https://jujucharms.com/apache-hadoop-plugin/ for the non-Bigtop charms)
<bdx> openstack-charmers: will there be a charm rev'ing again before 16.07 ?
<cory_fu> magicaltrout: The plugin is what provides the Hadoop libraries, and it ensures you get the correct libraries for the particular deployment of Hadoop to which you are connecting
<magicaltrout> yeah cory_fu when I connect the plugin to my layer:hadoop-client enabled charm it all works
<magicaltrout> but "which hadoop" is empty and I can't figure out what its doing :)
<magicaltrout> I thought it main role was to provide config information over the interface
<cory_fu> magicaltrout: Can you give me a pastebin of your `juju status --format=tabular`?
<cory_fu> It provides config info over the interface but also installs the  client libs (the plugin; the client just manages the plugin relation for you to make it a bit easier)
<magicaltrout> hmm
<magicaltrout> i tried that yesterday and got zip
<magicaltrout> give me 5 mins to spin up my charm and I'll get back to you
<cory_fu> magicaltrout: Are you deploying the older apache-hadoop-X charms, or the newer Bigtop charms?
<magicaltrout> the older stuff
<cory_fu> Ok
<magicaltrout>  juju deploy cs:bundle/apache-processing-mapreduce-0
<cory_fu> The status messages ought to tell you what, if anything is missing, and if they all say ready then you should have the hadoop bin
<magicaltrout> that one to be exact
<magicaltrout> all my status flags were green
<cory_fu> magicaltrout: One question.  Are you trying to run the Hadoop binary from inside charm code?  There is environment data that doesn't seem to get populated in the hook context for whatever reason, so it requires a bit of additional work.
<kwmonroe> cory_fu: both PRs lgtm.  merged.
<magicaltrout> nope cory_fu juju ssh'd in
<cory_fu> Odd
<cory_fu> That should definitely work
<cory_fu> kwmonroe: Shall we do a JBD release, charm builds and publishes, and update the Bigtop PR?
<cory_fu> kwmonroe: Also, would you consider this a bugfix release for JBD or a minor feature release?
<cory_fu> I'm inclined to go with bugfix
<kwmonroe> yeah cory_fu, i've got jbd 7.1.2 building now
<Brochacho> cholcombe: Is there any way to ignore ceph config parsing errors?
<cholcombe> Brochacho, yeah put a newline at the end of the file.  We have a patch going in to fix that
<Brochacho> cholcombe: Thanks! Was driving me nuts, doesn't seem there's an option to silence that?
<cholcombe> Brochacho, no and ceph is really bitchy about it
<cholcombe> i thought about submitting a patch to ceph because it's so annoying haha
<Brochacho> cholcombe: dang, also seems to be no way to avoid that 'dumped all in format json'?
<cholcombe> Brochacho, when you say --format=json is it appending something to stdout?
<Brochacho> cholcombe: One sec
<Brochacho> cholcombe: No, wasn't grabbing stdout correctly -_-
<cholcombe> ah ok haha
<magicaltrout> i'm talking $hit cory_fu
<magicaltrout> I suspect i probably sshd in yesterday before the path was set
<magicaltrout> and never noticed
<cory_fu> :)  Glad it's working for you
<magicaltrout> well you'll have a bunch of happy linguists now who can build translation models on proper infrastructure and not standalone hadoop on their laptops
<cory_fu> Awesome.  :)
<cory_fu> kwmonroe: Do you use Chrome?
<cory_fu> kwmonroe: Well, anyway.  You chastising me about approving the PR with trailing whitespace made me feel really bad, so I made this: https://github.com/johnsca/github-trailing-whitespace
<lazyPower> cory_fu https://twitter.com/lazypower/status/733726628828852224
<cory_fu> lazyPower: ha.
<cory_fu> If I'd known you were going to tweet it, I would have ponied up the $5 to publish it to the Chrome Web Store
<kwmonroe> cory_fu: sometimes i wonder what greatness you could achieve if i didn't bug you about petty stuff.
<kwmonroe> too bad we'll never know
<cory_fu> heh
<kwmonroe> cory_fu: i literally think you're great.  i just spent the afternoon loading all kinds of unpacked extensions and now chrome works like firefox... in that it's mostly broken.
<cory_fu> lol
<cory_fu> What other unpacked extensions did you load?
<kwmonroe> mostly stuff i wrote
<kwmonroe> but let's not dwell on the past.  i have a bundle update for bigtop.  can you teach me git real quick so i can update pr 108?
<lazyPower> hahaha
<lazyPower> i can tell its Friday
<kwmonroe> https://appear.in/kevin-does-git
<magicaltrout> git commit -a -m "mega commit"
<magicaltrout> git push -f
<magicaltrout> nuke it all!
<kwmonroe> magicaltrout: join the appear.in!
<lazyPower> magicaltrout - thats fun when it pipelines into prod on friday at 4:59pm
<kwmonroe> being that it's 2:25pm, i'm not that concerned
<magicaltrout> i'm drinking beer and writing code for NASA at 8:30pm
<magicaltrout> what could possibly go wrong?
<magicaltrout> git push -f
<magicaltrout> oooh sugar
<lazyPower> jcastro cory_fu  - i have a branch ready which should squash the last issues with kibana and bring this up to spec for CWR efforts - https://code.launchpad.net/~lazypower/charms/trusty/kibana/add-dashboard-loader-action/+merge/295359
<lazyPower> not sure if you want to poke this now or leave it, but i thought I would ping with it regardless
<gennadiy> hello everyone, can we deploy local charms to juju2 ? i try to use "juju deploy local:trusty/sipp" but got error "unknown schema for charm URL "local:trusty/sipp""
<tvansteenburgh> gennadiy: just give it a path
<magicaltrout> gennadiy:  juju deploy --repository=/home/bugg/charms local:trusty/joshua-full joshua-full
<gennadiy> a lot of changes in version 2 :) thanks
<magicaltrout> bit of that
<tvansteenburgh> magicaltrout: that's juju1
<tvansteenburgh> gennadiy: all the details at `juju help deploy`
<magicaltrout> aww tvansteenburgh whatever
<marcoceppi> gennadiy magicaltrout `juju deploy /home/bugg/charms/trusty/joshua-full` is all you need
<magicaltrout>  
<magicaltrout> bugg@tomsdevbox:~$ juju --version
<magicaltrout> 2.0-beta4-xenial-amd64
<marcoceppi> magicaltrout: beta7 is the latest ;)
<magicaltrout> i'm just a bit slow to update :P
<magicaltrout> marcoceppi: i get scared of updating these days :P
<marcoceppi> magicaltrout: I understand, we're getting close to RC though! when we get there it'd be nice to have you and others update to help shake the tree
<magicaltrout> i'm only messing marcoceppi, it's just a dev environment, i'm not sure why it's stuck on beta4, must come from a random ppa or something
<marcoceppi> magicaltrout: probably, there were package renames and such for 2.0
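The two deploy syntaxes discussed above, side by side (the repository path and charm name are from magicaltrout's example; treat them as illustrative):

```shell
# Juju 1.x: local charms live in a repository directory keyed by series
juju deploy --repository=/home/bugg/charms local:trusty/joshua-full joshua-full

# Juju 2.x: the local: schema is gone; point directly at the charm directory
juju deploy /home/bugg/charms/trusty/joshua-full
```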
<arosales> kwmonroe: cory_fu are https://jujucharms.com/u/bigdata-dev/apache-processing-mapreduce/bundle/2 and https://jujucharms.com/u/bigdata-dev/apache-processing-spark/bundle/1 the latest or have these been promulgated?
<cory_fu> arosales: Sorry, those are the most recent, yes
<gennadiy> i use local env with lxd how to run machines in privileged mode? i need to change "ulimit -n"
<gennadiy> maybe i can provide lxd profile for juju
<arosales> cory_fu: oh need to apologize, just making sure I found  / using the current ones
<cory_fu> kwmonroe: You still around?
<arosales> kwmonroe: if you're still around
<arosales> kwmonroe: http://paste.ubuntu.com/16538524/
<arosales> waiting on plugin :-(
<arosales> cory_fu: admcleod  ^
<cory_fu> arosales: Hrm.  I think we had a fix for that, and I thought kjackal published it to bd-dev earlier today
<cory_fu> arosales: Can you give me the [Services] section to see what charm revs you're using?
<arosales> ya, seems it cropped back up or didn't make it to the dev branch
<cory_fu> arosales: I think the charm may have been updated but the bundle was missed
<arosales> http://paste.ubuntu.com/16538576/
<cory_fu> arosales: You're not even using the bd-dev versions of the charms
<cory_fu> The main bundle.yaml must be pointing to the prod charms, which makes sense, but doesn't help for demo / testing
<cory_fu> arosales: Do you need a fix asap?
<arosales> cory_fu: just doing a lightning talk, but I can gloss over that as I am almost up
<arosales> cory_fu: should I have been using a different bundle?
<cory_fu> arosales: If you had time, I'd say `juju upgrade-charm --switch spark cs:~bigdata-dev/trusty/apache-spark` but it sounds like that would be risky and too close to the wire
 * arosales will try it :-)
<magicaltrout> go on arosales live life on the edge!
<arosales> error: invalid service name "cs:~bigdata-dev/trusty/apache-spark"
<cory_fu> arosales: the bundle-dev.yaml in that bundle should have worked, but we need to get those charms promulgated and sorted.  We switched focus all on to bigtop and they got left behind
<cory_fu> arosales: My args were backwards
<arosales> magicaltrout: keeps things exciting
<magicaltrout> indeed
 * arosales waves to magicaltrout
<cory_fu> juju upgrade-charm cs:~bigdata-dev/trusty/apache-spark spark
<cory_fu> maybe
<cory_fu> I forgot the --switch
<arosales> error: unrecognized args: ["spark"]
<cory_fu> -_-
<cory_fu> Does anyone here remember the syntax for upgrade-charm --switch?  :p
<cory_fu> Also, does that even still work in 2.0?
<magicaltrout> the way things change i barely get through deploying with out doing juju help commands  :P
<cory_fu> juju upgrade-charm spark --switch cs:~bigdata-dev/trusty/apache-spark
<cory_fu> arosales: ^
 * arosales reading help
<arosales> ya that looks better
<arosales> cory_fu: thanks
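For reference, the invocation that finally worked, as a sketch (application name and charm URL taken from the discussion above):

```shell
# juju upgrade-charm <application> --switch <new-charm-url>
# swaps the charm backing an already-deployed application
juju upgrade-charm spark --switch cs:~bigdata-dev/trusty/apache-spark
```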
 * cory_fu sits back and watches arosales's lightning talk burn.
<arosales> lol
<cory_fu> Er, I mean, good luck!
 * kwmonroe has much faith
<arosales> I only have 39 machines running
<arosales> what could go wrong
<cory_fu> lol
<arosales> kube, swarm, spark, and 2 hadoop clusters
 * arosales had to make another ec2 support request to up limits
<kwmonroe> arosales: up from 50?!?!
<arosales> kwmonroe: well for us-west-2 as us-east-1 errored out on provisioning
<kwmonroe> azure will go until your wallet empties without all this "limit" nonsense
 * magicaltrout makes a note not to use azure
<kwmonroe> :)
<arosales> the upgrade worked, but I lost ha on spark
<cory_fu> kwmonroe: The reason I pinged you, btw, is I wanted to know where to file bugs against layer:ibm-base
<cory_fu> >_<
<kwmonroe> cory_fu: file them against cory.johns@canonical.com
<cory_fu> arosales: Did you lose HA, or did the status message just change?  I think it reports it slightly differently
<cory_fu> arosales: Can you give me the new status?
<cory_fu> kwmonroe: ha
<cory_fu> kwmonroe: Also, did you know that Monday is a holiday?  I didn't.
<kwmonroe> cory_fu: a holiday for who(m)?
<arosales> http://paste.ubuntu.com/16538972/
<cory_fu> kwmonroe: All of the US
<cory_fu> kwmonroe: It's Memorial Day
<kwmonroe> holy crap, is it May already?!?!
<cory_fu> Indeed.  It's almost my birthday, even.  :p
<cory_fu> arosales: "Fetching resources" isn't the most helpful status message
<cory_fu> arosales: I think the status message may no longer include "HA" but it should mention which is master
<arosales> sorry wrong buffer
<arosales> http://paste.ubuntu.com/16539022/
<arosales> workload status = standalone
<kwmonroe> arosales: that looks legit
<cory_fu> arosales: Yep.  The one that says (standalone - master) means that it is in HA
<kwmonroe> you're in 'standalone' mode, which means any of those can take over at any moment
<arosales> ah yes, I was looking at unit 1
<arosales> unit 0 = Ready (standalone - master)
<arosales> good stuff
<magicaltrout> midnight rolled around, it is my birthday, i beat you cory_fu :P
<arosales> Happy Birthday magicaltrout !
<cory_fu> kwmonroe: But for reals, clicking "Bugs" on https://code.launchpad.net/~ibmcharmers/layer-ibm-base/trunk goes to a 404
<magicaltrout> meh
<kwmonroe> happy bday magicaltrout!!!
<cory_fu> magicaltrout: Really?!?  Happy birthday!!!!!
<magicaltrout> i got my age wrong earlier
<magicaltrout> i'm over it
<arosales> happy early birthday cory_fu
<kwmonroe> cory_fu: please file a bug re: the bug url.  i'll get to it monday.
<cory_fu> magicaltrout: I do that all the time.  Actually, my health insurance had my birth year as 1995, and I was super excited to have de-aged until they fixed the glitch
<cory_fu> kwmonroe: ha
<cory_fu> arosales: :)  Thanks
<arosales> 1995, wow
<magicaltrout> hehe
<magicaltrout> people born in the 90's and 2000's depress me
<arosales> cory_fu: should have quickly filed for life insurance
<arosales> pretty amazing how well upgrade charm worked
<arosales> and the operational knowledge distilled in those charms
<cory_fu> arosales: In case you didn't notice (not at all because I forgot to put it in until just now), I asked for next Friday off for my (belated) birthday.  ;)
<arosales> pretty amazing. On one controller I have 5 models with 40 machines with completly functioning kubernetes, swarm, spark cluster, hadoop cluster (ha), and hadoop cluster (bigtop)
<arosales> cory_fu: most def you should take Friday off
<arosales> magicaltrout: you too :-)
<magicaltrout> aww thanks arosales
<cory_fu> arosales: Well, upgrade-charm --switch is a little duplicitous, considering it changes the entire charm out from under the service.  But Spark is pretty resilient to being switched out, as long as jobs aren't actively running in standalone mode.
<kwmonroe> hehe, magicaltrout, i will give you next thursday off too.  keep being great.
<arosales> kwmonroe: is soo generous
<magicaltrout> hehe
<arosales> cory_fu: ya upgrade implies you are already upgrading
<magicaltrout> i get tomorrow off.....
<cory_fu> magicaltrout: I was sad to realize I didn't quite make it under the wire to get hugs if I were born in the 80's
<arosales> ha, but only tomorrow
<magicaltrout> yeah sunday the mrs gets her mum day... so i'm lumped with the kids
<kwmonroe> magicaltrout: are you still coding for nasa?  i feel like i don't want my rovers coded by you at this time of night.
<magicaltrout> thats work... right?
<arosales> #daddyduty
<magicaltrout> i am kwmonroe..... currently wondering why requirejs hates me so much :)
<arosales> magicaltrout: indeed it is
<magicaltrout> yeah thought so
<magicaltrout> so i get 1 day off
<kwmonroe> not because you're not a great coder, mind you, but because i fear your maliciousness when you don't get cake for your birthday.
<magicaltrout> oh well, better than none
<magicaltrout> die rover! die!!!!
<arosales> magicaltrout: you need another jubilee year
<cory_fu> magicaltrout: Or worse: https://www.youtube.com/watch?v=Y6ljFaKRTrI
<kwmonroe> exactly
<magicaltrout> lol
<kwmonroe> aight arosales, i have to bid you farewell and good luck.  i'm sure the LT will go great because juju is frickin amazing.  lmk how you fare.  be well all!
<arosales> kwmonroe: have a good long weekend
<magicaltrout> cya
<arosales> thanks for the last minute help here
<cory_fu> arosales: np.  Are you up already for your lightning talk? Are you IRCing from the podium?
<arosales> not up yet
<arosales> like 15 min out
<magicaltrout> makes me feel as prepared as i was for my talk
<cory_fu> I had the impression you were upgrading charms as you walked up to the stage.  :p
<cory_fu> Alright.  Really must go, or my wife will murder me.  Have a good weekend, all!
 * cory_fu disconnects from the Matrix.
<cory_fu> Oh, and happy birthday again, magicaltrout!
<arosales> cory_fu: have a good weekend
<magicaltrout> thanks....
<arosales> juju spoils one. ya I can spin up 5 clusters across 40 machines 1 hour before my talk :-)
<magicaltrout> depends if you write the charms yourself or not.... if you use the lovely bigdata dev charms thats probably true ;)
 * arosales would do the same with saiku :-)
#juju 2016-05-21
<magicaltrout> well one thing thats coming in our next release is like 1 click schema design (assuming you have a sane table) so it'll make it easy(ier) to do saiku demos which will be cool
<magicaltrout> anyway.... i'm offski
<magicaltrout> cya arosales hope your lightning talk goes well
<magicaltrout> oh on a slight side note admcleod the speakers' gift from apachecon was posh chocolate, one was blue cheese
<magicaltrout> it was disgusting......
<blahdeblah> Hi all; what's the preferred method for shutting down the local lxd-based controller/model when I'm not using it?  Just lxc stop?
<lazyPower> blahdeblah - that should work just fine yeah.
<blahdeblah> thanks lazyPower - I found my hard disk a *lot* more chattery while the local provider is running
<lazyPower> oh certainly, the logging to mongo keeps that pretty much constant
<blahdeblah> ugh; any way we can turn that off?
<lazyPower> logging to mongo? afraid not, that's a core function of the controller now that there's multi-model support
<blahdeblah> :-(
 * blahdeblah would really like a "This is just my laptop; don't destroy my SSD or log anything" setting
<lazyPower> that sounds like a decent feature request
<lazyPower> i would bug that, and see if it makes it on the roadmap
<lathiat> yeah i keep having to remember to lxc stop it
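A minimal sketch of pausing the local LXD-backed controller when it's idle; the container name is an assumption, use whatever `lxc list` shows for your controller machine:

```shell
# find the container backing the controller
lxc list
# stop it to quiet the disk; start it again when needed
lxc stop juju-abc123-0
lxc start juju-abc123-0
```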
#juju 2016-05-22
<jobot> hello I am not able to run any juju commands in 16.04
<jobot> command not found
<lathiat> jobot: on 16.04 there are two juju packages, juju-1.25 and juju-2.0.  the juju-1.25 package now contains the command 'juju-1'
<lathiat> jobot: you can install juju-1-default to link it to 'juju'
<jobot> ah thanks
<lathiat> or you can also export PATH=/usr/lib/juju-1.25:$PATH
<lathiat> or just use it as 'juju-1' which is the recommendation going forward
<jobot> thank you. does that mean juju-2.0 will be 'juju' or will it be 'juju-2' ?
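A sketch of the 16.04 package layout lathiat describes (package names and the bindir path as mentioned above):

```shell
sudo apt install juju-2.0       # the 2.x series
sudo apt install juju-1.25      # installs the `juju-1` command

# make juju-1 answer to plain `juju`:
sudo apt install juju-1-default
# or put its private bindir on PATH instead:
export PATH=/usr/lib/juju-1.25:$PATH
```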
<ejat> anyone can help me with this error : http://paste.ubuntu.com/16614114/
<rick_h_> ejat: ah folks were working on that last week
<rick_h_> ejat: something with dns in there and rabbitmq not working
<rick_h_> ejat: can you email that to the list? i know a couple of folks were working on it but they're not here atm
#juju 2017-05-15
<kjackal> Good morning Juju world!
<erik_lonroth> good morning
<armaan_> jamespage: Hello, I am trying to figure out the rolling upgrade process for Ceph with juju. I want to upgrade from firefly -> hammer -> jewel, Could you please let me know if there is any official documentation available for this process?
<jamespage> armaan_: there is some documentation in https://jujucharms.com/ceph-mon/ under 'Rolling Upgrades'
<jamespage> armaan_: basically you have to set the 'source' configuration option
<armaan_> jamespage: AFAIU, I will need to follow these steps: (1) set action-managed-upgrade=true (2) juju upgrade-charm (3) Set new origin "openstack-origin=cloud:trusty-mitaka"? Or am i missing some steps here
<jamespage> armaan_: no
<armaan_> jamespage: ahh, thanks let me have a look at the link
<jamespage> armaan_: just the source config option
<jamespage> armaan_: the ceph charms don't have 'action-managed-upgrade'
<jamespage> or openstack-origin
<jamespage> 'Supported Upgrade Paths' is also important
<armaan_> jamespage: please correct me if i am wrong, by executing "juju set ceph-mon source=cloud:trusty-mitaka"; the charm will upgrade ceph-mon to the jewel release?
<jamespage> armaan_: yes but you have to set via kilo first
<jamespage> armaan_: if you just try jump direct to mitaka from icehouse (stock trusty), it won't upgrade
<armaan_> jamespage: My environment is running on Liberty + Firefly and the target is to upgrade to Newton + Jewel.
<jamespage> armaan_: you'll only be able to get as far as mitaka on trusty
<armaan_> jamespage: oh, you mean Newton is not supported on trusty.
<jamespage> armaan_: liberty was aligned with hammer
<jamespage> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/liberty_versions.html
<jamespage> not jewel - be careful mixing things as client/server version mismatches between ceph releases can cause issues
<armaan_> jamespage: ok so, juju set ceph-mon source=cloud:trusty-liberty, will upgrade the ceph charm to hammer?
<jamespage> basically yes
<armaan> sorry, bad internet connection
<armaan> jamespage: this was my question before i got disconnected :(. Is it fair to assume that i will have to upgrade from 14.04 to 16.04 first, because Newton charms are not supported in Trusty?
<jamespage> armaan: no that's not correct
<jamespage> the charms are the same whatever the release, the Newton UCA is for Xenial onwards
<jamespage> armaan: however, its not possible to in-place upgrade between Ubuntu series using Juju
<jamespage> armaan: you can get trusty to mitaka, but that's the last supported OpenStack release for trusty in the UCA
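A sketch of the stepwise upgrade jamespage describes: walk the `source` option through each UCA pocket in turn rather than jumping straight to the target (the exact pocket sequence for a given starting release is an assumption; check 'Supported Upgrade Paths' in the charm README):

```shell
# one release at a time; a direct jump (e.g. icehouse -> mitaka) won't upgrade
juju set ceph-mon source=cloud:trusty-kilo
juju set ceph-mon source=cloud:trusty-liberty   # hammer era
juju set ceph-mon source=cloud:trusty-mitaka    # jewel; last UCA pocket for trusty
```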
<dakj> Can anyone help me with the Openstack Base bundle? It has been deployed on all nodes on MAAS but Nova-Cloud-Controller and Ceph-mon are pending and not completing their tasks. I've also opened a post here (https://askubuntu.com/questions/913007/openstack-base-bundle-deployed-with-juju). Thanks
<jamespage> dakj: looking
<jamespage> dakj: its a little hard to say from your question post
<armaan> jamespage: understood, thanks!
<jamespage> dakj: it would appear that the ceph-mon cluster is struggling to bootstrap
<dakj> jamespage: I have an issue with Nova-Cloud-Controller and Ceph-mon, both are stuck in pending and not completing their tasks
<armaan> jamespage: but for openstack, these steps: (1) set action-managed-upgrade=true (2) juju upgrade-charm (3) Set new origin "openstack-origin=cloud:trusty-mitaka" are okay?
<jamespage> armaan: if you set action-managed-upgrade=true you'll also have to run the 'openstack-upgrade' action on each unit of every charm in turn
<jamespage> dakj: we'll need to figure out why its not bootstrapping
<jamespage> dakj: (for ceph)
<armaan> jamespage: Ok, so only two steps (1) juju upgrade-charm <service> (2) Set new origin "openstack-origin=cloud:trusty-mitaka" ?
<jamespage> I'm wondering whether you might be seeing some memory contention, but I thought the bundle contained config to reduce the chance that happens
<jamespage> armaan: that will work
<jamespage> 1) upgrades the charm 2) tells the charm to upgrade openstack
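The two-step recipe above as commands, using Juju 1.x syntax as in this conversation (the service name `nova-cloud-controller` is just an example):

```shell
# 1) upgrade the charm itself
juju upgrade-charm nova-cloud-controller
# 2) tell the charm to upgrade the OpenStack payload
juju set nova-cloud-controller openstack-origin=cloud:trusty-mitaka

# only if action-managed-upgrade=true: additionally run the
# upgrade action on each unit of the service, one at a time
juju action do nova-cloud-controller/0 openstack-upgrade
```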
<dakj> jamespage: what do you need to figure that out?
<armaan> jamespage: awesome, thanks! :)
<jamespage> dakj: you'll need to ssh to the ceph-mon units and look at the log files
<jamespage> in /var/log/ceph
<dakj> jamespage: ok, I'm doing that and post its result
<dakj> jamespage: here it is http://paste.ubuntu.com/24580436/
<Hetfield> hi all. i deployed openstack-base but i have issues with radosgw
<Hetfield> basically when deployed on a lxc container, with basic network settings (all endpoint are the same) it's not reachable by VM in openstack world
<Hetfield> actually all the admin network is not reachable, the neutron-gateway doesn't let the VM reach the infra endpoints
<Hetfield> i.e. a guest vm cannot reach keystone
<Hetfield> anyone with a similar issue?
<dakj> jamespage: any suggestions?
<jamespage> dakj: not just from that - what does "sudo ceph -s" say?
<jamespage> Hetfield: the network topology between your instances and the control plane of the cloud is not limited by the bundle
<jamespage> Hetfield: so if the network containing floating ip's used to access your instances can't route to/from the network hosting the IP addresses
<Hetfield> jamespage: sure, actually rados is the only app that is needed by users, the others are just internal
<jamespage> Hetfield: for the API services in the control plane, then no, VMs won't be able to reach them
<dakj> Jamespage: here is http://paste.ubuntu.com/24580701/
<Hetfield> jamespage: but it looks like the vxlan packets coming from a guest VM are going to the neutron-gateway machine correctly, the router routes all but those to the admin network
<jamespage> dakj: it looks like the ceph-mon units are not able to form a new cluster for some reason
<jamespage> dakj: can I see a 'ps -aef' from all three units please
<dakj> Jamespage: here is http://paste.ubuntu.com/24580706/ for the LXD that is in maintenance.
<dakj> Jamespage: the juju status for ceph gives this result http://paste.ubuntu.com/24580714/
<jamespage> dakj: can you do the "sudo ceph -s" from all three ceph-mon units please
<Zic> hi here
<jamespage> dakj: the 'unable to detect block devices' may be because the osd-devices in the bundle uses /dev/sdb, but your VMs won't have that block device - probably /dev/vdb
<Zic> it seems that etcd snap of CDK bundle is a little... too confined: http://paste.ubuntu.com/24580734/
<Zic> (cc lazyPower / kjackal)
<jamespage> Hetfield: the neutron router on the gateway should just ship everything to the default gateway it has set
<jamespage> it is possible to add extra routes to its routing table
<jamespage> but I'd have to google to remember quite how
<lazyPower> Zic: Ah, yeah. What you can do, is path that to $HOME/snap/etcd/  and it should work as expected. I'll file a bug for that as well.
<jamespage> I expect its exposed in the api somewhere
<dakj> jamespage: here is the first one (http://paste.ubuntu.com/24580733/), the second one (http://paste.ubuntu.com/24580737/), the last one (http://paste.ubuntu.com/24580741/)
<Zic> lazyPower: thanks, I will try that (hello o/)
<jamespage> dakj: "noname-b=10.20.81.16:6789/0" - not something I've seen before
<jamespage> one of the monitors is not bootstrapped into the cluster correctly AFAICT
<jamespage> dakj: might be something time-ish
<jamespage> dakj: if clocks are not synced between the physical hosts hosting the ceph-mon units you might get this
<dakj> jamespage: it has created 3 LXC machine for CEPH-mon, 3 for CEPH-osd and 1 for CEPH-radosgw
<dakj> jamespage: the date reports 2 different times between the host with MAAS and the VM, why?
<jamespage> well that's a good question
<Hetfield> jamespage: the default routing is already working. my issue is only when routing tries to reach a network directly connected to the hypervisor hosting the neutron-gateway unit
<jamespage> dakj: which MAAS version are you using?
<dakj> jamespage: the VM where I've installed MAAS has the correct clock, the VMs used for Openstack have a different clock
<dakj> Jamespage: MAAS Version 2.1.5+bzr5596-0ubuntu1 (16.04.1)
<jamespage> dakj: timezone or clock?
<jamespage> dakj: in any case, the important bit here is the time is in sync between the VM's that are hosting the cloud
<jamespage> dakj: I'd expect those are using UTC
<dakj> jamespage: on MAAS it is CEST while the other VMs are UTC
<jamespage> yeah that's what I'd expect
<jamespage> dakj: you did the MAAS VM install by hand?
<jamespage> dakj: anyway the problem machine is juju-37af3b-2-lxd-1
<dakj> No, I've used the Ubuntu 16.04 ISO and then updated it via the stable PPA
<jamespage> dakj: can you check the available free memory on that machine please
<dakj> jamespage: all 4 nodes dedicated for Openstack have 12GB RAM each one, and 250GBx2 of HDD
<jamespage> yeah I see the spec - but how much free memory does machine 2 have?
<jamespage> once it has everything running on it
<jamespage> or trying to run on it
<Zic> lazyPower: thanks, it works
<lazyPower> Zic: cheers :) sorry you hit that snag
<jamespage> dakj: my thought process here is that something is inhibiting the third mon unit from joining the cluster properly - trying to figure out what
<dakj> Jamespage: here its free memory http://paste.ubuntu.com/24580784/
<jamespage> hmm it's been in and out of swap a lot
<lazyPower> Zic: i think the crux of the issue here is etcdctl is shipping with the etcd server bin, and if we strictly confine one it affects the other. In order to get the behavior you're looking for we'd need to package up etcdctl as a separate snap and in turn change its confinement flags. I may be wrong and we might be able to just add some plugs to etcdctl. But i do believe it presumes you'll be working in $HOME with
<lazyPower> its current confinement model.
<lazyPower> Zic: you might have been able to get away with just $HOME, i believe etcdctl has the home slot declared.
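A sketch of the workaround: stage files under $HOME/snap/etcd/ where the strictly confined snap can read them (the file names are illustrative; etcdctl v2-style TLS flags):

```shell
mkdir -p ~/snap/etcd/common
cp client.crt client.key ca.crt ~/snap/etcd/common/

etcdctl --cert-file ~/snap/etcd/common/client.crt \
        --key-file  ~/snap/etcd/common/client.key \
        --ca-file   ~/snap/etcd/common/ca.crt \
        cluster-health
```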
<dakj> jamespage: all vnodes on MAAS is using UTC, while the node where is installing MAAS uses CEST. On VMware ESX Is configured as NTP server ntp.ubuntu.com
<dakj> jamespage: also the VM with JUJU has UTC as its clock!!!! Why does the host server have the correct clock and the nodes not???
<jamespage> dakj: anything deployed by MAAS will have UTC
 * jamespage thinks that is generally a good practice for servers btw
 * jamespage spent too long doing 0100 support to 'watch' daylight saving changes in a past life
<lazyPower> +1
<jamespage> management used to insist that some things got shutdown for the lost/gained hour...
<jamespage> just in case they got confused....
<dakj> jamespage: all the VMs are deployed on VMware ESX, I don't understand why the VM used for MAAS has CEST and the other VMs UTC!!! This way the clock between the host and the VMs is never in sync.
<dakj> jamespage: I've changed the clock on MAAS from CEST to UTC like the nodes, now I'm re-deploying all the nodes and running the bundle, I'll say if something changed or not. See you later
<Zic> lazyPower: is changing the storage driver of Docker supported? I saw it's overlay for now, but as we have "out of inodes" issues, we saw that overlay2 might be a solution
<lazyPower> Zic: it's not exposed, but you bet if that's something you need supported i can expose that
<lazyPower> Zic: is this in a test cluster that you can just drop that graph driver in and give it a trial before we expose it?
<lazyPower> i'm not certain what types of headaches that may bring in for operators
<Zic> lazyPower: https://docs.docker.com/engine/userguide/storagedriver/images/driver-pros-cons.png
<Zic> it's a bit complex :(
<lazyPower> Zic: i dont see overlay2 in this chart at all
<Zic> https://docs.docker.com/engine/userguide/storagedriver/selectadriver/#overlay-vs-overlay2
<Zic> it's merged I think
<Zic> The overlay driver has known limitations with inode exhaustion and commit performance. <= we are touched by the "inode" problem, we are OK with the performances actually
<Zic> lazyPower: I will do some tests on testing cluster, I will report to you what I discovered :)
<lazyPower> Zic: thanks for driving that. I'm happy to support you in this effort though
<lazyPower> so keep me in the loop and lets do a discovery on what needs to happen. I think it may be as simple as exposing a graph config option, but we may need supporting packages yeah?
<lazyPower> i guess i should hold my questions until you've done discovery
<Zic> for now, the switch from overlay to overlay2 is my main fear, as it will need a full docker stop, change driver, clean overlay FS, start docker, re-pulling all image containers of the cluster
<Zic> I think overlay2 is already a part of the docker package of Ubuntu archive
<Zic> > 1.11 is needed said the doc
<Zic> Ubuntu have 1.12.6
<lazyPower> ah yeah
<lazyPower> doing the backend graph migration is going to be intense for you on your existing deployment
<lazyPower> i can see why that would induce concern
<lazyPower> Zic: i wonder if it wouldn't make more sense for you in this case to deploy a fresh worker pool set and migrate stuff instead of attempting to do an in place update
<lazyPower> basically cordon + drain the overlay nodes, and let k8s migrate to overlay2
<Zic> lazyPower: on the production cluster, some kubernetes-worker are physical machines
<Zic> so it's not that easy for this ones :) for VMs it will be OK
<lazyPower> Zic: ack. So i'll work with you to make sure this isn't nail biting.
<Zic> it will maybe take an outage to do the replacement migration from overlay to overlay2
<Zic> but if it's planned and at midnight, it's not a big problem :}
<Zic> my concern is more about whether it will work just by stopping docker, changing the driver, cleaning all of /var/lib/docker/overlay, starting docker, and letting kubelet handle the re-pull for every pod/container
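Zic's per-node procedure, sketched with the cordon/drain step lazyPower suggested (the node name and the daemon.json location are assumptions; expect every image on the node to be re-pulled afterwards):

```shell
# evacuate the node so k8s reschedules its pods elsewhere
kubectl cordon worker-1
kubectl drain worker-1 --ignore-daemonsets

sudo systemctl stop docker
# switch the driver, e.g. in /etc/docker/daemon.json:
#   { "storage-driver": "overlay2" }
sudo rm -rf /var/lib/docker/overlay    # discard old image layers

sudo systemctl start docker
kubectl uncordon worker-1
```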
<catbus1> Hi, does nova-cloud-controller charm support subordinate charms?
<cholcombe> is there a library for juju storage functions?  I couldn't find anything in charmhelpers
<lazyPower> cholcombe: there isn't any library that i'm aware of. This is a prime opportunity to contribute charms.storage though :)
<cholcombe> lazyPower: thanks.  wolsen pointed out that hookenv has it
<lazyPower> oh nice
<cholcombe> lazyPower: brain not working haha
 * lazyPower still likes the idea of charms.storage though
<cholcombe> yeah i like that also
<lazyPower> charms.storage.format('xfs', '/path/to/bd')   charms.storage.mount('/path/to/bd', 'path/to/mount', persist=True)
<lazyPower> and have sub classes for specifics like doing tuning to zfs pools or whatever the case may be
<lazyPower> #wishlisted
<Budgie^Smore> o/ juju world
<thumper> o/
#juju 2017-05-16
<kjackal> Good morning Juju world!
<jwd> morning
<dakj> jamespage: hi, I remade the whole lab, all nodes have the same clock as the host. The issue is still present on Nova-Cloud-Controller and Ceph-Mon
<dakj> Does anyone have a solution for that issue (https://askubuntu.com/questions/913007/issue-with-nova-cloud-controller-and-ceph-mon-with-openstack-base-bundle)?
<dakj> Can anyone help me resolve the issue? thanks
<icey> dakj it looks like your ceph-osd charm doesn't have any block devices configured with `osd-devices`
<dakj> Icey: if you have a look at its juju status, ceph-mon/11 is in active status, ceph-mon/9 is blocked and ceph-mon/10 is in maintenance.
<icey> dakj: can you paste the `juju status` into paste.ubuntu.com
<icey> dakj: https://i.stack.imgur.com/5fkrF.png looks like it's clustered
<dakj> icey: here is it (https://paste.ubuntu.com/24587013/)
<icey> right dakj, the ceph-mons are (Unit is ready and clustered) and the OSDs are (No block devices detected using current configuration)
<icey> you have to configure block devices for the OSD charms (and Ceph) to provide block storage to the cloud
<jamespage> dakj: still wedged on that second unit right?
<jamespage> odd
 * jamespage ponders whether its a network mtu mismatch of some description
<dakj> Jamespage: yes
<jamespage> dakj: might be worth a check
<dakj> icey: how can I do that after deploying the bundle?
<jamespage> dakj: 99% of odd problems turn out to be some sort of network misconfig in my experience
<jamespage> dakj: I'd use iperf
<jamespage> and test the performance between the lxd container with the problem ceph-mon
<jamespage> so say from ceph-mon/10 -> ceph-mon/9 and from ceph-mon/11 -> ceph-mon/9
<jamespage> there is a flag you can use to display the actual MTU
<icey> dakj: something like : `juju config ceph-osd osd-devices=/dev/sdb` where /dev/sdb is a space separated list of block devices or directories
<dakj> Icey: do I have to run that before launching the deploy of Openstack Base via juju?
<icey> dakj: if you want to set OSD devices up before deploy, you would want to put them into the bundle, that command is something you can run to add devices after deploying
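Putting the devices into the bundle before deploy, as icey suggests, looks roughly like this. This is a sketch of a bundle fragment, not the full openstack-base bundle; the device path is an example for this lab's hardware, and older bundle formats use a `services:` key instead of `applications:`:

```yaml
# Bundle fragment: configure OSD devices up front so ceph-osd
# is not blocked on "No block devices detected" after deploy.
applications:          # may be "services:" in older bundle formats
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    options:
      # space-separated list of block devices or directories
      osd-devices: /dev/sdb
```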
<admcleod> jamespage: theres something else that might be useful...
<jamespage> admcleod: suggest away
<dakj> icey: does the command have to be run against the ceph-mon application?
<admcleod> oh no, nvm, wrong idea. gotta run to ameeting
<icey> dakj: you would run that command from your juju client
<dakj> Icey: ok, Let me try that, I'll inform you about the result soon.
<dakj> icey: it gives me this error "ERROR application "ceph-osd" not found (not found)"
<icey> dakj: and you ran that on the same machine that you have been running juju status on? I just deployed ceph-osd and ran that command, and I can confirm that it works
<dakj> Icey: yes
<dakj> Icey: wait, I've 4 virtual nodes used for deploying Openstack and another one for Juju. On that last one, do I have to deploy ceph-osd first, and then deploy Openstack base via the Juju GUI? Is that right?
<icey> I don't understand what you're asking dakj, but you should be configuring the ceph-osd application from the same client where you deployed the openstack-base bundle
<dakj> icey: sorry, let me try to explain
<dakj> Icey: I've 1 VM used for MAAS, 1 VM used for the Juju GUI and 4 VMs used for OPENSTACK. I tried to deploy the Openstack Base bundle via the Juju GUI. Where do I have to run the command you suggested?
<icey> from what machine did you run `juju bootstrap`
<icey> alternately, you could change that configuration value from within the juju gui
<dakj> icey: on MAAS used this command "juju bootstrap maaslab maaslab-controller --to juju.maas"
<icey> ok, so either you should run that command on that MAAS node, or you should update the configuration through the GUI
<dakj> Perfect, on that MAAS node I obtained this error https://paste.ubuntu.com/24587244/
<icey> dakj, then you did not run `juju bootstrap...` on that MAAS node
<icey> dakj, can you try to change the osd-devices configuration option from within the Juju GUI for the cpeh-osd application instead
<dakj> Ice: in ceph-osd is already /dev/sdb.
<icey> dakj: the value /dev/sdb is a default, you need to configure it to match your disk setup
<dakj> Ice: this is the node dedicated to Openstack on MAAS https://pasteboard.co/706aKSJHl.png
<icey> dakj: it can be a space separated list of either disks or directories
<icey> dakj: according to your bundle, machines 12, 13, and 14 are the machines with ceph-osd on them, what disks do those machines have available?
<dakj> Ice: I have to run the commit on Juju to see that, because I cleaned everything to re-run it from the beginning.
<dakj> icey: I've started that; when it's finished I'll check what you asked
<icey> dakj: I'm about to End of Day but there are other people around who can help with questions :)
<dakj> Ice: thanks a lot for your support. Have a nice day, see you soon.
<dakj> icey: now on ceph-devices there is /dev/vdb
<icey> does /dev/vdb exist on the ceph-osd nodes dakj ?
<icey> dakj I'm EOD but cholcombe can probably help with ceph questions
<cholcombe> dakj: o/
<dakj> Icey: here is the fdisk https://paste.ubuntu.com/24587473/
<cholcombe> dakj: so they all have sdb 400GB on them.
<dakj> Cholcombe: yes
<dakj> icey: thanks a lot
<dakj> Cholcombe: on ceph-osd is present /dev/vdb
<dakj> Cholcombe: I'm EOD, can we meet tomorrow to see how to resolve my issue?
<cholcombe> dakj: sure
<dakj> Cholcombe: thanks have a nice day and see you tomorrow with my lab...... Are you present in the morning or evening?
<cholcombe> dakj: i'm on pacific west coast time
<dakj> Cholcombe: I'm in Europe time :-) see you!!!
<bdx> @stokachu
<rahworkx> hello all, can someone point me in the direction of how to uninstall "apt-get install conjure-up" entirely so I can use snap to install successfully?
<bdx> http://installion.co.uk/ubuntu/yakkety/universe/c/conjure-up/uninstall/index.html
<bdx> someone needs to take care of that bad boy
<rick_h> SimonKLB: ping
<Budgie^Smore> woot! almost time for a final in person interview round!
<lazyPower> Budgie^Smore: good luck mate
<Budgie^Smore> got 3 in person final rounds this week! this job hunting is a full time thing!
<Budgie^Smore> oh and for anyone interested, I have been told by recruiters that there are more positions in the area than we have good people to fill them!
<rahworkx> hello all, when deploying cdk with conjure-up on a local bare metal server the etcd nodes are failing with "Missing relation to certificate authority." Are there any suggestions for a fix?
<stokachu_> rahworkx: easyrsa should be deployed and active
<rahworkx> stokachu_: it is deployed with msg "Certificate Authority ready."
<stokachu_> rahworkx: whats output of `juju status --format yaml`
<rahworkx> stokachu_: https://paste.ubuntu.com/24588600/
<stokachu_> rahworkx: how long has it been blocked for?
<stokachu_> maybe lazyPower has an idea ^
<lazyPower> rahworkx: interesting, i dont see the etcd->easyrsa relation declared in that status yaml. try 'juju add-relation etcd easyrsa' and see if that resolves the status message
<rahworkx> stokachu_: hmm, I was waiting until each app finished installing before selecting the next.. I kicked off the last app "workers" and its status is "started" now.
<stokachu_> rahworkx: yea relations dont get set until after everything is deployed
<stokachu_> b/c there could be applications the relations require that are not yet known to juju
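The check lazyPower did by eye on the status YAML can also be scripted: pull `juju status --format json` and look for the relation. A sketch, assuming the JSON status layout with a top-level `applications` map that carries a per-application `relations` section (the exact layout can vary between juju versions):

```python
import json
import subprocess

def related(status, app_a, app_b):
    """True if any relation of app_a lists app_b as a counterpart."""
    rels = status.get("applications", {}).get(app_a, {}).get("relations", {})
    return any(app_b in partners for partners in rels.values())

# Usage against a live model (commented out; needs a bootstrapped juju):
#   status = json.loads(subprocess.check_output(
#       ["juju", "status", "--format", "json"]))
#   if not related(status, "etcd", "easyrsa"):
#       print("missing: juju add-relation etcd easyrsa")
```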
<rahworkx> lazypower: Previously, when selecting all apps to deploy at once, it was failing with a "failed to find hook" msg. This is probably hardware related ("older server").
<lazyPower> failed to find hook?
<lazyPower> thats... not expected at all
<stokachu_> yea that's a new one to me
<lazyPower> the hooks are charm components, the executable events we invoke when things happen
<lazyPower> like that relationship for example will trigger a certificates-relation-joined hook
<rahworkx> my mistake.. may have seen that elsewhere..
<lazyPower> rahworkx: there's a known deficiency right now where etcd isn't starting due to some changes in how snap presents version info in 'snap info etcd'
<lazyPower> there's a PR landing and we'll have a fix out the door today, that might have been what you saw... if it was etcd that was the problem child
<rahworkx> lazypower: this is the error I saw before.... https://paste.ubuntu.com/24588663/
<lazyPower> rahworkx: yep thats the bug i was just referencing
<lazyPower> that happened sometime between last night and today, it didn't seem to affect existing deployments, only new deployments.
<rahworkx> ohh ok, makes sense
<rahworkx> stokachu_: lazyPower: thanks for shedding some light on that...
<lazyPower> np rahworkx - i'll ping you when the fix gets published in the bundles
<lazyPower> we're working through some nuances with this breakage, and have a functional fix but its still a bit brittle. Trying to make this more robust so you dont find this six months later.
#juju 2017-05-17
<dakj> icey: hi icey, are you here?
<icey> Indeed dakj
<dakj> icey: do you have time to help me to resolve the issue with Ceph-Mon?
<icey> dakj: it looked like the mons were clustered happily, except one was showing stale status
<icey> dakj: did updating your ceph-osd configuration get disks working?
<dakj> icey: on ceph-osd, osd-devices is now /dev/vdb, while before the commit it was /dev/sdb. Units ceph-osd/12, ceph-osd/13, and ceph-osd/14 are blocked. On each node the fdisk command reports this https://paste.ubuntu.com/24591531/. Its juju status is here https://paste.ubuntu.com/24591542/
<icey> dakj: can you log into one of the ceph-mon units (`juju ssh ceph-mon/12`) and run `sudo ceph -s` for me?
<Alex_____> team, i am getting this while running yum command  http://pastebin.ubuntu.com/24588230/ in centos
<Alex_____> any idea how to resolve this
<anrah> Alex_____: are you bootstrapping controller?
<Alex_____> @anrah
<Alex_____> i was trying to run a yum command and it's giving some error
<Alex_____> anrah:
<Alex_____> anrah: i am getting this inside the vm
<anrah> Alex_____: I mean, how is this related to juju? And it seems like you are using Vagrant? You must configure your Vagrant box to have access to the Internet
<Alex_____> anrah: i just did vagrant config for some testing purpose.. and by default if i go inside the centos vm box i am not able to run the yum command itself.. can you help me on this
<dakj> icey: here is it https://paste.ubuntu.com/24591604/
<icey> dakj: so that answers part of it, the /12 machine is definitely not clustered :) could you also run that from one of the other 2 machines (ceph-mon/13 for example)?
<dakj> Icey: here is it https://paste.ubuntu.com/24591622/
<icey> dakj: can your ceph* nodes talk to each other on the network? it looks like it has failed to bring one of the mon nodes up, and like the OSD nodes have never managed to register themselves with the mons
<jamespage> icey, dakj: has the path between the units been verified for network MTU configuration etc... using iperf?
<jamespage> feels like the mon is trying to bootstrap, but failing due to some external reason
<jamespage> suspicion would be packet fragmentation but I may be wrong
<icey> jamespage: that's what it looks like to me, it seems like 2 of the mons were fine (and successful), one of the mons and the OSDs are left in the cold
<dakj> Icey: I'm pinging between the lxd nodes. All nodes respond
<jamespage> dakj: ping won't tell you the right things
<jamespage> use iperf with the mtu flag set
<jamespage> it will give you perf data and validate the mtu settings
<dakj> Icey: here is the paste of the ping between lxc machine https://paste.ubuntu.com/24591663/
<kjackal> Good morning juju world!
<dakj> Jamespage: I can try that, but if the issue is MTU, I think the other lxc machines would have the same issue too.
<jamespage> maybe
<jamespage> lets see
<dakj> James-age: where do I have to install that?
<dakj> Ice, James-age: now the situation of juju status is changing https://paste.ubuntu.com/24591673/
<dakj> Now all ceph-mon and ceph-osd are in blocked
<jamespage> dakj: you'll need to install iperf on the machines you want to test connectivity between
<jamespage> dakj: and then on one run iperf -s -m and from another do iperf -c <IP of first machine> -m
<jamespage> and then vica versa
<jamespage> network problems can be uni-directional
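jamespage's iperf recipe, spelled out end to end. This is a sketch to be run on a pair of units and then repeated with the roles swapped; the IP address is a placeholder for the server-side machine:

```shell
# On the first machine (server side):
sudo apt-get install -y iperf
iperf -s -m                 # -m reports the MSS/MTU observed

# On the second machine (client side):
sudo apt-get install -y iperf
iperf -c 10.20.81.10 -m     # 10.20.81.10 is a placeholder server IP

# Then swap server and client roles and repeat,
# since network problems can be uni-directional.
```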
<dakj> I have to install that on 2 or more LXC machines and test the MTU, right?
<dakj> James-age & icey: the juju status has returned to the original status (https://paste.ubuntu.com/24591699/)
<dakj> James-age, icey: the result between the VM with MAAS and the LXC vm of openstack-dashboard/4 is here https://paste.ubuntu.com/24591716/
<dakj> Here is between open openstack-dashboard/4 and ceph-mom/13 https://paste.ubuntu.com/24591721/
<dakj> Ice, Jamespage: from the results I don't think the issue is about that..... I hope I've read them correctly.
<kklimonda> is there a way to override how juju installs lxd and lxcfs?
<dakj> Icey, Jamespage: any idea?
<jamespage> dakj: not sure - can you do the tests between the ceph-mon units please
<dakj> Jamespage: no response between ceph-mon/12 and ceph-mon/13 or /14; between /13 and /14 it's fine https://paste.ubuntu.com/24591867/
<jamespage> dakj: all three machines need to be able to communicate with each other
<dakj> I think the issue is with /12 because it's in maintenance
<dakj> Then osd/12, osd/13 and osd/14 are blocked
<dakj> Juju status https://paste.ubuntu.com/24591878/
<anrah> Is there a way for amulet to load charm configuration from a file?
<kklimonda> how does jujucharms.com versioning work?  is it set in stone, and are there guarantees that a) charm will never be deleted and b) no two releases will have the same version? If so, does this hold for the charms not in the "main" namespace, for example cs:~sdn-charmers/keystone-0 ? If not, what's the correct way to guarantee charm version over the life of the
<kklimonda> deployment?
<dakj> Jamespage: I was thinking: what if the issue is on ceph-osd/12, ceph-osd/13, and ceph-osd/14? Why is their status blocked in juju while the gui shows no error?
<dakj> just a hypothesis
<stub> kklimonda: It is guaranteed that no two releases will have the same version. It is possible for someone to revoke access to a charm, effectively deleting it (and maybe you can really delete it)
<stub> kklimonda: This goes for both namespaced charms and the top level namespace
<stub> kklimonda: If you want to pin a particular version, deploy cs:~sdn-charmers/keystone-0 and never run upgrade-charm
<stub> kklimonda: If you want to upgrade to a particular version, use the --switch argument to juju upgrade-charm
<kklimonda> stub: does keystone-0 always point to the same version of the charm, assuming the maintainer doesn't do anything stupid like deleting and uploading it again with different code?
<stub> kklimonda: But most deployments want latest stable, which would be cs:~sdn-charmers/keystone (ie. no revision, default channel)
<stub> kklimonda: It will always point to the same version of the charm. If the maintainer deleted it and uploaded it again, it would get a new revision
<kklimonda> stub: I assume most larger deployments want to have a tested version of the charm, to avoid any drift in a longer period
<kklimonda> especially if they have more than one deployment (for example testing and prod)
<stub> If they are doing their own testing, yes. They will deploy the last known good revision.
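stub's pinning advice as commands, using the keystone charm path from the question as the example (a sketch; `keystone-1` below is a hypothetical newer revision, not a real one):

```shell
# Pin: deploy an explicit revision and never run upgrade-charm.
juju deploy cs:~sdn-charmers/keystone-0

# Track latest stable instead (no revision, default channel):
juju deploy cs:~sdn-charmers/keystone

# Later, upgrade to a specific revision you have tested
# (keystone-1 is a hypothetical example revision):
juju upgrade-charm keystone --switch cs:~sdn-charmers/keystone-1
```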
<kklimonda> my only concern is that someone can revoke access to the charm, that sounds like a npm left-pad nightmare.
<kklimonda> I'll have to reconsider maintaining local copy of all charms
<stub> I don't think it would happen in the curated top level namespace, and may not be possible.
<stub> The eco system team or charm store team would know more
<kklimonda> mhm
<stub> I think mortals can only control access to their own namespace. The charms promulgated to the top level namespace are maintained by the ~charmers team.
<stub> People in the US timezone will know more
<stub> Nothing to stop you maintaining your own forks though if that is what you prefer.
<stub> You can even use the charm store to do it ;)
<stub> We actually deploy most of our stuff from a local copy (which we pull from the charmstore), because we need to inject site specific hooks before we deploy.
<dakj> Jamespage: is there any other task I can make to resolve that? thanks
<jamespage> dakj: tbh without understanding why you're hitting the problem you have, I don't have a step forward for you atm
<jamespage> stub: actually "The charms promulgated to the top level namespace are maintained by the ~charmers team" is not 100% true any longer
<jamespage> its possible for any team to own promulgated charms (see ~openstack-charmers or ~ganglia-charmers for examples)
<stub> I've only worked with my local namespaces, and had them magically promulgate to the top level.
<jamespage> dakj: you could strace the ceph-mon on the unit that's failing to join - might give a sniff of what's up
<jamespage> dakj: hmm just noticed this
<jamespage> [  4]  0.0-19.1 sec   525 MBytes   231 Mbits/sec
<jamespage> that appears out of sync with the other two metrics you recorded
<dakj> James-age: do you think it's a network issue?
<dakj> And why do the other lxc vms work well??? the issue is only on the lxc containers that have ceph-mon........
<dakj> James-age: do you know the credentials for the login on the Openstack dashboard?
<jamespage> dakj: you need to prefix messages to me with 'jamespage' otherwise I don't get notifications
<jamespage> dakj: if you're working from the openstack-base bundle then the username and password are
<jamespage> admin/openstack
<dakj> jamespage: ok, sorry, it's the autocorrection that changes your nick.
<jamespage> dakj: that's entirely likely
<dakj> jamespage: I send you a private text
<jamespage> dakj: your deployment is still not complete; I'm pretty sure that you have some sort of networky issue in your virtual lab setup, but its hard to be specific as to what exactly that is
<jamespage> the fact that hooks are still trying to run to complete the deployment sniffs like packet fragmentation type issues
<jamespage> dakj: you might want to try to validate you virtual lab independently of trying to deploy openstack itself
<jamespage> dakj: https://jujucharms.com/u/admcleod/magpie is useful here - you can deploy that charm to both physical machines and to lxd containers on the machines and it will check and report on performance and mtu configuration/mismatches
<jamespage> dakj: fwiw this is what I do on hardware prior to even trying to deploy openstack
<jamespage> as it flushed out issues that are hard to diagnose later when you see things behaving oddly
<jamespage> dakj: ultimately I think MAAS will grow this type of feature, but for now its easy todo with a charm
<jamespage> dakj: that last status output you pasted - some units had completely disappeared
<cnf> \o jamespage
<cnf> i am in london atm
<jamespage> hey cnf
<jamespage> enjoy the city ;)
<cnf> eh, meeting / demos all day
<dakj> jamespage: my lab is based on an IBM server with ESX as hypervisor and it's connected directly to the firewall via a switch. On ESX I've created the whole environment; MAAS, Landscape, Juju and Openstack are all VMs.
<dakj> MAAS has been configured with 2 vnets (https://paste.ubuntu.com/24592671/): the first one is for the DNS/DHCP service, the other one is public, as suggested here https://jujucharms.com/openstack-base/
<jamespage> dakj: all one physical server?
<dakj> yes
<jamespage> dakj: what type of vmware switch are you using?
<jwd> hello
<dakj> I've an IBM System x3650 M4 with 64GB of RAM and 4tb of storage
<jwd> just doing my first experiments with juju here
<rick_h> jwd: welcome
<randomhack> having a problem bootstrapping to openstack if the keystone endpoint doesn't fit this pattern https://host:port/version - my keystone is on https://host/keystone/version
<jwd> rhx
<jwd> thx
<randomhack> ERROR cannot set config: cannot create a client: invalid major version number /keystone/v3: strconv.Atoi: parsing "/keystone/v3": invalid syntax
<dakj> jamespage: yes  it's a only physical server (I've an IBM System x3650 M4 with 64GB of RAM and 4tb of storage)
<jwd> from watching the chat here most ppl seem to use it to deploy an openstack environment?
<jamespage> dakj: yeah - I was really after the vmware virtual switch type and configuration being used
<jamespage> jwd: agreed alot of the conversation would lead you towards that conclusion, but it does alot of other things as well
<dakj> jamespage: I've 1 vSwitch with 2 vnet (10.20.81.0 'n 10.20.82.0) both are in Promiscuous mode actived
<jamespage> dakj: that last nugget was what I was looking for
<jamespage> thanks for confirming
 * jamespage remembered something from way back about having to have that enabled, otherwise the LXD containers never get network access
<SimonKLB> is it possible to register a localhost/lxd controller on a remote client?
<SimonKLB> im able to access the gui, but when trying to register gives me:
<SimonKLB> ERROR unable to connect to API: x509: certificate is valid for anything, juju-apiserver, juju-mongodb, localhost, not [dns]
<rick_h> SimonKLB: no, not at this time. There's work going on to build on lxd to make it more cloud-like but it's in flight
<SimonKLB> rick_h: got it! thanks
<lazyPower> #TIL juju status now reports the progress of a lxd image import
<lazyPower> \o/
<dakj> jamespage: but do the MAAS subnets have to be configured in this way: eth0 (https://pasteboard.co/7oi5M27zS.png) and eth1 (https://pasteboard.co/7oitlHdJm.png)?
<freyes> hi marcoceppi , could you take a look at this PR when you have some time? https://github.com/marcoceppi/charm-ubuntu/pull/5
<magicaltrout> lazyPower: should i be able to update from CDK deployed a few months ago to current?
<magicaltrout> its not a trick question, just want to know if i can run juju update-charm or whatever
<jwd> can a controller be deleted somehow?
<lazyPower> magicaltrout: yes, there's upgrade instructions on the k8s docs on how to do an in place upgrade
<jwd> ah found it
<magicaltrout> oh yeah lazyPower
<magicaltrout> because the next level up said "ubuntu" i figured it was ubuntu and not juju
<magicaltrout> iykwim
<lazyPower> magicaltrout: https://kubernetes.io/docs/getting-started-guides/ubuntu/upgrades/
<magicaltrout> yeah i see it
<dokerya> Hi
<jwd> did i already mention that i like what i see so far ;-)
<SimonKLB> jwd: it sure is awesome :)
<jwd> yeah this will help me alot to speed up my building and testing processes
<SimonKLB> jwd: are you deploying something from the charmstore or are you planning on building charms for your own applications?
<jwd> i will build my own soon
<SimonKLB> jwd: cool! good luck :)
<jwd> we run a multi tier stack built of a lot of components. and i've been needing to remodel all of that.
<SimonKLB> jwd: then juju will be perfect for you :)
<jwd> i will start with some of the charms i seen and model around those
<SimonKLB> sounds like a great approach
<jwd> so many components to rethink .hehe
<SimonKLB> it might be quite a bit of work to get it all charmed, but in the end im sure that it's going to be extremely rewarding :)
<jwd> i did all we have so far by hand and with ansible roles. we run around 80 vms in an opennebula cloud atm :-)
<jwd> so anything helping me to model that faster is a benefit
<jwd> i just need to think about how to get that into a production ready setup asap
<SimonKLB> jwd: i actually think it's possible to re-use a lot of the work youve done with ansible already
<SimonKLB> someone else can step in and correct me if im wrong
<jwd> i am sure i can reuse my work.
<SimonKLB> jwd: https://jujucharms.com/docs/2.0/about-juju :)
<jwd> i started with this yesterday, so i will need to learn a bit about the basics
<jwd> as usual. alot new stuff to learn :-)
<jwd> hehe got my development notebook to its limits :-)
<jwd> guess its time to claim some funds for a bigger lab environment
<Zic> lazyPower: hi, hey, a simple question today: what is precisely "GA"? I saw this many times in Ubuntu Insights concerning CDK
<Zic> it seems to be a development acronym that I don't distinguish in English :>
<lazyPower> Zic: General availability (GA) is the marketing stage at which all necessary commercialization activities have been completed and a software product is available for purchase, depending, however, on language, region, electronic vs. media availability.
<Zic> oh, ok :)
<Zic> thanks
<lazyPower> np
<SaMnCo> magicaltrout: have a look at my last post: https://goo.gl/22invt and jump to the conclusion ;)
<lazyPower> SaMnCo: your post is already out of date :) that edge etcd charm is now in stable <3
<SaMnCo> aouch :D
<SaMnCo> fixing...
<SaMnCo> fixed
<lazyPower> <3 like a boss sir. Thanks for keeping the world in the loop that we're one of if not the best solution to get moving with GPU's
<SaMnCo> oh yeah :D
<SaMnCo> I think the best actually. Really not easy with other stuff as you need to prep the drivers from cloud-init, which means adding logic to id if a node has GPUs or not...
<lazyPower> :) i was allowing room for other opinions, no matter how iffy they might be :D
<lazyPower> i'm clearly biased
<SaMnCo> me too, but I've been testing a few things to understand how our UX differs from others, and I really really really like how the GPU stuff comes in
<SaMnCo> this is from my most objective self, so be proud
<SaMnCo> I wouldn't blog about it otherwise, opinions are my own on medium
<lazyPower> <3 its taken a village but the effort has certainly been worth it
<Budgie^Smore> o/ juju world
<lazyPower> \o Budgie^Smore
<Budgie^Smore> are we having fun yet lazyPower?
<lazyPower> Budgie^Smore: yeah, i'm gutting TLS key authentication and replacing with basic-auth/token-auth w/ pass/token rotation.
<lazyPower> how about yourself?
<Budgie^Smore> lazyPower oh nice! so SSO should be easier to implement then? ... waiting for feedback from the interview yesterday, prepping for another tomorrow and a phone screen today
<lazyPower> Budgie^Smore: well its more like, i skipped all the authentication/authorization steps and tried to do sso without having any of those primitives in place and then got mad when it didn't work
<Budgie^Smore> lazyPower ain't that always the way ;-)
<lazyPower> i had some colleagues course correct me and check facts and we're now rebuilding that vector from the ground up, because we made some pretty obtuse assumptions when we first landed our auth model
<Budgie^Smore> lazyPower "oh this should be easy... damn it! why isn't this working?!?!"
<Budgie^Smore> lazyPower well you know what they say about assuming anything ;-)
<lazyPower> it makes us all grow in the end?
<Budgie^Smore> lazyPower "assume = to make an 'ass' out of 'u' and 'me'." - http://www.urbandictionary.com/define.php?term=Assume
<lazyPower> Budgie^Smore: language! :P
<Budgie^Smore> lazyPower sorry is 'arse' more acceptable ;-)
<magicaltrout> lazyPower: Waiting for kube-system pods to start
<magicaltrout> actually might just be slow standby
<magicaltrout> sweet
<magicaltrout> forget it
<magicaltrout> upgrade works amazing
<lazyPower> magicaltrout: tweet that :)
<magicaltrout> shipped
<lazyPower> magicaltrout: <3
<jwd> is there a way to disable the automatic updates when a charm creates a node?
<jrwren> jwd: a charm could have the side effect of disabling the underlying ubuntu unattended-upgrades, but there is no built in juju way, a charm would need that, preferably as a non-default config option.
<jwd> oki
<jwd> so much to learn :-)
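The side effect jrwren describes usually comes down to writing an apt periodic config. A sketch of a helper a charm hook might call; the file path is assumed from stock Ubuntu, and per jrwren's advice this should be gated behind a non-default charm config option rather than applied unconditionally:

```python
# Render the apt periodic config that switches unattended upgrades on/off.
# A charm hook could write this to /etc/apt/apt.conf.d/20auto-upgrades
# (path assumed from stock Ubuntu's unattended-upgrades package).

def auto_upgrades_conf(enabled):
    """Return the 20auto-upgrades file contents for the desired state."""
    flag = "1" if enabled else "0"
    return (
        'APT::Periodic::Update-Package-Lists "%s";\n'
        'APT::Periodic::Unattended-Upgrade "%s";\n' % (flag, flag)
    )

def write_auto_upgrades(enabled, path="/etc/apt/apt.conf.d/20auto-upgrades"):
    """Apply the rendered config (call from a hook, gated on a config option)."""
    with open(path, "w") as f:
        f.write(auto_upgrades_conf(enabled))
```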
<Budgie^Smore> wow my kernel upgrade knowledge is way out of date! so how does juju handle livepatch?
<rick_h> Budgie^Smore: it doesn't since you have to enable it and it has to be associated to your account?
<Budgie^Smore> oh I get that part but then there is system level things that need to happen. I suppose that could be a custom os level image... still pretty interesting tech, wonder how it performs against a 4.x kernel no reboot upgrade
<Budgie^Smore> just goes to show how long it has been since I researched that problem though lol ;-)
<umbSublime> Is there a node limit, for a self-hosted MaaS with juju to deploy openstack ?
<rick_h> umbSublime: not really, it's up to the scaling of the maas/juju controller to handle the load.
<rick_h> umbSublime: what are you looking for?
<umbSublime> I think I got confused by the 10 Node free limit for autopilot
#juju 2017-05-18
<dakj> jamespage, icey: yesterday I removed all the VMs and tried to install Openstack with conjure-up using a single VM with 64GB of RAM and 2x 1TB of HDD, and the result was perfect; Openstack installed correctly. Now that I've rebuilt the earlier setup, the issue with Ceph is there again. Why is the installation correct with conjure-up but not manually with juju???
<kjackal> good morning juju world
<jamespage> dakj: conjure-up on a single vm will be all contained within the same machine - i.e. nothing goes outside of the machines to the VMware vSwitch
<jamespage> dakj: this is a strong pointer that the problem lies in the network connectivity between machines or lxd containers on different machines
<dakj> jamespage: In MAAS, must the subnets be configured in this way: eth0 (https://pasteboard.co/7oi5M27zS.png) and eth1 (https://pasteboard.co/7oitlHdJm.png)?
<dakj> jamespage: I've deployed MagPie on another node on MAAS now; how do I use it on my lab?
<jamespage> dakj: charm README has some details on usage
<jamespage> https://jujucharms.com/u/admcleod/magpie
<jamespage> dakj: basically you want to replicate the unit structure you have with openstack, but using magpie
<jamespage> dakj: so something like
<jamespage> juju deploy -n 4 magpie
<jamespage> juju add-unit --to lxd:0 magpie
<jamespage> juju add-unit --to lxd:1 magpie
<jamespage> juju add-unit --to lxd:2 magpie
<jamespage> juju add-unit --to lxd:3 magpie
<jamespage> which will spin up four 'physical' servers, and place a magpie lxd unit on each one
<jamespage> once deployed they will run the benchmark and mtu tests
<dakj> jamespage: ok I'm doing what you suggest me...5 min and I come back soon
<jamespage> dakj: you might need to tweak the digit for the --to lxd:X to target the actual machine id's that the first step would have created
<dakj> jamespage: I didn't forget you, I was on a break ... anyway, could you have a look here to check if I did it right https://paste.ubuntu.com/24598754/
<jamespage> dakj: kinda - you need to have more than one top level machine involved, otherwise everything is on the local bridge on machine 0
<jamespage> dakj: wait you deployed all of them to lxd:0
<dakj> jamespage: I know, sorry.... I created the lxc containers wrongly
<jamespage> so they are all over the loopback device only
<jamespage> which is why you get that odd mismatch on the mtu
<dakj> jamespage: yes, yes sorry
<dakj> Jamespage: here is the right way https://paste.ubuntu.com/24598905/, sorry for before.
<jamespage> dakj: not quite - you still have a single physical machine with four containers
<jamespage> rather than four physical machines with a container each
<jamespage> need to get that network flow testing between physical machines :-)
<dakj> Jamespage: I've only one physical server (IBM System x3650 R4 with 64GB of RAM and 4TB of storage) where I'm doing my lab.
<dakj> jamespage: you're saying I have to have 4 VMs with 1 magpie on each one?
<Budgie^Smore> morning o/ juju world
<jamespage> dakj: ok so the pastebin you showed me the other day with the problem ceph-mons had 4 x 'physical' machines
<jamespage> dakj: I thought those were VMware machines right?
<dakj> jamespage: I'm waiting for juju to finish deploying magpie on 4 machines in lxd containers
<dakj> jamespage: yes it's a VMware esx
<jamespage> dakj: ok so this one - https://paste.ubuntu.com/24592540/
<jamespage> dakj: machines 16, 17, 18 and 19 are vmware machines connected to each other via the vSwitch in ESX right
<jamespage> dakj: and then the lxd containers are on each machine
<jamespage> dakj: you need to reproduce that topology with magpie only
<jamespage> dakj: so four vmware machines, each with one lxd container on each
<dakj> jamespage: that is now https://paste.ubuntu.com/24599034/
<jamespage> all running magpie
<jamespage> dakj: yup that's the one
<jamespage> dakj: but please put magpie on the physical machines as well
<jamespage> so you can diff between lxd and non-lxd traffic
<dakj> jamespage: I give up :-), the situation now is this (https://paste.ubuntu.com/24599100/). I can't believe installing Openstack is so hard; there are a lot of things to consider. Problems deploying Landscape Dense-MAAS, problems with Openstack base..... installing VMware is easier than Openstack with Juju...
<jamespage> dakj: you're a bit off the beaten track here with how you are trying to test things
<jamespage> dakj: all I can say is that getting your infrastructure right before attempting an openstack deployment in terms of networking, MTU, broadcast domains etc...
<jamespage> dakj: is a problem that is common to any openstack deployment irrespective of deployment tool
<dakj> jamespage: that was just my way of joking about the time I spend with openstack. If I'm here, it's to understand how I can replace vmware with openstack.
<jamespage> dakj: \o/
<jamespage> a common goal for a lot of users
<jamespage> dakj: getting existing vmware deployers to a point where they can try Juju/MAAS/Charms/OpenStack easily is a win for those users
<dakj> jamespage: I know it's not easy to replace a technology. I know that too. Once I understand this one, the next step will be to replace esx with ubuntu lxd. But I have to do that step by step.
<jamespage> dakj: ok so what type of scale are you looking to achieve here?
<dakj> jamespage: the strange thing is that the host worked well with VMware ESX and VMware vCenter Server, and nothing changed about the networking. It's correctly connected to the firewall via the core switch.
<dakj> jamespage: I just want to know how to resolve that issue and deploy openstack and landscape via juju. I have a problem with Landscape too, look here: https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite.
<dakj> I've been trying to build this using Juju for a month, and in both cases I have an issue. It's not easy to understand what the issue is and how to resolve it. Have a look here, this is the latest paste with magpie: https://paste.ubuntu.com/24599321/.
<dakj> jamespage: I want to thank you for all your support, but it's not easy to understand why I have these issues.
<jamespage> dakj:  hey - minor niggle with your use of magpie
<jamespage> you've deployed four different instances of the magpie charm +(-a,b,c)
<jamespage> rather than eight units (four physical machines, four lxd containers) of a single instance of magpie
<jamespage> which is why none of the units can find any peers
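The difference can be sketched as a bundle (an illustrative fragment; the charm URL and series here are assumptions, not taken from the conversation): a single magpie application with eight units, placed on four machines plus one lxd container on each, so every unit is a peer of the others. Deploying four separately named applications instead creates four isolated peer relations, and no unit sees a peer.

```yaml
# Illustrative bundle sketch - ONE magpie application, eight units.
# Charm URL and series are examples only.
series: xenial
machines:
  "0": {}
  "1": {}
  "2": {}
  "3": {}
applications:
  magpie:
    charm: cs:~admcleod/magpie
    num_units: 8
    to: ["0", "1", "2", "3", "lxd:0", "lxd:1", "lxd:2", "lxd:3"]
```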
<dakj> jamespage: sorry about that....it was only a joke. Tomorrow I'll make another try, but this time not on that host but in our datacenter: 16 compute nodes with 64GB per node and 12 switches. All compute nodes have vmware installed, and my goal is to replace that with Openstack....I'm sure there is no problem with the network there, even though that host is connected to the same infrastructure. Let me try this last lab :-) and if it fails then I don't know what the problem is.....
<carpenike> Hi all, having trouble bootstrapping juju on a node with non-managed networks as I cannot set the ipv6 to disabled within lxc/lxd even though it's disabled on the parent interface.
<carpenike> Is there a way for juju to skip the ipv6 check when it loads up?
<carpenike> loads up = bootstraps.
<tvansteenburgh> stokachu_: did you find a good solution for snapping c-u with go1.8 yet?
<kwmonroe> marcoceppi lazyPower!!! i need halp.  for a global-scoped interface provider, when do i use conv.set_state (https://github.com/juju-solutions/interface-spark/blob/master/provides.py#L25) vs self.set_state (https://github.com/juju-solutions/interface-http/blob/master/provides.py#L12)?
<lazyPower> kwmonroe: conv.set_state is when you want to scope it to the conversation happening, eg: scope unit.  self.set_state works in the global namespace and doesn't matter what context you're in
<kwmonroe> lazyPower: so anytime scope == global, self.foo is the right answer?  is it redundant to do "conv = get_conversation; conv.set_state"?
<lazyPower> kwmonroe: I think the conv.set_state on a global scoped conv is implied to work in global state
<lazyPower> cory_fu: fact check me <3
<kwmonroe> ack, gracias lazyPower
<kwmonroe> cory_fu is out pyconning
<lazyPower> kwmonroe: i'm only 80% certain thats right, but i'm gonna say it convincingly enough that you think its right.
<lazyPower> and i myself would never use self.set_state in an interface
<kwmonroe> s/contact=kwmonroe/contact=lazypower/ && ship
<lazyPower> s/contact=/ignored=/g && ship
<kwmonroe> ship || shennanigans
<lazyPower> Shenanigans wins every time
<lazyPower> kwmonroe: i'm pretty sure that if i'm heinously wrong stub will follow up with why. He's really good at lurking with great information when i'm wrong :)  But again, i'm mostly certain thats how it works. Otherwise the `scope.global` would be pointless.
<kwmonroe> lazyPower: when in doubt, github!  https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/relations.py#L265 tells me self.set_state and self.conversation().set_state work the same.
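The linked code can be illustrated with a small stand-alone sketch (a toy model, not charms.reactive itself - the class names are made up): with a globally scoped relation there is a single conversation shared by all remote units, so setting a state through the relation object or through that one conversation lands in the same place.

```python
# Toy model of charms.reactive's GLOBAL scope (illustrative names,
# not the real charms.reactive classes).

class Conversation:
    """Tracks states for one scope; GLOBAL scope means one shared instance."""
    def __init__(self):
        self.states = set()

    def set_state(self, state):
        self.states.add(state)


class GlobalRelation:
    """With scope == GLOBAL, every remote unit maps to the same conversation."""
    def __init__(self):
        self._conv = Conversation()

    def conversation(self):
        return self._conv

    def set_state(self, state):
        # self.set_state is just a shortcut for self.conversation().set_state
        self.conversation().set_state(state)


rel = GlobalRelation()
rel.set_state("http.available")             # via the relation object
rel.conversation().set_state("http.ready")  # via the single global conversation
print(sorted(rel.conversation().states))    # both land in the same state set
```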
<lazyPower> kwmonroe: welp, and there you have it
<bdx> what are CAAS models?
<rick_h> bdx: :)
<rick_h> bdx: experiment of juju on container platforms
<thumper> rick_h: you took the words out of my mouth
<thumper> :)
<rick_h> thumper: I do my best
<thumper> balloons, veebers: I can still hear you
<veebers> luckily balloons and I weren't saying anything mean about you then :-)
<bdx> rick_h: like a way to `juju deploy mydockerthing`?
<rick_h> bdx: still a bit early to really go through how it goes.
<rick_h> bdx: but we realize that folks like their containers and often they need to talk to things that don't fit well in containers
#juju 2017-05-19
<magicaltrout> lazyPower: https://gist.github.com/buggtb/6cfa67a2ae9c0c97321cae86ca73da6c whats the current work around for that?
<lazyPower> magicaltrout: that looks reminiscent of an old version of the api-lb
<magicaltrout> yeah i updated it
<magicaltrout> that has indeed gone
<magicaltrout> instead i get Error from server: error dialing backend: dial tcp: lookup k8s-workers-1 on 10.105.255.250:53: no such host
<lazyPower> that looks like the dns pod is dead
<lazyPower> this is a fun suite of errors
<lazyPower> magicaltrout: are you getting both errors, or one successively after the other? or has the error changed?
<magicaltrout> it changed now to just the above
<lazyPower> magicaltrout: kubectl get svc --all-namespaces  --- is that the VIP of your DNS service?
<magicaltrout> my own in house dns lazyPower ?
<lazyPower> magicaltrout: kubernetes runs a kubedns pod in the kube-system namespace that provides cluster level dns
<magicaltrout> ah
<lazyPower> port 53 is the dns port, i presume it was querying kube-dns, or it may be trying to resolve the hostname for k8s-workers-1. Depends entirely on where the error is being emitted from
<magicaltrout> i dunno where that ip address subnet comes from lazyPower
<magicaltrout> its on openstack and some public ip's are in 10.104.1.0/24
<magicaltrout> so I guess there is probably some 10.105 ip addresses as well
<lazyPower> magicaltrout: thats not promising... there is a VIP range declared by kubernetes when you deploy it as cluster-cidr. it defaults to a /16
<lazyPower> its quite possible that address range is what was picked for the vip of those services.
<fengxia41103> Hi Juju, I need to create a CentOS node. What's the quickest way?
<fengxia41103> I have tried MAAS and using add-machine to add a CentOS, failed at "no tools found"
<kjackal> Good morning Juju world!
<jujulearn> anyone with openstack-base experience?
<erik_lonroth_> Hmm. When I'm running "juju status", the command is not returning but gets stuck in a blocking state. I've recently changed the IP address of the server so I guess this might have something to do with it? How can I debug this?
<jujulearn> How long was the wait
<erik_lonroth_> its still blocking
<erik_lonroth_> about 3-4 minutes now
<jujulearn> are u able to see some output with juju show-machine 0
<erik_lonroth_> I think I'll reboot. I've changed the network so I think perhaps lxc/lxd even has something to do with it.
<jujulearn> the above command will tell you if your lxd has any issues.
<erik_lonroth_> OK, I'll try once the machine comes back up in a few
<erik_lonroth_> https://pastebin.com/Cv20bZRN
<erik_lonroth_> It seems its still messed up
<erik_lonroth_> Oh, no
<erik_lonroth_> Now it started to work again. I think I might have been too fast
<jujulearn> good
<jujulearn> are u using openstack-base?
<erik_lonroth_> No I don't think so. I'm not sure what it is.
<erik_lonroth_> Is there any way for juju to update the IP addresses listed in "juju status"? I've changed the ipv6 address scheme, and even though the machines have picked up the new IPv6, "juju status" still shows the wrong ipv6 address.
<jujulearn> whats the output of juju subnets
<erik_lonroth_> "No subnets to display"
<erik_lonroth_> I filed a bug for this: https://bugs.launchpad.net/juju/+bug/1691977
<mup> Bug #1691977: ipv6 addresses not updated after changed ipv6 subnet <juju:New> <https://launchpad.net/bugs/1691977>
<dakj__> jamespage: hi james....have a look here https://paste.ubuntu.com/24603858/
<jamespage> dakj__: well that looks much better
<jamespage> what did you change?
<dakj__> jamespage: my desire now? It's to "kill" a colleague of mine :-). When I saw he'd got the port channel wrong between the IBM host (Esx) and the Cisco switch, my face turned red. Fortunately it's a lab and not production... so anyway, now I also want to redo the deploy of landscape. But why are ceph-osd/21, /22 and /23 blocked....
<jamespage> dakj__: osd-devices configuration is incorrect
<jamespage> dakj__: what's the block device name for the second disk in each of those servers?
<dakj__> jamespage: just a second check that
<dakj__> jamespage: https://paste.ubuntu.com/24603952/ while in the juju gui it's /dev/vdb
<jamespage> dakj__: juju config ceph-osd osd-devices=/dev/sdb
<dakj__> jamespage: https://pasteboard.co/86NyS1g5U.png
<jamespage> well change that to /dev/sdb
<jamespage> cli or gui does the same thing
<dakj__>  jamespage: do I have to redo the whole deploy, or is it only necessary to change that via the gui and save, and it unblocks automatically?
<jamespage> dakj__: it should take effect without redeployment
<jamespage> 'config-changed' hook will fire, detect the configured disk on each unit and bootstrap it into the ceph cluster
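A rough sketch of what that hook does (hypothetical and heavily simplified; the function names are illustrative, not from the real ceph-osd charm): it reads the osd-devices setting, a whitespace-separated list of block device paths, and bootstraps only the configured devices not already in the cluster.

```python
# Simplified sketch of ceph-osd's config-changed behaviour
# (illustrative helper names, not the real charm's code).

def parse_osd_devices(setting):
    """osd-devices is a whitespace-separated list of device paths."""
    return [d for d in setting.split() if d.startswith("/dev/")]

def devices_to_bootstrap(configured, already_in_cluster):
    """Only devices not yet in the cluster get initialised."""
    return [d for d in configured if d not in already_in_cluster]

configured = parse_osd_devices("/dev/sdb /dev/sdc")
print(devices_to_bootstrap(configured, already_in_cluster={"/dev/sdc"}))  # → ['/dev/sdb']
```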
<dakj__> jamespage: I try that immediately
<dakj__> jamespage: https://paste.ubuntu.com/24604142/
<jamespage> dakj__: so it sounds like my hypothesis "this smells like a network problem"  was right?
<jamespage> dakj__: btw I'd highly recommend you move to using juju 2.1.2
<dakj__> jamespage: I'm so happy :-) :-) thanks a lot for your support next steps landscape, and install ubuntu with lxd on IBM system x3650 M4
<jamespage> dakj__: can I ask a favour in return - can you put together some basic docs on how you deployed the bundle on vSphere?
<dakj__> jamespage: you were right.....I couldn't believe it when I saw the port channel was wrong
<jamespage> could be a google doc or a github gist
<dakj__> jamespage: sure; explain to me how I should do that
<jamespage> +10
<jamespage> please
<jamespage> dakj__: I'd also be interested to see if you can actually start instances with kvm nested inside esx
 * jamespage does not currently have access to a vsphere cluster
<dakj__> jamespage:  do you know the credentials to log in to openstack?
<jamespage> dakj__: all in the README for the bundle
<dakj__> jamespage: are you sure? because in the README I can't find anything about that
<jamespage> dakj__: ah its in the novarc
<jamespage> https://api.jujucharms.com/charmstore/v5/openstack-base/archive/novarc
<jamespage> dakj__: admin/openstack
<dakj__> jamespage: https://pasteboard.co/87R6JDR7O.png
<jamespage> dakj__: I'm guessing you did not change the admin-password option in the bundle?
<dakj__> jamespage: no I didn't :-( Where can I see that? And where must I make the change....
<jamespage> dakj__: no, that username and password should work ok - can you check things from the CLI please? The README has lots on cli usage
<dakj__> Jamespage: is it fine this link https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html?
<dakj__> jamespage: if I change the value in admin-password in Keystone could it work?
<dakj__> jamespage: I'm redoing the deploy of that; does the password have to be set in keystone? Because "openstack" is already present there https://pasteboard.co/89TVK2USz.png
<jamespage> dakj__: yes that's what I'm saying - the default username and password is admin/openstack
<jamespage> dakj__: I don't know why that's not working in the dashboard - if you look at the README in the bundle, it shows you some basic cli usage for openstack - please check that first
<dakj__> jamespage: last time after the deploy it was None....
<jamespage> what was none?
<dakj__> jamespage: sorry, the value in Admin-password was "None"
<jamespage> not sure why - that value comes directly from the bundle you used
<dakj__> jamespage: now I've changed that and run the deploy again, after that I'll check it.
<dakj__> Jamespage: waiting that, the issue with HAproxy is not changed (https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite) do you know something about that?
<jamespage> dakj__: sorry - I'd have to defer to someone who knows the landscape charms
<dakj__> jamespage: ok, I have to thank you for all the time you've dedicated to me and my lab :-). For the moment this issue can wait!!!!!!
<erik_lonroth_> Is there any way I can remove a model where I still have machines left in the list which are no longer within lxc/lxd? I removed them forcefully to start over with the model, but now I can't get rid of the model at all. Tried: "juju remove-machine 1 --force" without luck...
<erik_lonroth_> I did "lxc delete <machine>" previously, hence the machines are no longer available.
<erik_lonroth_> ... but I want to get rid of the model from juju
<rick_h> erik_lonroth_: hmm, not sure about that one. We have a way to kill the controllers but just a model that's been tampered with behind the scenes I'm not sure.
<rick_h> erik_lonroth_: might have to hit the juju list and see what the devs think (though they're mostly out for the weekend at this point)
<erik_lonroth_> ok thanx
<dakj__> jamespage: now it's perfect. We need to change that before making the deploy https://pasteboard.co/8aAkNISbV.png
<jamespage> dakj__: correct - you can't change it afterwards (at the moment at least)
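For anyone following along, the option in question lives on keystone in the openstack-base bundle, so a deploy-time override looks roughly like this (illustrative fragment; the charm URL is an assumption - check the actual bundle for exact revisions):

```yaml
# Illustrative bundle fragment - set admin-password BEFORE deploying,
# since it cannot be changed afterwards.
applications:
  keystone:
    charm: cs:keystone
    options:
      admin-password: "a-strong-password"   # the bundle's default is "openstack"
```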
<jujulearn_> Any compatibility issues maas 2.1.5 and juju-2.1.2 ?
<dakj__> jujulearn: I'm using that, MAAS version 2.1.5 and JUJU version 2.1.2-xenial-amd64 and at moment any issue...
<dakj__> jamespage: ok, let me get that documentation organised and I'll give it to you soon. For now I can only say thanks a lot for your support. Next steps are to solve the issue with landscape and to replace esx with ubuntu lxd on that IBM host.
<dakj__> jamespage: let's keep in touch here
<dakj__> jamespage: but is there a way to organise the location of applications in the juju gui? https://pasteboard.co/8aTf5bV5Z.png
<rick_h> dakj__: if you move them it should update their locations as annotations in the juju db and remember them next time
<dakj__> rick_h: ok thanks, I thought juju did that automatically.
<dakj__> rick_h: have you had any experiences with the deploy of landscape dense-Maas?
<rick_h> dakj__: sorry, nothing I can add to the askubuntu question
<dakj__> rick_h: thanks, I've already done that, but never got an answer.
<dakj__> rick_h: the issue is with HAproxy and the post is that https://askubuntu.com/questions/906763/haproxy-reverseproxy-relation-changed-for-landscape-serverwebsite
<rick_h> dakj__: I'll see if I can ping someone to look at it
<dakj__> rick_h: thanks a lot, it'd be a great favour. I've been looking for any post or fix about that, or for someone who has deployed the bundle.
<Zic> hmm, I have a strange bug with Juju GUI (used locally, without JAAS): it keeps trying to open the authentication page of Ubuntu One to log me in, but I don't need it since I'm running it fully on baremetal / self-hosted Juju
<Zic> is this normal?
<hatch> Hi Zic  I understand you're having an issue trying to log in to the GUI
<hatch> when you open the GUI, you're visiting the link form the `juju gui` command output?
<Zic> nop, I can correctly log into the juju GUI, actually I am already seeing the GUI, but randomly, when I click to configure a unit, it tries to open a popup window with Ubuntu One authentication
<Zic> I think it's useful for JAAS
<Zic> but not for a local/private Juju
<Zic> I just close the auth window to Ubuntu One and continue to use Juju GUI normally after, but sometimes it re-opens...
<hatch> ohh interesting
<hatch> Zic can you check the GUI version by hitting Shift+?
<hatch> it'll be in the bottom left of that window
<Zic> the requested action from ubuntu One account is " Juju Charms API log in with Ubuntu One "
<hatch> are you using private charms from the charmstore?
<Zic> nop, just canonical-kubernetes
<Zic> Shift+ does not do anything :(
<Zic> oh, oops, Shift+?, sorry, I'm trying again
<Zic> 2.6.0
<hatch> alright thanks, I'm just trying to narrow it down
<hatch> one more question, is this on MAAS?
<Zic> nop, manual provisioning
<hatch> ok sorry one more :)
<hatch> does it happen at a usual interval?
<Zic> (we have our own MAAS-like system that is not connected to Juju but to all of our internal tools :/ otherwise I would gladly embrace MAAS)
<hatch> sure, no problem
<Zic> hatch: it seems that it happens when I follow this path, for example: click on the easyrsa charm, Units, scale, then "back, back, back", and sometimes it opens up at this stage
<Zic> ah better than this example
<Zic> this one is always reproducible:
<Zic> click on kube-api-loadbalancer
<Zic> (to configure it)
<Zic> it always open Ubuntu One popup
<hatch> thanks a lot Zic we'll look into this
<Zic> a totally different question/problem: can I deploy an old revision of charms (or a charm bundle, if it deploys old charms with it) through the Juju GUI?
<Zic> maybe I'm wrong but I saw this option on old Juju GUI version
<Zic> I don't find it in the 2.6.0 :s
<Zic> (I need to deploy an old CDK bundle to test the upgrade)
<Zic> the one with .deb/1.5.3 of Kubernetes
<Zic> cc @ lazyPower
<hatch> Zic here is the issue I created to track this https://github.com/juju/juju-gui/issues/2922
<Zic> thanks a lot hatch
<hatch> Zic you can get to old versions of a bundle by specifying it in the url https://jujucharms.com/canonical-kubernetes/37
<hatch> the last version
<hatch> for example
<Zic> does an old charm-bundle deploy its old charms with it?
<Zic> (it seems logical actually but... just to be sure)
<hatch> Zic depends on the bundle - if the bundle defines the explicit version then yes
<hatch> if it doesn't then it'll deploy the latest
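As a hedged example of that difference (the revision numbers below are made up for illustration): a bundle that pins an explicit revision redeploys exactly that charm, while an unpinned entry pulls the latest at deploy time.

```yaml
# Illustrative fragment - the "-NN" suffix pins a charm revision.
applications:
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker-28   # pinned: always revision 28
  easyrsa:
    charm: cs:~containers/easyrsa                # unpinned: latest at deploy time
```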
<Zic> it's about canonical-kubernetes
<hatch> Zic you can also download the bundle.yaml then modify it and import it into the GUI to deploy
<Zic> hmm, it's a good idea
<Zic> I will export through the old GUI of my production cluster the bundle.yaml
<Zic> and then load it into the new Juju with Import
<hatch> yep you can definitely do that
<hatch> Zic would you be able to check one more thing around this login popup bug?
<Zic> yup
<hatch> after opening the browser console ctrl+alt+i and switching to the Network tab, perform the action which causes the popup to open
<hatch> and then paste the output into that github issue
<hatch> seeing the network requests will help us narrow down the issue
<Zic> hatch: do you want a screenshot or an export ?
<Zic> because I can't see any option to export as text :(
<hatch> Zic screenshot should be fine, thanks a lot!
<Zic> hatch: added
<hatch> ahah!
<hatch> thanks Zic we'll get this resolved
<Zic> (for info, it's not really https://localhost:8080/, it's an SSH tunneling to the Juju GUI pointing to juju:17070 :)
<hatch> yeah that's fine, I can actually see the problem already :)
<hatch> we'll try and get this fix into the next release
<Zic> thanks :)
<hetfield> hi all, i have issues with ceph deployed with juju
<hetfield> when i try to issue any rbd/rados command it asks for a key as it's not in the default path; when i pass the proper key it gives me auth failures
<hetfield> but it's very strange as the openstack platform works - it happens when i log in to a unit and issue commands as root
<hetfield> no one?
<zeestrat> hetfield: You checked in #openstack-charms ?
<hetfield> no :)
#juju 2017-05-20
<blahdeblah> Review for charmers when you have a chance: https://code.launchpad.net/~paulgear/charm-helpers/enable-disable-service-systemd/+merge/324361
<blahdeblah> And a follow-up which is dependent on it: https://code.launchpad.net/~paulgear/telegraf-charm/+git/telegraf-charm-1/+merge/324362
#juju 2017-05-21
<erik_lonroth> Is there anyone with experience in the django-charm ? I'm trying to deploy a website bundle but it fails and I could use some help. I've also sent a question to the juju mailing-list on this topic.
#juju 2018-05-14
<veebers> thumper: yay sorted. /me updates bug now
<veebers> KingJ: Are you still online? FYI those agents are available now. Sorry for the hassle
* louis_ changed the topic of #juju to: /!\ THIS CHANNEL HAS MOVED TO #krustykrab /!\
* thumper changed the topic of #juju to: All things Juju
<babbageclunk> So we haven't renamed juju to krustykrab then?
<babbageclunk> wallyworld: when you get a chance could you please have a squiz at https://github.com/juju/juju/pull/8695?
<babbageclunk> wallyworld: it's not blocking me though so no hurry
<wallyworld> babbageclunk: ok, np, after talking to tim
<babbageclunk> ta
<veebers> wallyworld: public-clouds have been updated
<wallyworld> veebers: great! ty
<wallyworld> babbageclunk: done!
<thumper> wallyworld: here is that rework I was telling you about: https://github.com/juju/bundlechanges/pull/39
<wallyworld> ok
<babbageclunk> thanks wallyworld
<wallyworld> thumper: done
<thumper> wallyworld: awesome, thanks
<thumper> wallyworld_: https://github.com/juju/juju/pull/8687 - this is the work to pass policy into deploy and add unit api calls
<thumper> there was some refactoring to rework the api structures in the facade
<thumper> but I think it now represents what we consider best practice
<srihas> hi, with help from the channel I have installed OpenStack, but when I try to open the horizon GUI after exposing the service
<srihas> I got an internal server error
<srihas> I just did "sudo chown www-data /var/lib/openstack-dashboard/secret_key" and restarted apache2 in one of the lxd containers and it started working from all the endpoints
<srihas> is it just fine to do the change in one of the lxd hosts and it gets synced?
<srihas> but now "Unable to establish connection to http://127.0.0.1:5000/v2.0/tokens: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /v2.0/tokens (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f80619967d0>: Failed to establish a new connection: [Errno 111] Connection refused',))" horizon is trying to connect on the localhost endpoint for keystone, which is not true in this case as the
<manadart> jam: In HO.
<jam> manadart: I'm kind of head deep in the netplan bonds stuff, and I don't think he's done an update yet, has he?
<manadart> jam: Doesn't appear so.
<manadart> jam: Happy to punt.
<manadart> jam: Let me know if you get a natural break sooner than our 1:1. Need a quick powwow.
<jam> manadart: k. making food right now and then I'll ping you when I'm back
<manadart> externalreality: G'morning. Are you able to hop on a quick HO?
<jam> manadart: now is a good time before I get back into it.
<jam> 1:1 ?
<manadart> K.
<wallyworld> rogpeppe: did you still need input on cmr macaroons?
<TheAbsentOne> The layer-index repo does not have the cassandra interface. Could someone explain to me how juju then works with that interface? How is the communication done?
<cnp> I have installed PyCharm, with which I have cloned juju-gui into my IDE. After that I ran setup.py, which was successful. Now when I try to run development.ini from the IDE I am receiving:
<cnp> C:\Users\CG3\venv\juju-gui-2\Scripts\python.exe C:/Users/CG3/PycharmProjects/juju-gui/development.ini File "C:/Users/CG3/PycharmProjects/juju-gui/development.ini", line 6 [app:main] ^ SyntaxError: invalid syntax Process finished with exit code 1
<cnp> please help
<rick_h_> Hmm, wonder what cnp was trying to do
<TheAbsentOne> true he didn't really stay long
<TheAbsentOne> rick_h_: do you have an idea about my interface question? Also how do I receive the app-name of the charm actually? Couldn't find it in the docs and I thought I read it somewhere in other charms
<stub> TheAbsentOne: Nobody has written a Cassandra interface yet, or if they have they haven't published it. It predates reactive charms.
<stub> TheAbsentOne: You either need to write one, wait for someone to write one, or use relation_get/relation_set like in the good ol' days before charms.reactive
<jam> manadart: reviewing 8696
<TheAbsentOne> Ah I see, I never used the relation_get/set stuff thanks for the info stub
<TheAbsentOne> I also took the liberty to put a feature request on the pgsql interface, I hope you don't mind
<jam> stub: TheAbsentOne: I started working on one here: https://github.com/jameinel/interface-cassandra
<stub> TheAbsentOne: Writing the interface is on my todo list along with a reactive rewrite of the Cassandra charm and supporting the very latest Cassandra versions (like DSE 6, which was reported as failing the other week)
<manadart> jam: Thanks. Didn't want to break your stride. Was wondering how it looked against the CI test for Bionic.
<jam> but I didn't get to the point of knowing it worked for the case I needed, and got sidetracked with other things.
<jam> It is likely close, IIRC
<TheAbsentOne> ah that looks nice jam, depending on a lot of things I might take a look at it too, but I'm afraid I'm too much of a noob to write a decent cassandra charm, thanks for the info and the link!
<jam> manadart: commented. btw you are likely fixing bug #1668547
<mup> Bug #1668547: juju doesn't configure lxdbr0 properly with new LXD (>2.3) <bridge> <lxd> <network> <juju:Triaged by jameinel> <https://launchpad.net/bugs/1668547>
<jam> manadart: btw, you have a couple of nodes allocated on GUIMAAS are you still using them?
<manadart> jam: That's a LXD cluster I set up. Easy to do again when I need it. All yours.
<jam> manadart: I don't, just saw it running and was checking. I'm doing a bionic netplan test, but I should only need 1 node.
<bobeo> goooood morning!
<rick_h_> jam: feel free to kill mine off if I still have any
<rick_h_> bobeo: morning
<bobeo> rick_h_: another beautiful day! getting the morning started here. I realized this weekend I couldn't ssh via juju ssh into my models I have access to on one system, but I can on another. Does juju have a specific set of ssh keys it needs, i.e. the ones from maas, or does it use its own keys? And if so, would that be juju add-ssh-key --model <ModelName>?
<bobeo> rick_h_: also, to clarify, I mean I have access from one machine via juju client, but not from the new one
<rick_h_> bobeo: yea, so in the MAAS case, if MAAS has a key it's used normally pulled in. If you need to add a key then you can add-ssh-key or import-ssh-key
<rick_h_> bobeo: right, so juju will setup a key on bootstrap so that things work, and if you go to another machine you'll need to make sure a key on that machine is available to juju (well known to it)
<bobeo> rick_h_: ok, so I'm a bit lost at this part. I do juju import-ssh-key --model <ModelName>, but then it repeats the options on the command with --model again, and this time offers different options, though they are the same models. Maybe I'm misinterpreting this? I deployed via conjure-up, and the initial option offers all the conjure-up options as well as all the model options, but the second iteration only offers all the models. Does it target the
<bobeo> rick_h_: first, and then the specified model, or does one option, just like the conjure-up option, allow me to target all models in that instance?
<bobeo> rick_h_: if thats not the case, can we get that to be the case in some future option? because that would be really awesome! One import for all models in a cloud instance.
<rick_h_> bobeo: so no, right now ssh keys are per model, which is a :( thing that we've got on the roadmap to clean up
<bobeo> YAY! rick_h_  I GUESSED A FEATURE! I get a COOKIE!
<rick_h_> bobeo: so what you have to do is, for each model, juju import-ssh-key -m xxxx $lp_or_gh_username
<bobeo> so juju import-ssh-key -m <modelname> <myusername>
<bobeo> rick_h_: correct?
<rick_h_> bobeo: correct
<bobeo> rick_h_: ok, so I think I missed something again, do you have any docs on lp or gh? I'm not familiar with either abbreviation, and it didn't like it when I simply used my username; it gave me "prefix in key ID "<username>" not supported, only lp: and gh: are allowed".
<rick_h_> bobeo: sorry, that's the github or launchpad username that has public keys for you to use
<bobeo> rick_h_: OOOH! thats what those mean! lemme update my list real quick!
<rick_h_> bobeo: so I've got my public keys on both sites that I know my laptop will work with so it's easier for me to just import them from those sites vs upload it manually
<bobeo> rick_h_: is that safe?
<rick_h_> bobeo: so it's your public key you'd normally use. It's not the private key. Neither launchpad or github have that. The private key only exists on your laptop
<bobeo> bobeo: that doesn't sound safe. I know you can log in with ssh without a pass depending on how it's configured and built. wait.. ignore that, stupid question. you would have set a password
<rick_h_> bobeo: so basically, you're saying that the juju models should accept the same key that you'd use to push changes to github or launchpad
<rick_h_> bobeo: never send your private key off your laptop anywhere
<bobeo> rick_h_: so toss <username> pub key, pull pub key, use priv key + pass to auth. I see where thats headed
<rick_h_> well, unless you're backing it up or copying it to another machine you use or the like
<rick_h_> bobeo: right
<bobeo> rick_h_: righto, so Im guessing toss it in...wait, where does juju store the key?
<rick_h_> bobeo: so it adds the public key to the normal ssh place of ~/.ssh/authorized_keys
<bobeo> rick_h_: btw, are you guys ok with me creating ASCII docs on all the stuff you guys teach me? It helps me remember, and it would help you provide links on how-to's.
<rick_h_> bobeo: when you use a command like ssh-copy-id or ssh-import-id or the like on a linux system, the public key is put into a list in that ^ file, and then when you ssh it checks whether the key you're providing matches up with one of the public keys in that list
<bobeo> rick_h_: ok, I was wondering about that
<bobeo> rick_h_: ok, so Im gonna try this with a git command. this might take awhile, I dont use it nearly often enough. im tryin to get better. Ill be back sooner than later if I level my git by accident XD
<bobeo> and whatever you do, dont give me the answer o.o
<rick_h_> bobeo: :)
<rick_h_> bobeo:good stuff to learn
<cnp> I have installed PyCharm, with which I have cloned juju-gui into my IDE. After that I ran setup.py, which was successful. Now when I try to run development.ini from the IDE I am receiving the error below:  C:\Users\CG3\venv\juju-gui-2\Scripts\python.exe C:/Users/CG3/PycharmProjects/juju-gui/development.ini File "C:/Users/CG3/PycharmProjects/juju-gui/development.ini", line 6 [app:main] ^ SyntaxError: invalid syntax Process finished with exit code 1
<cnp> can someone please help me get development.ini running?
<rick_h_> cnp: what are you doing? Is this from the github.com/juju/juju-gui repo?
<cnp> am trying to create my own GUI version for juju. And yes, it is from juju-gui repo.
<cnp> am I missing anything to make development.ini run?
<rick_h_> cnp: so https://github.com/juju/juju-gui/blob/develop/docs/hacking.md walks you through hacking on the project
<rick_h_> cnp: it's not been tested on windows so I'm not sure if the dev tools will work happily there to be honest
<manadart> jam: Can you cast an eye over the new commit for 8696 (https://github.com/juju/juju/pull/8696/commits/4861a263479c9cbd2a029237fe945486372413de) ?
<manadart> Worked like a champ when I delete eth0 and the default bridge.
<manadart> I can address other sundries as next work and go onto these other bugs.
<jam> manadart: I think I made comments
<jam> manadart: if you didn't get them, let me know, and I'll see if I failed submit somehow
<jam> manadart: tit-for-tat can you review my WIP: https://github.com/juju/juju/pull/8697 ?
<manadart> jam: Yes.
<jam> manadart: sorry it's already 1k lines long. damn I suck :)
<jam> fortunately a good portion of that should just be the 'examples'. (I hope)
<srihas> hi guys, can someone tell me if the juju charm for neutron-api supports the "neutron-plugin: aci" option?
<rick_h_> beisner: ^ ?
<rick_h_> evilnick: ping, how goes?
<evilnick> rick_h_, hi, what's up?
<rick_h_> evilnick: I'm looking to send the email on the 2.4-beta2 announcement and I wanted to see what I needed to do for us to sync up
<rick_h_> evilnick: in looking at the checklist, there's notes on syncing up with docs folks and as it's my first time going through it I'm not sure what "normal" is these days
<evilnick> heh. Yes, we usually sync up so we can make sure any references are in the dev docs and any release notes are published in dev too. pmatulis would probably be better to sync with
<evilnick> he is more day to day on juju and closer to your timezone
<rick_h_> evilnick: gotcha, pmatulis howdy :)
<evilnick> rick_h_, he is around but may be on lunch right now
<rick_h_> evilnick: k, np
<pmatulis> rick_h_, b2 you say? i see the gdoc RelNotes have been updated. are they complete?
<rick_h_> pmatulis: well, complete as they're going to get today I think
<rick_h_> pmatulis: I'm sure I'm messing something up and someone will correct me at some point heh
<manadart> jam: Approved.
<pmatulis> rick_h_, no mistakes allowed
<rick_h_> pmatulis: boooooo
<rick_h_> I have a reputation to uphold
<pmatulis> rick_h_, i'm ready to publish. good to go?
<rick_h_> pmatulis: sure, I'm heading home atm but will send the email when I get home
<rick_h_> ty pmatulis
<bobeo> hey kwmonroe, is it possible to deploy openjdk as a container? I see that you maintain that one.
<pmatulis> rick_h_, k, it will be there in 15 minutes (max)
<kwmonroe> bobeo: openjdk is a subordinate charm, meaning it needs a principal to relate to.. as long as that principal can be deployed into a container, openjdk will happily go along with it.
<kwmonroe> bobeo: all the openjdk charm really does is add a repo (if needed) and let you easily switch between versions via juju config (vs apt reinstalling on a unit)
<bobeo> kwmonroe: OOOH! that's what that means, ok. So I need to simply deploy the principal first.
<kwmonroe> yup bobeo, note that the principal must provide a java relation -- you can't just attach openjdk to any 'ol thing.  here's a list of stuff openjdk can relate to: https://jujucharms.com/q/?provides=java
<JoeJulian> Is there a way to make juju partition a maas node?
<bobeo> rick_h_: I found a thing lool http://paste.ubuntu.com/p/6NTG85V6N5/
<bobeo> no machines, no applications, it states its already resolved, no units, and still an error
<rick_h_> bobeo: quit breaking things!
<bobeo> LOOL
<bobeo> I'll provide you a copy of my history commands to be able to reproduce
<rick_h_> bobeo: oh I believe you
<rick_h_> bobeo: a bug with those steps would be awesome and we can see how to make sure you can get out of jail there
<bobeo> oh absolutely! I know you do. lying to you guys is like picking up vipers and smacking them, surely to bite you in a bad way.
<bobeo> hopefully with the history you can reproduce easily. deployment method was conjure-up
<JoeJulian> Ok, no answer about juju partitioning maas so the other way I can solve this is by adding a loopback image. I see instructions for how to do that from a command line but can it just be configured in the bundle? I tried adding something to the machine section and it didn't error - but I also didn't get a device.
<kwmonroe> JoeJulian: not sure what you mean by provisioning a maas node, but if you have juju bootstrapped in your maas env (https://jujucharms.com/docs/stable/clouds-maas), things like "juju deploy foo" should ask maas for an available machine and do whatever needs to be done to put "foo" on it.
<JoeJulian> Yeah, everything but give me a block device I can use with heketi. So I need to either partition the machine drive instead of allocating the whole thing, or add a loopback image that heketi can put a thin-provision lvm on.
<kwmonroe> bobeo: you're using bdx's elasticsearch charm, aren't you?
<bobeo> kwmonroe: yes o.o
<bobeo> kwmonroe: today I've tried I think 4 different types of elasticsearch
<bobeo> one gave me no feedback from a direct curl query, one gave me a failed install, another worked just fine, and bdx's, well it's bdx's, and I trust him enough to give him the keys to my car, and my own ssh keys.
<kwmonroe> bobeo: i ask because http://paste.ubuntu.com/p/6NTG85V6N5/ shows ES and storage -- i'm not sure how (or if) storage works in containers.  bdx, has this worked for you?
<bobeo> kwmonroe: my error message was that it didn't.
<bdx> nah
<bdx> I steered bobeo in the wrong direction
<kwmonroe> good thing you've got his car keys
<bdx> bobeo: neither elasticsearch charm will work on lxd currently as far as I know
<bobeo> kwmonroe: my thought is to use a separate elasticsearch outside of his in the interim; if they work, simply deploy the other two modules which worked
<bobeo> bdx: ive gotten elastic to work in lxd
<bdx> bobeo: you have ? elastic 2.x?
<bobeo> my thoughts are mesh the two solutions
<kwmonroe> 2.x??? tell me you're not rocking ES 2.x...
<bdx> elastic 5.x will hit this container bug
<bdx> 2.x does not
<kwmonroe> boooooo
<bdx> so, bobeo: you very well may have deployed es to container previously
<bdx> see https://github.com/elastic/elasticsearch/commit/32df032c5944326e351a7910a877d1992563f791
<bobeo> just to be clear, I don't think conjure-up is broken, nor relevant. I am currently working on my terminology, and am in no way blaming the creators or contributors for what is most likely my fault
<bobeo> also, is there a good tutorial on how conjure-up works? I use it a lot for my openstack deployment, and it's absolutely amazing, but I'd also like to understand how it works and why it works better.
<bdx> hmm ... I'm not sure, you should ask in #juju for that
<bdx> :p
<bdx> lol whoops
<bdx> I see we are there
<bdx> @bobeo, I think there is a conjure-up channel, or slack or something too
<bdx> @bobeo #conjure-up
<rick_h_> bobeo: so conjure-up works by adding the idea of "spells", which allows it to hook up to and do more than a basic bundle.
<rick_h_> bobeo: and with the UX bits in there it basically lets you "build a bundle" so that you can tweak things in a more interactive fashion
<rick_h_> bobeo: search for openstack spell conjure-up and you can see what goes into it
<bobeo> rick_h_: so are you saying its a bit like a bundle of bundles? like a bundle book?
<rick_h_> https://github.com/conjure-up/spells
<rick_h_> bobeo: heh, it's a bit more like a "choose your own bundle adventure"
<bobeo> rick_h_: I'll definitely earmark those for reading later. as an update to the issue, I also discovered I can't remove the model, as it hangs up on removing the application.
<bobeo> sorry, I can't destroy* the model
<rick_h_> bobeo: right...normally we'd use the juju remove-machine --force to clean it up
<rick_h_> bobeo: but....since you have no machines I'm not sure how we'd clean that out :/
<bobeo> rick_h_: yea, I tried that
<bobeo> rick_h_: I figured if I destroyed the machine, it would go away. I could try to reboot the controllers?
<bobeo> rick_h_: maybe they are just hung, and a reboot might resolve the issue?
<kwmonroe> bobeo: have you tried "juju remove-application xxx"?
<rick_h_> bobeo: have you tried to resolve it with --no-retry and see if you can get it to remove?
<bobeo> kwmonroe: yea, I tried that. rick_h_: yes, I tried that as well; it states it's been resolved.
<rick_h_> bobeo: k, so it states it's resolved, but then you can't remove it?
<bobeo> rick_h_: correct, and when I run the command again, the juju resolved --no-retry, it states it's already resolved
<kwmonroe> bobeo: does remove-application give you an error?
<bobeo> kwmonroe: strangely no, it says "removing application elasticsearch" and then returns to prompt
<kwmonroe> hmph bobeo, and destroy-model just hangs?
<bdx> kwmonroe, rick_h: I think I just realized something, is it true that if a charm defines storage, it can no longer be deployed to lxd (on any provider other than lxd)?
<rick_h_> kwmonroe: so storage is always optional
<bdx> I guess in short, I'm wondering if lxd deploys and juju storage are mutually exclusive?
<bdx> it seems they are
<rick_h_> kwmonroe: by default it'll just use a path on disk
<bobeo> kwmonroe: what it does it goes into a statement of "Waiting on model to be removed, 1 application(s)..."
<rick_h_> kwmonroe: and the storage lxd bits are 18.10 goals so it'll be coming
<bdx> ok thanks
<kwmonroe> bdx: i assume when rick_h_ was talking to me, you knew he meant you.
<rick_h_> kwmonroe: bdx well I guess both of you
<rick_h_> oh, I see what I did there. bdx was asking you
<rick_h_> whoops
<kwmonroe> :)
<bdx> seriously monday
<rick_h_> :P
<bobeo> *sits back and nods head as if knowing whats going on*
<rick_h_> multi-tasking
<bobeo> caffeine my friends, caffeine. I know little about linux, but I know much of the sorcery of caffeine.
<KingJ> veebers: Apologies, only just had a chance to try things out. All good now though - been able to deploy a 2.4-beta2 controller and the agent downloaded.
<rick_h_> KingJ: yay
<kwmonroe> bobeo: how much would it hurt to tear down your whole controller?  if you're able to do that without much angst, this is a pretty big hammer that might get rid of that model "juju destroy-controller --destroy-all-models"
<bobeo> kwmonroe: I trust you, rick_h_ and bdx enough to buy you plane tickets, hand you the keys to my office, and leave for a 2 week vacation. if you tell me to burn the whole thing to the ground, I'd do it and not blink an eye.
<kwmonroe> bobeo: otherwise, if destroy-model is looping forever on  "Waiting on model to be removed, 1 application(s)...", i'm not really sure what to do.  rick_h_, do you have anything less drastic to try other than taking out the controller?
<bobeo> although I did something a bit simpler, and simply built another model called secappclusters-001-000002
<rick_h_> bobeo: kwmonroe no, because of the fact that it's not on a machine and that Juju thinks it's resolved there's no other hammer I have at my disposal :(
<bobeo> I figured I'd burn through a few dozen models, so I planned ahead
<rick_h_> bobeo: yea, I mean basically just put a note on the wall that this model is a dead model killed by an infestation of bugs
<rick_h_> bobeo: and carry on, it's not hurting anything
<rick_h_> bobeo: but if you want a clean pristine setup controller death is the only way forward I've got for you
<rick_h_> bobeo: kwmonroe well model migration is the other path
<rick_h_> just bootstrap, migrate any models you want to keep, and kill off this controller
<bobeo> rick_h_: I've already got a new model built, with a new system already building
 * rick_h_ should have put quotes around "just"
<rick_h_> bobeo: cool
<bobeo> also, I have redundant controllers, or at least I think I do?
<bobeo> I definitely have 3 controllers
<rick_h_> bobeo: juju status -m controller
<rick_h_> bobeo: will show you controller machines/etc
<bobeo> rick_h_: that brings up a good point. how do I validate if controllers are clustered?
<bobeo> IE in HA mode
<kwmonroe> bobeo: shut down the controller and see if things still work.
<bobeo> I had a controller fail on me in the past without it, it ended...poorly.
<bobeo> kwmonroe: LOOL
<kwmonroe> kidding btw ;)
<bobeo> kwmonroe: that definitely is a good way to do it XD
<rick_h_> lol
<rick_h_> bobeo: juju show-controller
<rick_h_> bobeo: should show HA status details
<bobeo> rick_h_:  api-endpoints: ['10.0.0.99:17070', '10.0.0.17:17070', '10.0.0.20:17070']
<bobeo> that's extremely reassuring
<bobeo> bare minimum, the failure is also replicated?
<rick_h_> bobeo: and I think with juju status -m controller the machines list will give you some details
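The quick tell rick_h_ points at is the api-endpoints list from `juju show-controller`: an HA controller exposes one API endpoint per controller machine, and `juju enable-ha` defaults to 3 machines. A hypothetical sketch applying that rule of thumb to the list bobeo pasted (the `is_ha` helper is mine for illustration, not a juju API):

```python
# The api-endpoints list from `juju show-controller`, as pasted above.
api_endpoints = ["10.0.0.99:17070", "10.0.0.17:17070", "10.0.0.20:17070"]

def is_ha(endpoints, minimum=3):
    """Rule of thumb: one API endpoint per controller machine, and
    juju's enable-ha default is 3 controller machines."""
    return len(endpoints) >= minimum

print(is_ha(api_endpoints))  # three endpoints -> looks like an HA controller
```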
<rick_h_> lol
<bobeo> either way, I'm not afraid of a dead model
<bobeo> also, would it be helpful if I built a port based diagram for juju, maas, etc?
<kwmonroe> bdx: good call on ES 5.x busted on containers -- systemd[1]: Failed to reset devices.list on /system.slice/elasticsearch.service: Operation not permitted :/
<bobeo> it seems to be a popular use, and it would help me better understand as well.
<kwmonroe> yeah bobeo, that would be awesome.  you're right that the question of how those pieces fit together comes up a lot.
<bobeo> kwmonroe: rick_h_ bdx ok so I think I found a way to MacGyver it together.
<bobeo> I went with a generic elastic on baremetal, with an lxd instance installed with kibana and logstash from bdx. I'll let you know how it goes. so far it looks good.
<bobeo> ok so Ive got everything working minus the relationships, what are the requirements for the relationships? kwmonroe bdx rick_h_ do you have a link for this?
<bobeo> I checked the ELK, elasticsearch, and kibana pages, but they didn't state the relations
<bobeo> ok so I got all three of them working, I just need to configure them now, and its fully operational (I hope)
<bobeo> but status wise, all green http://paste.ubuntu.com/p/hBXqCwHxvb/
<ascend> If I do not like the ceph.conf file that came out of a juju build, or any other service build, how can I change that persistently?
<rick_h_> ascend: most of the charms have some sort of "user extra config" you can try like https://jujucharms.com/ceph/#charm-config-config-flags
<rick_h_> ascend: but honestly, you can't tell Juju that the config is X while the application is running Y and have things work out well -- the charm hooks get config details and use them to decide which scripts to execute, which packages to install, and what information to tell related applications
<TheAbsentOne> ascend, or you could hack your way into the charm and change it if the configs or actions are not sufficient x)
<rick_h_> TheAbsentOne: ascend except juju runs various hooks just as a matter of course and you'll probably find your changes overwritten from time to time
<TheAbsentOne> totally!
<TheAbsentOne> rick_h_, if an interface layer only has a requires (like mongo), can I still use it with the endpoint pattern?
<rick_h_> TheAbsentOne: I'm not sure, I've not messed with the endpoint pattern stuff atm
<bobeo_> o/
<TheAbsentOne> ah np rick_h_ I might try a Minimal working thing for mongo tomorrow, can't find any recent (reactive) charm using mongodb
<TheAbsentOne> hi bobeo_
<rick_h_> TheAbsentOne: yea, there's not one atm
<bobeo_> hey TheAbsentOne!
<bobeo_> I'm gonna start working on that document today, kwmonroe and rick_h_; I'll let you know as things progress
<kwmonroe> +1 bobeo_ -- hey you dropped earlier, but i was gonna say a couple things about your ELK status.. (1) it looks like you'll need an elasticsearch:client logstash:elasticsearch relation (i didn't see any logstash relations in your paste), and (2) putting kibana in a container may not work in all clouds if the container network isn't available --
<kwmonroe> in your case, it looks like you'll be fine across the 10.0.x.x network, but if you moved that deployment to AWS, for example, kibana would have a private address that wouldn't be accessible to the outside world without some proxy/forwarder.
<kwmonroe> if you want kibana accessible in something like a public cloud, you could hulk smash it on a public machine.  that means placing both ES and kibana on the same public host with something like "juju deploy kibana --to 0".  it's not generally recommended to smash multiple charms on the same host, but it's not illegal to do so.
<bobeo_> kwmonroe: I push to it via haproxy for loadbalancing external to my juju instances
<kwmonroe> ha bobeo_!  that was gonna be my next suggestion :)  haproxy can certainly handle the forwarding from public units to containers in the model.
<bobeo_> kwmonroe: I'll put those other relations in place, one moment please
<bobeo_> kwmonroe: relations added. Also, how did you know that relation was missing? is there a command to see all the available relations, or the relations required to make a charm fully functional?
<bobeo_> kwmonroe: what's the best way for me to make config changes inside of juju with a standard install guide?
<bobeo_> I ask because according to the system, everything is deployed, but when I load the login page, or try to, it's only the nginx default landing page.
<TheAbsentOne> what are you trying to deploy bobeo_
<kwmonroe> bobeo_: i knew your logstash relation was missing because i didn't see anything in the Relation section related to logstash in your paste (http://paste.ubuntu.com/p/hBXqCwHxvb/)
<kwmonroe> bobeo_: i didn't follow you on that last question... what login page are you loading?  kibana?
<bobeo_> TheAbsentOne: kwmonroe deploying ELK, and yes kwmonroe , kibana.
<bobeo_> I'm digging around in the kibana config, and I don't see much in regards to kibana for the web UI
<kwmonroe> hm, bobeo_, not sure.. maybe try http://10.0.0.79/kibana
<bobeo_> I did notice it didn't deploy with ES hosts configured, so I'll have to add all the es hosts, but other than that, it gives me the default password, and the default listen port, but it needs to be forwarding 80 to something
<kwmonroe> bdx: does your kibana have a different url path?  as in /kibana vs /, or maybe /james-is-the-best?
<bobeo_> it doesn't show, which is the weird thing
<bobeo_>  http://paste.ubuntu.com/p/fNPgZXp55R/
<kwmonroe> bobeo_: gimme a minute to deploy the omni kibana.
<bobeo_> kwmonroe: Omni kibana? that sounds badass
<bobeo_> kwmonroe: Ill have what he's having
<kwmonroe> bobeo_: you're using omnivector's kibana, right?  this one? https://jujucharms.com/u/omnivector/kibana/
<kwmonroe> vs this one: https://jujucharms.com/kibana/
<bobeo_> oh yea! I am having what you're having XD
<bobeo_> kwmonroe: yea that's correct
<kwmonroe> ok bobeo_, so bdx was sick and tired of the ELK stack development pace (which is fair), so he made ~omnivector versions of the ELK charms.  at some point, those will all merge to be the best of both worlds, but for now, it means i need to ask things like "which kibana are you deploying" so i know which one to try on my side.
<bobeo_> kwmonroe: that's totally fine, I'll find you whatever information I can. honestly I like a lot of the things he did. merging out openjdk was genius
<bobeo_> so this made me realize, omnivector isn't a username bdx uses, it's a type of charm? So what's the difference between a regular charm and an omnivector charm?
<bobeo_> kwmonroe:
<ascend> Sorry was away for a while thanks for the feedback on the above query.
<zeestrat> bobeo_: It's one of bdx's namespaces on the charms.
<kwmonroe> yeah bobeo_, what zeestrat said ^.  the charm store supports multiple namespaces, so for example, you and i could both have an openjdk charm.. one would be deployed with "juju deploy ~kwmonroe/openjdk" and the other with "juju deploy ~bobeo/openjdk" (or whatever namespace you wanted).  omnivector is bdx's company name, so he has a set of charms that they use published in the ~omnivector namespace.
<kwmonroe> bobeo_: the charm store also supports a top level namespace, which means you can omit the ~namespace and just do "juju deploy openjdk".  that will deploy whatever namespace is promulgated as the top level charm.  there can only be 1 instance of a top level (or "promulgated") charm.
<kwmonroe> bobeo_: so right now, https://jujucharms.com/kibana/ is the promulgated version of kibana.  it comes from the ~elasticsearch-charmers namespace.  both "juju deploy kibana" and "juju deploy ~elasticsearch-charmers/kibana" will deploy the same thing.  it's just that the former saves you some typing and is the default charm that shows up when people search for kibana.
<kwmonroe> bobeo_: bdx has an alternate version of the kibana charm in the ~omnivector namespace, which you get when you type "juju deploy ~omnivector/kibana".
<kwmonroe> and that's the one you're currently using
<kwmonroe> there are 17 other versions of the kibana charm, btw.. you can see them by clicking "show 17 community results" here: https://jujucharms.com/q/kibana
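The namespace rules kwmonroe described map onto charm store references in a predictable way: a promulgated charm is addressed by bare name, a user or company charm by `~namespace/name`. A tiny illustrative helper (the `cs:` URL scheme is the charm store's; the function itself is just for illustration):

```python
def charm_url(name, namespace=None):
    """Build a charm store reference: a promulgated charm has no
    namespace; a user/company charm lives under ~namespace."""
    if namespace:
        return "cs:~{}/{}".format(namespace, name)
    return "cs:{}".format(name)

# The promulgated kibana vs bdx's omnivector version:
print(charm_url("kibana"))                # → cs:kibana
print(charm_url("kibana", "omnivector"))  # → cs:~omnivector/kibana
```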
<TheAbsentOne> kwmonroe, since you are here did you encounter a charm using mongodb recently by any chance? I'm looking for a charm (reactive framework/endpoint pattern) that uses mongod charm/interface
<bobeo_> kwmonroe: I'm guessing the promulgated charm is assigned to the official project owner? as in the one who created the project associated with the charm, IE Elastic would be the promulgated elasticsearch instance?
<kwmonroe> TheAbsentOne: graylog uses mongo.. it's reactive, but not endpointy.  see around line 466: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py
<Guest75873> wow I got dc'd, saw last comment, thanks for the link kwmonroe I will give it a look tomorrow, time to catch some zzz's goodnight all!
<kwmonroe> TheAbsentOne: to make that endpointy, you'd say "@when('endpoint.mongodb.available'), def configure_mongodb_connection(), mongo = endpoint_from_name('mongodb'), for mongo_host in mongo.connection_strings():"
<kwmonroe> Guest75873: ^^
<Guest75873> ah kwmonroe so that's where I was wrong I tried to fetch it by flag as well
<Guest75873> instead of name
<kwmonroe> Guest75873: there is a _from_flag too.. i think it'd be "endpoint_from_flag('mongodb.available').  but we can check that after your nap ;)
<Guest75873> kwmonroe, https://github.com/Ciberth/charm-mongo-consumer/blob/master/reactive/charm-mongo-consumer.py
<Guest75873> it's okay my sleeping schedule is just as broken as my very first charm I wrote x)
<kwmonroe> Guest75873: cory_fu may have to correct me here, but i think you want mongodb = endpoint_from_flag('mongodb.available') instead of mongodb = endpoint_from_flag('endpoint.mongodb.available') -- omit the 'endpoint.' prefix.
<kwmonroe> maybe not though -- i know there was a _from_name vs _from_flag discussion recently, i just don't know which way it's supposed to go :)
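To make the _from_name vs _from_flag distinction concrete without a full charm, here is a self-contained mock of the two lookups. charms.reactive is stubbed out with dictionaries; only the calling convention is the point: `endpoint_from_name` takes the relation name from metadata.yaml, `endpoint_from_flag` takes the full flag string (whether that flag carries the `endpoint.` prefix was left open in the discussion above, so the flag spelling here is an assumption):

```python
# Stand-ins for charms.reactive's lookup helpers, stubbed for
# illustration. In a real charm these come from charms.reactive and
# return the live Endpoint object for a relation.
_endpoints = {"mongodb": "<Endpoint mongodb>"}        # keyed by relation name
_flags = {"endpoint.mongodb.available": "mongodb"}    # flag -> relation name

def endpoint_from_name(name):
    # Looked up by the relation name declared in metadata.yaml.
    return _endpoints.get(name)

def endpoint_from_flag(flag):
    # Looked up by the full flag string that was set.
    return _endpoints.get(_flags.get(flag))

# Both resolve to the same endpoint; the difference is only what you pass.
assert endpoint_from_name("mongodb") == endpoint_from_flag("endpoint.mongodb.available")
```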
<Guest75873> yeah I wanted to check what flag was set right when I stopped working on it. Thanks kwmonroe I'll be back soon ^^
<kwmonroe> +1
<Guest75873> pretty sure flag I prefer flag x)
<Guest75873> gnight!
<kwmonroe> goodnight Guest75873!
<wallyworld> thumper: did you see i reviewed your pr?
<thumper> yes, I'd like to have a quick chat about it later
<wallyworld> sure
<thumper> but I'm on calls for ages now
<kwmonroe> bobeo_: the promulgated charm is the one that the community decides is best suited as a default charm.  anyone can have the top level / promulgated namespace.. you just have to mail the juju mailing list and ask for it.  currently ~elasticsearch-charmers (which is *not* actually affiliated with Elastic Co) has the promulgated elasticsearch charm, but there's nothing stopping bdx from requesting a takeover.
<kwmonroe> if he wanted to, he'd make a case for that on the list, and anyone that has a stake in the elasticsearch charm would have a chance to ack/nak the transfer.  that said, lots of people don't bother promulgating these days -- if you know bdx's omnivector charms work best for you, you simply deploy with ~omnivector/<foo>.
<thumper> babbageclunk: fyi race in raft stuff
<babbageclunk> thumper: ah d'oh - ok, chasing.
<thumper> babbageclunk: thanks
#juju 2018-05-15
<babbageclunk> thumper: review my race fix plz? https://github.com/juju/juju/pull/8698
<babbageclunk> or wallyworld? ^
<babbageclunk> oh, and this one too https://github.com/juju/juju/pull/8688
 * thumper looks
<thumper> babbageclunk: first one done
<babbageclunk> ta
<thumper> babbageclunk: other done
<babbageclunk> awesome, fanks thumper
<thumper> np
<thumper> wallyworld: up for a chat about the review any time
<babbageclunk> thumper: I thought the unsubscribe would be enough too, but it wasn't.
<thumper> um...
<thumper> I beg to differ
<thumper> chat?
<babbageclunk> I mean, feel free to try it - I put the unsubscribe in first, and it didn't fix the race.
<babbageclunk> thumper: The other option that worked was to put a mutex into the suite and lock around both s.reqs = make(chan) and the send in the callback.
<babbageclunk> But the local variable seemed cleaner.
<babbageclunk> thumper: in 1:1?
<thumper> coming
<wallyworld> thumper: just got back from meeting witn anastasia, ping when you are free
<thumper> wallyworld: ack
<thumper> wallyworld: now?
<wallyworld> sure
<wallyworld> thumper: am in HO when you are ready
<babbageclunk> wallyworld or thumper, another one to take a look at? https://github.com/juju/juju/pull/8699
 * thumper looks
<babbageclunk> thumper: thanks for the glowing review!
<thumper> :)
<thumper> jam: has anyone looked at the network PR yet?
<thumper> jam: wow... not a small change then
<jam> thumper: yeah. Manadart looked at some of it already
<jam> thumper: most of that is lots more tests
<thumper> jam: I guess it looks good to me, although I'm not sure if I'm a good judge or not
<thumper> jam: re: https://github.com/juju/juju/pull/8687 - like your comment on whether or not we should have different params structs for v5 and latest
<thumper> or just add new field to params
<jam> thumper: thanks for the approval. Ultimately it needs to be tested, and I can at least test the VLAN stuff directly, and I think someone just gave me a bonded machine to test that part on.
<thumper> sweet
<thumper> hmmm... wondering why our windows reboot tests are trying to initialize a lxd container manager
<thumper> that seems dumb
<jam> thumper: I think new types are better. It means generating automated descriptions of API preserves the shape of old APIs
 * thumper nods
<wallyworld> thumper: i would like a second opinion on the use of a bulk remove vs creating a slice of individual removes using removeInCollectionOps() in https://github.com/juju/juju/pull/8700
<wallyworld> to remove the credentials for a deleted cloud
<thumper> fuck fuck fuckity fuck
 * thumper loses the will to keep going today
<thumper> will attack tomorrow with gusto...
<thumper> laters peeps
<manadart> externalreality: Want to cast an eye over this one for me? https://github.com/juju/juju/pull/8703
<manadart> Not here. jam ^. Will get on to yours next.
<jam> manadart: not here?
<manadart> Eric.
<manadart> jam: Approved #8697. Looks like nothing new on OCI. I assume we're taking a pass again...
<jam> yeah
<manadart> jam: OK if I forward port https://github.com/juju/juju/pull/8690 ?
<jam> manadart: any reason to not just merge 2.3 into develop?
<manadart> jam: Suppose not. I'll give it a lash.
<manadart> https://github.com/juju/juju/pull/8704
<bobeo> o/
<eriklonroth> Does anyone know of a charm/layer for tensorRT ?
<bobeo> how do I show the results of an action run with the run-action command? is that juju show run-action by chance?
<bobeo> figured it out.
<bobeo> my next question, if an action completes successfully, it updates the config file correct?
<bobeo> just to clarify, I send all my log files to filebeats in logstash on its port to import logs into ELK correct? kwmonroe
<bobeo> ooh, and its working! time to feed it o.o
<admcleod_> any update on when we might get 2.3.8 stable?
<rick_h_> admcleod_: working on it, the team's successfully gotten a container in bionic on a vlan and tossing the current updates at the full test suite to see what other gaps needs to be fixed up
<admcleod_> awesome
<admcleod_> thanks
<bobeo> am I the only one having severe issues with elasticsearch at the moment?
<bobeo> nothing seems to be able to connect to elasticsearch, regardless of what application I deploy and install.
<bobeo> rick_h_: so I was looking at a bundle for ELK, and it got me thinking. If I simply take the bundle, and install everything piecemeal, that would allow me to bypass the restriction on deploying bundles with the --to option, correct?
<rick_h_> bobeo: sure, but what restriction are you hitting with --to now?
<jhobbs> I'm trying to deploy arm64 nodes using an amd64 controller and juju-2.4 beta2
<bobeo> rick_h_: --to tells me I can't deploy bundles using the --to option.
<jhobbs> I should just need to configure agent-stream=devel on the model right?
<rick_h_> bobeo: oh right, I mean a bundle has --to placement stuff in it so putting a bundle on a machine doesn't work out right
<rick_h_> bobeo: so yea, you just break it down and run the manual commands you want
<bobeo> rick_h_: so when I was previously trying to deploy elk, it was with --to lxd:<machine#>, which was failing, and when I tried to deploy it as is, that's when it broke everything, as in I hit that bug that I found yesterday. So I decided today to open the bundle.yaml to try and better understand bundles.
<bobeo> rick_h_: right. so my question is, how do you translate the relations sections of the yaml files to understand what relations are requirements for which charm?
<rick_h_> bobeo: sec, otp
<kwmonroe> bobeo: can you pastebin the bundle that you're working with?
<bobeo> for instance, in the bundle.yaml for ELK, you have this:
  - - "openjdk:java"
    - "logstash:java"
  - - "kibana:rest"
    - "elasticsearch:client"
  - - "logstash:elasticsearch"
    - "elasticsearch:client"
<bobeo> kwmonroe: sure! link is easier though https://api.jujucharms.com/charmstore/v5/~elasticsearch-charmers/elk-stack/archive/bundle.yaml
<bobeo> kwmonroe: as for the above, I'm assuming the -- and the - mean they are related, so for instance, juju relate openjdk:java logstash:java would be one relational pairing
<bobeo> with each set of -- and - being another pair.
<kwmonroe> cool bobeo.  you asked how to translate the relation section of the bundle yaml.. for that bundle, you would achieve the same thing with juju from the CLI like this:
<kwmonroe> juju relate openjdk:java logstash:java
<kwmonroe> juju relate kibana:rest elasticsearch:client
<kwmonroe> juju relate logstash:elasticsearch elasticsearch:client
<kwmonroe> the -- and - are just the way bundles in the store format that section of yaml
<kwmonroe> so the first "-" means here comes a relation, and it's between - openjdk and - logstash.
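The translation kwmonroe spelled out is mechanical: each pair in the bundle's relations list becomes one `juju relate` call. A sketch of that translation (the relations data below is the ELK bundle's, as parsed from the yaml; the helper function is mine):

```python
# The relations section of the ELK bundle.yaml, as a parsed list of pairs.
relations = [
    ["openjdk:java", "logstash:java"],
    ["kibana:rest", "elasticsearch:client"],
    ["logstash:elasticsearch", "elasticsearch:client"],
]

def relate_commands(relations):
    """Each pair in the bundle's relations list becomes one CLI call."""
    return ["juju relate {} {}".format(a, b) for a, b in relations]

for cmd in relate_commands(relations):
    print(cmd)
# → juju relate openjdk:java logstash:java
# → juju relate kibana:rest elasticsearch:client
# → juju relate logstash:elasticsearch elasticsearch:client
```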
<bobeo> kwmonroe: awesome! yay I'm learning! it feels good to finally realize I'm understanding how the yamls interact to build charms.
<kwmonroe> :) good to hear!
<bobeo> ok so jumping in further, kwmonroe, looking at the config files, I have realized a lot of things that you'd configure in the files aren't always there. so my question is, as I go into the files to configure them, if I reset the box, will it blow away those required changes that weren't in the charm that I had to add later, or will they be rolled back, even though they aren't in the charm config file
<bobeo> my concern for instance with ELK: I had to go into logstash, kibana, and elastic to make changes that weren't included by default with filebeat, kibana, and elastic. While I am still having issues, I have fewer ones now. If I reset the boxes, I don't want to lose my current configs as I move closer to it working as intended.
<bobeo> http://paste.ubuntu.com/p/dPmwsHDvMn/  http://paste.ubuntu.com/p/jbR33H84Pc/  http://paste.ubuntu.com/p/jnCBKcZfPk/   http://paste.ubuntu.com/p/3yJFqx67JY/ for reference, juju status, kibana, logstash, and elasticsearch respectively
<bobeo> kwmonroe: I understand that juju manages the file(s) to manage configurations, or deploys by snaps, but additional configurations will sometimes be required.
<kwmonroe> bobeo: if you manually ssh to a unit and update, for example, /etc/logstash/logstash.conf, those changes may very well get overwritten by juju on a subsequent hook.
<kwmonroe> bobeo: best practice is to let juju handle the config, which may mean updating the charm to expose config that isn't available.
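The usual shape of that best practice is a hook that renders the service's config file from juju config options, so `juju config` stays the single source of truth. A stripped-down sketch (real charms use charmhelpers' hookenv and templating, both stubbed here; the option names and values are hypothetical):

```python
# Stubbed stand-in for charmhelpers.core.hookenv.config(), with
# made-up option values, purely for illustration.
def config():
    return {"es_host": "10.0.0.79", "es_port": 9200}

CONFIG_TEMPLATE = 'elasticsearch.hosts: ["http://{es_host}:{es_port}"]\n'

def render_config(opts):
    """A config-changed hook would write this out to the service's
    config file. juju re-runs the hook whenever `juju config` changes,
    which is why hand edits on the unit get overwritten."""
    return CONFIG_TEMPLATE.format(**opts)

print(render_config(config()))
```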
<kwmonroe> bobeo: what kinds of things are you manually configuring?
<bobeo> logstash, filebeats, and hopefully client and admin systems for elastic so it will work. currently it's not responding to web requests or curl requests, which is why mine isn't working
<bobeo> I wish I knew how to do that. I know how to configure these systems via a generic install with ease, but not with juju. if you guys can show me how, I can do a lot to contribute, even if they might need a few tweaks.
<bobeo> I would love to give back kwmonroe
<kwmonroe> sure bobeo -- just tell me what you've had to change.  you mentioned elastic wasn't responding to curl -- did you change something in /etc/elasticsearch/elasticsearch.yml to make it work for you?
<kwmonroe> similarly with filebeat -- was it something in /etc/filebeat/filebeat.yaml that you had to configure so that logs would ship to logstash?
<bobeo> kwmonroe: no, it's still not responding. I've had issues with elasticsearch in graylog and in ELK, making me think elasticsearch is having issues. graylog says elastic 6.x isn't working with graylog 2.4 atm, and ELK seems to be having issues with elastic as well, at least as far as I can tell so far. Logstash and kibana are working fine for me.
<bobeo> kwmonroe: yeah, I had to make changes in filebeat. Specifically I had to change the elasticsearch host from localhost to its IP
<bobeo> for some reason, it had elasticsearch uncommented, with the correct port, but pointed at localhost instead of at the IP for elasticsearch
<jhobbs> rick_h_: where can i find the agent stream on the web? are agents for non x86 architectures built for non released versions of juju?
<rick_h_> jhobbs: typically no, we don't. We must at some point, but I'm not sure if beta or rc is when that kicks in. I've got a question out but am not sure yet.
<rick_h_> jhobbs: ok found http://streams.canonical.com/juju/tools/agent/2.4-beta2/
<rick_h_> But have to find the stream file to reference it
<jhobbs> cool, arm64 agent is what i'm after
<jhobbs> is that devel? proposed?
<rick_h_> jhobbs: searching the various stream files I can see. I don't have it yet. I'm not up on which one is used.
<jhobbs> ok no problem
<jhobbs> i can try them all!
<jhobbs> thanks rick_h_
<rick_h_> Yea at least feel better I can see arch built agents
<jhobbs> yeah, and now i know to test with beta2 instead of edge
<jhobbs> so i'm in a happier place
<bobeo> hey rick_h_, is there a way to marry both graylog and ELK together? I mean, I like the way graylog handles data a lot better than ELK, especially with indexing, and I like its ease of use, but I like the UI better from older kibana instances. Do you think this is possible?
<rick_h_> jhobbs: ok, they're listed in http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju-devel-tools.sjson
<rick_h_> bobeo: sorry no, I'm not familiar enough with graylog/elk unfortunately.
<jhobbs> rick_h_: woo hoo! thank you
<bdx> bobeo: what do you mean marry the two together?
<bobeo> bdx: so there are several really great portions of both. For instance kibana: I love kibana, way more than the graylog UI. And I love how graylog does indexing and handles logs, way more than how ELK does it. I'd like to take the kibana UI, put it on graylog, and use how graylog imports and processes logs, as compared to logstash, which it might be using? I'm still digging through graylog docs on what it's actually made of
<bdx> bobeo: you can connect kibana to the elasticsearch that you use for graylog and have what you desire
<bobeo> bdx: I did that actually, remember that "X" i wanted to build? I built that today. Now I just need to make sure elastic is working properly, but im hitting issues. Its not responding to curl requests.
<bobeo> bdx: it, as in ElasticSearch, my current deployment of it. http://paste.ubuntu.com/p/tKTTVkQ6Mn/
<kwmonroe> bobeo: fwiw, i took the elk-stack bundle and swapped logstash for graylog.. the result is https://jujucharms.com/u/kwmonroe/egk-stack/
<kwmonroe> bobeo: it's totally a dev bundle, but it illustrates how you might swap ELK components to your liking.. (logstash for graylog, in this case)
<kwmonroe> bobeo: fwiw, i don't have any trouble curl'ing ES:9200 from within the cluster:
<kwmonroe> $ juju run --unit elasticsearch/0 'curl -XGET http://localhost:9200/_cluster/health'
<kwmonroe> {"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":5,"active_shards":5,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":83.33333333333334}
<kwmonroe> i mean, yeah it's yellow, but yellow ain't red!
<TheAbsentOne> kwmonroe: can I create bundles from local charms?
<kwmonroe> you betcha TheAbsentOne!
<kwmonroe> TheAbsentOne: have a look at https://jujucharms.com/hadoop-processing/ see 'bundle-local.yaml' in the side bar
<kwmonroe> TheAbsentOne: you'll note the "charm:" directive includes a local path for the charms that you want to pull from your local filesystem... you'd deploy that one with "juju deploy bundle-local.yaml", but substitute whatever your bundle name is.
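A rough illustration of the local-path form being described. The application name and path below are made up; the hadoop-processing bundle's bundle-local.yaml linked above is the authoritative example:

```yaml
# Hypothetical bundle-local.yaml: "charm:" points at a local
# filesystem path instead of a charm store id.
applications:
  my-app:
    charm: ./charms/my-app   # local charm directory
    num_units: 1
relations: []
```

You would then deploy it with `juju deploy bundle-local.yaml` as kwmonroe notes.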
<TheAbsentOne> ah, awesome! since you guys were talking about bundles I was asking myself that question xD
<TheAbsentOne> thanks for the info kwmonroe
<kwmonroe> np
<veebers> Morning o/
<rick_h_> morning veebers
<veebers> hi rick_h_ o/
<veebers> rick_h_: saw your email, digging in now
<rick_h_> veebers: cool, definitely let me know if there's something I should see/walk me through something. Gave it a go but still a lot to learn there.
<veebers> rick_h_: right, so I figured out why the cleanup wasn't working and updated lxd on grumpig. (still from xenial backports, haven't moved over to snap yet :-\
<rick_h_> veebers: k, by cleanup you mean old lxds from the failed runs?
<wallyworld> thumper: you coming to caas standup?
<thumper> wallyworld: no, I have another call
<thumper> wallyworld: https://github.com/juju/collections/pull/1
<wallyworld> looking
<wallyworld> thumper: lgtm, so we only had set and dequeue?
<thumper> wallyworld: https://github.com/juju/names/pull/88
<thumper> yes
<veebers> thumper: was making a tea, yep you want to hangout?
<thumper> veebers: yeah
<thumper> 1:1?
<veebers> sounds good omw
<wallyworld> thumper: why oh why doesn't go have generics :-(
<thumper> uh ha
<wallyworld> thumper: done
<thumper> wallyworld: https://github.com/juju/proxy/pull/1 and yes, I'm leaving the years the same
 * thumper heads to physio
<thumper> bbs
<wallyworld> kelvinliu__: there's a couple of more things to remove which I've mentioned in the PR
<kelvinliu__> wallyworld, yes, i saw the comments, i will fix them soon, thanks.
#juju 2018-05-16
<thumper> quick PR for someone https://github.com/juju/bundlechanges/pull/40
<thumper> and another https://github.com/juju/gomaasapi/pull/73
<veebers> thumper: lgtm on the fist one
<veebers> first*
<veebers> thumper: reviewed the 2nd, one question and a suggestion ^_^
<thumper> veebers: juju/names is the old one
<thumper> and not used again
<veebers> thumper: ack, makes sense
<thumper> yes, many of the things I'm doing need merge jobs
<thumper> veebers: there used to be a merge bot... was it the old one?
<veebers> thumper: probably, many moons ago. All that old stuff is long gone
 * thumper sighs
<thumper> I know we went through it last time...
<veebers> thumper: it's for the best ;-) Have you done a merge job before? it's pretty straight forward
<veebers> and also lots of fun
<thumper> lies
<veebers> and, uh, helps the environment?
<thumper> can you point me to the notes again?
<thumper> https://github.com/juju/description/pull/38
<wallyworld> veebers: tiny one https://github.com/juju/juju/pull/8706
<veebers> wallyworld: hey, no name calling, I might get taller one day. Oh you mean the PR right. What does ?= do, or is it a typo
<wallyworld> veebers: it means set if empty
<veebers> ack
<wallyworld> we also use it for the docker user name
<wallyworld> allow env var to override
<veebers> wallyworld: ack, quick question on the pr, looks good to me though
<wallyworld> looking
<wallyworld> veebers: answer make sense?
<veebers> wallyworld: indeed, LGTM
<wallyworld> ty
<wallyworld> kelvinliu__: veebers: we need to talk at some point about k8s CI testing - the docker operator image needs to be built and uploaded as part of that whole process
<wallyworld> i just did it manually for rc1
<veebers> wallyworld: ack, there was a brief email thread about that. I'm pretty sure we can fit that into our CI-Run bits easily enough (i.e. add an early step in the build process)
<wallyworld> yup, no big issue, just something to add to the todo list
<veebers> wallyworld: or is that for a release? (or both, something for each commit and something official for a release)
<wallyworld> we'll need to get the qa user to have perms on the jujusolutions repo
<wallyworld> both
<wallyworld> we need to push somewhere to test prior to release
<wallyworld> we can push to a qa namespace and override the source in juju for testing
<kelvinliu__> yes, agreed, we could make it automated
<kelvinliu__> or should we consider creating a separate dockerhub account for testing, to avoid something weird happening like the test flow pushing an image to the released image namespace? wallyworld veebers
<wallyworld> kelvinliu__: a separate account is what i was thinking yes. there's a controller setting to override the default "caas-operator-image-path"
<kelvinliu__> yeah
<veebers> wallyworld, kelvinliu__ excuse my ignorance, what is getting pushed? can it be uniquely named? i.e. can I push something named ci-run-abc123?
<veebers> (if so) We can then inject that as an env var so the test jobs can pick it up
<wallyworld> veebers: it's a docker image for the jujud agent on k8s. it is tagged with the release number
<wallyworld> theoretically that would be sufficient
<wallyworld> eg if we have released 2.3.7 and are testing 2.3.8 there's no clash
<wallyworld> as they have different labels/tags
<wallyworld> so they could all live in the jujusolutions namespace. depends on how paranoid we want to be about accidentally overwriting
<veebers> wallyworld: but for ci runs where it's the same version but different builds it will need to be unique (ci runs can be run in parallel)
<wallyworld> right, hence we'd use the controller config setting
<wallyworld> push with a tag and then use that path in the controller config
<wallyworld> the image path is <dockerusername>/<imagename>:<tag>
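Putting the pieces above together as a sketch. The namespace, image name, and tag here are all hypothetical; `caas-operator-image-path` is the controller setting mentioned earlier in the discussion:

```shell
# Tag and push the operator image into a qa namespace so CI runs
# never touch the official jujusolutions namespace.
docker tag jujud-operator:2.3.8 qa-namespace/jujud-operator:2.3.8-ci-run-abc123
docker push qa-namespace/jujud-operator:2.3.8-ci-run-abc123

# Point the controller at the test image via the override setting,
# following the <dockerusername>/<imagename>:<tag> path format.
juju bootstrap mycloud test-ctrl \
    --config caas-operator-image-path=qa-namespace/jujud-operator:2.3.8-ci-run-abc123
```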
<veebers> wallyworld: ah right, if <imagename> can be anything we're sorted
<wallyworld> CI tests should not pollute the official user namespace with test images
<wallyworld> so we'd want to use a qa namespace for that
<veebers> having a namespace for ci runs seems nice, de-clutters actual release stuff
<wallyworld> yup
<wallyworld> kelvinliu__: free for 5 minutes before next meeting, quick chat?
<kelvinliu__> yup,
<wallyworld> am in standup
<thumper> review anyone? kinda simple, just big https://github.com/juju/juju/pull/8707
<thumper> 229 files
<thumper> very mechanical changes
<anastasiamac> could i plz have a review for https://github.com/juju/juju/pull/8709? removing distant relations in status... really simple and straightforward :D
<babbageclunk> anastasiamac: looking
<blahdeblah> anastasiamac: +1 to feature, and great function names like TestFilterOutRelationsForRelatedApplicationsThatDoNotMatchCriteriaDirectly
<blahdeblah> I have no idea whether the code is good or not, though. :-)
<babbageclunk> blahdeblah: I'll factor your enthusiasm into the review!
<anastasiamac> blahdeblah: \o/ i had to look at a test recently that i myself wrote in another life... it took me longer to figure out what it does than to actually write it originally... had to be very explicit here with status ;)
<anastasiamac> another simplistic review plz - https://github.com/juju/juju/pull/8710 ... this one just reduces logging noise :D literally 4 lines of changes maybe...
<veebers> anastasiamac: +1
<anastasiamac> veebers: ta :)
<anastasiamac> babbageclunk: fwiw, i wrote the test first to see it fail with current impl... then fixed the code :)
<babbageclunk> anastasiamac: ooh, fancy! Approved.
<anastasiamac> babbageclunk: :)
<babbageclunk> anastasiamac: hmm, looks like it still fails though?
<anastasiamac> babbageclunk: yep :( looking
<anastasiamac> babbageclunk: it seems to be racy \o/
<babbageclunk> oh stink, I've got a few of those too.
<wallyworld> kelvinliu__: tiny review? https://github.com/juju/juju/pull/8711
<kelvinliu__> wallyworld, looking now
<kelvinliu__> wallyworld, looking all good to me.
<wallyworld> ty
<wallyworld> i'm happy about the size reduction
<kelvinliu__> wallyworld, the image size is much smaller, nice!
<wallyworld> indee
<wallyworld> d
<myrat> hello guys
<manadart> Small one for review: https://github.com/juju/juju/pull/8713
<myrat> how do I create an openstack project?
<manadart> externalreality: Any chance you could take a look at that one? ^
<bobeo_> kwmonroe: rick_h_: do either of you know what the termination port for the spice client proxy is for nova-cloud-controller? I'm having issues with web-based access to my project instances, and I know it has something to do with spice and the nova cloud controller. I'm assuming it needs to terminate on that server and pass to the nova servers from there.
<kwmonroe> bobeo_: beisner or admcleod_ are my openstack gotos.  one of them might know deats about spice/nova ^^.  you also might want to try asking in #openstack-charms.
<admcleod_> bobeo_: you just enable the config option on nova-cloud-controller, then you use the api to get an access url (i.e. openstack console url show instance_id)
<bobeo_> admcleod_: how do I "use the api" to get the access url? Via the web UI it provides a token for the instance; how do I forward for that?
<admcleod_> bobeo_: when you say webui what do you mean? openstack dashboard? horizon?
<admcleod_> + whats the token?
<bobeo_> admcleod_: openstack dashboard (horizon)
<bobeo_> it provides a project instance id via /horizon/project/instances/<id>
<bobeo_> it also does it via /spice_auto.html?<token>
<bobeo_> i figured I could use haproxy to forward for those
<admcleod_> bobeo_: to be honest, i don't use horizon, and we typically don't use spice, so anything i could tell you i would read here: https://docs.openstack.org/nova/pike/admin/remote-console-access.html
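For reference, a sketch of the flow admcleod_ described earlier. The config key is an assumption about the nova-cloud-controller charm's options (check `juju config nova-cloud-controller` for the real name), and `<instance_id>` is a placeholder:

```shell
# Enable the spice console option on the charm (option name may differ).
juju config nova-cloud-controller console-access-protocol=spice

# Then ask the nova API for a console access url for a given instance,
# as admcleod_ suggested.
openstack console url show <instance_id>
```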
<manadart> Another up for review: https://github.com/juju/juju/pull/8715
<bobeo_> kwmonroe: beisner admcleod_ I think I found the issue as to why spice isn't working. I noticed, after reading the docs, that the spice settings apply at the nova compute, but openstack via the openstack dashboard (horizon) processes the connection with the nova cloud controller directly, and I don't see anything that also modifies the connection for spice in the
<bobeo_> nova cloud controller. With that being said, and with spice expressly requiring vnc settings to be disabled as per the openstack docs, I don't believe there is a proxy pass to the nova compute nodes, which would create a break inside the config files, correct? Or am I missing something?
<bobeo_> nova compute - http://paste.ubuntu.com/p/DsPF7wRbpW/  nova cloud controller - http://paste.ubuntu.com/p/TH9jNQ6s3B/ with spice configured to be used.
<bobeo_> further review shows that the [spice] section inside the nova cloud controller's nova.conf isn't configured with anything, even though according to juju's config for the nova cloud controller, spice is configured.  nova cloud controller config: http://paste.ubuntu.com/p/7bx68YZCW4/   is this another bug?
<admcleod_> bobeo_: is there anything for spice in nova.conf on the nova-compute nodes?
<bobeo_> admcleod_: yes, which is what makes me think the issue is either with the nova cloud controller config, since there is a section for spice in it but it's blank, and horizon forwards to the nova cloud controller
<bobeo_> or it's the nova cloud controller not proxy-forwarding to the nova compute nodes directly, which would simply require the nova cloud controller to proxy to the nova computes. My only question would be how it would know how to do that, except through nova-consoleauth
<admcleod_> bobeo_: if you think there is a bug you should definitely log it..
<bobeo_> admcleod_: I don't think I would be the best person for that. I honestly don't understand the charms well enough to speak for them.
<admcleod_> bobeo_: if you think there is a problem, thats understanding well enough - the docs should be clear enough, or things should 'just work'
<admcleod_> bobeo_: unfortunately i have less experience with spice + horizon than you do
<bobeo_> admcleod_: I'll definitely try then. It's done through launchpad, right?
<admcleod_> yep - if you find the charm you think is at fault on the charm store, there is a link to 'submit a bug' on the right hand side
<kwmonroe> bobeo_: +1 to admcleod_'s suggestion for filing a bug -- you definitely don't need to know the inner workings of the charms to do that.  worst case, somebody comes along and says "you're doing it wrong; do it this way".  best case, there's an actual doc/bug that needs fixing.  either way, you'll get unstuck :)
<TheAbsentOne> hey guys, is it possible in a bundle to have an option that says how many charms need to be deployed? For example I have charm a, b, c (all of them only once) that all have a relation to charm x. In some cases I only need 1 x but sometimes I need 5 x. Is this possible?
<TheAbsentOne> kwmonroe you will probably know that x)
<bobeo_> TheAbsentOne: I have seen this done
<TheAbsentOne> Ohn cool bobeo_ do you have any docs on that?
<bobeo_> TheAbsentOne: I'll do you one better: https://api.jujucharms.com/charmstore/v5/~elasticsearch-charmers/elk-stack/archive/bundle.yaml
<bobeo_> it deploys two separate instance units of elasticsearch inside the ELK bundle, which if I'm understanding correctly is what you are looking to do, except maybe not with ELK
<kwmonroe> TheAbsentOne: you lost me on your use case
<bobeo_> TheAbsentOne: is that what you were talking about?
<TheAbsentOne> haha xD I need to figure out if it's gonna work with multiple units but I really want different machines for it to work
<kwmonroe> what is 'x' where you would sometimes need 1 and other times need 5 in the same bundle?
<TheAbsentOne> wait I will illustrate it with a pastebin!
<TheAbsentOne> x is a charm!
<bobeo_> so like -machine 1  ; -machine 2 ; -machine 3
<bobeo_> https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml TheAbsentOne
<TheAbsentOne> https://pastebin.com/LANAB7pN
<TheAbsentOne> yeah bobeo_ that was my first thought too, with machines
<TheAbsentOne> But it would be cool if a user of the bundle could give the number of x's that need to be deployed
<bobeo_> hmmm..I know after the bundle is deployed they can easily deploy additional units
<TheAbsentOne> yeah exactly therefore it's no big deal, it was just a showerthought or rather question x)
<TheAbsentOne> thanks for the help though guys!
<admcleod_> num_units
<kwmonroe> TheAbsentOne: afaik, there is not a way to do dynamic bundles -- meaning you can't adjust num_units or applications at deploy time.
<bobeo_> TheAbsentOne: I think it's a great point, especially since I would like to have isolated instances of postgres in the same model. For instance, I have app A whose data I want to keep completely isolated from app B, with a different config file. Currently I'm required to deploy an additional model so I can deploy additional postgres instances.
<kwmonroe> TheAbsentOne: for your use case, you might be better served with multiple bundles.. mybundle-core could have 1 'x' deployed and related.  mybundle-scaled could have 5 'x' deployed and related per your paste.
<TheAbsentOne> that's clear kwmonroe, that was basically my question. Sorry for the poor formulation. And yeah, multiple bundles or clear instructions for users to edit it to their needs
<kwmonroe> bobeo_: you can deploy the same postgres charm multiple times
<bobeo_> kwmonroe: yeah, I'd agree with what he said. ELK did something similar, and bdx did something like that as well.
<bobeo_> kwmonroe: yeah, but I want different deployments. For instance, app A, I don't want replication on; app B, I do.
<TheAbsentOne> bobeo_: you can deploy the same charm multiple times without issue, just give them a name
<TheAbsentOne> ahn I see
<TheAbsentOne> and kwmonroe was way ahead of me xD
<kwmonroe> yeah bobeo_, what TheAbsentOne said.  you can just pick a different name for the pg deployment
<bobeo_> TheAbsentOne: wait, I think I misread that. You mean I can have different charm instances of the same type of charm, like postgres? So I can have postgresA, which has no replication turned on in the config, and then postgresB, which does? In the same model?
<kwmonroe> yup
<bobeo_> kwmonroe: 8O
<bobeo_> kwmonroe: can you provide me a command line on that?
<TheAbsentOne> juju deploy postgresql postgres-a ; juju deploy postgresql postgres-b
<TheAbsentOne> and you will have 2 different machines
<TheAbsentOne> available at your command x)
<kwmonroe> yup ^^ that's it on the command line.. here it is in a bundle: http://paste.ubuntu.com/p/T2RxVsHJQ2/
<kwmonroe> bobeo_: ^
<bobeo_> kwmonroe: TheAbsentOne: with totally separate postgres configurations? So I'd do juju config postgres-a for config a and juju config postgres-b for config b?
<kwmonroe> yup bobeo_
<TheAbsentOne> yep completely different machines
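A bundle-form sketch of the same idea, along the lines of kwmonroe's paste. The charm id and unit counts are illustrative; see the postgresql charm's documentation for real config options:

```yaml
# Two applications deployed from the same postgresql charm,
# each independently named and independently configurable.
applications:
  postgres-a:
    charm: cs:postgresql
    num_units: 1
  postgres-b:
    charm: cs:postgresql
    num_units: 3   # e.g. a replicated cluster
```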
<bobeo_> kwmonroe: DATAFARM!!!! I can build replicated databases on the same MODEL!
<kwmonroe> it's like i can see the lightbulb burning from here
<TheAbsentOne> Haha x) good stuff guys
<kwmonroe> :)
<bobeo_> juju deploy postgres postgres-a --to lxd:0 && juju add-unit -n 2 postgres postgres-a --to lxd:0 && juju add-relation postgres:peer postgres:peer postgres-a
<bobeo_> kwmonroe: TheAbsentOne yes?
<bobeo_> replicated triplicate databases on a single machine, machine 0, in lxd.
<kwmonroe> bobeo_: once you provide that first name, postgres-a, that's how you'll be referring to it.  so it's not "juju add-unit postgres postres-a", but rather "juju add-unit postgres-a"
<TheAbsentOne> yeah, I wanted to say this: once it gets a name, always use the name and not the charm name anymore
<kwmonroe> bobeo_: and there's no need to explicitly add the peer relation -- juju does that for you
<bobeo_> kwmonroe: oOOOH! So postgres effectively becomes postgres-a or postgres-b, depending on which one I refer to
<kwmonroe> yup bobeo_
<bobeo_> so the charm name is not a permanent name, but more of a placeholder, with the permanent name declared at deployment
<bobeo_> OMG!
<bobeo_> I CAN HAVE DIFFERENT TYPES OF NOVA COMPUTES!
<kwmonroe> yeah bobeo_, postgres-a in your example is what you've chosen for your application name.  it's optional and defaults to the charm name if not specified, but you're welcome to give it whatever you want:
<kwmonroe> $ juju deploy --help|head -1
<kwmonroe> Usage: juju deploy [options] <charm or bundle> [<application name>]
<kwmonroe> bobeo_: i just double checked, and your add-unit won't quite work as written.  juju add-unit -n 2 postgres-a --to lxd:0 would add 2 new postgres-a units, but only the first would go to a new lxd container on machine 0; the second will go to a new machine.
<kwmonroe> not sure if that's by design or not.. but to get what you want, you'd have to do:
<kwmonroe> juju deploy postgresql postgres-a --to lxd:0 && juju add-unit postgres-a --to lxd:0 && juju add-unit postgres-a --to lxd:0
<TheAbsentOne> ohn did not know that
<kwmonroe> yeah, that's new to me too
<bdx> https://www.dropbox.com/s/g5e85591y1l4ahd/IMG_9161.JPG?dl=0 - just got these 40Gb switches in, looks like they run debian
<bdx> have no idea what to do with this lol
<wpk> bdx: install Ubuntu? :)
<bdx> wpk: definitely crossed my mind
<bdx> I'm wondering if its just straight linux networking going on here
<bdx> each port shows up as an interface or something
<bdx> thats all I can think of ... unless there is some special networking application running
<bdx> well, I don't know the username and password, so I'm going to have to reinstall something on it
<zeestrat> bdx: Those Dell/Broadcom S6000?
<bdx> ya
<bdx> zeestrat: you have experience with these?
<kwmonroe> bdx: did you try admin/admin?
<bdx> from what I can gather, it looks like they just use the network stack of whatever os you install on it
<kwmonroe> Windows For Workgroups!  make it happen bdx
<bdx> yes
<bdx> tried a bunch lol
<veebers> Morning
<zeestrat> bdx: Nah, just seen them around. As you say, I think there are a couple of different stacks. Looks like there's some cumulus stuff and https://github.com/Azure/SONiC
<bdx> totally, I'm finding a bunch of stuff out there, just another new thing to learn thats all  :)
<zeestrat> Any special plans or just TOR?
<bdx> ahh, I was hoping these inexpensive switches could lay the foundation for my SAN
<bdx> picked up 3 of them used for < 2k
<bdx> :)
<bdx> now, now is when I pay the prie
<bdx> price
<bdx> the price of time
<zeestrat> Haha. Man, I do get jealous of the decent used and refurbished market in the US.
<zeestrat> What kind of storage you doing?
<bdx> I've been grabbing used gear from https://unixsurplus.com and https://www.orangecomputers.com - possibly they will ship to you
<bdx> zeestrat: I have a bunch of these https://unixsurplus.com/collections/4u-servers-1/products/freenas-server-36-bay-supermicro-4u-2x-e5-2680-2-8ghz-192gb-2-port-10gbe-sfp-nic
<bdx> filled with 6TB drives and ssds
<zeestrat> Cool. Most outlets don't ship to eu, nevermind Norway which is outside. Then you get hit with vat and customs
<bdx> ahh
<bdx> well shoots
<zeestrat> Yeah, I remember you showed them last time around. How much a pop? How many disks?
<kwmonroe> zeestrat: sounds like you need a 3d printer
<kwmonroe> then bdx can just send you a few detail pics and BAM, free switches
<bdx> zeestrat: we can devise a plan to smuggle you cheap used hardware from the US
<bdx> have  you ever seen the movie "blow"?
<zeestrat> Now we're talking. Can I be Penelope Cruz?
<bdx> ahhhh sure ....
<bdx> haha:)
<zeestrat> Forgot how it ends so I figured Johnny Depp bought it in the end
<zeestrat> kwmonroe: I'll take my chances printing my own money
<kwmonroe> that... that's a terrible idea.
<zeestrat> That's exactly what Photoshop said when I tried to scan some bills when I was younger
<zeestrat> bdx: got 40g nics in there as well?
<bdx> sure do
<bdx> 40Gb, 10Ggb, 1Gb, these things are stacked
 * thumper blows out
<thumper> setting up a merge from 2.3 -> develop
<thumper> quite a few conflicts there... wasn't expecting any
<rick_h_> thumper: surprise!
<thumper> hmm...
<thumper> I think these are me reordering and regrouping imports properly
<thumper> dumb me
<babbageclunk> thumper: no good deed yada yada
<admcleod_> thumper: is that anything to do with 2.3.8? :}
<thumper> admcleod_: no... it was due to a branch I had which was breaking apart packages
<admcleod_> bugger
<thumper> admcleod_: all fixed now though
<admcleod_> \o/
<thumper> well... just checked and no it isn't
<thumper> I think I have another conflict
 * thumper enfixorates
<wpk> thumper: o/
<thumper> hey wpk
<thumper> ok... I've had enough of this bollocks
<thumper> why does 'make pre-check' on my machine fail differently to the merge bot?
<thumper> they are both running the verify script
<thumper> and both should be using the same go version
<thumper> wallyworld, veebers ^^ ? anyone
<rick_h_> wpk: what's up?!
<veebers> thumper: OTP will look shortly
<wpk> rick_h_: not much, not much. watching natgeo, drinking an APA.
<rick_h_> wpk: sound like a fine evening.
<rick_h_> I'm waiting eagerly for the blue planet ii stuff to drop
 * thumper just doesn't understand why his local verify is failing...
<veebers> thumper: did you do a make add-patches (or haven't added patches)
<thumper> veebers: hadn't added patches
<veebers> thumper: or are your local deps been patched when they shouldn't have?
<thumper> nope
<thumper> veebers: I'm about to head to class, can I grab you after lunch?
<thumper> perhaps you can shed light on this
<veebers> thumper: sure thing
#juju 2018-05-17
<wallyworld> kelvinliu__: i left a comment on the juju-managed-units PR - just a small change needed
<kelvinliu__> wallyworld, ah, u r right. just pushed the change, thanks.
<wallyworld> nw, looking
<wallyworld> kelvinliu__: what about the other bit?
<wallyworld> the if numUnits > 0
<kelvinliu__> wallyworld, u mean the deployment controller?
<wallyworld> kelvinliu__: yeah, a few lines down where it does "if numUnits > 0". that check will alsways be true now
<wallyworld> so it's no longer needed
<wallyworld> ie we always want the deployment controller
<kelvinliu__> wallyworld, that's right.
<wallyworld> kelvinliu: small fix for a code comment and then good to $$merge$$
<wallyworld> kelvinliu: actually
<wallyworld> we don't need the cleanups slice anymore
<wallyworld> maybe
<wallyworld> ah, i think we do after all
<wallyworld> so leave it as is
<kelvinliu> ic, wallyworld thanks
<thumper> veebers: ping
<thumper> veebers: nm, found the issue
<kelvinliu> wallyworld, can i have ur 2 minutes?
<wallyworld> sure
<veebers> thumper: oh, please share :-)
<thumper> veebers: I had golang 1.6 deb installed
<veebers> ah, that'll do it
<veebers> :-)
<babbageclunk> wallyworld and/or thumper: could you please take a look at https://github.com/juju/juju/pull/8680?
<babbageclunk> ended up being a bit bigger than expected
<wallyworld> babbageclunk: sap you?
<wallyworld> swap
<babbageclunk> wallyworld: Doh. Yup!
<wallyworld> :-D
<babbageclunk> wallyworld: you mean 8700?
<wallyworld> yup
<babbageclunk> cool cool
<wallyworld> babbageclunk: i still wish we called the worker raft-unfucker :-)
<babbageclunk> wallyworld: yeah, that would be pretty cool sort of.
<wallyworld> babbageclunk: does the raft backstop worker need to be a singular worker?
<wallyworld> the manifold definition looks like it starts up in every controller?
<babbageclunk> wallyworld: that's right - it needs to run in each controller just in case that's the only one left standing.
<babbageclunk> Hmm. I guess it could be a singular worker. Except when that's being run by raft it couldn't be.
<wallyworld> ok. i'll read more of the code. i am wondering how they all then coordinate and not get confused
<babbageclunk> Well, it only ever does anything when it gets told by the peer grouper that it's the only one left alive.
<wallyworld> the whole idea of a singular worker is that there's always exactly one, no matter how many controllers there are
<babbageclunk> Right, but we can't use something that will be powered by raft to save raft.
<wallyworld> ok, i'll read more of the code
<babbageclunk> The singular worker uses leases, which will be implemented by raft.
<babbageclunk> wallyworld: approved with some comments.
<wallyworld> babbageclunk: ty, 75% done on yours
<babbageclunk> awesome
<wallyworld> babbageclunk: i have a meeting, so left some initial comments for now
<babbageclunk> ok, thanks wallyworld
<veebers> wallyworld: FYI percona-cluster and mediawiki bugs I filed re: CMR/upgrade issues https://bugs.launchpad.net/charm-percona-cluster/+bug/1771729 and https://bugs.launchpad.net/charms/+source/mediawiki/+bug/1768710 (I'll follow up the mediawiki one too)
<mup> Bug #1771729: When using CMR db username is too long <OpenStack percona-cluster charm:New> <https://launchpad.net/bugs/1771729>
<mup> Bug #1768710: workload status does not survive juju upgrade. <mediawiki (Juju Charms Collection):New> <https://launchpad.net/bugs/1768710>
<wallyworld> veebers: great ty
<wallyworld> babbageclunk: let me know to take another look at the PR so I can +1
<babbageclunk> wallyworld: ok, will do - just replying to comments at the moment, then I've got some fixes to do.
<wallyworld> ok, np, just want to be sure i'm not blocking you
<babbageclunk> wallyworld: just pushing review fixes now, also lots of argumentative replies to your comments ;) - can you take another look?
<wallyworld> sure :-)
<babbageclunk> only slightly argumentative.
<wallyworld> babbageclunk: +1 with the main followup required being using the machine id rather than tag string
<anastasiamac> wallyworld: if u get a chance, PTAL https://github.com/juju/juju/pull/8717
<anastasiamac> :)
<wallyworld> sure
<wallyworld> anastasiamac: lgtm, awesome
<anastasiamac> wallyworld: \o/
<sanju> join
<srihas> hi guys, I am unable to login to openstack from dashboard, the error is "Unable to establish connection to keystone endpoint."
<srihas> apache error log shows the openstack dashboard is trying to access keystone on the localhost instead of the keystone host
<srihas> local_settings.py has the OPENSTACK_HOST set to the keystone host
<srihas> how can I debug this?
<srihas> can some one help?
<srihas> or may be is there a place where I can post these questions?
<TheAbsentOne> I'm not that familiar with openstack but I assume most experts are offline right now, what timezone do you live in srihas? You could always try the mailinglist if it's urgent
<srihas> TheAbsentOne: CEST
<srihas> thank you
<srihas> but I have fixed it by upgrading the charm with 258 version
<srihas> but we have issues with neutron-api charm when neutron_plugin: aci
<magicaltrout> charmers, I have a little competition running with my interns where they've been tasked to build a dashboard over the charmstore API using opensource software of their choosing
<magicaltrout> the entries are being judged on, usefulness, ease of install and look and feel. We want it judged by some 3rd parties to remove bias, anyone want to volunteer for judging in a few weeks time?
<kwmonroe> magicaltrout: i'm in!
<magicaltrout> thanks kwmonroe
<magicaltrout> i'm sure i can convince rick_h_ to waste some cycles on it also
<admcleod_> magicaltrout: sure
<magicaltrout> cheers egghead
<admcleod_> you're just such a great guy, i cant find anything to tease you about
<admcleod_> :D
<rick_h_> magicaltrout: :) do I get a gavel? I want a gavel.
<admcleod_> i hope https://bugs.launchpad.net/juju/+bug/1771885 is an easy fix...
<mup> Bug #1771885: 2.4-beta2 - lxd containers missing search domain in systemd-resolve configuration <juju:New> <https://launchpad.net/bugs/1771885>
<thumper> morning peeps
<wallyworld> babbageclunk: hey, you aware of this? http://ci.jujucharms.com/job/github-check-merge-juju/1418/testReport/junit/github/com_juju_juju_worker_raft_raftclusterer/TestPackage/
<babbageclunk> wallyworld: yup - it bit my PR last night. It's my top priority
<babbageclunk> wallyworld: then your "no tags anywhere" vendetta
<wallyworld> :-)
<wallyworld> babbageclunk: also, did you see my comment about the ref count? i forgot about the global vs model collection thing
<babbageclunk> wallyworld: ooh, no - just looking
<babbageclunk> ah yeah - that makes a lot of sense
<babbageclunk> wallyworld: did you see my comment about the select-clause priority thing?
<wallyworld> oh, no, looking :-)
<wallyworld> babbageclunk: huh. maybe that changed. maybe it was always like that. but so much of our code is written to assume that the source order determines which one of multiple ready channels gets the result first
<babbageclunk> I can see some confusion because of the evaluation order thing
<babbageclunk> What do you think, should I still simplify it? I'm not sure the priority argument really holds water anyway.
<babbageclunk> wallyworld: that's the last thing I've got, if you're ok with it I'm landing the sucker!
<wallyworld> babbageclunk: if you think the order doesn't matter then simplification would be gr8. we don't seem to make the distinction in other workers
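For context on the select-ordering question above: the Go spec says that when several cases of a `select` are ready, one is chosen uniformly at random, so source order confers no priority. A minimal sketch of how to get genuine priority (the `pick` helper is illustrative, not Juju code):

```go
package main

import "fmt"

// pick drains one value, giving ch1 strict priority over ch2. A plain
// two-case select with both channels ready picks a case uniformly at
// random, regardless of the order the cases appear in the source.
func pick(ch1, ch2 <-chan string) string {
	// First, a non-blocking attempt on the priority channel.
	select {
	case v := <-ch1:
		return v
	default:
	}
	// Otherwise block on both.
	select {
	case v := <-ch1:
		return v
	case v := <-ch2:
		return v
	}
}

func main() {
	high := make(chan string, 1)
	low := make(chan string, 1)
	high <- "high"
	low <- "low"
	// Both channels are ready; the nested select always prefers high.
	fmt.Println(pick(high, low)) // prints "high"
}
```

Workers relying on case order for priority would only work by accident, which supports simplifying the select rather than documenting a priority that isn't there.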
<wallyworld> thumper: wouldn't mind a teddy bear for something if you are free at some point
<thumper> wallyworld: I'm using babbageclunk for that just now
<wallyworld> be gentle with him
<thumper> wallyworld: how urgent is your teddy bear needs? as I'd like to complete this bug fix
<thumper> and I need to talk to veebers to do that
<wallyworld> thumper: not too urgent, i'll relocate soon and maybe we can chat after
<thumper> ok
<thumper> veebers: got a few minutes?
<veebers> thumper: sure do
<thumper> sweet
<thumper> veebers: in our 1:1
<veebers> whenever I add --destroy-all-models to a command I picture Bender saying "destroy all humans". I wonder if I can alias that argument
<babbageclunk> thumper: Am I right in thinking that Catacomb.Err() will never return Catacomb.ErrDying()?
<thumper> I couldn't say without looking
<babbageclunk> thumper: it looks like it from my reading - tomb.Kill(err) will never store tomb.ErrDying as the reason it's dying.
<thumper> babbageclunk: https://github.com/juju/juju/pull/8720
<babbageclunk> thumper: looking
<babbageclunk> thumper: approved with minor comment.
<thumper> babbageclunk: thanks
<thumper> reasonable comment
<thumper> babbageclunk: although wrong
<babbageclunk> ?
<thumper> babbageclunk: we need to check against a nil initial error
<thumper> because a dial error is better than nothing
<babbageclunk> thumper: oh, right
<babbageclunk> I had the sense wrong.
<thumper> :0
<thumper> all good
<babbageclunk> Will other ever be nil?
<thumper> nope
<thumper> initial is only ever nil the first time combine is called
<babbageclunk> ok, in that case fine.
<thumper> assuming that combine returns non-nil
<thumper> it is only called when there was an error returned
<babbageclunk> cool cool
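babbageclunk's reading matches gopkg.in/tomb.v2: `Kill(ErrDying)` is never recorded as the death reason, so `Err()` can never return `ErrDying`. A toy model of that logic (a simplified sketch, not the real library):

```go
package main

import (
	"errors"
	"fmt"
)

var (
	ErrStillAlive = errors.New("tomb: still alive")
	ErrDying      = errors.New("tomb: dying")
	errBoom       = errors.New("boom")
)

// miniTomb is a toy model of gopkg.in/tomb.v2's Kill semantics, just
// enough to show why Err() can never return ErrDying: Kill(ErrDying)
// is treated as "keep the current reason" and is never stored.
type miniTomb struct {
	dying  bool
	reason error
}

func (t *miniTomb) Kill(err error) {
	if err == ErrDying {
		if !t.dying {
			panic("tomb: Kill with ErrDying while still alive")
		}
		return // never recorded as the reason
	}
	if !t.dying {
		t.dying = true
		t.reason = err
	}
}

func (t *miniTomb) Err() error {
	if !t.dying {
		return ErrStillAlive
	}
	return t.reason
}

func main() {
	var t miniTomb
	t.Kill(errBoom)
	t.Kill(ErrDying)     // ignored: the reason stays errBoom
	fmt.Println(t.Err()) // prints "boom"
}
```

Catacomb wraps a tomb, so the same property holds for Catacomb.Err().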
#juju 2018-05-18
<wallyworld> vino: PR looks pretty good so far. left a comment about the AllBindings() method
<vino> Sure thanks wallyworld
<vino> i will get back to u soon.
<kelvinliu> hi wallyworld, the mock Stdin pipe has been solved.
<wallyworld> kelvinliu: that's great! i'll look at PR
<kelvinliu> thx, wallyworld
<wallyworld> kelvinliu: looking good, see my comments. happy to clarify if needed
<kelvinliu> wallyworld, can i have ur 2mins ?
<wallyworld> sure
<veebers> wallyworld: do you have a couple of moments? would like some pointers for the model tear down issue. I had a good destroy and a bad destroy and both exhibited the same logging (that I had tacked in there). So I'm missing something and/or looking in the wrong spot
<wallyworld> veebers: just talking to kelvin, give me a sec
<veebers> wallyworld: sure thing
<anastasiamac> can i get a review for https://github.com/juju/juju/pull/8724, please? more status tests - this time for filtering on machine number :D
<wallyworld> veebers: free now but i fear i won't have a good answer for you
<wallyworld> anastasiamac: can look soon
<anastasiamac> wallyworld: just as i was going to say "could u plz have a good answer for me" :D
<veebers> wallyworld: hah, all good, even just pointing in maybe the right direction would be good
<wallyworld> veebers: am in hangout
<wallyworld> standup
<veebers> omw
<wallyworld> anastasiamac: a couple of small things
<babbageclunk> so I've got a fix for that test hang but writing the explanation in the commit message has got me worried about a bug in the raft implementation
<anastasiamac> best type of tests/comments :D
<veebers> babbageclunk: remove the comment and voila bug is gone
<babbageclunk> I'm gonna push the one-line fix with a shruggie in the commit message
<kelvinliu> wallyworld, cluster-name and context-name in ~/.kube/config file are different, and we are using context-name to find related context from ClientConfig.Contexts. Do u think we should change the `context-name` flag to `cluster-name` and then we find the related context-name according to the cluster-name?
<anastasiamac> wallyworld: ta \o/
<wallyworld> kelvinliu: if it works. from a user perspective, i think they look at a named k8s setup as a cluster?
<wallyworld> kelvinliu: are cluster names unique within a given config file?
<wallyworld> or can different contexts have the same cluster name
<wallyworld> if cluster names are not unique then we need to stick with context name
<kelvinliu> i think context-name <- 1: 1 -> cluster-name, and they should be all uniq
<wallyworld> ok, then i reckon cluster-name is better
<kelvinliu> wallyworld, yeah, changing it to use cluster-name now
<wallyworld> great, ty
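The cluster-name to context-name mapping being discussed can be sketched as a lookup that also guards against the non-unique case wallyworld raises. The types and names below are illustrative; real code would load ~/.kube/config via client-go's clientcmd package:

```go
package main

import "fmt"

// contextForCluster resolves a kubeconfig context name from a cluster
// name. If two contexts share a cluster name the lookup is ambiguous,
// which is why a fall-back to context-name must remain possible.
func contextForCluster(contexts map[string]string, cluster string) (string, error) {
	var found []string
	for ctxName, clusterName := range contexts {
		if clusterName == cluster {
			found = append(found, ctxName)
		}
	}
	switch len(found) {
	case 0:
		return "", fmt.Errorf("no context found for cluster %q", cluster)
	case 1:
		return found[0], nil
	default:
		return "", fmt.Errorf("cluster %q is ambiguous across %d contexts", cluster, len(found))
	}
}

func main() {
	contexts := map[string]string{ // context-name -> cluster-name
		"dev-ctx":  "dev-cluster",
		"prod-ctx": "prod-cluster",
	}
	ctx, err := contextForCluster(contexts, "dev-cluster")
	fmt.Println(ctx, err) // prints "dev-ctx <nil>"
}
```

If the 1:1 assumption in the chat ever breaks, the ambiguous branch is what forces the CLI back to asking for a context name explicitly.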
<babbageclunk> easy review anyone? https://github.com/juju/juju/pull/8725
<babbageclunk> anastasiamac, veebers, thumper, wallyworld: ^
<veebers> babbageclunk: looks
<veebers> looking even
<babbageclunk> ta
<veebers> babbageclunk: I've asked a question for clarification. I'm never sure whether to mark that as a 'Comment' or a 'Request changes'
<babbageclunk> veebers: I'd say a comment unless you've specifically asked for a change.
<veebers> ack, makes sense.
<kelvinliu> wallyworld, done. would u mind taking a look at the change?
<wallyworld> kelvinliu: looking
<kelvinliu> wallyworld, thx
<wallyworld> anastasiamac: maybe if you have a chance, here's a CLI command to remove a k8s cloud (builds on remove cloud stuff we discussed) https://github.com/juju/juju/pull/8726
<veebers> wallyworld: if I understood correctly, Application destroyOps should be called many times if any of the asserts fail, right?
<wallyworld> veebers: the build loop will get called 3 times. depending on what happens in the txns, potentially new cleanup jobs will be queued which will call destroy again
<anastasiamac> wallyworld: sure thing.. as soon as i finish this error msg m working on \o/
<anastasiamac> like "on the tip of my tongue" kind of moment...
<wallyworld> no rush, need to land kelvin's one first and then rebase as there will be conflicts
<babbageclunk> veebers: that's a raft library function - I guess we could have our own local DefaultConfig func somewhere...
<veebers> babbageclunk: ah right, makes sense. Not sure it's worth it just to set False for that arg (for real and tests)
<babbageclunk> yeah, I was hoping you'd agree :)
<veebers> :-)
<wallyworld> kelvinliu: done, just a couple of minor typos, let's fix and land :-)
<veebers> wallyworld: ok, so I see the logging I put in (a *Application) destroyOps output only once, however I see the application fail to be destroyed. I'm continuing to add more debugging everywhere (well, maybe not everywhere)
<kelvinliu> wallyworld, fixed
<wallyworld> kelvinliu: go ahead and $$merge$$
<kelvinliu> yup thx wallyworld
<wallyworld> veebers: destroyOps will only get called again if it results in another cleanup job getting queued which calls it
<wallyworld> debugging should help us figure it out (eventually)
<veebers> wallyworld: ah right. Ok, will continue w/ debugging logs :-)
<wallyworld> no easy answer sadly
<wallyworld> will also help to print what txns are being queued
<wallyworld> potentially
<veebers> wallyworld: ack, that's a good way to fill up a log file :-)
<anastasiamac> wallyworld: if u have a chance, PTAL https://github.com/juju/juju/pull/8727?
 * anastasiamac looking at 8726 now
<wallyworld> righto
<wallyworld> anastasiamac: all of our state code returns jujutxn.ErrNoOperations if attempt > 0 and the entity is gone
<wallyworld> the general pattern is to do a Refresh() and check for not found
<wallyworld> cloud don't have refresh() so we just use Cloud(name)
<wallyworld> eg see line 186 of state/application.go
<anastasiamac> wallyworld: plz send a link, i dont know which version of application.go u r looking at :)
<anastasiamac> wallyworld: most of the code does not fuss about and just returns NotFound... However, I do believe u want to cater for something specific, hence u've coded for it but I do not understand the reasoning... maybe it deserves a descriptive comment?
<anastasiamac> wallyworld: specifically, why would it matter if another call deleted the entity? ur code above still should cater for both NotFound and NoOp.. right?
<anastasiamac> wallyworld: i guess what I'd like to see is a test that if you got a no-op in state, under the circumstances u've described, then ur api call will not err and hence local store will still be cleaned up... even if someone else removed controller-side reference
<wallyworld> anastasiamac: i added a remove race test. the application file is the one in state /home/ian/juju/go/src/github.com/juju/juju/state/application.go
<wallyworld> remove cloud behaves consistently as for removing app, machine etc
<anastasiamac> wallyworld: :) i was hoping for a link on github :) i understand which file u r talking about... i cannot guess its version, nor navigate to what u've supplied ;)
<wallyworld> this is on develop
<wallyworld> https://github.com/juju/juju/blob/develop/state/application.go#L186
<anastasiamac> yep
<anastasiamac> awesome! Thank you for the race test :D
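The state-layer pattern wallyworld describes here, returning jujutxn.ErrNoOperations when attempt > 0 and the entity has vanished, can be sketched as below. errNoOperations and the exists callback are stand-ins for the real jujutxn sentinel and a Refresh()/Cloud(name) lookup:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoOperations stands in for jujutxn.ErrNoOperations: returning it
// from a transaction build function aborts the txn without error.
var errNoOperations = errors.New("no operations")

// buildRemoveTxn sketches the retry pattern discussed above. attempt 0
// optimistically asserts the entity exists; on attempt > 0 the asserts
// failed at least once, so the entity is re-read (exists stands in for
// Refresh() or state.Cloud(name)). If it is already gone, removal
// becomes a no-op rather than an error: another caller won the race.
func buildRemoveTxn(exists func() bool) func(attempt int) ([]string, error) {
	return func(attempt int) ([]string, error) {
		if attempt > 0 && !exists() {
			return nil, errNoOperations
		}
		// Illustrative op names only: assert liveness, then remove.
		return []string{"assert-exists", "remove"}, nil
	}
}

func main() {
	build := buildRemoveTxn(func() bool { return false })
	ops, err := build(1) // retry: entity already gone, so no ops
	fmt.Println(ops, err)
}
```

This is why the API call can still succeed and local client state still gets cleaned up even when someone else removed the controller-side reference first.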
<manadart> externalreality: If you are inclined to review: https://github.com/juju/juju/pull/8730
#juju 2018-05-19
<TheAbsentOne> Any brave soul online today? I'm not admin on a controller but I'm a user that can use models. Now my admin created a new model for me but it's not listed when I do show models. I think I still have to provide ssh access. Can someone tell me the exact command for this?
<TheAbsentOne> In other words do I need to create a new pair of ssh keys for a new model?
<bdx> your admin needs to also 'grant' you access
<pmatulis> TheAbsentOne, https://docs.jujucharms.com/2.3/en/users-models
<TheAbsentOne> hey thx pmatulis, that's the stuff I need to do when I have access to the controller, but I don't - an admin just added a new model to my user, that's it, and I'm a bit stuck on how to switch to it cuz juju switch gives an error
<bdx> TheAbsentOne: as you will find in the docs, your admin need do 2 things, 1) `juju add-model <model-name>` to add the model, and 2) `juju grant <username> <acl> <model-name>` to grant you access to the newly created model
<bdx> TheAbsentOne: if he only did #1, then you will need to get him to also grant you access (#2) before the model shows up in your list
<bdx> s/your admin need do 2 things/your controller superuser need do 2 things/
<TheAbsentOne> bdx I'm an idiot
<TheAbsentOne> when I did juju switch I didn't include everything >.<
<bdx> :)
<TheAbsentOne> it was something like admin/my-username and I didn't do the admin/ part bakka me
<TheAbsentOne> thanks for the help you both
<TheAbsentOne> ahn I think I'm now at the step my admin meant, I can create and deploy charms but can't ssh into them in my new model. Do I need to generate a new key pair bdx?
<TheAbsentOne> or add my existing public key
<TheAbsentOne> nvm got it working, I was once again a fool
<raghav> Hi All
<raghav> i am using juju version 2.2.3
<raghav> i am using command "juju deploy bundle.yaml" to deploy using bundle file
<raghav> i am updating bundle file and redeploying same charm
<raghav> but the next deployment doesn't update the config, instead it takes the old bundle file config
<raghav> does juju store the old config somewhere which needs to be cleaned up?
<raghav> how to update same deployment with new bundle file
<raghav> Hi can anyone tell
<raghav> how to do bundle deploy
<raghav> with updated bundle file
<raghav> juju deploy -p ./dist bundle.yaml
<raghav> here my built charms are in dist
<raghav> i am updating my bundle file
<raghav> and redeploying it
<raghav> but still its taking old config
<raghav> its taking old settings from previous bundle file
<raghav> seems all are new to juju
<pmatulis> https://docs.jujucharms.com/2.3/en/charms-bundles
#juju 2018-05-20
<veebers> Morning o/
#juju 2020-05-11
<wallyworld> tlm: your latest model operator changes pushed up? just wanted to start a casual review
<tlm> nah i'll push in a sec
<tlm> just smashing some lunch
<wallyworld> sure, np
<wallyworld> it is food time indeed
<tlm> pushed wallyworld, will be back in about 40 minutes
<wallyworld> sure thing, ty
<hpidcock> PR for someone thanks https://github.com/juju/juju/pull/11556
<wallyworld> hpidcock: lgtm ty
<wallyworld> tlm: i left some initial feedback based on the WIP PR, let me knw if there's any questions. the structure all looks ok
<wallyworld> thumper: i can see the gui/dashboard PR - did you have a branch as well?
<thumper> it is possible that he has merged it in already
<wallyworld> i guess i can look at gh myself
 * thumper is raging against hardware
<wallyworld> ah ok
<thumper> wallyworld: bought a second screen
<thumper> and it won't run at 4k
<wallyworld> oh noes!
<thumper> through the usbc adapter I got
<thumper> screws everything up
<wallyworld> you plugging into thunderbolt?
<thumper> So I now have it on my desk not being used
<wallyworld> i have a 4k monitor plugged into the thunderbolt port all good
<thumper> the monitor only has hdmi and display port
<wallyworld> hdmi works as well for me
<thumper> the hdmi one is working at 4k
<thumper> but adding a second isn't working
<wallyworld> well don't be so greedy then :-)
 * thumper is greedy
<wallyworld> thumper: did they commit to fixing their simple streams?
<thumper> I believe so
<thumper> let me get an email together summarising everything
<wallyworld> righto
<thumper> and have you, hatch and ant on the list
<thumper> wallyworld: email sent
<wallyworld> ty
<wallyworld> thumper: so far, code doesn't look tooooo bad, but yet to look in detail at the http handlers in the api server
<thumper> wallyworld: it may be best to look at my branch that I proposed
<thumper> it should apply cleanly to old develop
<wallyworld> yeah, just came to that realisation after reading the email
<tlm> wallyworld, hpidcock : is there a quick way to add engine report to the model command ?
<wallyworld> yeah, just implement the Report() method
<tlm> cheers wallyworld
<wallyworld> there's examples elsewhere
<wallyworld> let me know if it's not clear
<tlm> ta
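For reference, wiring a worker into the engine report comes down to implementing a Report() method. The sketch below assumes a hypothetical modelWorker; the interface shape (Report() map[string]interface{}) follows Juju's worker.Reporter:

```go
package main

import (
	"fmt"
	"sync"
)

// Reporter mirrors the shape the dependency engine checks for when it
// assembles the engine report: any worker with a Report() method gets
// its own section in the report output.
type Reporter interface {
	Report() map[string]interface{}
}

// modelWorker is a hypothetical worker tracking a couple of counters;
// only the Report method matters for this sketch.
type modelWorker struct {
	mu      sync.Mutex
	applied int
	lastErr string
}

func (w *modelWorker) Report() map[string]interface{} {
	w.mu.Lock()
	defer w.mu.Unlock()
	// Keys are free-form; keep them stable so reports stay greppable.
	return map[string]interface{}{
		"applied":    w.applied,
		"last-error": w.lastErr,
	}
}

func main() {
	var _ Reporter = (*modelWorker)(nil) // compile-time interface check
	w := &modelWorker{applied: 3}
	fmt.Println(w.Report()["applied"]) // prints 3
}
```

As wallyworld notes, existing workers in the tree serve as concrete examples of the same pattern.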
<thumper> we need someone to take the latest facades schema from 2.8-rc1 and get it into pylibjuju
<thumper> see issue #409 on github pylibjuju
<thumper> unknown facade CAASAdmission
<thumper> hpidcock: in order to rebuild juju-db, is it just hitting the jenkins job?
<hpidcock> thumper: I can take a look at the pylibjuju
<hpidcock> thumper: yes, then you need to manually promote the snaps on the store
<hpidcock> what does it need building for?
<thumper> hpidcock: outdated ubuntu packages
<thumper> it seems that just rerunning the job won't notice any changes
<wallyworld> the joy of snaps
<hpidcock> we have a candidate off core18
<thumper> it looks like it is just running off a timer
<hpidcock> we have the 4.0.18 snap that hasn't been released yet
<hpidcock> I built it last week
<thumper> the email lists all the 4.0 tracks
<thumper> stable separately from candidate/beta/edge
<thumper> looks like some recent updates to libldap
<hpidcock> thumper: so if I kick off a rebuild of 4.0.18 that should include the new package?
<wallyworld> hpidcock: if you make libjuju changes, can we have it so that new juju facades do not cause a KeyError in libjuju. not sure if it's easy to do with the current codebase. it was a mess. but we need to be able to add facades (and in this case it's not even one a client api lib needs to use) without breaking libjuju
<wallyworld> just as we don't break older juju cli
<wallyworld> when we add new facades
<hpidcock> wallyworld: I might just do that for now then. We can look at updating facades later
<wallyworld> yeah, no issue with doing what's needed for 2.8 release
<wallyworld> but a better fix needed for sure
<thumper> hpidcock: if we force a rebuild of the snap, I think it gets the new deps, yes
<hpidcock> ok I'll kick that off now, it will take around 3 hours, arm builders seem pretty slow
<hpidcock> kicked off a new build of juju-db
<wallyworld> kelvinliu: any luck with the init container retry on 137 error?
<kelvinliu> wallyworld: still investigating, found a different problem
<wallyworld> yay
<kelvinliu> there are some weird things happening
<wallyworld> kelvinliu: you need any help?
<kelvinliu> wallyworld: ho?
<kelvinliu> yes plz
<wallyworld> sure
<hpidcock> looks like pylibjuju just needs a release. Simon already fixed that issue
<hpidcock> I'm not well versed in the release process for the python lib
<wallyworld> hpidcock: i think kelvinliu has done it, a long time ago in a galaxy far far away
<kelvinliu> i don't have the pypi account access anymore.. because I lost my phone.
<kelvinliu> I need setup a new account and gain the access again. Because pypi doesn't provide a way to find a lost account.
<achilleasa> manadart: can you do a quick sanity check on https://github.com/juju/juju/pull/11557 before I get started with CI day?
<manadart> achilleasa: Sure.
<manadart> achilleasa: https://github.com/juju/juju/pull/11559
<achilleasa> manadart: looking
#juju 2020-05-12
<timClicks> what is the rationale for prohibiting read access to application-level data (via relation-get/state-get) to non-leader units?
<timClicks> is the recommended course of action for the leader to read those values, then propagate them via peer relations?
<babbageclunk> timClicks: the basic idea is that you shouldn't let a unit read a value unless it'll be notified if the value changes.
<babbageclunk> timClicks: and we don't want the leader setting an application bag value to trigger a hook on the other units.
<babbageclunk> (except in the case of peer relations)
<babbageclunk> I mean, triggering a hook on the other units on the same side of the relation.
<babbageclunk> Anticipating the "why not" though - I'm not totally sure... Having a look in the spec doc for rationale
<timClicks> babbageclunk: I can certainly understand the rationale for only allowing leaders to modify application-level values
<timClicks> but I'm less clear on why other units are not allowed to access that data
<timClicks> is there a recommended pattern for propagating data to peers?
<babbageclunk> timClicks: I think it's to have a peer relation - leader sets the information in application data there, then the units can read it because they're on the "other" side.
<hpidcock> wallyworld: https://github.com/juju/juju/pull/11561
<wallyworld> looking
<wallyworld> hpidcock: i merged directly
<wallyworld> hpidcock: so the issue with the rc is that the juju binary in the snap has the OfficialBuild label set. the "stable" release jobs don't do that. and we need to not do it for rc either, or if we do a proper  beta1, ie anything not published to the edge channel
<hpidcock> hpidcock: ok I'll have a look at that too
<wallyworld> only the edge snaps built from a PR need the OfficialBuild set
<wallyworld> so that juju constructs the qualified image tag
<hpidcock> wallyworld: problem is the ci-run doesn't know this
<hpidcock> I have some thoughts, let me look at that first
<wallyworld> righto, let me know if you want to chat about it
<wallyworld> i think the requirements are clear now, we just need to make the jenkins jobs comply
<wallyworld> we *could* publish another rc snap without the OfficialBuild to test
<wallyworld> or just make sure it's fixed for rc2
<wallyworld> hpidcock: for now, i've simply added the missing tag that the rc1 snap expects and it's all good
<pmatulis> i upgraded the series of a machine hosting a single principal charm (and one subordinate) and the value under the Series column (juju status) did not change. i am using containers and MAAS. is this a charm bug or a juju bug?
<wallyworld> pmatulis: it looks like a juju bug to me (as an educated guess looking at a few things in the code)
<wallyworld> i'm not 100% sure though, but it seems like it could be
<wallyworld> i can't see where the series is updated on the machine
<wallyworld> there's an old deprecated method but i can't see a replacement
<pmatulis> wallyworld, thanks. it actually happened to two charms (different models). but the second one only updated the series of the leader's machine
<pmatulis> (each application/charm had 3 units)
<wallyworld> hmmm, there must be a code path to do it then if one of the machines was updated.
<wallyworld> raise a bug i think is best and the folks who wrote upgrade-series can take a look
<wallyworld> they'll probably know straight away what any issue is without having to poke around
<pmatulis> wallyworld, i will reproduce and open a juju bug. thank you!
<wallyworld> ty, sorry i couldn't help immediately. not 100% familiar off hand with the code
<hpidcock> wallyworld: can we HO?
<thumper> wallyworld: I think this is the issue we were talking about this morning... https://bugs.launchpad.net/bugs/1878110
<mup> Bug #1878110: ErrImagePull on k8s deploys with freshly bootstrapped 2.8-rc1 controller <juju:New> <https://launchpad.net/bugs/1878110>
<thumper> wallyworld: and perhaps this is another dupe of the other k8s delete model work? https://bugs.launchpad.net/bugs/1878086
<mup> Bug #1878086: juju destroy-model of the k8s model hangs <juju:New> <https://launchpad.net/bugs/1878086>
<thumper> I don't want to triage them incorrectly given this is something you have touched recently
<pmatulis> what is the upgrade-charm syntax to upgrade from https://jaas.ai/percona-cluster/286 to https://jaas.ai/u/openstack-charmers-next/percona-cluster/365 ?
<pmatulis> there is a --channel option but i can't get it to work
<wallyworld> hpidcock: yeah, we can, sorry missed ping
<hpidcock> wallyworld: in standup
<wallyworld> thumper: that rc1 issue is solved now, i added an image tag to dockerhub
<wallyworld> rc2 will have it fixed. it's a build issue
<thumper> wallyworld: I thought it was, but I figured you were best to comment and close / assign the bugs
<wallyworld> thumper: yup will do, on my list
<wallyworld> just messing with gui stuff
 * thumper nods
<thumper> wallyworld: how's it going?
<wallyworld> thumper: sec, otp
<thumper> pmatulis: is the charm in the same channel?
<thumper> ah... different owner
<thumper> I think you need to use --switch
<pmatulis> thumper, ohh
<thumper> pmatulis: as far as juju is concerned the charms are unrelated
<pmatulis> that actually worked really well
<pmatulis> and was instantaneous
<thumper> perhaps just very quick :)
<thumper> wallyworld: I think perhaps we should land the GUI parts as much as we have them (as we fix the minor issues)
<thumper> and rely on getting the simplestreams data updated soonish
<thumper> outside of our changes
<wallyworld> thumper: that's exactly my plan. i am almost done with code fixes to address *all* the issues I identified. need to test 2.7 -> 2.8 upgrades. code will not care if streams is broken, you can still deploy gui from a local tarball. and i had a longish meeting with jeff this morning to communicate the fixes we need to make to the workflow and behaviour and the streams and i am hoping for updated streams tomorrow
<wallyworld> kelvinliu: are you having luck with the retriable error concept?
<kelvinliu> wallyworld:  it didn't work, still investigating
<wallyworld> kelvinliu: ah, so leaving the operation as pending didn't cause it to be picked up again next time through the loop. i haven't got the exact logic internalised right now for the resolved FSM, but let me know if you need help
<kelvinliu> no, it didn't work..
<kelvinliu> nws, im investigating a bit further
<thumper> wallyworld: do you remember where we can tweak cert complexity for controllers in tests?
<wallyworld> hmmm, rings a bell, but details not paged in :-(
<wallyworld> probs in same package as the test ModelTag stuff
<thumper> nah, me neither
<thumper> I thought briefly that juju conn suite might set it
<thumper> but seems not
 * thumper digs some more
<wallyworld> thumper: juju/juju/testing/cert.go
<wallyworld> pregenerated certs we then use elsewhere
<thumper> looks like we are using those for the dummy provider
<thumper> I'm trying to track down a weird test timeout
<thumper> we spin doing 7.5 seconds of nothing
<hpidcock> wallyworld: thumper: tlm might have touched these in the recent certs work
<thumper> hpidcock: locally my machine pauses for about 300 ms
<thumper> so it is clearly doing something
<hpidcock> yep
<thumper> on a slower loaded machine, it could easily take a lot longer
<hpidcock> I had a branch that fixed this, let me have a look
<thumper> trying to find out what it is doing
<hpidcock> thumper: this was my commit that sped up all the cert generation a while back https://github.com/juju/juju/commit/866052a37846a0230e5b3eaf89b373eb5bb5e764 but tlm's recent work might have undone this
<thumper> hpidcock: that still seems to be there
<hpidcock> which test is taking a long time?
<thumper> a bootstrap test
<thumper> tim@terry:~/go/src/github.com/juju/juju/cmd/juju/commands (develop)$ go test -check.vv -check.f TestAutoUploadAfterFailedSync
<thumper> the pause is between these two lines:
<thumper> [LOG] 0:00.031 DEBUG juju.cmd.juju.commands provider attrs: map[broken: controller:false secret:pork]
<thumper> [LOG] 0:00.370 INFO cmd Adding contents of "/tmp/check-2439852674146353662/1/.ssh/id_rsa.pub" to authorized-keys
<thumper> on my machine it was 320ms
<thumper> on the test machine it was 7.5s
<hpidcock> let me have a quick look
<thumper> my machine is probably faster and doing less
<wallyworld> thumper: i've pushed a dashboard PR, but there's a fundamental issue (from when it was written) concerning version checking i need to fix, it currently uses the CLI version not the controller version. i'll work on that and push a commit, but what's there *should* work  with the new simplestreams metadata when published tomorrow (hopefully)
<thumper> ok, thanks
<tlm> hpidcock: thumper Some of my pki tests generate keys
<tlm> that could be the slowdown ?
<thumper> tlm: you aren't doing that in the bootstrap tests though?
<tlm> thumper: I'll look. Can't remember off the top of my head
<hpidcock> thumper https://usercontent.irccloud-cdn.com/file/ceDs66KD/out.png
<thumper> hpidcock: fyi, I made a bug for it https://bugs.launchpad.net/juju/+bug/1878135
<mup> Bug #1878135: BootstrapSuite.TestAutoUploadAfterFailedSync times out intermittently <intermittent-failure> <juju:Triaged> <https://launchpad.net/bugs/1878135>
<hpidcock> thumper: out of interest what architecture was this on?
<thumper> hpidcock: I need to work out how to get that
<thumper> hpidcock: pretty sure it was just amd64
<hpidcock> ok amd64 should have been fast
<hpidcock> but 3072bit rsa primes are pretty chonky
<thumper> hpidcock: probably from environs/bootstrap/config.go:211
<hpidcock> ideally for tests that are not testing certs we either use pregenerated certs or drop the key strength to 512
<thumper> hpidcock: should be fast, but if the machine is very busy and swapping
<thumper> hpidcock: right
<thumper> perhaps these tests are just missing some patching of values
<hpidcock> yeah
 * thumper wonders what that bug is
<thumper> bug 1558657
<mup> Bug #1558657: many parts of the code still don't use clocks <tech-debt> <juju:Triaged> <https://launchpad.net/bugs/1558657>
<thumper> left there as a todo
<thumper> doesn't look like we patch pki.DefaultKeyProfile anywhere
<hpidcock> I think there is room for us to do work on the test waiting. If we have a known long-running operation, it could suspend test timers
<thumper> I'd rather use a smaller key for tests so they are faster
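The fix thumper suggests, patching a weaker key profile into tests, might look like the sketch below. KeyProfile and the variable names only approximate Juju's pki package; note that recent Go releases reject RSA keys under 1024 bits, so the test profile uses 1024 rather than the 512 mentioned above (still milliseconds to generate):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
)

// KeyProfile approximates the shape of a pki key profile: a function
// that produces a private key. Production wants a strong key; tests
// that merely need *a* certificate can patch in a fast, weak one.
type KeyProfile func() (*rsa.PrivateKey, error)

// DefaultKeyProfile is the expensive production profile; generating a
// 3072-bit key is what stalls a busy CI machine for seconds.
var DefaultKeyProfile KeyProfile = func() (*rsa.PrivateKey, error) {
	return rsa.GenerateKey(rand.Reader, 3072)
}

// WeakTestKeyProfile is for tests only: fast but cryptographically
// useless, which is fine when the test isn't exercising crypto.
var WeakTestKeyProfile KeyProfile = func() (*rsa.PrivateKey, error) {
	return rsa.GenerateKey(rand.Reader, 1024)
}

func main() {
	// A test suite would patch the package variable in SetUpTest,
	// e.g. DefaultKeyProfile = WeakTestKeyProfile, and restore it in
	// TearDownTest, instead of paying for 3072-bit generation.
	key, err := WeakTestKeyProfile()
	if err != nil {
		panic(err)
	}
	fmt.Println(key.N.BitLen()) // prints 1024
}
```

Pregenerated fixture certs (as in juju/juju/testing/cert.go) remain the even cheaper option when no fresh key is needed at all.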
<wallyworld> tlm: i'm just going to relocate, can look at PR again soon if you need me to
<tlm> wallyworld: just gonna take the dogs for a run and punch on for a little bit longer. Have fixed a few bugs today and done most of your feedback. About to fix the password generation point you made in the PR
<wallyworld> no worries, enjoy the walk, i might do the same and come back later
<kelvinliu> wallyworld: hpidcock  https://github.com/juju/juju/pull/11558 got this WIP pr to let the 137 error retry, could u take a look to see if it makes sense ? ty
<wallyworld> sure
<kelvinliu> if all good, i will sort out the test
<wallyworld> kelvinliu: so it all works now?
<kelvinliu> so when I turn the wrench on, I can see nextOps returns a new retry==true remote-init operation. then it finishes if I remove the wrench toggle file
<kelvinliu> I made a bit change in the container resolver, Im not 100% sure if the change makes sense. would be good to get hpidcock to take a look
<wallyworld> kelvinliu: can we put the retryable check inside the RemoteInit implementation to avoid importing k8s into the uniter worker
<kelvinliu> it's currently in remoteInit.go, we need to mutate the local state, not sure if we can do this outside of uniter package
<wallyworld> kelvinliu: i mean just the error wrapping
<wallyworld> state mutation is fine where it is
<kelvinliu> ok, u mean add one more layer of this special error ?
<wallyworld> the aim is to avoid importing k8s package into worker/uniter, but i'd need to look closer at the code
<wallyworld> kelvinliu: so in here func (op *caasOperator) remoteInit(
<wallyworld> that's what is ultimately called by callbacks.RemoteInit()
<kelvinliu> sure, im already on it now
<wallyworld> awesome, that keeps the k8s stuff in one place
<wallyworld> kelvinliu: also, i thought for sure we have a generic CanRetry() type error somewhere already. that would then fully abstract away any k8s from the worker/uniter
<wallyworld> maybe we don't, i'll check
<wallyworld> kelvinliu: ah, this is what i was thinking of. the Go net.Error interface has a Temporary() bool method
<wallyworld> so not directly usable here as it's not really a network error
<wallyworld> we could add a new interface to core package perhaps
<wallyworld> that would decouple worker/uniter from k8s related knowledge
 * wallyworld bbiab
<kelvinliu> wallyworld:  I added a new uniter error `retryableError` to detect if an operation should be retried or not
<wallyworld> tlm: i've unresolved a few comments that were either resolved without the necessary changes, or resolved without a comment as to why the request was invalid
<manadart> achilleasa: https://github.com/juju/juju/pull/11563.
<achilleasa> manadart: looking
<skay> I'd like to deploy another postgresql unit to an environment so that I can have a swappable unit and I've never done it before. is there a post I can read about that somewhere?
<skay> on the charm page it looks like all one does is add another unit and magic happens. I'd like to know a little more about this beyond trying to read the charm's src code. like, how long does it take for the database to get replicated (based on size of db)
<skay> etc etc etc
<skay> and I never set anything on the charm when I deployed it, do the defaults magically work or do I need to set different ones. etc etc
<achilleasa> hml: the changes in 11523 LGTM. I just have a comment about the relationer constructor. Can you take a look?
<hml> achilleasa:  looking
<hml> achilleasa:  i'll give it a try.
<hml> achilleasa:  thank you.
<achilleasa> hml: it shouldn't make a difference in total LOC as the PR already lowercases the type
<hml> manadart:  qa good, looking at the code now
<hml> achilleasa:  i agree in principle, however it's breaking down for me here:  https://github.com/juju/juju/blob/8ed98bba62d60df44e5648962bf25bb10060f488/worker/uniter/relation/statetracker.go#L52.
<hml> achilleasa: the StateTracker can't get the Relationer at creation.
<hml> achilleasa:  there are workarounds, but they seemed to defeat the idea behind the change.  Or use a PatchValue call?  eh.
<hml> achilleasa:  open to more ideas
<achilleasa> hml: you could hoist the method to RelationStateTrackerConfig and provide a closure 'func(....) Relationer { return relation.NewRelationer(...) }' when you populate the uniter config
<achilleasa> this way, it can be easily (in relative uniter test-suite terms) mocked at test time
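The closure-in-config pattern being suggested looks roughly like this — hoist the constructor into the config struct so tests can inject a mock. All type and field names below are illustrative, not the actual uniter/relation types.

```go
package main

import "fmt"

// Relationer is a stand-in for the real interface under discussion.
type Relationer interface {
	Join() error
}

type realRelationer struct{ id int }

func (r realRelationer) Join() error { return nil }

// StateTrackerConfig carries the constructor as a closure. Production code
// leaves it nil and gets the real constructor; tests inject a fake.
type StateTrackerConfig struct {
	NewRelationer func(id int) Relationer
}

type StateTracker struct {
	cfg StateTrackerConfig
}

func NewStateTracker(cfg StateTrackerConfig) *StateTracker {
	if cfg.NewRelationer == nil {
		cfg.NewRelationer = func(id int) Relationer { return realRelationer{id} }
	}
	return &StateTracker{cfg: cfg}
}

func (st *StateTracker) joinRelation(id int) error {
	return st.cfg.NewRelationer(id).Join()
}

func main() {
	// Production path: zero config, real constructor is used.
	st := NewStateTracker(StateTrackerConfig{})
	fmt.Println(st.joinRelation(1)) // <nil>
}
```

The trade-off raised later in the exchange is real: the only non-test caller ends up needing a wrapper closure, which is why the pattern was ultimately dropped here.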
<hml> achilleasa:  i did that, but it seemed to defeat the purpose of the change for me.
<hml> achilleasa:  the only use of it not in test, would require a wrapper
<achilleasa> hml: I guess you are right... it's not like we will have a different Relationer implementation anyway right?
<hml> achilleasa:  nope.  the only reason I added the func, was for test.  to avoid doing a PatchValue()
<achilleasa> hml: ok then, I can approve the PR and you can land as-is. Sorry for sending you down a rabbit hole; should have spotted that func bit earlier
<hml> achilleasa:  it mostly has to do with how the StateTracker is written. no easy way around
<hml> achilleasa:  no problem.  it's a good lesson for me.  perhaps i can use it later.  does solve a few things i've noticed over time with the take-an-interface-return-a-struct pattern interacting with mocked testing
<hml> if it wasn't for the darn func.  :-/
<achilleasa> hml: approved
<hml> achilleasa:  ty
<achilleasa> hml: can you take a look at the appended commits to get https://github.com/juju/juju/pull/11557 green before I hit merge?
<hml> achilleasa:  sure, give me a min or
<hml> 2
<achilleasa> hml: since I hit EOD feel free to land the PR if it looks OK
<hml> achilleasa:  rgr
#juju 2020-05-13
<hpidcock> wallyworld: just got this bootstrap lxd on 2.8-rc branch
<hpidcock> Unable to fetch Juju GUI info: error fetching simplestreams metadata: cannot unmarshal JSON metadata at URL "https://streams.canonical.com/juju/gui/streams/v1/com.canonical.streams-released-gui.sjson": json: cannot unmarshal string into Go struct field Metadata.juju-version of type int
<wallyworld> hpidcock: yup, they messed up the metadata
<wallyworld> a fix is in progress
<hpidcock> awesome
<wallyworld> indeed
<hpidcock> wallyworld: https://github.com/juju/python-libjuju/pull/412
<hpidcock> and anyone else interested in python-libjuju
<wallyworld> looking
<wallyworld> +43000!!!!!!
<timClicks> wgrant: appreciate the time taken to write that response, thanks
<wgrant> timClicks: It's possible that I may have some Opinions :)
<wgrant> Thanks for starting conversations like this
<timClicks> wgrant: am glad that thread was started with a warning ;)
<wgrant> Heh, indeed.
<wallyworld> hpidcock: ideally we'd use 2.8.0 for the schema not 2.8-rc2 as that reference will be out of date in days. can we add in the remaining schema from the model operator branch and use 2.8.0
<hpidcock> wallyworld: rc2 is 2.8.0 at the moment. If stuff hasn't landed in the rc branch it's not 2.8.0 yet
<wallyworld> sure, but it seems unfortunate to release a new libjuju which will be out of date in a matter of days
<wallyworld> i guess we can do a 2.8.1 after 2.8 ships
<thumper> hpidcock: reviewed libjuju
<wallyworld> thumper: i'm still waiting on fixes/changes to the dashboard tarball, but this works with the one that's currently published. i also need to retest upgrades after changes to accommodate the recent tarball revision https://github.com/juju/juju/pull/11562
<thumper> wallyworld: ack
<thumper> wallyworld: did you check that the old gui continued to work after the upgrade?
<wallyworld> i did
<wallyworld> will retest everything though after final tweaks for the new new new dashboard
<wallyworld> thumper: everyone else super busy, so sorry, https://github.com/juju/juju/pull/11565
<wallyworld> ah balls, targeted to wrong branch, should be 2.7
<thumper> wallyworld: did you want me to retarget?
<thumper> or do you need to rebase first?
<wallyworld> thumper: i just rebased and retargeted
<wallyworld> now i need coffee before i hack on the dashboard stuff again to support the new new format
<thumper> wallyworld: asked a question...
<wallyworld> thumper: answered
<wallyworld> previously the offer string was a const
<wallyworld> now it's built from series, and we also need the sku
<thumper> wallyworld: right, but the number at the front is something they increment
<thumper> not a fixed 0001
<wallyworld> supposedly
<wallyworld> it's not something we know ahead of time, and it's not something we can query. we could make it a config somewhere but that has its own issues
<thumper> which is why I asked can we do a substring match on the offer?
<wallyworld> it's an arbitrary decision outside of juju
<wallyworld> no, because it's a param passed to azure
<wallyworld> we don't match on it
<wallyworld> we pass it to azure
<thumper> right
<thumper> I understand that
<thumper> can we pass a substring?
<thumper> is it an exact match?
<thumper> because we shouldn't be using 0001
<wallyworld> it's used to tell azure to pick a particular image by tag, no substring involved, eg
<wallyworld> 	return &compute.StorageProfile{
<wallyworld> 		ImageReference: &compute.ImageReference{
<wallyworld> 			Publisher: to.StringPtr(publisher),
<wallyworld> 			Offer:     to.StringPtr(offer),
<wallyworld> 			Sku:       to.StringPtr(sku),
<wallyworld> 			Version:   to.StringPtr(version),
<wallyworld> 		},
<wallyworld> 		OsDisk: osDisk,
<wallyworld> 	}
<wallyworld> Offer is a straight out arg to an api call
<wallyworld> we either have it or we don't
<thumper> in which case we have a problem
<wallyworld> if they ever change it, yes
<thumper> guarantee that they'll change it
<kelvinliu> wallyworld: https://github.com/juju/juju/pull/11566 got this pr to move the retry to initialization function to retry the coping charm request only rather than retrying the whole remote init operation
<wallyworld> we need to tell them not to
<kelvinliu> could u take a look?
<wallyworld> sure
<kelvinliu> ty
<wallyworld> kelvinliu: lgtm but retarget to 2.8-rc branch
<kelvinliu> wallyworld: yep, ty
<thumper> https://github.com/juju/juju/pull/11567 for a fix to the full status query tracker tests
<wallyworld> hpidcock: kelvinliu: looks like another remote-init issue, bug 1878329 application-mattermost: 15:32:59 ERROR juju.worker.uniter resolver loop error: executing operation "remote init": caas-unit-init for unit "mattermost/0" failed: ERROR failed to remove unit tools dir /var/lib/juju/tools/unit-mattermost-0: unlinkat /var/lib/juju/tools/unit-mattermost-0/goal-state: permission denied
<mup> Bug #1878329: stuck k8s workload unit following upgrade-charm with new image <juju:New> <https://launchpad.net/bugs/1878329>
<wallyworld> thumper: looking
<kelvinliu> that's weird, operator is the root user
<wallyworld> thumper: just a small suggestion
<wallyworld> thanks for the libjuju release hpidcock, you've made several folks very happy, including the osm guys
<thumper> not to mention solutions qa
<kelvinliu> wallyworld:  https://bugs.launchpad.net/juju/+bug/1877935 we still have this bug,
<mup> Bug #1877935: operator trys to exec into the workload container but the container is not running yet <juju:New> <https://launchpad.net/bugs/1877935>
<wallyworld> yeah we do :-(
<kelvinliu> forgot to mention in standup, I think this one and the watcher one should have higher priority than the block storage one?
<wallyworld> yup
<wallyworld> the block storage one is more a guard rail
<kelvinliu> ok, im gonna look on these two first
<wallyworld> with the init one, maybe we can try looking for the pod a few times, using retry()
<wallyworld> or at least query the cluster to see if things are starting up
<wallyworld> and give them time to get done if they are
<kelvinliu> i think there might be an upgrade-charm flow missing in the init process
<wallyworld> could be, i'd need to look at the logic again to see what's been implemented
<kelvinliu> also I guess the watcher bug is related to the operator rather than the watcher itself. It seems the operator is not responding correctly if any uniter got those errors (like 137, etc).
<thumper> wallyworld: you think I should just say "FullStatus" without the prefix ?
<thumper> I think it would probably match too
<wallyworld> thumper: it's more filtering out other method names that contain the one currently being filtered on
<thumper> wallyworld: I don't know what you just said
<wallyworld> so if the tracer has been set up to look for "FullStatus" and the traceback has "FullStatusVerbose", it would match both with the current code
<thumper> FWIW, 	tracker := s.State.TrackQueries("FullStatus") works
<wallyworld> we want to exclude "FullStatusVerbose"
<thumper> well, then the caller should say "FullStatus("
<wallyworld> ok, I wasn't sure if the expectation / desire was for a single method match
<wallyworld> to avoid ambiguity
<thumper> it is sufficient for now, but we may need to tweak later, I'll add more of a comment on the TrackQueries method to explain so future people will know
<wallyworld> sgtm
<thumper> wallyworld: added more of a comment to explain on the tracker
<thumper> so it shouldn't be a surprise to any future people using it
<wallyworld> ty
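The ambiguity resolved above is easy to demonstrate: a bare "FullStatus" filter also matches "FullStatusVerbose" in a traceback, while asking callers to pass the call site including the paren, "FullStatus(", disambiguates cheaply. Illustrative code, not the actual TrackQueries implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a traceback contains the method filter, the way a
// naive substring-based query tracker would.
func matches(traceback, filter string) bool {
	return strings.Contains(traceback, filter)
}

func main() {
	tb := "client.FullStatusVerbose(ctx)"
	fmt.Println(matches(tb, "FullStatus"))  // true  (unwanted match)
	fmt.Println(matches(tb, "FullStatus(")) // false (excluded as desired)
}
```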
<thumper> wallyworld, or anyone: https://github.com/juju/juju/pull/11568
<thumper> tlm, hpidcock, kelvinliu: ^^?
<kelvinliu> looking now
<thumper> kelvinliu: thanks
<thumper> kelvinliu: generally for things we need to wait on, we use testing.LongWait
<thumper> even if we expect them to happen quickly
<thumper> there is no real change to the test times for running in that package
<thumper> with the patch I had 2.2, 3 and 4s for the package on three different runs
<thumper> before it was 2.7 seconds
<thumper> so within the realm of ok IMO
<kelvinliu> thumper: it's weird to see these errors because we mocked everything in the caas provider, so no db or any network access, but the fix lgtm.
<thumper> yeah, I'm betting it is just CPU contention
<thumper> and the number of concurrent goroutines
<thumper> yes it shouldn't take long
 * thumper shrugs
<thumper> I'm pretty sure this will make the intermittent issues go away
<kelvinliu> yeah, thanks for the fix
<wallyworld> thumper: that PR should land in the 2.8 branch IMO
<wallyworld> not 2.8-rc but 2.8
<wallyworld> tlm: your PR ready for another look?
<timClicks> an excellent thread has emerged on a charm's error state for anyone who would like a few minutes of procrastination ahead of them https://discourse.juju.is/t/when-to-send-an-application-into-an-error-state/3046
<manadart> hml: Cherry-pick to the RC branch of Tim's ENI/Netplan patch: https://github.com/juju/juju/pull/11569
<hml> manadart:  looking
<hml> manadart:  tick
<manadart> hml: Ta.
<manadart> hml: Small utility addition. No hurry; I am EoD: https://github.com/juju/juju/pull/11570
<cory_fu> petevg: I have a present for you: https://github.com/juju/python-libjuju/pull/415  There are still failing tests, but you can actually see what they are now and they would have caught the open issues with the 2.8 release.
<petevg> cory_fu: nice! Thank you :-)
<cory_fu> petevg: It would also be nice to see all of those "Event loop is closed" errors when a test fails cleaned up to make the results less noisy, but I'll leave that to you.  ;)
<petevg> +1
<petevg> :-)
<cory_fu> petevg: I see that the Juju repo is also using GH Actions now.  It would be really interesting if a portion of that PR could be ported over to the Juju repo so that breakages to libjuju are caught immediately rather than depending on a change to land in libjuju to find it.  I guess one aspect of that is that you'd need to do the upstream sync as part of the test so that things like facade or definition changes got picked up.  That's probably worth
<cory_fu> doing on the edge portion of the libjuju tests as well, TBH.
<petevg> cory_fu: that makes sense. It'd be nice to catch this stuff right away ...
<cory_fu> petevg: Downside is the 20+ minute duration of the integration tests, plus the fact that while the change in Juju might lead to a break, it would likely need to be fixed in libjuju rather than the Juju PR.
<petevg> True.
<petevg> That would create a chicken and egg issue!
<cory_fu> I wonder if GitHub Actions could do a daily run with notifications?  Or since you have a Jenkins already, adding a daily run to that to watch for libjuju breakage.
<cory_fu> petevg: I actually thought that last one was planned when the Juju team took over libjuju, but I guess it got dropped.
<rick_h> cory_fu:  petevg one of the issues we had was the whole "land the thing that generates the new schema" and then go update the schema in the library chicken and egg issue
<rick_h> cory_fu:  petevg I think there is a test that's non-gating in jenkins that checks if things are likely to be broken. You can check with stickupkid on those as he's managed most of that to date
<petevg> rick_h: cory_fu made a pr to make that test gating :-)
<cory_fu> petevg: Different test
<petevg> Aha! Got it.
<cory_fu> rick_h, petevg: Would that be this test?  https://jenkins.juju.canonical.com/view/github/job/github-integration-tests-pylibjuju/
<cory_fu> Hrm.  Maybe this one?  https://jenkins.juju.canonical.com/job/github-schema-tests-pylibjuju/
<petevg> You're showing me red circles and making me sad, cory_fu :-(
<cory_fu> petevg: Red circles that haven't been run in months, too
<cory_fu> petevg: Maybe this one will make you happier?  https://jenkins.juju.canonical.com/job/github-check-merge-juju-python-libjuju/147/
<cory_fu> That is running the unit tests, at least, but that doesn't help with catching any of the stuff that the integration tests caught.
<cory_fu> petevg: Thinking about it more, there's a chicken-and-egg issue in the other direction as well.  If we make the libjuju edge tests do the upstream sync and it fails, it wouldn't really be relevant to the PR on that side either.
<cory_fu> petevg: It definitely seems like the right thing is instead something like a daily build that uses master of Juju and master of libjuju, syncs and builds them, then runs the integration tests from libjuju.  As long as that actually generated an alert that got paid attention to, it wouldn't block specific PRs but would let you know if there was an issue that would need to be addressed.
<cory_fu> Plus the 20 or so minute runtime of the integration tests wouldn't be so onerous if only being run once a day.
<thumper> wallyworld: https://github.com/juju/juju/pull/11571
#juju 2020-05-14
<wallyworld> tlm: testing the model operator, i notice that the service and pod and deployment names etc start with <modelname>. That's not needed and it just adds clutter. Can you change to just "modeloperator"
<wallyworld> thumper: +1 on the pr
<thumper> wallyworld: ta
<wallyworld> tlm: also, the introspection script is missing
<hpidcock> python-libjuju PR https://github.com/juju/python-libjuju/pull/416
<thumper> hpidcock: I'm wondering if recent jenkins updates missed something, I had a new PR (https://github.com/juju/juju/pull/11571) and a merge job was kicked off for it with no $$merge$$ anywhere
<thumper> the merge job unit tests passed, but the merge failed (obviously) because it hadn't been reviewed
<thumper> https://jenkins.juju.canonical.com/job/github-juju-merge-jobs/1559/console
<thumper> the job thinks it merged fine
<thumper> but github said no
<thumper> a bit weird
<hpidcock> thumper: looking
<hpidcock> thumper: I don't see a failed merge job for 11571
<hpidcock> there is only the pending one
<thumper> no, it passed
<thumper> the one I linked above was for it
<thumper> but it shouldn't have been started even
<hpidcock> thumper: that is for 11564 not 11571
<thumper> ah... it showed up on my branch...
<thumper> perhaps it is github that has the error
<hpidcock> probably because of this pr https://github.com/juju/juju/pull/11564
<thumper> I just proposed the same branch to land on a different target
<thumper> that makes me feel better
<thumper> I think we're ok then
<hpidcock> yeah you scared me for a second
<thumper> I was a bit scared too
<thumper> and now I'm scared for an entirely different reason, for a different issue
<thumper> just emailed crew
<thumper> need to examine code
<tlm> wallyworld: np
<tlm> wallyworld: got 5 minute for HO ?
<wallyworld> tlm: sure, once sec
<wallyworld> tlm: merged, tag is 2.0.1
<tlm> thanks wallyworld
<babbageclunk> thumper: do you think I should remove the featureflag worker? It's not used now that the legacy-leases-flag is removed, but it's not actually tied to that flag, and I could see wanting to use it again for some other flag
<babbageclunk> I guess it could always be resurrected, but it's equally likely to just get rewritten next time it's needed because no-one remembers it was there
<babbageclunk> that might still be better than keeping the vestigial worker around
<babbageclunk> ok, you've convinced me, good talk!
<thumper> wallyworld: reviewed, just some logging to fix
<thumper> babbageclunk: I agree with yourself
<wallyworld> thumper: ty
<babbageclunk> thumper: for reference, the deletionist won
<wallyworld> kelvinliu: just checking in to see if i can help with anything, quick HO?
<kelvinliu> yep
<babbageclunk> anyone want to remove the legacy-leases removal? https://github.com/juju/juju/pull/11573
<wallyworld> looking
<babbageclunk> oops I meant review it
<babbageclunk> thanks!
<wallyworld> babbageclunk: should the leases collection be removed after upgrading
<babbageclunk> wallyworld: mmmmmaybe?
<babbageclunk> probably
<wallyworld> remove the const from allcollections at least so we don't create it
<babbageclunk> yeah, good call - I'll just use "leases" in the places that use leasesC now.
<wallyworld> yup
<wallyworld> in upgrade steps i'm guessing
<babbageclunk> yup
<wallyworld> hpidcock: a very small fix https://github.com/juju/juju/pull/11574
<tlm> i can do it wallyworld ? hpidcock has gone AFK I think
<wallyworld> ok, ty
<kelvinliu> wallyworld: free to HO?
<wallyworld> kelvinliu: sure
<kelvinliu> stdup?
<thumper> anyone? https://github.com/juju/juju/pull/11575
<thumper> merges 2.8-rc branch into 2.8
<thumper> everything applied cleanly
<thumper> (thankfully)
<tlm> i'll take a look
<thumper> tlm: ta
<kelvinliu> wallyworld: application-mariadb-k8s: 02:00:59 ERROR juju.worker.uniter resolver loop error: executing operation "remote init": Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "49f43e921bc7252d4180672fb6535b9b09bcff09a50b7495c19d117ef10f10ad": cannot exec in a stopped state: unknown
<kelvinliu> this is the error I mentioned last week, now I got it again. I might finally have to add retries for many different errors.
<skatsaounis> Hi there, the Juju public events calendar had an entry for Juju Office Hours for today. Apparently I was not able to find the relevant YouTube link.
<skatsaounis> Did you cancel it or transfer it to a different date?
<manadart> skatsaounis: I assume that has been left up in error. I don't see it on my schedule.
<manadart> achilleasa: https://github.com/juju/juju/pull/11576
<achilleasa> manadart: any particular reason for dropping the []InterfaceInfo type? I am actually working on adding a filter function to select NICs created via OVS
<achilleasa> s/actually/currently
<achilleasa> manadart: ah crap. sorry; just saw that it moved
<manadart> achilleasa: Ja.
<achilleasa> manadart: left one comment
<manadart> achilleasa: Thanks. That method's only usage is in the test for it. Any issue if I delete it?
<achilleasa> Don't think so. You could make it a method on []InterfaceInfo though. I will also be adding one for Filtering
<achilleasa> we may need a sorted list at some point
<manadart> achilleasa: Sure; I'll remove it. Specific sort(s) can be added to the new type as you say, as needed.
<manadart> achilleasa: Removed it. Also relocated tests for the type that were still under the network package.
<achilleasa> manadart: great; I will wait for it to land and rebase my stuff on top so I can make use of the exported InterfaceInfos
<skatsaounis> manadart, ok thanks. The most important thing is that I didn't miss it :) Just to be on the safe side, I can see the next one is on 21st of May. Is that correct?
<manadart> skatsaounis: timClicks is organising those. He should be able to confirm/deny.
<achilleasa> manadart: can you also forward-port your changes to develop?
<manadart> achilleasa: Yep.
<manadart> achilleasa: https://github.com/juju/juju/pull/11577
<achilleasa> manadart: did it merge cleanly or did you have to tweak anything?
<manadart> achilleasa: Just the same old version bits.
<manadart> All clean otherwise.
<achilleasa> manadart: ok, going through the change list but just wanted to double-check if I should do a thorough review or not ;-)
<manadart> achilleasa: Ja.
<skay> is there a way for me to specify machine constraints when I add another unit to an application?
<skay> *different* constraints than what I originally deployed
<rick_h> skay:  just change the constraints with set-constraints and then when you add-unit it'll follow the new updated constraints
<skay> rick_h: thanks!
<achilleasa> manadart: I will push a PR against 2.8 (and forward port to develop) to replace []InterfaceInfo with InterfaceInfos. Will that cause any issues with the stuff you are working on?
<manadart> achilleasa: No. I was going to do that ultimately, so go ahead.
<manadart> hml: Can you look at this one during your day? https://github.com/juju/juju/pull/11579. No rush, I am heading off now.
<hml> manadart:  looking
<achilleasa> hml: small "sed-rename" PR https://github.com/juju/juju/pull/11580
<hml> achilleasa:  added to the queue.  :-)
<hml> achilleasa:  approved
<achilleasa> hml: thanks!
<achilleasa> achilleasa: I will push another one to forward-port to develop so I can rebase my ovs PR tomorrow
<achilleasa> hml: ^
<hml> achilleasa:  rgr
<achilleasa> hml: PR for the forward port: https://github.com/juju/juju/pull/11581
<hml> achilleasa:  looking
<hml> achilleasa:  approved
<hpidcock> pylibjuju PR for wallyworld or someone else https://github.com/juju/python-libjuju/pull/422
<wallyworld> looking
<wallyworld> hpidcock: btw, last night i landed the gh actions PR, got pinged by a US person
<wallyworld> hpidcock: why is status.relations handled but not status.applications?
<wallyworld> it seems it would be confusing for a user writing python with the library if we use different approaches
<hpidcock> wallyworld: I'm just testing various ways of getting attributes
<wallyworld> ah ok
<wallyworld> might be worth a different test then
<wallyworld> a small test with just that bit tested
<hpidcock> wallyworld: yep can do
<wallyworld> lgtm, ty
#juju 2020-05-15
<wallyworld> babbageclunk: is it true we hard code leases auto expire to false? did we have plans to change that? if auto expire is always false, then we can remove more code?
<babbageclunk> wallyworld: ? not sure what you mean
<wallyworld> // Autoexpire is part of the lease.Store interface.
<wallyworld> func (*store) Autoexpire() bool { return false }
<wallyworld> controls how the worker expires leases
<wallyworld> whether it does it on the next tick or if a client call is needed
<babbageclunk> Ah - right! Yes, I'll remove that - it's because the db leases needed to be expired, but the raft ones get expired on clock tick
<babbageclunk> thanks for the reminder!
<wallyworld> babbageclunk: np, but for raft, if it is done automatically, wouldn't auto expire be true?
<babbageclunk> it is
<babbageclunk> I mean, I'll remove the code in lease manager that does stuff when !autoexpire
<wallyworld> but it's hard coded false?
<babbageclunk> in the non-raft store
<wallyworld> ah, ok, i was looking at the wrong one
<wallyworld> duh
<babbageclunk> yup yup
<wallyworld> babbageclunk: you can also remove ExpireLease() right? and reduce code in provider dummyLeaseStore, or even remove that entirely
<babbageclunk> yeah, that was what I was planning
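The Autoexpire split discussed above reduces to a single branch in the lease manager, sketched below. These types are illustrative, not the real lease package: raft-backed stores expire leases themselves on clock tick, while the old db-backed store needed an explicit expire call.

```go
package main

import "fmt"

// Store is a slice of the lease.Store interface relevant here.
type Store interface {
	Autoexpire() bool
}

type raftStore struct{}

func (raftStore) Autoexpire() bool { return true }

type dbStore struct{}

func (dbStore) Autoexpire() bool { return false }

// expireIfNeeded is the branch the lease manager can delete once every
// remaining store autoexpires.
func expireIfNeeded(s Store, expire func() error) error {
	if s.Autoexpire() {
		return nil // store handles expiry on its own clock tick
	}
	return expire()
}

func main() {
	err := expireIfNeeded(raftStore{}, func() error {
		panic("expire should not be called for an autoexpiring store")
	})
	fmt.Println(err) // <nil>
}
```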
<thumper> https://github.com/juju/juju/pull/11582 for someone
<thumper> backporting develop fix to 2.8
<thumper> should really have landed in 2.7, but I don't think it is worthwhile going that far back now
<wallyworld> lgtm
<wallyworld> thumper: wanna land in 2.8-rc though
<wallyworld> that will get forward ported today later
<thumper> wallyworld: ok
<kelvinliu> wallyworld: hpidcock could any of you take a look this PR https://github.com/juju/juju/pull/11583 for fixing upgrade-charm? ty
<wallyworld> sure
<thumper> wallyworld, hpidcock, kelvinliu, tlm: do any of you have any running k8s models that have been upgraded from earlier versions?
 * thumper wants to test a theory
<wallyworld> not currently
<wallyworld> kelvinliu: +1 but with a request to fix the RemoteInitFunc signature to remove runningStatus. the cache cleanup can happen later, but it will be nice to have the func clean
<kelvinliu> wallyworld: we still need to get the pod name from the status, or we can change the status to podName string. or do u think we can change it later when we decide to remove the status entirely?
<tlm> thumper: I don't
<wallyworld> kelvinliu: ah right, i missed that, sure
<tlm> wallyworld: cooking with gas when you're ready
<wallyworld> tlm: looking
<kelvinliu> wallyworld: I think we just need to test upgrading the different workload types - stateless, stateful, daemon, but probably no need to test different k8s clouds, agree?
<wallyworld> we should at some point but can be microk8s for now i think
<wallyworld> tlm: SetPasswords() also needs changing to be an api that takes just the password. it can internally create the params.EntityPassword since we use a pluggin at the back end, but the api called should just pass in a single password
<wallyworld> and result.OneError()
<wallyworld> func sig just returns a single error, not ErrorResults
<wallyworld> tlm: also, i added tag 2.0.1 to juju/description
<wallyworld> i left comments in the pr
<tlm> wallyworld: cheers doing now
<babbageclunk> gah, can't remove leasesC - we can't run transactions against an unknown collection.
<babbageclunk> wallyworld: ^
<wallyworld> which txns?
<babbageclunk> the ones in MigrateLeasesToGlobalTime
<wallyworld> can we check if the collection exists before running the upgrade step logic
<wallyworld> if the collection is not there, no need to migrate anything
<babbageclunk> yes, but it needs to be in allcollections, otherwise running a transaction fails
<wallyworld> point me at the code that fails?
<wallyworld> the collection will be there in older dbs right
<wallyworld> and we run the upgrade step on that existing collection
<babbageclunk> wallyworld: state/upgrades.go:1293
<wallyworld> babbageclunk: right, so at the top of the method we check if the collection exists
<wallyworld> if it doesn't, nothing to do
<babbageclunk> it's not the existence of the collection - it's the entry in allCollections
<wallyworld> HO?
<babbageclunk> Otherwise the tests (and the upgrade step) fail with  forbidden transaction: references unknown collection "leases"
<babbageclunk> sure, in stdup
<tlm> wallyworld: got 5 minutes for HO ?
<thumper> https://github.com/juju/juju/pull/11584 - cleanup for some future work
<wallyworld> tlm: sure
<hpidcock> thumper: lgtm
<thumper> hpidcock: thanks
<thumper> https://github.com/juju/juju/pull/11585 - for a bug that had been assigned to me for over a year
<thumper> or two
<hpidcock> lgtm
<thumper> hpidcock: thanks again
<tlm> hey wallyworld you didn't push a tag for juju/description
<tlm> still have it? Not sure I have push access to that repo
<wallyworld> ah, it was added to the branch for which i created a PR
<wallyworld> i'll add
<tlm> ta
<wallyworld> tlm: try now
<tlm> works cheers wallyworld
<tlm> wallyworld: I think go mod wants tags in the form of v2.0.1 hpidcock can you confirm ?
<hpidcock> tlm depends on the package
<wallyworld> tlm: the previous tag was 2.0.0
<tlm> hmmm go mod is not liking life
<wallyworld> tlm: i added v2.0.1
<wallyworld> does that help?
<tlm> changed the error but hasn't helped
<tlm> will dig into it
<tlm> can you remove the v tag wallyworld ?
<wallyworld> ok
<wallyworld> done
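For reference on the tagging trouble above: Go modules only recognise semver tags with a leading "v", and a major version of 2 or higher additionally requires the module path to carry a /v2 (or higher) suffix, both in the module's own go.mod and at every import site (or a +incompatible version for repos without a go.mod). So a bare `2.0.1` tag is invisible to go mod, and a `v2.0.1` tag alone still fails unless the module is declared as, illustratively:

```
module github.com/juju/description/v2
```

This matches the symptom tlm saw: adding the "v" prefix changed the error but did not resolve it.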
<wallyworld> babbageclunk: you also removing store.Refresh() ?
<wallyworld> tlm: looks good after the go.mod issue is sorted
<tlm> wallyworld: go.mod issue should be sorted now
<tlm> ?
<wallyworld> tlm: ah ok, sorry i was probably looking at an outdated diff
<babbageclunk> wallyworld: yeah, I will - just working out how I can make the dummy store do autoexpiry
<wallyworld> ah joy
<wallyworld> babbageclunk: i have the unit->dead revocation fully working :-D
<babbageclunk> ooh nice
<wallyworld> kelvinliu: tlm: so i think your stuff is good to go; i have release notes organised; as soon as things land and wash through jenkins, i'll kick off rc2. any issues i'm missing?
<kelvinliu> wallyworld: ho?
<wallyworld> ok
<wallyworld> hpidcock: you free to jump into standup?
<hpidcock> wallyworld: sure
<wallyworld> hpidcock: kelvinliu: sorry, hit button too soon
<kelvinliu> nws
<hpidcock> wallyworld: sure
<babbageclunk> wallyworld: hey, I'm just going to hit merge on my pr so yours doesn't need to hold up - I can always put the extra store tidy-ups in after.
<babbageclunk> that was poorly worded but I think you can probably see what I'm getting at
<wallyworld> babbageclunk: whoohoo, tyvm
<achilleasa> manadart: I have landed the []InterfaceInfo -> InterfaceInfos PR on 2.8 and dev; make sure to rebase if working with those types to avoid conflicts
<manadart> achilleasa: Yep, rebased already.
<achilleasa> manadart: have a question on 11579
<manadart> achilleasa: HO?
<achilleasa> omw
<manadart> achilleasa: Pushed change and replied to your comment.
<achilleasa> looking
<manadart> achilleasa: Simple one: https://github.com/juju/juju/pull/11587
<manadart> achilleasa: Thanks. Also need a tick on this backport of the earlier one: https://github.com/juju/juju/pull/11588
<achilleasa> manadart: done
<manadart> achilleasa: Ta.
<manadart> achilleasa: Trivial forward-merge https://github.com/juju/juju/pull/11589.
<achilleasa> manadart: trade you for https://github.com/juju/juju/pull/11590
<manadart> achilleasa: Deal.
