#juju 2013-08-19
<marcoceppi> kurt_: If you're using the GUI, you'll want to use cloud:precise-updates/grizzly
<kurt_> marcoceppi: thank you.  is there anything I need to do w/r/t the other components or repositories and such?
<kurt_> marcoceppi: That failed too. :( http://pastebin.com/Lht2LULb
<marcoceppi> kurt_: and you've tried cloud:precise-grizzly ?
<marcoceppi> kurt_: the openstack charms are mad complicated to deploy
<marcoceppi> I have a deployer file I used during a demo that successfully sets up everything but swift
<marcoceppi> if you want to use it for reference
<marcoceppi> With the demo I was able to log in to horizon and push images to glance, but networking was broken. It was enough for my demo but might help you sort out configuration details, etc
<marcoceppi> kurt_: http://paste.ubuntu.com/6001802/
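(A minimal sketch of pointing a deployed charm at the Grizzly cloud archive; the openstack-origin option name for the 2013-era OpenStack charms is an assumption here:)

    # hedged example: the option name openstack-origin is assumed
    juju set nova-compute openstack-origin=cloud:precise-updates/grizzly
    juju set keystone openstack-origin=cloud:precise-grizzly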
<marcoceppi> I'm traveling ATM, but I have a goal next week to get OpenStack deployed using MAAS + Juju and document the process (along with documenting Juju + MAAS)
<marcoceppi> kurt_: are you deploying openstack on MAAS?
 * marcoceppi boards a plane
<kurt_> marcoceppi: yes
<kurt_> with juju-gui
<marcoceppi> kurt_: using juju-gui shouldn't cause a problem here. Its method of deployment is nearly identical to that of the CLI. Do you get different results using the CLI?
<kurt_> I haven't tried via command line
<marcoceppi> kurt_: the issue I think is with the proper configuration of the charms, not necessarily with juju-gui
<marcoceppi> kurt_: that deployer file I gave you is a YAML representation of a near working openstack deployment I did earlier
<marcoceppi> each service has the configuration options I used under it, it might be a good starting point for you
<kurt_> Ok
<kurt_> why didn't you have to enter the vip parameter for keystone?
<kurt_> it was considered mandatory under the gui
<kurt_> anyways I'm going off on tangents
<kurt_> marcoceppi: thanks.  I'll try that
<kurt_> marcoceppi: and btw - yes I tried cloud:precise-grizzly in the gui
<weblife> jrwren: I was wrong it was cpu-power=0
<geme> I've just deployed the tomcat6 charm on a private openstack instance and port 8080 hasn't been added to the security group rules. Is this a known problem ?
<weblife> how could I expose the juju bootstrap server since it doesn't get assigned a service?
<kurt_> weblife: are you referring to your bootstrap server or your node 0 (the bootstrapped node)?
<kurt_> weblife: either way, just ensure at least one interface is on your public facing network and you are golden.
<weblife> kurt_:  do you mean in the /etc/network/interfaces file?
<kurt_> yes
<kurt_> weblife: are you using MAAS?
<weblife> AWS ;x
<kurt_> Ah, yeah, you'll need to ensure whatever interfaces you want exposed are publicly routable.  In MAAS, you need to use NAT to route from an external interface to an internal interface.  At least that's what worked for me.
<jcastro> weblife: the docs are wrong
<jcastro> juju doesn't support instance-type
<weblife> jcastro: ?? Oh lol.
<weblife> jcastro: thumper and marcoceppi helped me out on that.
<jcastro> you need to specify it with cpu-cores and memory
<weblife> jcastro: I found specifying cpu-power=0 alone works also
<jcastro> oooh
<jcastro> nice trick, I didn't know that one
<jcastro> can you paste me your entire line? I can put it in the docs as an example
<weblife> jcastro: juju bootstrap --constraints "cpu-power=0"
<jcastro> ok I added that as an example
<jcastro> it'll be in the next doc rerun.
<jcastro> evilnickveitch: ^^^
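(A sketch of the two constraint styles discussed above; the cpu-cores/cpu-power/mem key names are juju-core's, and the sizes are illustrative:)

    # size the bootstrap node explicitly...
    juju bootstrap --constraints "cpu-cores=1 mem=2G"
    # ...or use the trick above: cpu-power=0 accepts the cheapest instance
    juju bootstrap --constraints "cpu-power=0"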
<weblife> kurt_: do you understand AWS?  This would mean I need to create an Elastic IP and associate it with my instance, right?
<evilnickveitch> jcastro, ack, am actually working on that page now
<weblife> jcastro: good deal
<kurt_> weblife: I've not played with AWS extensively.  Sorry mate.
<jrwren> weblife: what do you mean by expose? and why would you do that?  I was able to ssh to my aws node 0 without anything special.
<weblife> jrwren: I am playing around with using juju on a single node.  I know it defeats the purpose, but juju also allows you to avoid that you can ssh into e need to understand ec2 cli commands. One line and you have a instance that you can ssh into and setup from there.
<weblife> that's where my cursor went, lol
<jrwren> bootstrap gets me exactly what you just said.
 * jrwren goes and tries.
<weblife> jrwren: I am playing around with using juju on a single node.  I know it defeats the purpose, but juju also allows you to avoid the need to understand ec2 cli commands. One line and you have a instance that you can ssh into and setup from there.
<weblife> Yeah but you can't connect through the public address it seems
<jrwren> i found using the boto python api to be a good way of avoiding the ec2 cli tools :)
<jrwren> i don't understand? of course you can connect through the public address.
<jrwren> that is the only way that you can connect unless you are on another ec2 node in the same security group
<weblife> ec2-54-218-135-114.us-west-2.compute.amazonaws.com  Running a node.js program
<sarnold> weblife: perhaps you need to deploy an ssh charm or an 'ubuntu' charm or something similar, so that you can then use 'juju expose <service>' and then use the public ip?
<jrwren> that is why i asked "what do you mean by expose?"
<jrwren> a node.js program doesn't get exposed. do you mean a node.js server using something like socketio?
<weblife> a server, yes sorry
<jrwren> specifically a port 80 web server?
<weblife> yep
<jrwren> then, what sarnold said. :)
<weblife> im going to try that
<weblife> It's kinda funny deploying Ubuntu charm but I see the purpose.
<jrwren> that is the only way AFAICT
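(A minimal sketch of sarnold's suggestion above; it assumes the plain ubuntu charm from the store and that its declared ports are what you need opened:)

    juju deploy ubuntu     # a bare unit to attach the exposure to
    juju expose ubuntu     # opens the charm's declared ports in the security group
    juju status ubuntu     # shows the unit's public address once it's up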
<weblife> jrwren: darn you.  making me decrypt your terminology. haha http://en.wiktionary.org/wiki/AFAICT
<weblife> thank you again
<jrwren> i always assume LKML abbreviations are allowed :)
<weblife> either you're saying linux kernel mailing list or lord keep me strong or you're f-ing with me. haha
<weblife> linux kernel mailing list
<weblife> or all of the above
<weblife> :)
<weblife> oh wow, AWS gave me close to $200 in credits. Sweet, I can run a t1.micro instance for $14.88 a month compared to the $44-something for an m1.small, plus the 750 hours of free t1.micro.
<kurt_> I'm trying to solve a juju-gui installation problem for nova-compute with cloud:precise-grizzly.  I'm looking at the hooks and running into a 403 download error as shown here http://pastebin.ubuntu.com/6003723/
<kurt_> Anyone know how to solve this?
<jcastro> is there a proxy in the way?
<kurt_> no sir
<kurt_> there is NAT
<jcastro> can you wget the file from behind the nat?
<jcastro> wget http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/grizzly/main/binary-i386/Packages
<kurt_> jcastro: (I assume this is Jorge?) That works on the node itself without problems
<jcastro> yeah it's me
<kurt_> :)
<jcastro> hrmph
<jcastro> jamespage: or adam_g ^^^ any ideas?
<kurt_> I am actually trying to follow Scott's tutorial on deploying openstack with juju-gui on your blog mate
<jcastro> oh lol, yay good info
<jcastro> oh you mean the video
<kurt_> I've run into a few problems along the way, but working through them
<kurt_> yup
<jcastro> oh hold on
<jcastro> I have something that might help
<kurt_> great
<kurt_> FYI - this is on vmware, but that shouldn't matter
<jrwren> i thought juju didn't support vmware
<jcastro> http://pastebin.ubuntu.com/6003753/
<jcastro> kurt_: I can also see about just getting you the bundle
<kurt_> jrwren: doing this on my own with great success
<jcastro> so you can just deploy it all in one go
<kurt_> that would be awesome!
<jrwren> kurt_: do you have juju code to support vmware "environments" ?
<jcastro> hey adam_g, do we have a bundle like that handy?
<kurt_> jrwren: rem… do I need that?  So far things have been going ok
<jcastro> I think he's deploying the nodes for openstack on top of vmware
<jcastro> not using vmware directly with juju
<kurt_> nope - correct
<jrwren> kurt_: i don't know. i'm trying to figure out what you are doing, because I might like to do it too :)
<kurt_> I'm using vmware for MAAS, then deploying on top of that
<kurt_> so far it's working pretty well
<jrwren> I see, so MAAS to get openstack on vmware, then juju to that openstack.
<kurt_> yes, but also using juju-gui
<jcastro> kurt_: do those notes I pasted help?
<kurt_> I hope to have a working openstack model in the end and write it up
<kurt_> looking...
<kurt_> I can try that Jorge.  Hey - one more question for you… keystone required a VIP, which it shouldn't
<kurt_> it should be able to deploy without that
<adam_g> kurt_, that 404 is strange. the archive seems to be working fine from where i sit
<jcastro> yeah works for me too
<kurt_> yes, me too adam_g
<jrwren> 403
<adam_g> kurt_, VIP is only required if you plan on multi-unit clustering.  the VIP becomes the highly-available API endpoint
<kurt_> as I said, it works even from the node, but not from the hook - and also when I do apt-get update
<adam_g> kurt_, are you hitting a proxy somewhere?
<kurt_> adam_g: yes I agree - but via the gui, it won't deploy without it
<adam_g> kurt_, huh?
<adam_g> it should
<kurt_> ->should<- :)
<jcastro> I wonder if it works through the API
<jcastro> errr, through the CLI I mean
<jcastro> there's been a few cases where the GUI won't deploy something but the CLI will
<kurt_> I did not try that yet - marcoceppi asked me to do the same thing
<jcastro> I am willing to bet 5 internet dollars the CLI will work fine
<kurt_> Will items deployed via CLI pop up in GUI?
<jcastro> yep
<kurt_> COOL - nice feature
<jcastro> yeah it's nice to demo to people
<jcastro> have one guy driving in the background, etc.
<adam_g> does the GUI take into account whether a config option is required/optional?
<kurt_> lol, nice feature for demos yes
<jcastro> it was a config issue last time the GUI failed but CLI worked, I bet it's the same thing this time
<jcastro> rick_h: ^^^^
 * jcastro can't wait for the CLI to talk to the API natively so we can not have problems like this anymore.
<kurt_> how far are we off for that? :)
<jcastro> not within the next few weeks, heh
<jcastro> but maybe we can just CLI this one to get you on to the next step
<kurt_> yes - are we talking keystone now or nova-compute
<jcastro> and then when the GUI guys are around I can file a bug and they'll fix it. Last time it was a 24 hour turnaround
<kurt_> sweet
<kurt_> can we make that keystone VIP field optional?
<jcastro> that's an adam_g question
<adam_g> kurt_, in any case, the VIP config setting isn't used until you add an optional relation to an hacluster service. you can just put in any IP for the time being
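(Sketched out, the flow adam_g describes; the service names and the placeholder IP are illustrative:)

    juju set keystone vip=10.0.0.250    # any unused IP on the network for now
    # the vip only comes into play once an hacluster service is related:
    juju deploy hacluster keystone-hacluster
    juju add-relation keystone keystone-hacluster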
<kurt_> ah ok - maybe just some documentation then around that adam_g?
<kurt_> <- the layman gets confused :)
<jcastro> maybe in the README for the charm?
<kurt_> that would be awesome
<adam_g> kurt_, maybe. i'm not familiar with the GUI. not sure if it's a bug or a feature of the GUI's config handling
<jcastro> I can file the bug, which charm does it exhibit this in? keystone?
<kurt_> yeah, as jcastro said, it will become a non issue when gui talks directly to api
<jcastro> man dudes, we need to sort https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<kurt_> yes, keystone
<jcastro> the GUI talks to the API directly, it's the CLI that doesn't
<bac> m_3, jcastro: who maintains jenkins.qa.ubuntu.com?  she's dead.
<kurt_> oh :)
<kurt_> sorry
<m_3> bac: actually, dunno... canonical IS I'd imagine
<bac> m_3: thx
<m_3> bac: looks like there's discussion of it ongoing atm
<bac> m_3: great
<jcastro> kurt_: how do you like the GUI so far?
<kurt_> Love it
<kurt_> its a nice piece of work and definitely the right directly
<kurt_> direction IMHO
<kurt_> If I can get openstack working with it, I will be so excited.
<kurt_> Once I get openstack working, I'm going to look into converting some of the apps I work with into charms.
<jcastro> there's a contest dude
<kurt_> I would love to go back to the company I work with and show them how easy it -could be- to deploy services.
<jcastro> up to 10k in prizes!
<jcastro> https://wiki.ubuntu.com/ServerTeam/OpenStackHA will be interesting to you
<jcastro> kurt_: you've picked the right time to try it. We use it but we've lacked feedback from other users as to what to fix/make better.
<kurt_> I already had that bookmarked :D
<jcastro> kurt_: what command did you use to set the VIP thing? I can put it in as an example
<kurt_> I just added it manually
<kurt_> in to the gui
<jcastro> oh, so just in the field then
<kurt_> I used an unused IP address on the MAAS network.
<kurt_> yes
<kurt_> I think adam_g said its only needed once you start deploying HA services
<kurt_> so it can be anything
<jcastro> yeah I'm just trying to figure out what to put in the charm README
<jcastro> "VIP is only required if you plan on multi-unit clusterming. The VIP becomes a highly-available API endpoint. If you don't need it you can define an unused IP on your network."
<jcastro> how's that?
<adam_g> jcastro, you should be able to deploy without setting it
<kurt_> that's fine.  he actually said it could be *any* IP address
<kurt_> I'm deploying with precise, so maybe you guys fixed it
<jcastro> precise is the target, so that should work, I'll go ahead and file a gui bug
<kurt_> jcastro: there was also a "best practices" deployment guide too
<jcastro> yeah so we want to do a documentation sprint soon
<jcastro> consolidating all these openstack docs would be a nice goal
<kurt_> I'm willing to give you guys feedback as I go along or assist if you need testing of things (time permitting of course)
<jcastro> absolutely!
<jcastro> I have shirts to send your team too!
<kurt_> my platform is vmware fusion on mac os x.
<kurt_> *laughs* - my team of one
<jcastro> just as important as a team of one million!
<kurt_> cheers
<jcastro> http://blog.xtremeghost.com/2012/11/lets-shard-something.html
<jcastro> FYI too ^^
<kurt_> I'll check that out.  Juan probably lives somewhere near me.
<jcastro> we hit the coast regularly, we should hang out
<kurt_> definitely!
<kurt_> I'm in the bay area
<kurt_> if you like beer, I can find you the right places.
 * jcastro nods
<kurt_> burn brighter at h o t m a i l
<kurt_> just let me know!
<jcastro> https://bugs.launchpad.net/juju-gui/+bug/1214087
<_mup_> Bug #1214087: GUI fails to deploy keystone <juju-gui:New> <https://launchpad.net/bugs/1214087>
<jcastro> blam!
<adam_g> jcastro, should that also fail with the mock environment on jujucharms.com?
<jcastro> kurt_: if you have any other notes or pain points, just lmk and we'll fix them as we see 'em
<kurt_> Ok, is this channel best way?
<jcastro> sure, or mail juju@lists.ubuntu.com
<jcastro> or jorge@ubuntu.com
<jcastro> whatever works for you
<kurt_> very good.  thanks Jorge :)
<jcastro> kurt_: did you have any problems getting maas up and running?
<jcastro> that's the one I'm real curious about
<kurt_> MAAS was fine.  Having to manually start the nodes is the only pain.  I am working with the libvirt devs to get libvirt working under mac osx so maybe I can have auto starting nodes too
<kurt_> when the nodes don't manually start, it does mean some fiddling with restarting and redeploying the nodes
<kurt_> Oh - and clock sync is a major pain - I have to go fiddle some clock sync bits
<kurt_> what is key is that maas has a direct connection to the internet
<kurt_> I used NAT on my primary clustered node to get around that
 * jcastro nods
<jcastro> are you using the mac juju client or the ubuntu one in a vm?
<kurt_> ah… I have two interfaces on my primary clustered node
<kurt_> ubuntu one in VM, all VMs
<kurt_> running with Fusion on mac osx
<kurt_> I manually created a vmnet3 interface with the internal MAAS network
<kurt_> so maas could handle dhcp/dns
<jcastro> when you're all done you should write this up so other people can set that up
<jcastro> that's pretty slick
<kurt_> yes, I am very happy with the set up
<kurt_> thanks - I plan to
<kurt_> as long as I can get all the way through the openstack stuff :D
<kurt_> Jorge - what part of the world are you in if I may ask?
<jcastro> Ann Arbor, MI
<jcastro> hey so, did the deploy keep going or is it stuck somewhere else?
<kurt_> I haven't had a chance to check yet
<kurt_> deploy got stuck on repositories… haven't fixed that yet
<jcastro> ah
<jcastro> the 403
<kurt_> apt-get update gets same errors
<kurt_> yes
<kurt_> maybe it's a problem with the precise image in maas
<jcastro> yeah we might need to post to the list for that one
<adam_g> kurt_, are you hitting an apt proxy somewhere on the MAAS node?
<adam_g> i suspect you might be, and it's denying access to the cloud archive
<kurt_> adam_g: no proxy, though it does go through NAT
<adam_g> doesn't MAAS configure an apt-proxy and provision new nodes to use it, by default?
<jcastro> yeah but how does it work when he does it directly on the node but not from the hook?
<jcastro> if it was a proxy that wouldn't work either right?
<adam_g> oh, didn't know it worked in some cases.
<adam_g> 'apt-get update' fails in the hook, but succeeds manually from the same node?
<kurt_> does it help that I get the same results when I do apt-get update from the node?
<kurt_> no
<kurt_> both fail
<adam_g> ok
<jcastro> oh, I am confused, I thought you said it worked on the node, my bad
<kurt_> apt-get update exhibits same 403 errors
<kurt_> http://pastebin.ubuntu.com/6003921/
<kurt_> apparently the wget failed too, but then resolved?
<adam_g> kurt_,  sudo grep "Acquire::HTTP::Proxy" /etc/apt/* -R
<kurt_> ahh - bad command line
<kurt_> wget - http://pastebin.ubuntu.com/6003924/
<kurt_> adam_g: nope http://pastebin.ubuntu.com/6003928/
<kurt_> same results on maas clustered node FYI
<jcastro> m_3: pavel would like to sync up on the rack charm
<jcastro> m_3: he's back from holiday
<adam_g> kurt_, what about: sudo grep ProxyAutoDetect /etc/apt/* -R
<jcastro> kurt_: did the keystone VIP thing return an error or did it just not work?
<jcastro> in the gui I mean
<kurt_> adam_g: same results - nothing
<adam_g> kurt_, hmm
<kurt_> jcastro: I believe I saw the error in the debug-log
<kurt_> let me look through archives
<jcastro> kurt_: if you have it that would help
<kurt_> 2013-08-18 14:16:11,995 unit:keystone/3: unit.hook.api INFO: FATAL ERROR: ERROR: Config option has no paramter: vip
<kurt_> jcastro: is that sufficient or would you like a more complete log dump in pastebin?
<jcastro> pastebin wouldn't hurt, just in case
<kurt_> http://pastebin.ubuntu.com/6003993/
<m_3> jcastro: yeah, he pinged me earlier then disconnected
<m_3> available whenever
<jcastro> m_3: can you just coordinate with him via mail?
<m_3> jcastro: yeah
* jcastro changed the topic of #juju to: Share your infrastructure, win a prize: https://juju.ubuntu.com/charm-championship/ || OSX users: We're in homebrew! || Review Calendar:  http://goo.gl/uK9HD || Review Queue: http://manage.jujucharms.com/review-queue || http://jujucharms.com || Reviewer: ~charmers
<jcastro> we're in homebrew!
<weblife> what's the link for charm school now? Or are they only on your youtube, jcastro?
<m_3> whoohoo!
<m_3> jcastro: is that `brew install go && go get ...` or just `brew install juju`?
<weblife> m_3: you get my response email?
<adam_g> kurt_, what juju version are you using?
<kurt_> juju 0.7
<m_3> weblife: not sure... lemme look
<jcastro> weblife: https://juju.ubuntu.com/resources/videos/
<jcastro> has a link to all the charm school videos
<jcastro> m_3: should be brew install juju
<jcastro> jrwren: do you use homebrew?
<jrwren> yes
<jcastro> wanna give it a shot?
<jrwren> totally sweet that he wrote that.
<jrwren> I tried and said "how the hell do you do this with go?"
<jrwren> does not work
<jrwren> he must not have tested it.
<jrwren> wait a sec... that is my failed attempt
<m_3> weblife: yes, now I did :)
 * m_3 needs to relax the bug email filters a bit
<weblife> m_3:  just wanted to be sure I responded from my blackberry. Wasn't sure how launchpad would treat it without the encryption tool.
<jrwren> jcastro: works great.
<m_3> weblife: would using express as an example app have enough there to go end-to-end?
<m_3> i.e., to curl the server and verify that all's good?
<jcastro> jrwren: _awesome_
<jrwren> very fast to fetch and build and install go 1.1.2 and juju 1.12
<jcastro> yeah
<jcastro> I left your manual instructions there too
<m_3> weblife: or would it need an actual application on top?
<m_3> interesting, amazon.com goes down but ec2 seems fine
<weblife> m_3: it wouldn't demonstrate the mongo config but you would see a page with header "Express" and content "Welcome to express" .
<m_3> ah, ok, well maybe us-east-1 is having some problems
<m_3> weblife: perfect... great idea
<m_3> mongo isn't necessary for a "default" install
<weblife> m_3: possible issues are it launches on port 3000 and creates the server file app.js not server.js.  But I should make the app-vs-server name convention not matter anyway.
<m_3> but something coming up on the webpage is
<m_3> weblife: as long as somebody can just `juju deploy node-app` without config... then `juju expose node-app` and hit the page
<weblife> m_3: it's basically an apache "It Works!!!" page
<m_3> weblife: we could default it to 80 instead of 3000
<m_3> weblife: but whatever makes sense for default behavior
<m_3> the 3000 default is a necessary thing to do in the world of services sharing containers
<m_3> not so much here
<m_3> at this point we should just do what people would expect
<jrwren> jcastro: I was about to say kill them, but I guess not everyone uses homebrew.
<m_3> weblife: oh, and app.js -vs- server.js is fine
<weblife> is the shasum a good enough check for verification of the upstream?
<m_3> weblife: that's somewhat of a mess... perhaps should be a config parameter itself
<weblife> not a bad idea
<m_3> weblife: ok defaulting it to app.js... that was totally just _me_ picking config out of thin air
<m_3> weblife: also we can change 'config/config.js' to just 'config.js' if that's more common... dunno
<weblife> no I think that's good.  Should always separate modules in their own folder
<m_3> I'm fine erring on the side of an established framework like express
<jcastro> jrwren: thought about killing them, but I think it's handy for people to build it themselves too
<weblife> m_3: I'm going to get going on this today.  (After I go through those charm school videos) Kinda want to make sure I am clear on charm caveats
<m_3> weblife: awesome
 * m_3 hopes you don't fall asleep during those _absolutely riveting_ videos :)
<weblife> jcastro: thank you
<weblife> m_3: probably will once or twice. But I have all day to myself.
<weblife> sitting on 3 cups of joe right now though
<m_3> :)
<weblife> jcastro is charismatic on screen; haha
<jcastro> are you close to Ohio? I'm speaking about Juju there next month!
<weblife> California
<kurt_> adam_g: any more thoughts on the 403 error?
<adam_g> kurt_, no idea, other than a proxy somewhere 403'ing the request.
<kurt_> hrm… trying to think about how to trace this and figure this out
<roaksoax> kurt_: howdy! so you are having issues again with the deployment?
<roaksoax> what issues are you seeing now?
<kurt_> getting 403s roaksoax
<kurt_> I'm trying to sort it myself
<roaksoax> kurt_: when exactly?
<roaksoax> kurt_: when do you get those 403s? apt-update?
<kurt_> 2013-08-19 10:50:33,321 unit:nova-compute/7: hook.output INFO: Failed to fetch http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/grizzly/main/binary-i386/Packages  403  Forbidden
<kurt_> yes
<kurt_> in both the nova-compute fetch and on apt-get update
<roaksoax> kurt_: try adding ubuntu-cloud.archive.canonical.com to /etc/squid-deb-proxy/mirror-dstdomain.acl.d/99-maas
<roaksoax> and then restart squid-deb-proxy
<adam_g> (on the MAAS node)
<roaksoax> kurt_: yes, on the MAAS node ^^
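(roaksoax's fix in shell form, run on the MAAS node; the upstart service name squid-deb-proxy matches the package:)

    echo "ubuntu-cloud.archive.canonical.com" | \
        sudo tee -a /etc/squid-deb-proxy/mirror-dstdomain.acl.d/99-maas
    sudo service squid-deb-proxy restart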
<kurt_> I'm not running squid - is that a part of the MAAS install?
<roaksoax> kurt_: yes, it is part of MAAS install
<kurt_> interesting, ok :)
<kurt_> nice, that fixed apt-get update :)
<roaksoax> cool :)
<kurt_> watching the debug log with bated breath...
<kurt_> AND WE'VE GOT NOVA-COMPUTE :D
<kurt_> wahoo!
<kurt_> that was driving me nuts! thanks roaksoax
<adam_g> kurt_, i knew it was a proxy!
<roaksoax> kurt_: np :)
<kurt_> we need that documented or fixed :)
<kurt_> that config needs to be added to the cloud-init?
<roaksoax> kurt_: yeah please file a bug against maas saying "cloud archive is not allowed by default" for maas please :)
<roaksoax> kurt_: that file I made you modify comes from maas rather than cloud-init
<kurt_> adam_g: yes - thank you for your help too.  I was looking at my firewall
<kurt_> roaksoax: file you made me? :)
<roaksoax> kurt_: the file i made you modify comes from maas yes
<kurt_> ahhhh, gotcha
<kurt_> so let me ask this - what other repositories should make their way to this file?
<kurt_> or hosts rather I guess
<roaksoax> kurt_: none really. only the cloud archive if you are installing from it, and the ubuntu repositories are already enabled by default
<roaksoax> so you should not need any other
<kurt_> ok dokey
<kurt_> you watch both lists :)
<kurt_> ok, lunch time
<weblife> do we have a way to start and stop the bootstrapped node?  Use case: AWS charges for each instance and I wish to save money by keeping it down as long as I don't need it.  I have alarms set up if something happens to an important node and need to start up juju through my Ubuntu Edge (wish I could afford to sponsor) so I can do a status check and get juju to redeploy (which is automatic from my understanding with all my relations and configurations)
<marcoceppi> roaksoax: whoa, that's kind of a big deal
<marcoceppi> good catch
<kurt_> marcoceppi: yes - good find for him
<kurt_> marcoceppi: btw - thanks for your help this weekend too
<roaksoax> marcoceppi: that's supposed to be fixed on the s-d-p side tho
<roaksoax> i dunno why it wasn't but haven't looked at it yet
<marcoceppi> roaksoax: I was going to say, that's going to be a good thing to have fixed and backported
<marcoceppi> I think ppa.launchpad.net should be available by default as well
<roaksoax> it is
<marcoceppi> not in the raring package it's not
<roaksoax> but CA was better targeted for the s-d-p side; i guess the maintainer never SRU'd it
<marcoceppi> I think the ACL should basically be *.ubuntu.com *.canonical.com *.launchpad.net, personally
<marcoceppi> at least for the ubuntu package
<roaksoax> marcoceppi: https://launchpad.net/ubuntu/+source/squid-deb-proxy/0.6.3.1
<roaksoax> it seems that it didn't work
<marcoceppi> huh, could it be that kurt_ doesn't have backports enabled in precise?
<marcoceppi> Or is this an actual SRU?
<roaksoax> that's an SRU, it is in -updates
<marcoceppi> interesting
<kurt_> Ok, so I should file a bug against this then? Or is it already in the pipeline?
<roaksoax> marcoceppi: do you know if maas based constraints work with juju core?
<roaksoax> fwereade: ^^
<marcoceppi> roaksoax: maas-name and maas-tag work
<roaksoax> marcoceppi:
<fwereade> roaksoax, marcoceppi: not yet, I'm afraid
<marcoceppi> fwereade: whoops, I gave someone the wrong information
<roaksoax> fwereade: ok, is there an ETA/
<roaksoax> fwereade: ok, is there an ETA?
<fwereade> roaksoax, marcoceppi: tags we hope to manage this cycle
<weblife> Does anyone know of any charms that do upstream checks with Shasums that I could look at for an example?
<roaksoax> fwereade: k thanks!
<fwereade> roaksoax, marcoceppi: name we're not so keen on, but we have a possible mechanism for handling that if it's really needed
<roaksoax> fwereade: 0.7+bzr628+bzr631~precise1 --> that's pyjuju right?
<fwereade> roaksoax, yeah
<roaksoax> fwereade: ack! thanks :)
#juju 2013-08-20
<mwhudson> hello
<mwhudson> is it possible to compile juju with gccgo?
<mwhudson> asking because it would be cool to run juju on (simulated) arm64 nodes
<davecheney> mwhudson: maybe, i've never tried
<davecheney> i'd be interested in hearing your results
<davecheney> a few
<davecheney> things
<mwhudson> i guess i only need to compile the 'tools' with gccgo?
<davecheney> 1. i thought that arm32 could run on arm64
<davecheney> mwhudson: oh, that is going to make it a lot harder
<mwhudson> harder?
<mwhudson> i don't need to do that i guess
<davecheney> i think we hard code the gc toolchain
<davecheney> 2. does gccgo support arm64 ?
<mwhudson> not all arm64 implementations can run arm32, though indeed most can
<davecheney> mwhudson: can the implementation you are planning on deploying to run arm32 bins
<davecheney> ?
<davecheney> oh
<mwhudson> davecheney: i heard rumours that it does but i don't know
<mwhudson> um, i don't know :)
<davecheney> and if lsb_release -a returns aarch64 or something then there will be more problems
<mwhudson> currently just targeting the arm foundation model
<davecheney> mwhudson: you're brave
<davecheney> i don't have 5 years to spend waiting for it to compile
<mwhudson> hm
<mwhudson> darn it
<mwhudson> my build of aarch64-unknown-linux-gnu-gccgo just failed :(
<davecheney> mwhudson: i'm sure i can knock up a branch or the release tarball that will *think* it is aarch64 when it's just arm32
<davecheney> mwhudson: that doesn't surprise me
<mwhudson> make[4]: *** No rule to make target `../libatomic/libatomic_convenience.la', needed by `libgo.la'.  Stop.
<davecheney> no offense to the fast model developers
<mwhudson> uh hah, that looks architecture dependent
<davecheney> but unless you are paid to use the fast model
<davecheney> it's not worth your time
<mwhudson> davecheney: i'm not going to _build_ anything on the fast model
<mwhudson> i'm not that daft
<davecheney> :)
<davecheney> mwhudson: i'm pretty sure I can make a version of 1.12 that will bootstrap on arm64
<davecheney> with a little bit of hacking
<davecheney> you'll have to use juju bootstrap --upload-tools
<kurt_> is there a "condensed" version of juju status?  I'm fond of using watch in conjunction with status and my newer services are all pushing off the bottom of the terminal.  I guess I could do something fancy with perl or sed/awk.  But I'm lazy.
<mwhudson> right
<davecheney> kurt_: juju status --format=json | something that understands json ?
<davecheney> kurt_: you can also do
<mwhudson> i guess there is another issue looming, as i don't suppose juju has a provider that uses the foundation model
<davecheney> juju status {service/unit}
<kurt_> ah ok, thanks
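(One way to condense status along davecheney's lines; jq and the .services key layout here are assumptions, not from the chat:)

    juju status --format=json | jq '.services | keys'   # just the service names
    juju status keystone                                # or scope status to one service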
<davecheney> mwhudson: no, i don't think the foundation model is considered a cloud
 * davecheney shudders
<davecheney> i said coud
<davecheney> cloud
<mwhudson> ok, seems gccgo requires libatomic and libatomic isn't there on aarch64 yet
<davecheney> mwhudson: also libgo contains a port of the go standard library from the Go project
<davecheney> and the Go standard lib doesn't have support for aarch64 atomics and things
<davecheney> so I'd say, unless it's been confirmed to work
<mwhudson> ah ok
<davecheney> gccgo won't work on aarch64
<mwhudson> so this is all a bit of a stretch
<mwhudson> that's ok :)
<davecheney> mwhudson: if you want to deploy juju workloads on aarch64
<davecheney> i recommend using the 32bit tools
<davecheney> of all the things that could go wrong, that would be the least of them
<weblife> m_3: finished those changes ( http://pastebin.com/UnFsJBNT ).  I need a little help with my bash (I never use python), when I reach the end of my for statement I want it to install the PPA.  I thought $SHA = $shaCheck[::-1] would work but it doesn't.  I should enclose the entire statement for null if the config is left blank, do that later.
<weblife> this question goes to any other python pros
<sarnold> weblife: I would prefer to use the --status option to sha256sum or shasum program and not try to check the text value of the output
<sarnold> weblife: is there any way you can use gpg to verify the hashes?
<weblife> sarnold: maybe this is my first experience of using hashes.  m_3 asked if I could make it do upstream verification, so I gave it a try.  What do you mean --status?
<weblife> juju status?
<sarnold> weblife: sha256sum --status
<sarnold> weblife: that hides the output and lets you use the exit status of the sha256sum program to tell success/failure..
<weblife> if I recall from the help there is an algorithm option I saw to do something like that, but I would need to find that document again.  Have no clue if it's possible though. let me look.
<sarnold> weblife: "man shasum" or "man sha256sum"  :)
<sarnold> (I haven't actually checked which sums are in the node upstream, I just sort of assume 256 by now..)
<weblife> http://nodejs.org/dist/v0.10.16/SHASUMS.txt
<sarnold> aha, sha1sum
<sarnold> aha!
<sarnold> http://nodejs.org/dist/v0.10.16/SHASUMS256.txt
<weblife> but if it doesn't exist it gives you only a return
<sarnold> and better still, there's a signed version :) http://nodejs.org/dist/v0.10.16/SHASUMS256.txt.asc
<sarnold> weblife: you can probably assume sha256sum will be provided in the future, if not the past. :)
<weblife> sarnold: I can only try.  Which one: SHASUMS256.txt.asc or SHASUMS256.txt.gpg?
<sarnold> weblife: I'd use .asc
<weblife> sarnold: back to researching this, you're making this fun :x
<sarnold> weblife: woo! :)
<sarnold> weblife: I don't know how familiar you are with gpg, gpg --recv-key 6C481CF6 ; gpg SHASUMS256.txt.asc
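(Putting sarnold's two suggestions together as one sketch; it assumes the node v0.10.16 artifacts named in the chat and a default keyserver for --recv-key:)

    # verify the signed sums file first
    gpg --recv-key 6C481CF6
    wget http://nodejs.org/dist/v0.10.16/SHASUMS256.txt.asc
    gpg --verify SHASUMS256.txt.asc

    # then check the tarball via exit status instead of parsing output
    wget http://nodejs.org/dist/v0.10.16/SHASUMS256.txt
    wget http://nodejs.org/dist/v0.10.16/node-v0.10.16.tar.gz
    if grep "node-v0.10.16.tar.gz$" SHASUMS256.txt | sha256sum --check --status; then
        echo "checksum OK"
    else
        echo "checksum FAILED" >&2
    fi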
<weblife> sarnold: I'm not familiar with hashes at all :x  Never worried about it.
<sarnold> weblife :)
<weblife> sarnold: It looks like I will still run into the same problem above if there is no matching file.
<weblife> I know nothing about hashes except for the basics and there is a lot to learn, gonna leave it to the experts since I will probably never use these. Submitting as is as soon as I figure this last check out.
<kurt_> Hi - anyone on here know about deploying cinder on MAAS from juju-gui?
<marcoceppi> kurt_: I think we typically use ceph
<kurt_> marcoceppi: jcastro passed this on to me today: http://pastebin.ubuntu.com/6003753/plain/
<kurt_> the template you gave me uses cinder too
<kurt_> ie. http://paste.ubuntu.com/6001802/
<kurt_> marcoceppi: the template shows setting the block device to "None" and that seemed to work.  LOL, I'm not understanding how it's allocating blocks if that's set to none.
<marcoceppi> you'd have to read the charm
 * marcoceppi checks
<marcoceppi> are you talking about the charm itself? I don't see that in the paste
<kurt_> I'm looking at the template you kindly passed on yesterday
<kurt_> I think it's your environments.yaml
<kurt_> cinder is definitely included, and the block-device string is set to None
<kurt_> "block-device": None
<marcoceppi> kurt_: I copied those settings from the following wiki
<marcoceppi> https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<kurt_> i c
<kurt_> so it's really either cinder or ceph, right?
<kurt_> yawns… is tired
<stub> Is it on the roadmap for the local provider to use ephemeral lxc containers?
<varud> Hello, I hate to be the guy that jumps in with a stupid question ... but I can't figure out what the point of the juju gui is?  Can I export from there to some sort of yaml file for deployment to MAAS or AWS?
<jamespage> guh - anyone know which version of maas works with juju-core 1.12?
<jamespage> mgz, jam: ^^
<jcastro> varud: yes. You can export deployments from the gui itself and then import them into other environments
<varud> I just had an epiphany, the demo site is a demo site
<jcastro> hazmat: heya, do you have docs somewhere on deployer?
<varud> and I need to deploy juju-gui as a charm to my environment
<jcastro> right
<varud> While that's obvious now, it's extremely cryptic to somebody new to juju
<varud> had me scratching my head for a while
<hazmat> jcastro, there's some in source / sphinx docs directory.  i was going to publish to read the docs site
<jcastro> any ideas on how we can make that more obvious?
<varud> https://jujucharms.com/
<varud> Instead of saying 'Environment on Demonstration'
<varud> Say 'Demo Mode' with a link to https://juju.ubuntu.com/resources/the-juju-gui/
<mgz> jamespage: er, the current version of maas should, no?
<jamespage> mgz, in precise?
<jcastro> varud: that's a good idea, I'll file the bug now
<jamespage> #bang
<jamespage> nope
<rick_h> varud: we're working through some demo/walkthrough material as recent feedback was just along your line there.
<varud> On a side note, while I've got your ear
<jcastro> rick_h: oh nice, is that tracked anywhere?
<varud> I'm trying to deploy OpenStack on one machine (I know, that's crazy)
<jcastro> varud: I am always all ears!
<varud> and would like to use MaaS
<rick_h> jcastro: not yet, it came out of IoM feedback and the UX'y people are thinking/working on how to present it best
<mgz> precise almost certainly needs a newer thing, I'm not sure what the sru/backport status is
<varud> But doesn't there need to be a management node
<rick_h> hazmat: let me know if you get it working, I was trying to get the charmworld api docs up on there and hit https://github.com/rtfd/readthedocs.org/issues/435#issuecomment-22929015
<jamespage> mgz, there is a plan
<varud> so in that case, I'll need to manually set up MaaS on a handcrafted VM on the machine
<varud> and then deploy to the rest of the machine
<varud> or am I missing something
<jcastro> I think this is the virtual MAAS use case
<jcastro> jamespage: right?
<jcastro> being able to just do it all on one machine.
<hazmat> jcastro,  rick_h, actually it wasn't read the docs, i was doing the pypi doc setup directly with sphinx... a la http://pythonhosted.org/an_example_pypi_project/buildanduploadsphinx.html
<rick_h> hazmat: ah, never mind then
<jamespage> jcastro, varud: hmm - that would work
<jamespage> varud, how big is your machine?
<hazmat> jcastro, http://pythonhosted.org/juju-deployer/
<varud> 12 cores
<jcastro> hazmat: I don't care where they are or in what format, I have an idea for a bundle and I'd like to play with deployer
<varud> 128GB RAM
<jcastro> hazmat: perfect!
<jcastro> hazmat: so here's my idea. We take an openstack deployment and just use it as a bundle
<jcastro> and then we tell people like varud, here is your openstack bundle, just deployer it.
<hazmat> jcastro, sure there are a few examples of that, and it's how we do openstack testing.
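(A minimal juju-deployer invocation sketch; the -c flag and the trailing stack-name convention of the 2013-era tool are assumptions:)

    juju-deployer -c openstack.yaml openstack   # config file, then the deployment name to run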
<jcastro> right, so what I think we need to do is basically put the bundles somewhere
<varud> One more thing as a new juju person ... can you guys consider not using wordpress everywhere in your docs :-/
<hazmat> jcastro, charmworld and gui support for bundles is coming
<jcastro> so people can just fire it up instead of going through all the manual openstack steps
<jamespage> varud, should work OK
<jcastro> varud: I totally agree with that and we'll be updating that.
<rick_h> jcastro: hazmat very soon
<hazmat> jcastro, at which point openstack could just be deployed from the gui against a maas provider
<varud> it makes it seem like juju is old ... which it's not
<jcastro> people think all we do is wordpress, lol
<jamespage> varud, lemme dig out the charm - you can run it standalone...
<varud> put something sexier there
<jcastro> varud: what would you put in there?
<jcastro> rick_h: when you say very soon, how soon?
<varud> maybe a bitcoin miner :-) I doubt people would like that though
<varud> let me think
<rick_h> like week-soon I believe. That's in charmworld. Then gui needs to show it.
<rick_h> jcastro: ^
<jcastro> rick_h: dude, that is awesome.
<jcastro> rick_h: so, are you telling me that in a week people will be able to share and deploy bundles from the gui?
<rick_h> jcastro: so end of week the plan is to support pulling bundles into the backend and start work on juju gui front end. Next week finish bundle work in gui.
<rick_h> jcastro: so end of month...worky worky
<jamespage> varud, https://code.launchpad.net/~virtual-maasers/charms/precise/virtual-maas/trunk
<jamespage> varud, I've not tried it standalone
<jcastro> rick_h: so I think it'd be awesome to do an openstack bundle
<rick_h> jcastro: definitely
<varud> thanks jamespage
<jcastro> with or without vmass
<jcastro> depending on what jamespage says
<rick_h> jcastro: yea, basically an addendum to the video Mat did deploying openstack via the gui
<jcastro> rick_h: actually, openstack will probably be the hairiest bundle anyway, might as well use it as the testcase
<jamespage> jcastro, I agree
<varud> yes, since that video is why I'm here actually
<jcastro> rick_h: awesome, so TLDR, you guys were already working on this.
<varud> outreach works
<rick_h> jcastro: like "Here's the how to, now let's take the shortcut and use the bundle"
<jcastro> kurt_ could have really used the bundle too
<rick_h> jcastro: not creating the bundle specifically, but on the radar as a use case pushing the work
 * jcastro nods
<varud> In case my use case is relevant (I think it's not atypical actually)
<jcastro> now my next question ... do we have a working bundle somewhere people can use with deployer in the meantime?
<jcastro> your use case is very relevant!
<varud> I want to deploy all the openstack services on one machine isolated with LXC
<varud> and then allocate nova-compute with KVM hypervisor
<varud> using MaaS
<varud> as a proof of concept but one that's capable of handling real traffic
 * jcastro nods
<varud> I think that scenario is a great way to dive into juju
<rick_h> varud: yep, and we're working on getting there bit by bit
<varud> let me know if I can help
<varud> since I'm basically focused on that very thing
<rick_h> thanks varud
<varud> As for a replacement for wordpress in docs/demos, I'll think of something tonight ... hopefully Python based :-)
<varud> Later and thanks for the help
<jcastro> discourse maybe?
<jcastro> the one thing that we got feedback on is one time we did a talk on juju and demo'ed hadoop
<jcastro> and a bunch of people didn't grok that
<jcastro> maybe we should have a simple demo + an advanced demo
<varud> that looks great
<varud> I was thinking of pelican and stuff like that but it doesn't show much since you don't need multiple services for a static site generator
<varud> hadoop is certainly a use case but you want something that you can see
<varud> owncloud might work too
<jrwren> is there a discourse charm?
<jcastro> yeah, it's not in the store yet though, it's in ~marcoceppi
<varud> or etherpad
<varud> gotta go, but I'll be back on the list now that I know people are here
<jcastro> jrwren: https://jujucharms.com/~marcoceppi/precise/discourse-HEAD
<varud> I'm in Nairobi so UTC+3 is my day
<rick_h> jrwren: doh, jcastro beat me to it
<varud> I like discourse though, great idea
<jrwren> sweet
<varud> later
<jrwren> lol @ tcmalloc. it must help ruby
<jcastro> m_3: ping me when you're around
<kurt_> jcastro: good morning.  Something I'm wondering.  The guides all have cinder for block storage, but I'm hearing ceph is the new direction.  Any comments on that?
<jcastro> ceph is the hotness
<jcastro> jamespage: I am assuming we put cinder in the guides because that's the official openstack thing?
<jamespage> kurt_, yes
<jamespage> kurt_, ceph is implemented as a cinder backend for block storage
<jamespage> kurt_, so the API is still cinder; it's just backed by ceph rather than local disk + iscsi
<jamespage> kurt_, and ceph freaking rocks!
<jamespage> highly scalable, highly available storage - sweet!
<kurt_> jamespage: are there any deployment guides you can share?  I've seen this: http://ceph.com/dev-notes/deploying-ceph-with-juju/
 * jamespage has to declare that he also maintains Ceph in Ubuntu....
<jamespage> kurt_, are you using juju?
<jamespage> that might be a stupid question bearing in mind which channel we are in
<kurt_> jamespage: yup
<kurt_> lol
<jamespage> kurt_, OK - so first of all deploy ceph using charms
<jamespage> and then juju add-relation cinder ceph
<jamespage> and juju add-relation nova-compute ceph
<jamespage> and Bob's your uncle
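(jamespage's sequence collected into one runnable block; it assumes cinder and nova-compute are already deployed, with ceph config set as sketched further down:)

    juju deploy -n 3 ceph               # three units, matching the default monitor count
    juju add-relation cinder ceph
    juju add-relation nova-compute ceph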
<kurt_> lol
<jamespage> kurt_, this is still current - http://javacruft.wordpress.com/2012/10/17/wrestling-the-cephalopod/
<kurt_> does it matter the order of deployment?
<jamespage> aside from the reference to nova-volume at the bottom.
<kurt_> if I've already got cinder deployed?
<jamespage> kurt_, so long as you have not already provisioned storage from cinder local disk you should be OK in any order
<kurt_> cool
<kurt_> I agreed ceph looks cool and promising
<kurt_> agree rather
<kurt_> jamespage: thanks for your comments
<jamespage> kurt_, np
<kurt_> jamespage: do you recommend not using the gui for the ceph portion and doing it manually as shown in the WP guide?
<jamespage> kurt_, you can do it with the gui - the tricky bits are generating the uuid and the ceph monitor secret - unfortunately they still have to be done via a shell
<jamespage> kurt_, do you just want to try it with a single node?
<jamespage> that is possible - its obviously not HA but its good for testing
<jamespage> and limits machine consumption!
<kurt_> I have 27 VMs at my disposal :D
<kurt_> Maybe that's better for a proof of concept run
<kurt_> The guide uses 3, which is also fine too
<kurt_> but if you have guidance on single node deployment other than just doing -n3 and anything I couldn't figure out on my own, do share.
<m_3> jcastro: hey
<jcastro> m_3: hey for charm testing ....
<jcastro> m_3: is there a way we could automate checking to see which ports a charm uses?
<jcastro> m_3: so for example a bunch of private clouds only use 80/443, etc.
<jamespage> kurt_, I'd recommend a read of https://jujucharms.com/precise/ceph-14/#bws-readme
<jcastro> And last week I added a best practice that charms shouldn't use weird ports
<jamespage> kurt_, if you want to do it with a single node just drop the 'monitor-count' to 1
<m_3> jcastro: oh, not dynamically
<m_3> jcastro: you just wanna see which 'open-port' calls are made
<kurt_> ok, thanks jamespage
<jamespage> kurt_, if your instances get ephemeral storage
<jamespage> you can use that for storage
<jamespage> ephemeral-unmount: /mnt
<jamespage> should ensure that this works ok
<jamespage> with osd-devices: /dev/vdb or whatever
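(A sketch of a single-node ceph config along jamespage's lines; the fsid and monitor-secret values are placeholders, generated as described later in the log:)

    cat > ceph.yaml <<EOF
    ceph:
      fsid: "ecbb8960-0e21-11e2-b495-83a88f44db01"   # placeholder uuid
      monitor-secret: "AQD1yLpQiFBFCh..."            # placeholder, from ceph-authtool
      monitor-count: 1
      osd-devices: /dev/vdb
      ephemeral-unmount: /mnt
    EOF
    juju deploy --config ceph.yaml ceph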
<jcastro> m_3: not just the open-port, I mean if you're doing weird stuff like adding GPG keys via a keyserver, stuff like that.
<jcastro> so like let's say my charm uses some custom PPA or repo
<m_3> jcastro: lemme look at status output... I just tore stuff down, so one minute
<kurt_> this part of the deployment is new for me.  still some stuff to figure out
<jcastro> m_3: this is more of a "hey I wonder if this is a good idea"
<m_3> jcastro: wait, you mean _outbound_ traffic?
<m_3> no
<m_3> that'd probably be um... harder
<fwereade> m_3, stub, marcoceppi: seeking opinions re https://bugs.launchpad.net/juju-core/+bug/1192433 -- I feel that early departure is a good idea for peers (as noted in bug) and for providers, but not for requirers, which run the risk of being cut off by the provider if they appear to depart early
<_mup_> Bug #1192433: relation-list reporting dying units <jujud> <relations> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1192433>
<m_3> fwereade: k, looking
<fwereade> m_3, the main issue is that this is actually the first provider/requirer asymmetry we'd be implementing if we did this, and it's not clear what the impact might be -- if people have generally been using pro/req in the "natural" way -- ie requirers connect to providers -- it will be fine; but it will be somewhat surprising for any charms with relations implemented "backwards"
<m_3> fwereade: and total confusion wrt subs imo
<m_3> oh, wait but that wouldn't apply... nm
<fwereade> m_3, I think it would still apply -- but I'm not sure the situation is actively different there
<fwereade> m_3, it all hinges on how people have interpreted pro/req
<jcastro> fwereade: https://juju.ubuntu.com/docs/authors-charms-in-action.html
 * m_3 wouldn't expect that to be consistent across charms
<m_3> didn't william actually write that?
<jcastro> yes
<jcastro> at the sprint I moved it over to the docs
<jcastro> just wanted to point out that it got generated, etc
<m_3> ah, gotcha
<fwereade> jcastro, this change would involve a tweak or two there, tis true
<m_3> hmmmm... I'm ok with asymmetry wrt prov/req... have to think a bit more about ramifications
<m_3> fwereade: although I catch myself often blurring the line between juju's relation/service state and the actual state of the _service_ when thinking about dying or unresponsive counterparts in a relation
<fwereade> m_3, yeah, a case could be made that it's the charm author's responsibility to deal with unresponsive relatives regardless, and that any situation that would be helped here is actually a symptom of a charm bug
<m_3> but I guess the net result is the same with unresponsive bits... so earlier departure would work
<m_3> fwereade: getting more info... in an actionable way (hook)... is probably easier to deal with though
<fwereade> m_3, yeah, I would in general prefer to make charmers' lives easier ;)
<jamespage> mgz, tried recent everything - still no luck - https://bugs.launchpad.net/juju-core/+bug/1214451
<_mup_> Bug #1214451: Unable to bootstrap maas environment <juju-core:New> <https://launchpad.net/bugs/1214451>
<mgz> hmmm
<m_3> fwereade: appreciate that!  yeah, I'm thinking that asymmetry (which is sort-of implied in the naming) is an ok thing given we're getting a lot more info from juju about the relation
<jamespage> mgz, I got the same error with 12.04 maas + juju-core 1.12 fwiw
<jamespage> juju sync-tools worked like a dream tho!
<marcoceppi> fwereade: the problem is we never really enforce what provides and requires means, and since relations are bidirectional most charms just ignore it
<marcoceppi> it being the convention of provides and requires. In fact, the whole idea of provides and requires is extremely opinionated from charm author to author
<roaksoax> jamespage: upgrade to whatever is in -proposed
<jamespage> roaksoax, daily-backports works
<roaksoax> jamespage: that contains the fix for that maas error with juju core
<jamespage> roaksoax, whats in -proposed?
<jamespage> ah - right
<roaksoax> jamespage: or that too. (yeah meant 12.04 -proposed for the SRU'd fixes)
<jamespage> roaksoax, I'd rather test -proposed
<jamespage> roaksoax, but downgrading maas is a purge/reinstall I suspect!
<roaksoax> yep
<fwereade> marcoceppi, can you point me to some examples? ISTM that it can only really meaningfully vary across unconnected groups of charms
<fwereade> marcoceppi, hmm, well, I guess it's not necessarily transitive
<fwereade> marcoceppi, but defining, say, an app server charm that *provides* both http and mysql would seem... surprising
<fwereade> marcoceppi, but perhaps given that we don't enforce internal consistency it may be too much to ask :/
<marcoceppi> fwereade: let me see if I can find a few examples
<marcoceppi> I vaguely remember coming across a few examples where a charm was providing an interface it actually meant to consume and another charm was requiring an interface it was to provide just to fit in with previous charms developed
<fwereade> marcoceppi, m_3: quick aside: would you anticipate problems if we implemented perceived-early-departure for peer relations only (for now at least)?
<marcoceppi> fwereade:  I don't think that'd cause a problem, can't see that being an issue off the top of my head
<m_3> fwereade: dunno... I was thinking haproxy before which is direct, non-peer relations
<m_3> fwereade: but yeah, sounds reasonable to start with peers and we'll see where to go from there
<fwereade> m_3, is there a problem with haproxy and early-provider-departure?I don't see it
<fwereade> m_3, (I know that's not quite what you said, but I'm even more confused about that -- a peers change wouldn't affect it at all, surely?)
<m_3> fwereade: no, haproxy manages down relation counterparts at the service level... but it's the prototypical example of a charm needing `relation-list`
<m_3> so it was the first one that popped to mind earlier
<m_3> but it could still eventually benefit from earlier notification of down relation counterparts (not peers)
<weblife> getting the following error with 'charm create test': Failed to find test in apt cache, creating an empty charm instead.
<marcoceppi> weblife: that's not an error
<marcoceppi> that's expected output of charm create, it's just saying that it can't find that name "test" in apt cache, it's creating an empty charm instead
<weblife> Okay but it seems to be missing things like the svg icon file. So I figured this was an error.
<marcoceppi> weblife: make sure you're getting it from the ppa
<marcoceppi> weblife: ppa:juju/pkgs
<weblife> marcoceppi: thank you as always
<sinzui> hi ~charmers. I see https://jenkins.qa.ubuntu.com/ is not responding, and manage.jujucharms.com wants to collect charm test data from it
<sinzui> Do we need to change where test data is collected from?
<hazmat> m_3,  do you know who maintains that jenkins? ^
<m_3> hazmat: there was a problem yesterday with jenkins version upgrades...  in #is
<hazmat> m_3, thanks
<m_3> sinzui: ^^
<sinzui> thank m_3
<m_3> the jenkins build_publisher plugin is very sensitive to versions... but this was a bigger problem in general for jenkins.qa.ubuntu.com
<hazmat> sinzui, fwiw the response i got back is that it's being worked on by the webops vanguard
<lamont> it's a #is thing, not #webops, being actively worked since before I went checking
<sinzui> hazmat, lamont fab. charmworld itself isn't broken, It is just logging a lot of hate
 * m_3 thinks it's destructive to internalize too much of that
<sarnold> hehe
<lamont> heh
<lamont> logging hate is better than eating hate
<roaksoax> lin/win 3
<beuno> lin/win!
<weblife> yeah back up and running normal
<adam_g> jamespage, i think this fell by the wayside, any chance of a poke again? https://code.launchpad.net/~gandelman-a/charm-helpers/sync_include_hints/+merge/174320
<mhall119> arosales: smoser: you guys have some Cloud & Server sessions imported into Summit, but they need to be added to the schedule.  I sent an email this morning about how to do that, let me know if you have any questions.
<arosales> mhall119, thanks I'll take a look at that today
<mhall119> thanks arosales
<jamespage> adam_g, +1
<adam_g> jamespage, cool. thanks. also, does this look like a sensible approach? http://paste.ubuntu.com/6007366/
<natefinch> arosales: got a second to talk about juju on windows?
<adam_g> jamespage, disregard that paste. such a config wouldn't really work
<arosales> natefinch, hello
<arosales> natefinch, sure, I don't know in what context but I am glad to talk
<natefinch> arosales: hi... Mark Ramm said you might have the info on our obligations for delivering juju on windows... and looks like I'm the one that'll be making sure we meet those obligations
<arosales> natefinch, ah ok. Do you want to G+?
<natefinch> arosales: yeah, perfect
<arosales> natefinch, https://plus.google.com/hangouts/_/2177b0954df808545cd6dac802822b1fcb7316bf?authuser=1&hl=en
<kurt_> jamespage: you mentioned earlier about generating a ceph monitor secret and the uuid.  Must this be done on the node to be deployed on?  There's a bit of a chicken and egg thing if the answer is yes unless it's automated in the charm?
<marcoceppi> kurt_: I believe the uuid and secret are derived from the fsid which is a required configuration option needed at deploy time
<kurt_> marcoceppi: pardon my ignorance, but are those indigenous to the node being deployed on - or can they be generated on a completely different node?
<marcoceppi> kurt_: they're the same between all nodes and are provided by the user via configuration.
<marcoceppi> basically you're just running juju set ceph fsid=`uuid`
<kurt_> marcoceppi: but how do I create this? Is it a randomly generated thing?
<marcoceppi> kurt_: the fsid just needs to be a random string. The charm author recommends `uuid` but you could have it set to anything
<kurt_> I am doing my deployment via MAAS
<kurt_> ah, ok
<marcoceppi> kurt_: but it's set during deployment, so in the juju-gui, prior to deployment, you can fill the fsid with any random string
<kurt_> And the same thing for the monitor secret?
<marcoceppi> kurt_: I assume so, let me check
<kurt_> Can I assume I can use exactly what's already been used in the examples?
<marcoceppi> kurt_: which examples? I'd not recommend using anything in examples as it's the same reason there aren't default options for these. It creates an attack vector where someone knows the password to access your disks
<kurt_> yes, understood
<marcoceppi> kurt_: the readme for monitor-secret recommends running `ceph-authtool /dev/stdout --name=mon. --gen-key`
<marcoceppi> you can run this on any node that has ceph-authtool, doesn't need to be run directly on the node
<marcoceppi> even your local machine
<kurt_> awesome, thanks.  I was going to run this on my maas master node, so that works.
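(marcoceppi's generation steps collected; the package names providing the uuid and ceph-authtool commands are assumptions:)

    sudo apt-get install -y uuid ceph-common    # assumed packages for the two tools
    juju set ceph fsid="$(uuid)"
    juju set ceph monitor-secret="$(ceph-authtool /dev/stdout --name=mon. --gen-key)"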
<kurt_> fyi - jamespage sent me this earlier:  http://javacruft.wordpress.com/2012/10/17/wrestling-the-cephalopod/
<kurt_> he said its mostly up to date, just the nova bits at the bottom are out of date
<marcoceppi> kurt_: well, he did say mostly ;)
<kurt_> true indeed
<marcoceppi> We realize MAAS + Juju and the openstack charm docs aren't quite up to date. The charms have had a lot of work done on them and there just hasn't been time to document everything. I hope we can resolve both of these soon
<kurt_> I'm figuring it all out slowly but surely
<kurt_> I'm 90% there
<marcoceppi> please feel free to document and report back what you find! We really appreciate you working through all this
<kurt_> sure.  I do plan to blog it.  I'm hoping for promotion from somebody over there once I get it done. :)
<kurt_> maybe on one of your own blog sites
<marcoceppi> I'd definitely promote it
<kurt_> can I have url to your blog?
<marcoceppi> http://marcoceppi.com
<kurt_> GRRR--several of my VMs crashed.  This may be too much stress for Fusion to handle.
<kurt_> Thanks!
<jamespage> kurt_, that was all great advice about the ceph config options from marcoceppi
<jcastro> kurt_: huh did you mention earlier you're on juju .7?
<marcoceppi> jcastro: did he?
<kurt_> yes
<kurt_> .7
<jcastro> out of curiosity, how did you end up with that?
<jcastro> did you search via apt-cache search for juju or ... ?
<kurt_> I was trying to remember
<kurt_> maybe I installed from ppa?
<kurt_> I started off from this guide http://ceph.com/dev-notes/deploying-ceph-with-juju/
<jcastro> scuttlemonkey: heya, maybe that page should be updated?
<jcastro> kurt_: so at some point when you're ready to do it all again for repeatability I'd like to see how juju 1.12 does with what you're trying
<kurt_> sure.
<jcastro> adam_g: I thought you guys were still just starting to test the openstack charms with juju-core, right?
<kurt_> I'm reeling from a vmware crash right now. :) I may have pushed it to its limits
<adam_g> jcastro, we are testing it
<jcastro> as long as it's not our fault for the crashes I'm happy. :p
<kurt_> LOL
<kurt_> you would think vmware is stable as all get out
<kurt_> ok, time for system reboot
<marcoceppi> jcastro: I would even say use 1.13 and wait for 1.14 - it's already better than 1.12
<scuttlemonkey> jcastro: wha?
<marcoceppi> In fact I have strong feelings about this whole odds-are-dev, evens-are-stable scheme, but that's for another channel another day.
<scuttlemonkey> oh, yeah it's a bit stale...I should at least put a warning on there until I can go through and clean it up
<jcastro> scuttlemonkey: your older ceph/juju blogpost.
<scuttlemonkey> I'll drop a note box on top of it here in a few
<scuttlemonkey> hopefully have time to poke at it later this week
<kurt_> scuttlemonkey: you wrote that?
<scuttlemonkey> kurt_: yeah
<kurt_> nice
<scuttlemonkey> thx
<kurt_> are you pmcgarry then? :)
<scuttlemonkey> yah, that's me
<kurt_> ah ok.
<kurt_> I'm doing the ceph stuff right now.  So I've been looking at it quite a bit.
<scuttlemonkey> nice
<scuttlemonkey> ceph-deploy has come a long way in the last couple weeks
<scuttlemonkey> was hoping I could set aside some time to put it against the charm and normalize things a bit
<scuttlemonkey> but I haven't really been watching the charm dev
<kurt_> cool.  One point of slight confusion (from the layman <- that's me) was the notion of needing to create a monitor secret and fsid prior to deployment and that they are random
<kurt_> not random
<kurt_> but can be any string generated by the tools
<kurt_> and that those tools can be on any node
<kurt_> not necessarily the one being deployed to
<scuttlemonkey> ahh
<kurt_> I was talking through that with marcoceppi
<kurt_> So maybe you could add information related to generating those two things?
<scuttlemonkey> I should see if I can talk alfredo into monkeying with the charms...he is doing most of the ceph-deploy work these days and even did the ansible playbooks
<scuttlemonkey> didn't I have that in there?
<kurt_> Ok.  That would be awesome.  I'd also just be happy documenting the process too. :)
<jcastro> maybe he can reuse the playbook in the charm?
<scuttlemonkey> We need to generate a uuid and auth key for Ceph to use. > uuid (insert this as the $fsid below) > ceph-authtool /dev/stdout --name=$NAME --gen-key (insert this as the $monitor-secret below) - quoting http://ceph.com/dev-notes/deploying-ceph-with-juju/
<jcastro> For an integration win!
<scuttlemonkey> jcastro: hehe inception-y goodness :)
<kurt_> ah - was that there before???
<kurt_> FFS, am I blind?
<scuttlemonkey> kurt_: haha, yeah no worries :)
<scuttlemonkey> jcastro: ok, disclaimer included until I can get back to it
<jcastro> <3
<jcastro> holla at me if you need anything
<kurt_> Oh yeah - one more bit of feedback - your guide is tailored to AWS.  The steps aren't much different for a private deployment, right?
<kurt_> if there is anything different, you may want to call it out
<scuttlemonkey> jcastro: will do, hopefully I can get alfredo all fired up :)
<jcastro> yeah
<jcastro> or maybe we should shove all that in the README
<jcastro> along with jamespage's blog post information
<scuttlemonkey> kurt_: yeah, there's the potential for many, so I wrote it for what I had and figured people would adapt as necessary
<kurt_> and final thing w/r to "Prep for Ceph Deployment" section - you may want to make it clear those tools can be run on *any* node that has the tools.  I know that's obvious for AWS, but not for someone doing a MAAS deployment.  That was where my confusion was.
<scuttlemonkey> ahhh
<scuttlemonkey> a fair point
<scuttlemonkey> ok, wandering back off into the ether
<scuttlemonkey> thanks for keeping me honest :)
<kurt_> scuttlemonkey: thank you for your work with ceph and putting the guide together
<scuttlemonkey> kurt_: my pleasure!
<kurt_> When my nodes go down, is there any easy way to restart juju services after a crash? Do I have to redeploy everything?
<kurt_> ie.     agent-state: not-started
<kurt_> This is actually my root-node, so I can't even "juju ssh 0" to the node to fiddle the bits
<kurt_> Am I SOL and having to destroy the environment?
<kurt_> ie. start from scratch (shudder)
<kurt_> it appears hosts don't like this condition and I'm forced to start over.  Is that a fair assumption?
<kurt_> Nope figured it out.  http://askubuntu.com/questions/271312/what-to-do-when-juju-machine-0-has-got-agent-state-not-started-state
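The recovery in cases like this usually amounts to restarting the machine agent's upstart job on the affected node; the job name below is a hypothetical example, so list /etc/init for the real ones:

    sudo ls /etc/init/ | grep juju          # e.g. jujud-machine-0.conf
    sudo service jujud-machine-0 restart    # hypothetical job name for machine 0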
<weblife>  is it possible to upgrade a charm from a repository ? I tried this but no go: juju upgrade-charm --repository charms local:node-app
<marcoceppi> weblife: was the charm already deployed from local? or from charm store?
<weblife> marcoceppi: From local.  I just went ahead and re-deployed but if I can do it I would like to
<marcoceppi> weblife: you don't need to specify local: next time
<marcoceppi>  juju upgrade-charm --repository charms node-app
<marcoceppi> since juju already knows it's local:
<weblife> because the charm is tagged already with local? but if I wanted to make it convert to charm store use cs:  ?
<marcoceppi> weblife: so, fair warning, I've only just recently learned about the switching part, and from what I've learned I'm terrified to use it too often
<marcoceppi> When you're running upgrade-charm you never need to specify protocol (ie cs: or local:)
<weblife> marcoceppi: thank you for clarifying
<marcoceppi> However. If you deployed a charm from the charm store you can switch to a local version using --switch
<hazmat> roaksoax, ping
<marcoceppi> weblife: I'd recommend looking at the output and warnings of `juju help upgrade-charm`
<marcoceppi> weblife: if you were to pursue --switch
<weblife> marcoceppi: that sounds useful for testing a charm then customizing
<marcoceppi> weblife: a good use case is "I've deployed from the store, and found a vital bug that I need to patch and can't wait"
<marcoceppi> so you can switch to a local version with the hopes of eventually switching back to charm store (or not!)
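Both paths in sketch form, assuming a local repository laid out as charms/precise/node-app; the --switch charm URL form is an assumption, so check `juju help upgrade-charm` for the exact syntax and warnings:

    # upgrade a charm originally deployed from the local repository
    juju upgrade-charm --repository charms node-app

    # switch a store-deployed service to a patched local copy
    juju upgrade-charm --switch local:node-app --repository charms node-app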
<jcastro> http://askubuntu.com/questions/335108/juju-cant-ssh-service-units
<jcastro> I think he doesn't need the sudo right?
<marcoceppi> jcastro: did askubuntu bot stop?
<jcastro> I might just be faster?
<marcoceppi> I'll bounce it anyways
<kurt_> Hey guys - I'm trying to recover from vmware crashing
<kurt_> systematically going through and having to restart the agents, then reboot the nodes
<kurt_> the gui is slow as molasses
<kurt_> just spinning
<marcoceppi> jcastro: weird, I don't know any posts that recommend not having the ssh key in maas, in fact it didn't work when I tried.
 * marcoceppi looks forward to getting his maas setup running
<weblife> the bot is slow.  I made a post before and it lagged.
<kurt_> any ideas, or am I pretty much SOL and need to destroy my entire set up?
<marcoceppi> weblife jcastro it appears it errored out
<jcastro> kurt_: it sounds to me like restarting everything will take longer
<weblife> kurt_: vmware gui is super slow.  That was my experience when doing blackberry development.
<kurt_> weblife: not vmware gui - juju-gui
<weblife> kurt_: then I dunno, you're SOL :)
<kurt_> jcastro: I take it juju isn't very good at recovering from such situations?
<marcoceppi> kurt_: well the gui is communicating live via an API to bootstrap, so that might explain why it's so slow
<marcoceppi> kurt_: if you restart a machine Juju agents _should_ restart as long as the bootstrap node is running
<jcastro> kurt_: I would capture the logs from the bootstrap and see what a core dev says
<jcastro> bb later tonite, dinner
<kurt_> marcoceppi: which logs - just debug?
<marcoceppi> kurt_: the bootstrap node has logs in /var/log/juju
<marcoceppi> actually, all nodes have /var/log/juju/
<kurt_> is the debug-log simply an aggregation of that info from all nodes?
<marcoceppi> kurt_: yeah... debug-log iirc, is just rsyslog forwarding all logs from all nodes to the bootstrap node, then just tailing that log
<kurt_> ok, what I thought...
<kurt_> nothing interesting there. darned
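The places just discussed, in sketch form (the aggregate file name is an assumption about the juju-core of this era):

    juju debug-log                                           # tail the aggregate from your workstation
    juju ssh 0 'tail -n 100 /var/log/juju/all-machines.log'  # or read it on the bootstrap node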
<hazmat> kurt_, so re gui
<hazmat> kurt_, if you reload it what do you see?
<kurt_> heh heh, now that I ..just..seconds ago destroyed my environment? yes? :D
<hazmat> doh
<kurt_> nothing - it just spins
<kurt_> the basic palette was there, and its logged in, but the circle was just spinning
<hazmat> kurt_, so typically in chrome, you can go into the inspector (right click inspect element), switch to network tab, reload the page.. and see the websocket traffic
<kurt_> ah
<hazmat> which is basically a connection to the environment
<kurt_> I looked at the error and access logs for apache and didn't see anything really interesting either
<hazmat> the gui charm has to play a game, where it proxies the websocket api endpoint via haproxy, so that it can use the same ssl cert for serving up the page.. but its basically the same
<hazmat> kurt_, you'd have to login to the gui unit and check the haproxy logs i suspect
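That check, sketched with an assumed unit name and the stock haproxy log location:

    juju ssh juju-gui/0
    sudo tail -f /var/log/haproxy.log   # path is an assumption; confirm in /etc/haproxy/haproxy.cfg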
<kurt_> are there any common issues you've ran in to?
<hazmat> kurt_, once things are in a steady state, you're not going to find much interesting in the juju debug-log output
<hazmat> kurt_, no
<hazmat> kurt_, it generally just works
<hazmat> kurt_, hence the curiosity around what went odd
<kurt_> right.
<hazmat> kurt_, you've got vmware, pxe boot instances registered in maas
<hazmat> ?
<kurt_> yeah for sure
<hazmat> kurt_, it could be a network issue between the gui proxy and the state server instance, or between the browser and the gui proxy.
<kurt_> that whole part works, with of course the exception that vmware won't actually boot the nodes
<hazmat> i've seen similar setups with virtualbox pxe work... and we have an old kvm charm that does something similar (virtual-maas)
<kurt_> yes it could.  It may have been worth troubleshooting NAT.  Didn't think too deeply about that part
<kurt_> yes, I figured out kvm works, but that's because it can use libvirt
<kurt_> I'm working with one of the libvirt devs to get libvirt working correctly with vmware under mac osx, then I should be good
<kurt_> hopefully then maas can boot vmware fusion like it does kvm
<hazmat> cool
<kurt_> but pxe under vmware with the lower cost versions is known not to work
<hazmat> i've heard the virtualbox w/ pxe + maas setup on osx works, though its a bit manual to setup
<hazmat> although could be scripted
<kurt_> sorry, let me rephrase - WOL does not work
<kurt_> pie boot works fine
<hazmat> ah
<kurt_> pxe boot
 * hazmat wishes there was a software emulation of ipmi
<kurt_> that would be nice
<kurt_> why don't you guys write the stack for it? :D
<hazmat> would make testing ipmi setups much simpler.. as it is now.. its get a bunch of hardware for a lab setup.
<kurt_> pxe boot is working fine under vmware - its just the WOL stuff that I wished worked correctly
<kurt_> that's part of what makes MAAS so cool
<kurt_> it would be even cooler if it supported true elasticity
<adam_g> using juju-core, is it not possible to terminate a machine that is in the 'pending' state?
<hazmat> adam_g, using what version ? sounds similar to https://bugs.launchpad.net/juju-core/+bug/1190715
<_mup_> Bug #1190715: unit destruction depends on unit agents <juju-core:Fix Committed by fwereade> <https://launchpad.net/bugs/1190715>
<hazmat> its in trunk
<adam_g> hazmat, 1.13.1-raring-amd64 + juju-deployer
<adam_g> hazmat, thanks ill read that over in a few
<hazmat> adam_g, is there anything in the provisioning logs?
<fwereade> adam_g, if it has a unit, you the unit's existence blocks the machine's removal; but you should be able to remove the unit and then the machine
<adam_g> hazmat, actually, i just destroyed and rebootstrapped. ill check again in  a few
<fwereade> adam_g, it's awkward but shouldn't be unrecoverable
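The unblocking sequence fwereade describes, with hypothetical unit and machine names:

    juju destroy-unit mysvc/3    # remove the unit whose existence blocks the machine
    juju destroy-machine 5       # then the provisioner should reap the machine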
<adam_g> fwereade, i successfully destroyed the service units associated with the pending machine.  after they were gone, i tried to terminate the machine but nothing happened
<adam_g> ill poke again in a few
<fwereade> adam_g, that's not immediately familiar; the provisioner ought to have picked that up then
<fwereade> adam_g, do you recall what the machine's life status was, and/or whether an instance id was set? those would help me figure it out
<adam_g> fwereade, http://paste.ubuntu.com/6008312/
<adam_g> machines 5 and 9 were the ones that never came up to begin with, and seemed not to terminate
<adam_g> unless i was being impatient
<adam_g> (didn't notice the 'dying' state at the time)
<fwereade> adam_g, yeah, the provisioner ought to have spotted that; the logs from machine 0 would be helpful
<adam_g> fwereade, ill see what i can get together next time the provider decides not to give me a machine
<fwereade> adam_g, thanks
<adam_g> fwereade, ok, back in the same situation
<adam_g> provider issue seems to be instance coming up with no networking
 * fwereade winces a bit
<adam_g> fwereade, what do you want from machine 0? /var/log/juju/* ?
<fwereade> adam_g, machine-9.log should be enough
<fwereade> adam_g, -0
<adam_g> fwereade, http://paste.ubuntu.com/6008358/
<adam_g> (machine-0.log)
<adam_g> http://paste.ubuntu.com/6008361/
<adam_g> (juju status)
<fwereade> adam_g, hmm, not clear what's happening there... thumper, any thoughts?
<fwereade> adam_g, in a spirit of devilment, I'd be interested to know what happens if you kill the machine agent on 0 and see what happens when it comes back up
<thumper> ?!
<adam_g> fwereade, here is some more log tail since last paste: http://paste.ubuntu.com/6008379/
<adam_g> one sec, ill kill it
<adam_g> http://paste.ubuntu.com/6008381/
<adam_g> including kill and restart
<fwereade> adam_g, sorry, but I think I'm baffled for tonight -- would you create a bug please?
<fwereade> adam_g, maybe it'll be obvious in the morning but I'm not seeing it now
<adam_g> fwereade, sure
<adam_g> fwereade, https://bugs.launchpad.net/juju-core/+bug/1214651.
<_mup_> Bug #1214651: Machine stuck in 'pending' state cannot be terminated <juju-core:New> <https://launchpad.net/bugs/1214651>
<adam_g> fwereade, thanks for the help so far
<fwereade> adam_g, thanks for the report :)
#juju 2013-08-21
<stub> fwereade: Still there?
<stub> fwereade: I'm not sure exactly what you mean by early departure for peer relations regarding Bug #1192433, but if it stops the unit being listed in relation-list while all the -depart and -broken hooks are running that addresses the bug.
<_mup_> Bug #1192433: relation-list reporting dying units <jujud> <relations> <juju-core:Triaged by fwereade> <https://launchpad.net/bugs/1192433>
<stub> fwereade: Ordering of provides/requires actually sounds more interesting to me. For example, it would have addressed Bug #1187508
<_mup_> Bug #1187508: adding a unit of a related service to postgresql is racy <postgresql (Juju Charms Collection):In Progress by davidpbritton> <https://launchpad.net/bugs/1187508>
<stub> fwereade: If the provider's join hook was run before the requirer's join hook, then I can guarantee that access from the require has been granted by the provider before the requirer attempts to connect. This makes the client charms much easier to write.
<marcoceppi> stub: It'd be a slight departure from the current story, not that I have a problem with that per se, but the idea I've had instilled as to what each hook means is as follows:
<marcoceppi> joined - "handshake" hook, expect to have no data available
<marcoceppi> changed - "stuff on the wire", alwasy check values, idempotency
<marcoceppi> departed - "waving goodbye", relation values should still be available, do what you need to with these values
<marcoceppi> broken - "clean up", make any changes post-mortem
<marcoceppi> how does having the client's joined/changed hook run prior to the provider's create a race condition? That just seems like non-idempotent hooks to me
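A minimal relation-changed sketch of that "always check values" idempotency; the setting name is hypothetical:

    #!/bin/bash
    # data may not be on the wire yet, so check and exit cleanly rather than fail
    password=$(relation-get password)
    if [ -z "$password" ]; then
        juju-log "remote unit has not set password yet; waiting for the next -changed"
        exit 0
    fi
    # ...from here, write config idempotently; this hook can run many times...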
 * marcoceppi reads bug report
<stub> marcoceppi: Lets say you have an existing setup, with a single unit of service A related to a single unit of service B.
<stub> marcoceppi: Now add a new unit to service A (the requirer or client).
<stub> marcoceppi: It looks at the relation, picks up authentication credentials, and attempts to use them *before B has had a chance to authorize the new unit's IP address*)
<marcoceppi> right, I see. db-admin relation data is already set because that unit's done its thing, but relation-changed needs to run on the postgresql side to complete the setup
<marcoceppi> stub: I'm inclined to say this is a bug with juju, from my understanding the data should be unique per unit <-> unit (though core may need to correct my understanding of this)
<marcoceppi> In this case, each unit should get its own credentials, not the credentials cached from the prior unit's relation
<marcoceppi> and should follow the normal hook execution as if it was a new add-relation
<stub> marcoceppi: Yes, you could make that argument. You could even make it somewhat backwards compatible if there was relation->unit and unit->unit data, with the unit->unit data overlayed over the top of the relation->unit data.
 * marcoceppi nods
<stub> I don't think we would see a change like that until 2.0+ though, even if it was agreed on today.
<marcoceppi> I feel like this would resolve your issue and is what the expected behavior of juju should be, at least the first part of a fresh relation each time
<marcoceppi> Possibly
<marcoceppi> In the mean time your patch for postgres landed (while not the best answer to the question)
<stub> ta :)
<marcoceppi> I'd hope this could be documented and solidified either as expected behaviour or fixed in core soon to represent the actual anticipated behaviour
<marcoceppi> I'll poke #juju-dev tomorrow
<marcoceppi> "tomorrow"
<stub> marcoceppi: The documentation certainly needs to be clearer. I think everyone has troubles when they start dealing with more complex charms, and end up writing test cases to work out wtf is actually happening.
<marcoceppi> stub: yeah, even during my demo last week I was like "wtf is going on" and had to step through debug-hooks as if it was part of the training, just so I could understand what was actually going on
<stub> marcoceppi: I don't know if the change you propose to relation storage will make things better conceptually or not.
<marcoceppi> I'd like to think so, but I may be biased
<stub> It means that to retrieve data from a specific bucket in the relation, two units need to be specified rather than one.
<stub> I like specifying the ordering of hooks. I don't think it will slow things down much in the real world, and greatly reduces the number of states hooks need to cope with.
<marcoceppi> stub: I don't see how you'd have to specify two units?
<marcoceppi> I think relation-get always assumes the current unit, as you technically can't/shouldn't be able to spy on other units
<stub> marcoceppi: Right. But how do you retrieve the current unit's data? The data on its end of the relation? It has one bucket per remote unit.
<marcoceppi> stub: in the relation-* hooks $JUJU_REMOTE_UNIT is prefilled and used by the relation-* commands
<marcoceppi> it's just pulled from env
<stub> ok, so you would make it impossible to access the other buckets
<stub> peer relations certainly need to spy on other units data
<marcoceppi> stub: With the exception of peer relations, I think the idea is you shouldn't be able to tap in to other relations that the unit isn't aware of
<marcoceppi> peers are a bit different, but still follow the same event cycle; it's just that each unit is individually connected to each other, so you could relation-get -r <rel-id> <unit> and get data as you're technically connected to it
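Sketching that last point with hypothetical names (a peer relation called "cluster" on service mysvc):

    relation-ids cluster                          # e.g. cluster:0
    relation-list -r cluster:0                    # sibling units, e.g. mysvc/1
    relation-get -r cluster:0 hostname mysvc/1    # read a sibling's bucket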
 * marcoceppi heads to bed
<stub> Goodnight
<stub> I'll continue chasing race conditions from the hoops I have to jump through because there is no defined order that my hooks get called :)
<stub> A maze of twisty hooks, all of them alike.
<fwereade> stub, heyhey
<stub> fwereade: heya
<fwereade> stub, so my current plans are to cause peer units only to (appear to) depart the relation as soon as they're dying
<fwereade> stub, because that directly addresses the case in your bug (for which many thanks, btw)
<stub> That sounds fine to me.
<fwereade> stub, it is not a watertight guarantee of ordering, though -- any given unit might be busy with, say, a long-running config-changed hook while the remote unit is departing
<fwereade> stub, and *could* thus still end up only observing the departure after it actually happened
<stub> Yes, if other hooks are allowed to run at the same time, then anything that the hook knows may be a lie. It only has a snapshot of the environment from when it started.
<stub> I'm just avoiding that whole issue at the moment in places where it is important, such as failover.
<fwereade> stub, I don't think that there is much realistic hope of enforcing the sort of lockstep ordering I think you would ideally like to see across the whole system
<stub> So if you remove-unit a master, you are going to blow your foot off if you are silly enough to make other changes before it is finished.
<stub> fwereade: I agree lockstep across the system is probably not a good idea. However, I'm thinking that guaranteeing the sequence of relation joined/changed/broken/departed would help people write less buggy charms.
<fwereade> stub, however I am very keen to make it *easier* to write charms that are sane and effective
<stub> The number of possible states even a simple relationship can be in is very large, and some of those states are rare (eg. a unit is being removed before another units -joined hook has started running).
<fwereade> stub, are you familiar with https://juju.ubuntu.com/docs/authors-charms-in-action.html ? I wrote it quite a long time ago but I don't think it's had quite the exposure it should have (I wasn't paying enough attention to the user-facing docs, I kinda threw what I'd written over the wall and didn't follow up)
<stub> urgh... proxy seized up
<fwereade> stub, sorry, didn't see you come back
<fwereade> stub, not sure if you saw; I asked if you'd seen https://juju.ubuntu.com/docs/authors-charms-in-action.html ?
<stub> fwereade: yes, I've seen that
<stub> absorbed it, maybe not :) I see it is describing the current departed/broken/actually leaves behavior
<ehw> hey guys, does ppa:juju/stable (1.12 I think) support completely offline maas+openstack deployment?
<stub> fwereade: In case it didn't get through, I said before that my highest priority issue is Bug #1200267 to make my tests reliable. The hook ordering stuff is all complex and tricky, but the charm I'm working on is complex and tricky.
<_mup_> Bug #1200267: Have juju status expose running and pending hooks  <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
<fwereade> stub, ok, that's interesting... there's a bit of a problem there in that it's impractical to try to track the possible future, but it would be possible for the unit agent to report whether or not it's *currently* idle
<fwereade> stub, and knowing that every unit in play is currently idle would probably be sufficient -- *at the moment* -- to determine that the system's in a steady state
<jam> ehw: we may have bugs in that area (especially with charms themselves), but with some config, I think stable has enough to work disconnected from the internet. You'll need to do stuff like get tools into the local cloud first, etc.
<stub> fwereade: Yes. If no hooks are running, and no hooks are queued, we are steady.
<fwereade> stub, I'm just not sure how useful that guarantee is in practice -- there's always the possibility that some other user will add a unit, or change config, or something
<stub> fwereade: This is for writing tests.
<ehw> jam: is there a way to get the tools in? are they stored in MAAS?
 * ehw has seen the FileStorage bits in MAAS, but they don't seem connected to anything
<jam> ehw: there is a 'juju sync-tools' command. I believe in the 1.12 stable release it always downloads them from amazon, in 1.13 there is a "--source" flag so you can copy them from your local disk.
<jam> ehw: there is also ppa:juju/devel if you find 1.12 doesn't work
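A hedged sketch of the 1.13 flow jam mentions, with an assumed local path and environment name:

    juju sync-tools --source /srv/juju-tools -e my-maas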
<fwereade> stub, and if in the future we add a juju-run mechanism (ie allowing a unit to feed info back into juju outside of a regularly scheduled hook) it's almost no guarantee at all
<fwereade> stub, ok, the test context makes it much saner
<ehw> jam: ok, I'll see if I can try this out with 1.12
<fwereade> stub, is it very important to know *which* hooks are queued/running, or is a single bit of steadiness info per unit sufficient?
<stub> fwereade: A single bit of steadiness is just fine for my use case.
<jam> ehw: so I'm pretty sure that for 1.12 the client (the place you are running 'juju *' from) needs at least some internet access. Once you're set up, I don't think the actual MaaS nodes need public internet access (except for some specific charms that talk out the network, etc)
<stub> It could even just be 'juju wait', that blocks until a steady state is reached. Then I don't even have to bother with polling juju status.
<ehw> jam: have a deployment next week in a secure environment; working on a methodology for this atm
<fwereade> stub, that feels like overreaching on juju's part, considering the possibility of juju-run, which prevents us from making that guarantee -- I would be much more comfortable with exposing current steadiness per unit, with the caveat that the information can only be acted upon if you have a lot of specific knowledge about the environment
<stub> fwereade: ok
<stub> I just need some way to tell that it is ok to proceed with making the test, rather than the current approach of sleep() and hope :)
<stub> fwereade: Since we are talking about this only being useful for tests, this might sit better in amulet if there is some way of reaching into juju and inspecting what needs to be inspected.
<stub> (the current implementation of wait there just parses the 'juju status' output, similar to my own test harness)
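That parse-the-status approach is roughly the following loop; the field and state names are assumptions for the juju of this era:

    # block until every agent reports agent-state: started
    while juju status | grep 'agent-state:' | grep -v started >/dev/null; do
        sleep 10
    done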
<stub> juju-run might fix this horrible bit of code. For a certain operation, I need to block until the database is out of backup mode. It might be in backup mode because backups are being run, or it might be in backup mode because a backup started and failed to complete. The charm currently emits details and instructions how to clear it manually to the log file every few minutes until it is unblocked.
<stub> Hmm... I can see this turning into twisted :)
<jcastro> Reminder: Charm meeting in ~1 hour!
<jcastro> marcoceppi: marcoceppi utlemming arosales ^^^
<utlemming> I'm using ppa:juju/devel and looking at "juju debug-log"...
<arosales> jcastro, ack and thanks for the reminer
<arosales> *reminder
<jamespage> jcastro, what time is the charm call?
<jcastro> jamespage: top of the hour
<jamespage> jcastro, ack
<jcastro> ~40 minutes
<jcastro> arosales: pad is up to date and ready!
<arosales> jcastro, thanks are you setting up the hangout?
<jcastro> yep
<jcastro> http://ubuntuonair.com/ for those want to follow along in the meeting
<jcastro> https://plus.google.com/hangouts/_/96e374b291c31f8310be00593580b52eb8418b02?authuser=0&hl=en if you want to participate
 * m_3 having hardware issues with latest nvidia updates :-(
<marcoceppi> m_3: yeah, those killed me last night
<m_3> marcoceppi: maybe write that as a full deploy line
<arosales> jamespage, jcastro also this bug we need to address in precise https://bugs.launchpad.net/juju-core/+bug/1203795
<_mup_> Bug #1203795: mongodb with --ssl not available in precise <doc> <papercut> <juju-core:Confirmed> <https://launchpad.net/bugs/1203795>
<arosales> at least figure out the story, since we can't backport mongo and lxc
<kurt_> jamespage: must the ceph.yaml go in the .juju directory?
<jamespage> kurt_, no
<kurt_> where should it go?
<jamespage> kurt_, you pass it to juju deploy with the --config flag
<kurt_> ah ok
<jamespage> juju deploy --config config.yaml
<kurt_> it's unnecessary after the configuration, correct?
<kurt_> sorry, deployment
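Right - the config file is read once at deploy time; afterwards the settings live in the environment. The shape of the file, with placeholder values:

    cat > ceph.yaml <<'EOF'
    ceph:
      fsid: <your-uuid>
      monitor-secret: <your-generated-key>
      source: cloud:precise-updates/grizzly
    EOF
    juju deploy --config ceph.yaml ceph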
<AskUbuntu> is openstack swift required for juju integration? | http://askubuntu.com/q/335447
<kurt_> jamespage: is the ceph.yaml in scuttlemonkeys guide (http://ceph.com/dev-notes/deploying-ceph-with-juju/) up to date? I'm looking specifically at the ceph-osd source and device information and ceph-radosgw source.
<arosales> jcastro, jamespage http://bazaar.launchpad.net/~openstack-charmers/+junk/openstack-ha/view/head:/python-redux.json
<kurt_> and I'm wondering if the source string in the gui for ceph would then be changed to quantal to try it that way?
<hazmat> kurt_, precise is fine
<hazmat> re source string for gui
<hazmat> er default
<kurt_> hazmat: default in gui for source string is "cloud:precise-updates/folsom"
<hazmat> kurt_, oh sorry you meant for ceph within the gui, not for deploying the gui
<kurt_> hazmat: my questions above may seem confusing, sorry - I had separate ones for CLI and gui methods
<hazmat> kurt_, if you're deploying the precise charm that's a good default .. it's the charm that determines which distro release you're getting, so you want the source string to match the charm series name.
<kurt_> CLI - my questions were on the ceph.yaml, and for the gui my question was about the source and, now looking at it, the osd-devices string.
<kurt_> ok, there appears to be no guide using the gui to deploy ceph.  I can see there is a separate charm for ceph-radosgw, ceph-osd, and ceph
<kurt_> i should probably just stick to manual method for now
<weblife> Is there any important data stored on ephemeral storage?  Use Case: Stopping AWS bootstrap and staring it again.
<jcastro> hazmat: bah I lost the URL, deployer docs are at ... ?
<hazmat> http://pythonhosted.org/juju-deployer/
<jcastro> ta
<jcastro> hazmat: so this doesn't explain how to use it though, no examples in the man page either
<jcastro> assuming I have example.yaml as an exported bundle how do I deploy it?
<hazmat> jcastro, good point. juju-deployer  -v -W -c example.yaml name_of_bundle
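To make that concrete, a minimal hypothetical bundle (service names and layout invented; see the deployer docs for the full format):

    cat > example.yaml <<'EOF'
    wordpress-stack:
      series: precise
      services:
        mysql:
          charm: cs:precise/mysql
        wordpress:
          charm: cs:precise/wordpress
      relations:
        - [wordpress, mysql]
    EOF
    juju-deployer -v -W -c example.yaml wordpress-stack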
<jcastro> hazmat: aha! I get an error when trying to deploy this: http://bazaar.launchpad.net/~openstack-charmers/+junk/openstack-ha/view/head:/python-redux.yaml
<hazmat> jcastro, which is? pastebin? on maas?
<jcastro> http://pastebin.ubuntu.com/6011028/
<kurt_> scuttlemonkey: I have some updated instructions for the monitor-key generation section for your ceph documentation
<kurt_> http://pastebin.ubuntu.com/6011030/
<hazmat> jcastro, that reads like an error in the config file
<hazmat> jcastro, ie. its not valid yaml
<jcastro> hazmat: ah nm, I found the problem
<hazmat> jcastro, html instead of yaml ?
<jcastro> yeah so if you wget that page from lp you get the html
<jcastro> duh
<hazmat> jcastro, yeah.. just verified that the python-redux.yaml is correct
<hazmat> er.. is valid yaml
<jcastro> heya kurt_ you gotta check this out, you still have maas?
<kurt_> yep.
<jcastro> snag that yaml file
<jcastro> then get juju-deployer from ppa:juju/pkgs
<jcastro> http://pastebin.ubuntu.com/6011079/
<kurt_> jcastro: will this tear up what I have running currently?
<jcastro> It will probably clobber the universe
<marcoceppi> jcastro: wait, is juju-deployer in juju/pkgs?
<jcastro> actually, I have it in saucy
<jcastro> I've seen it in juju/pkgs
<hazmat> its in the distro
<hazmat> in saucy i think
<marcoceppi> jcastro: that's not the right version
<marcoceppi> I need to remove that
<kurt_> jcastro: shall I wait then?
<jcastro> yeah
<jcastro> jamespage: it bombed out during this part: http://pastebin.ubuntu.com/6011085/
<jcastro> do I need to have a bunch of locally branched stuff?
<marcoceppi> I've never heard of swift-storage-z1
<hazmat> marcoceppi, 3 swift nodes
<hazmat> juju-deployer also in a daily ppa https://launchpad.net/~ahasenack/+archive/juju-deployer-daily with dep https://launchpad.net/~ahasenack/+archive/python-jujuclient or from pypi.. virtualenv --system-site-packages deployer && source deployer/bin/activate && pip install juju-deployer
<ahasenack> about swift-storage, the charm is called swift-storage, but the config deploys it as swift-storage-zN
<ahasenack> where N is 1, 2 or 3 (if we are talking about the openstack juju-deployer config file)
<ahasenack> so I think that config is missing a "charm: swift-storage" entry under swift-storage-z1
<ahasenack> 2013-08-21 13:29:48  Deploying service swift-storage-z1 using local:precise/swift-storage-z1 <-- wrong
<ahasenack> should be
<ahasenack> 2013-08-21 13:29:48  Deploying service swift-storage-z1 using local:precise/swift-storage
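In other words, the fix is an explicit charm key under each service stanza, roughly as follows (the zone option is an assumption from the swift-storage charm):

    swift-storage-z1:
      charm: swift-storage
      options:
        zone: 1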
<marcoceppi> evilnickveitch jcastro: I also found out there's about 25 or so extra undocumented environments.yaml config options for juju-core
<marcoceppi> okay, 25 might be an overestimate, but there's a quite a few
<kurt_> jamespage: in looking at your cephalopod article - do you have precise instructions instead of quantal?
<hazmat> ahasenack, fwiw that's fixed in deployer trunk, charm: becomes optional, it will use the name in the charm md if none is specified
<ahasenack> ok
<marcoceppi> ahasenack hazmat btw I've got two merge proposals for deployer; one is minor and cosmetic, the other is a blocker for amulet:  https://code.launchpad.net/~marcoceppi/juju-deployer/local-branches/+merge/181207 if you could review sometime this week that'd be great!
<hazmat> marcoceppi, ack i'll get those merged in this evening.
<marcoceppi> hazmat: thank you sir!
<sidnei> adam_g: did you have issues with rabbitmq-server 2.7.1? it's failing to start when deploying on precise local provider
<adam_g> sidnei, no, its been working fine in non-local deployments for a while
<adam_g> sidnei, with the exception of clustering in environments with funny DNS
<sidnei> i guess it might be a lxc-specific failure
<adam_g> sidnei, what is the failure?
<sidnei> there's nothing interesting in the rabbit logs, only 'failed to start'
<sidnei> and epmd is left running
<adam_g> sidnei, can you resolve your localhost name and IP?
<adam_g> er, local hostname
<adam_g> :)
<sidnei> uhm, hostname is set to ubuntu, it might be resolving to multiple addresses
<sidnei> or not
<sidnei> well, it's probably that
<sidnei> $ ping ubuntu
<sidnei> PING ubuntu (10.0.3.15) 56(84) bytes of data.
<sidnei> but local ip is 10.0.3.187
<sidnei> will dig further after the break
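The mismatch sidnei pastes can be confirmed directly; a sketch:

    getent hosts "$(hostname)"   # what the name resolves to (10.0.3.15 above)
    hostname -I                  # the interface address (10.0.3.187 above)
    # rabbit wants these to agree; /etc/hosts inside the container is a likely place to look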
<sidnei> https://code.launchpad.net/~sidnei/charms/precise/haproxy/trunk/+merge/181421 up for review again
<kurt_> jamespages: do you have any guides for ceph deployment with cinder and openstack?
#juju 2013-08-22
<hazmat> marcoceppi, merged, incidentally i've been splitting out the various bits of deployer into plugins in the core-plugins branch
<marcoceppi> hazmat: ah, cool
<AskUbuntu> agent-state-info: 'hook failed: "config-changed" deploy wordpress using juju | http://askubuntu.com/q/335720
<mthaddon> any chance of a review for https://code.launchpad.net/~mthaddon/charms/precise/pgbouncer/package-holds/+merge/180533 - been sitting in the queue for a little while and is relatively trivial
<sinzui> hi charmers. I am seeing "ERROR Invalid SSH key" for juju status for charmworld on canonistack. I got a juju update for saucy today, but I also set the alternative to juju-7. Any clues about resolving this?
<sinzui> My access was fine 18 hours ago
<marcoceppi> sinzui: when you type juju --version does it say 0.7?
<sinzui> yes
<sinzui> I can confirm that I can ssh to each machine when I specify the proper key. I think the wrong key is being selected.
<marcoceppi> sinzui: has your id_rsa changed in the last 18 hours?
<sinzui> marcoceppi, no keys have changed
<marcoceppi> sinzui: can you ssh directly in to the bootstrap?
<sinzui> marcoceppi, I certainly can
<marcoceppi> sinzui: hum, sorry. It's been a while since I've used 0.7, >1.x has really spoiled me. Verify that your id_rsa.pub is in the authorized_keys file in .ssh/ on the bootstrap?
<sinzui> marcoceppi, We use team credential to manage staging.jujucharms.com. I can access every instance shown by nova list.
<marcoceppi> Then verify you're trying to status against the right environment?
<sinzui> the env is always right.
<sinzui> marcoceppi, I just bootstrapped a new env on canonistack. Juju cannot get the status for the same reason as the old env.
<marcoceppi> sinzui: there was an update to 0.7 recently
<marcoceppi> I have no idea what it was, but it appears to have broken this
<sinzui> thanks marcoceppi
<marcoceppi> sinzui: well, http://bazaar.launchpad.net/~juju/juju/0.7/revision/632 does not look like it'll hurt anything. I wonder if there was a change earlier that didn't get built because of build errors that caused this
<marcoceppi> sinzui: either way, bugs should be filed, probably even want to bother #juju-dev about it
<sinzui> marcoceppi, I agree. I am checking if others in the team can work with canonistack. I might spend some time with the GSAs
<marcoceppi> sinzui: might be a good time to move to juju 1.12.0 ;)
<sinzui> marcoceppi, I think it would be very irresponsible to build on modern juju but deploy on juju 0.7
<sinzui> When prodstack allows new juju, we can switch
<sinzui> marcoceppi, I can work with canonistack again after a reboot. I have no idea what was buggered. Possibly the flip-flop from juju 0.7 -> 1.13 -> 0.7 tainted something
<sidnei> sinzui: it already does
<sidnei> sinzui: as in, we just deployed a couple prod services with new juju
<marcoceppi> sinzui: that's really weird.
<marcoceppi> glad you were able to get it resolved
<sinzui> sidnei, I was told we cannot use new juju until fenchurch is proven
<sidnei> sinzui: i don't know what's that about. i was told we cannot use old juju for new deployments at all.
<sinzui> great. mixed messages. I would love to have just one juju installed
<marcoceppi> sinzui: I have no idea what that is about, but we're really moving away from 0.7 asap
<sidnei> mthaddon can probably clarify
<mthaddon> so what's the question here?
<sidnei> "<sinzui> sidnei, I was told we cannot use new juju until fenchurch is proven"
<mthaddon> sinzui: who told you that?
<sinzui> elmo mentioned it at IoM
<sinzui> ^ mthaddon
<mthaddon> sinzui: I'll check with elmo, but I highly doubt he said that, as he's pushing us to use juju-core for any new environments in prodstack right now
<sinzui> mthaddon, I would be happy to move charmworld and gui to new juju
<elmo> sinzui: sorry for any confusion over what I said; but use of juju-core is not blocked by anything in IS.  As mthaddon said, we have a mandate to use juju-core and only juju-core for any new services or complete redeployments of an existing service
<sinzui> elmo: thanks. I will bring this up with the gui team
<kurt_> Does anyone recognize the error "Unable to retrieve authorized projects." from openstack-dashboard?
<Ex1T> Heyhi
<marcoceppi> jamespage: so, charm-tools has changed a lot since the last time it was sync'd to the archives. How would I start the process of getting a new version sync'd to Saucy? Also, what's the latest I could sync as I've got a new version coming out soon
<jamespage> marcoceppi, feature freeze is next thursday
<jamespage> so ideally wednesday
<jamespage> marcoceppi, is the packaging itself still OK?
<marcoceppi> jamespage: the recipie has changed in addition to the contents
<marcoceppi> recipe*
<marcoceppi> jamespage: actually, it looks like the saucy recipe has been updated
<marcoceppi> https://code.launchpad.net/~marcoceppi/ubuntu/saucy/charm-tools/fix-deps/+merge/165161
<jamespage> marcoceppi, thats what I wanted!
<jamespage> marcoceppi, so juju -> juju-core in Saucy
<jamespage> juju-0.7 will be the old package name
<marcoceppi> jamespage: gotchya, so for saucy it can still recommend/suggest "juju" as juju is the new metapackage that installs juju-core with update alternatives?
<jamespage> thats it
<marcoceppi> jamespage: cool, that fix was because in precise, if you install charm-tools from ppa and juju-core from ppa, you get a broken install as juju installs 0.7
<marcoceppi> I'll open another update
<jamespage> the ppa builds should really all do the right things by now - i.e. use alternatives
<marcoceppi> jamespage: right, but back when this change was made the ppa version of juju-core and juju in the precise archives clashed a bit
<jamespage> marcoceppi, probably still does
<marcoceppi> hum, so maybe I'll keep this for the ppa of charm-tools, just so things don't die, but I'll at least update the saucy version, thanks!
<kurt_> Can a version mismatch for keystone cause problems?  ie. client (openstack-dashboard) and keystone node?
<kurt_> http://pastebin.ubuntu.com/6014649/
<kurt_> I'm working hard to trace keystone auth issues and trying to understand where my problems are.
<marcoceppi> kurt_: iirc mismatched versions of openstack and keystone (grizzly vs folsom, for instance) cause problems
<kurt_> marcoceppi: from my paste bin - is that what you see is a version mismatch?
<jcastro> jamespage: hey have you tried that python redux bundle on AWS or HP Cloud?
<jamespage> jcastro, sorry - no
<jcastro> ok so hp cloud doesn't work for me
<jcastro> however, for about the first 3 minutes it works awesome
<marcoceppi> kurt_: not sure which version is which. one second
<jcastro> jamespage: I need to sort some environment issues but I think I am close
<marcoceppi> jcastro: I couldn't get networking to work on HP cloud. dashboard and glance worked
<jcastro> jamespage: it's pretty badass watching deployer fire up stuff like that.
<jamespage> jcastro, yeah - sorry - up to my eyeballs in kernel incompatibility problems with openvswitch in saucy right now
<jamespage> (if I seem a little distracted)
<jcastro> no worries
<jcastro> I Was expecting you to be EODed anyway
<kurt_> marcoceppi: any chance to look at my pastebin?
<marcoceppi> kurt_: sorry, was mobile. Back at desk
<kurt_> marcoceppi: no worries - just trying to sort through my final set of problems getting openstack running :)
<kurt_> I'm so close
<kurt_> I have these weird auth issues and still the cinder/ceph stuff to figure out
<kurt_> one layer at a time :)
<marcoceppi> kurt_: I hear ya, let me see what's in the cloud archive. Ideally you want all your services running the same openstack release, ie grizzly, folsom, etc
<kurt_> yes, there is definitely a mixture there
<kurt_> folsom/grizzly
<marcoceppi> in that case you'll probably definitely want keystone on grizzly if it isn't already
<kurt_> but I think all of it is the stock stuff from the gui
<kurt_> keystone itself is, I believe
<kurt_> the paste bin should validate that
<marcoceppi> kurt_: not sure if this is a valid statement, maybe jamespage can correct me, but you'll want almost all the openstack charms using cloud:precise-updates/grizzly as their openstack-origin
<kurt_> funny thing is both openstack-dashboard and keystone have that
<kurt_> validating...
<kurt_> cloud:precise-grizzly
<kurt_> that's what I have been using
<kurt_> that's what all of the docs say to use I believe
<kurt_> keystone does not have its origin explicitly set
<kurt_> from the gui anyways
<marcoceppi> jamespage: could you, when you get a chance, verify the right openstack-origin for grizzly and openstack charms?
<kurt_> I found in some cases, like ceph I believe, that grizzly wasn't available
<kurt_> but as I said, I want to strip off one layer at a time till I get this to work
<marcoceppi> kurt_: so when I had keystone problems and dashboard, I ended up using the wrong version of keystone
<kurt_> yes, I definitely cannot pull a token from the dashboard.  500 error.  So that's a basic problem
<kurt_> dashboard -> keystone
<kurt_> but I can get token fine locally from keystone
<kurt_> jcastro:  IMHO it would be very useful to print DNS names on the deployed nodes on the canvas of the gui
<kurt_> it would save the user a step of lookup when troubleshooting from the MAAS perspective
<marcoceppi> kurt_: I hear there are a bunch of updates to the gui coming wrt the way you drill down into a service; while I don't think you'll get node names on canvas (imagine a service with 100 units deployed), it should be less tedious to drill down going forward
<kurt_> I was thinking it would be much less tedious on the user to see the actual node name on the icon on the gui rather than having to drill down to the node to see it
<kurt_> small production/time-saver thing
<marcoceppi> kurt_: but those icons represent a service, which has 1 or more units. The units are the ones that get a name
<kurt_> ah yes, true
<sarnold> if there's only one unit in a service, it might be  a nice optimization to also show the unit name
<marcoceppi> so in the event of multiple units that representation would be lost either by not having it displayed or by having too much info to show
<marcoceppi> sarnold: possibly, but I don't want to break users expectations for a unit name
<kurt_> what would be *really* nice is mouseover with a pop-up of all hosts associated with the service :D
<marcoceppi> one mysql unit and three wordpress units: mysql shows the unit name, wordpress doesn't. May be perceived as broken
<marcoceppi> kurt_: that might be a better use case actually
<sarnold> marcoceppi: yeah, but there's something to be said for reducing needless clicks where it can make sense, too.
<marcoceppi> kurt_: you could file a bug against juju-gui with the suggestion, to get feedback from them. I have no say over this in the end :)
<sarnold> mouseover, okay, I like that idea. :)
<kurt_> I can add that to the list :D
<sarnold> a few thousand can hide in a mouseover without too much hassle :)
<marcoceppi> sarnold: ;) I imagine at that point you'd want to run juju status from the command line
<kurt_> sure.  but from a quick view administrative perspective it would save a lot of time in many cases
<rick_h> I think there's something that will help underway. Try adding :flags:/serviceInspector/ to the url
<marcoceppi> kurt_: certainly
<rick_h> it will show the units like mongodb/0 /1 and such
<rick_h> is that what you're looking for?
<kurt_> is that directed to me rick_h?
<rick_h> kurt_: kinda, at the general conversation
 * marcoceppi tries
<rick_h> about wanting to see some 'name' when clicking on a service
<rick_h> http://comingsoon.jujucharms.com/:flags:/serviceInspector/ - deploy mongodb - click on the service icon - go to units and it shows the unit name
<kurt_> I was thinking more along the lines of the specific hostnames associated with the service, like qxkgb.master
<rick_h> kurt_: ah, the hostname, hmm
<kurt_> juju is but one layer of the information
<rick_h> so I think clicking on a unit mongodb/0 will show you that info then
<rick_h> but it's not on hover
<rick_h> so it'd be 3 clicks in
<kurt_> my suggestion would mean 0 clicks :D
<marcoceppi> rick_h: whoa, this is cool. What's S C E D do?
<rick_h> yea, there's a ton of things we can try to show, but too much info sucks
<rick_h> marcoceppi: go to coming soon. Updates there add the icons and such
<marcoceppi> rick_h: ack, will upgrade-charm on a deployed juju-gui give me the latest?
<rick_h> marcoceppi: eventually
<marcoceppi> ;__; okay
<rick_h> marcoceppi: oh, sorry, thought you meant 'update-charm' from something in the gui. That's in progress.
<marcoceppi> booya, constraints
<rick_h> marcoceppi: but to your original question no, it'll only get you the last 'release'
<marcoceppi> rick_h: gotchya, I'll just wait for the next release
<rick_h> and we're a couple of weeks from our last release right now so comingsoon is the latest/greatest
<rick_h> marcoceppi: rgr
<marcoceppi> rick_h: possibly putting the address/hostname in () next to the running units list in the serviceinspector might be a plausible alternative
<marcoceppi> at least a truncated version of it with a hyperlink to that hostname
<rick_h> marcoceppi: maybe, but hostnames tend to be long sucky things. Think about the hostnames on the aws instances :/
<rick_h> heh, yea
<rick_h> some work to figure out something there
<marcoceppi> mongodb/0 (az-123.1231.4...)
<marcoceppi> rick_h: ack, but there's some user feedback for you.
<marcoceppi> thanks for the feedback btw, kurt_
<kurt_> sure
<rick_h> marcoceppi: definitely. Wanted to clear up what was being asked for. That helps for sure
<marcoceppi> rick_h: will replace be used to juju upgrade-charm --switch?!
<rick_h> marcoceppi: no idea
<jamespage> kurt_, marcoceppi: the ceph and openstack charms differ a little
<jamespage> source: cloud:precise-updates/grizzly for ceph
<jamespage> and openstack-origin: cloud:precise-grizzly for the openstack charms
<marcoceppi> jamespage: thanks for the confirmation!
<jamespage> marcoceppi, np
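Collected as config fragments, per jamespage's confirmation above:

    ceph:
      source: cloud:precise-updates/grizzly
    keystone:
      openstack-origin: cloud:precise-grizzly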
<kurt_> jamespage: I've been looking for a better guide on deployment/integration cinder with ceph for openstack in grizzly
<kurt_> I'm having a lot of problems figuring what works and what doesn't
<kurt_> scuttlemonkey's guide is very good, but it's outdated and doesn't deal with integrating stock grizzly.
<weblife> Darn someone beat me to it.  Thought gitolite would make a good charm.
<marcoceppi> weblife: I don't think it's in the store yet
<marcoceppi> weblife: oops, yes it is
<weblife> someone called it gitlab
<marcoceppi> weblife: gitlab != gitolite
<weblife> That would have been a winning charm
<marcoceppi> https://github.com/sitaramc/gitolite/wiki  http://gitlab.org/
<marcoceppi> weblife: I know the gitlab charm could use some work
<marcoceppi> and there isn't a gitlab_ci charm yet
<marcoceppi> nor a gitlab-shell charm
<marcoceppi> If you were interested in growing the gitlabhq services
<weblife> I am.  I'll probably help expand on it where I can.  Just been thinking about what I can do to enter this contest. Need the money :)
<marcoceppi> For instance, you could have a gitlab-shell charm that talks with NFS, allowing you to scale out the git repositories side, then have gitlab setup to talk to gitlab-shell (which is required in newer versions), and have gitlab_ci deployed and a relation between the two to automatically setup CI for repos
<marcoceppi> the gitlab charm would have to be updated to reflect the new gitlab-shell (and probably other things) so you'd have quite a bit of work, but I think with time you could eventually demo a github alternative at scale :)
<weblife> not a bad idea.  Every software company would love that.
<weblife> Saved this convo for later review; I'd like to do something like this.
<jamespage> kurt_, have you seen https://wiki.ubuntu.com/ServerTeam/OpenStackHA?
<jamespage> its for a full HA deployment; but it also documents alot of general details about deploying openstack with juju
<jamespage> obviously you can drop the HA configurations (specifically vip's) and the '-n 2' for most of the services
<weblife> I know what I can do. An IRC bot charm!
<kurt_> jamespage: I have indeed.  But I believe that guide to be out of date.  Swift for example I thought was no longer used.
<kurt_> jamespage: what I'm after is information related to cinder/ceph deployment/integration with juju openstack (especially the gui).  I'm having to piece together information from various sources, some of which unfortunately are out of date.
<kurt_> sorry - info is hard to find
<weblife> m_3: you there
<weblife> m_3:  I updated https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix with your request.  It now loads the express template by default if no app is given ( http://ec2-54-212-165-14.us-west-2.compute.amazonaws.com ).  It will also load the specified node version source from config.yaml and do a sha1 check and build if it passes, falling back to the PPA if no version is entered or the sha1 fails.
<weblife> I'm gonna do a few extra things too.  Do I need to submit the merge again?  It looks like it's still pending.
#juju 2013-08-23
<kurt_> Success! Finally!
<kurt_> Now just need to figure out cinder/ceph integration and I'm golden!
<adam_g> kurt_, did you get your horizon + keystone issue sorted? was just gonna respond on list
<kurt_> yes
<kurt_> thanks adam
<adam_g> ah
<kurt_> I tore down everything and started at keystone - set it to precise-grizzly
<kurt_> explicitly set the password too
<kurt_> and now it works :D
<adam_g> kurt_, ah. i was going to suggest just upgrading keystone from essex -> folsom -> grizzly
<adam_g> kurt_, setting the openstack-origin on the running service will upgrade the services if the origin you're setting is > than current
<adam_g> kurt_, also, you should be able to set the admin-password config option, and have it set it on a running keystone
<kurt_> adam_g: can I change it there any time and keystone will work its magic?
<adam_g> kurt_, the password? yes, you should be able to
<kurt_> that's awesome
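Both suggestions in sketch form (stepping one release at a time, as adam_g describes; the password value is a placeholder):

    juju set keystone openstack-origin=cloud:precise-folsom    # essex -> folsom
    juju set keystone openstack-origin=cloud:precise-grizzly   # folsom -> grizzly
    juju set keystone admin-password=<new-password>            # applied to the running service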
 * kurt_ sighs
<kurt_> now just cinder/ceph stuff is all I need to figure out
<adam_g> kurt_, note ceph isn't a requirement
<adam_g> kurt_, without ceph, cinder will just serve its local storage as instance volumes
<kurt_> well, I set my cinder block device to "None"
<kurt_> adam_g: will it still use local storage then?
<adam_g> kurt_, no, you'd need to destroy it and redeploy it with block-device set to an available device on the system
<kurt_> adam_g: can I do that without disrupting other parts of the deployment?
<kurt_> just simply re-deploy cinder?
<adam_g> kurt_, yeah, you'd need to re-add the correct relations to it
<kurt_> Ok, but I don't have to tear anything else down and start over, right?
<kurt_> and can you confirm - if I would have destroyed keystone, would I have had to have destroyed the other services?
<kurt_> I think all I did was update the distro origin string and it worked after ripping everything else out besides mysql and rabbitmq
<adam_g> kurt_, you should be good to destroy cinder, redeploy it and re-add it to your cloud.  it gets a bit hairy if you are already using cinder and have volumes attached to instances, but if its a new cloud it should be okay
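A hedged sketch of that redeploy; the device path and relation list are assumptions, so check the cinder charm's README for the real set:

    juju destroy-service cinder
    juju deploy --config cinder.yaml cinder    # with block-device: /dev/sdb, say
    juju add-relation cinder mysql
    juju add-relation cinder rabbitmq-server
    juju add-relation cinder keystone
    juju add-relation cinder nova-cloud-controller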
<kurt_> adam_g: thanks.  Are there any readmes for cinder set up or is google my uncle?
<adam_g> kurt_, what do you mean by setup?
<adam_g> kurt_, deployment using juju? or just general usage?
<kurt_> prepping volumes, configuration, set up
<kurt_> I'm assuming I need to set up the cinder node in MAAS
<kurt_> how do I create persistent storage??
<kurt_> ie. is it possible to recover volumes after a crash :D
<adam_g> kurt_, right, so... if you're not using ceph, you'd need to ensure there is an available block device on the cinder node(s) when you are deploying it.  if you are deploying with ceph, ceph essentially gives you that block device (and it's HA and redundant)
<adam_g> actually thats wrong :)
<adam_g> without ceph: cinder uses local storage to create volumes and exports it from cinder nodes to instances via iSCSI
<adam_g> with ceph: cinder creates ceph block devices in the ceph cluster (instead of locally) and ceph exports them to instances via RBD (between ceph  and nova-compute, cinder is just the mediator)
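And the ceph-backed variant adam_g describes, sketched (unit count arbitrary):

    juju deploy -n 3 --config ceph.yaml ceph
    juju add-relation cinder ceph    # cinder then creates its volumes in the ceph cluster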
<kurt_> adam_g: can I add more cinder nodes which add additional storage to the system?
<adam_g> kurt_, yeah
<adam_g> kurt_, however there are no HA/redundancy guarantees.  if you lose a cinder node that is exporting local storage, you lose those volumes as well
<kurt_> right
<kurt_> that's what ceph brings to the table
<adam_g> yup
<kurt_> can a combination of ceph and cinder nodes be used?
<kurt_> so can i start by playing around with cinder, then add ceph in after the fact?
<kurt_> ie. without losing my cinder volumes?
<adam_g> kurt_, you'd be using cinder in both cases, just different backends. cinder is growing support in Havana for multi-backends, so you could have some cinder exporting local storage, and others exporting ceph
<kurt_> interesting, ok
<adam_g> kurt_, one idea being that, as a cloud vendor, you'd charge more for the volumes that are replicated and HA :)
<kurt_> sure
<kurt_> then you'll need to start thinking about COS
<kurt_> :D
<adam_g> ya.
<adam_g> but atm im thinking about dinner. and need to run. :)
<adam_g> cya
<kurt_> LOL
<sarnold> :)
<kurt_> thanks adam
<adam_g> np
<mxmln_> hi folks!
<jamespage> hazmat, thanks for the updates to the 0.7 branch btw - did the trick on saucy
<jamespage> just uploaded for re-introduction alongside juju-core
<jamespage> jcastro, ^^ fyi
<hazmat> jamespage, awesome
<jcastro> jamespage: link?
<jcastro> jamespage: oh you mean you reuploaded .7
<jamespage> jcastro, yes
 * jcastro nods
<jamespage> adam_g, https://code.launchpad.net/~james-page/charm-helpers/fixup_upstream_version/+merge/181840
<jamespage> and adam_g: https://code.launchpad.net/~james-page/charm-helpers/ceph-redux/+merge/179948
<X-warrior`> If I wanted to have wordpress and all its infrastructure on the same machine, would the mysql be a subordinate?
<rick_h> X-warrior`: check out jcastro's post http://www.jorgecastro.org/2013/07/31/deploying-wordpress-to-the-cloud-with-juju/
<X-warrior`> rick_h: will look at it, ty
<jcastro> marcoceppi: ping!
<jcastro> we're doing a charm school in 20 minutes on how to use the local/lxc provider!
<rick_h> jcastro: woot
<jcastro> hmm, I'm getting the install hook failing with both juju-gui and mysql
<jcastro> on the local provider
<jcastro> actually the charm school is on in 60 minutes
<jcastro> sorry for the mixup
<mthaddon> how do I find the default port for the juju api endpoint?
<kurt_> Do you guys recommend using JeOS for ubuntu images within Openstack?
<sidnei> mthaddon: i think kapil landed a command to expose that
<mthaddon> sidnei: he has a branch in progress for it, but it hasn't landed yet
<sidnei> ah, i see
<mthaddon> https://bugs.launchpad.net/juju-core/+bug/1181382
<_mup_> Bug #1181382: command for returning the api endpoint <bitesize> <cmdline> <juju-core:In Progress by hazmat> <https://launchpad.net/bugs/1181382>
<mthaddon> but in any case, I think I've figured it out
<sidnei> mthaddon: it says merged?
<mthaddon> oh, so it does - not sure why the bug is in progress still then
<smoser> hey...
<smoser> i'm using juju on precise
<smoser> and before you tell me "dont do that", please listen further
<smoser> i'm trying to test juju and precise for the 12.04.3 release (with plars)
<smoser> against maas, i get
<smoser> http://paste.ubuntu.com/6018407/
<smoser> rvba, ?
<smoser> heres a full explanation.
<smoser> http://paste.ubuntu.com/6018470/
<smoser> jamespage, ^ any ideas? adam_g ?
<jamespage> smoser, yes - you need :80 in your url
<jamespage> smoser, helping plars in #ubuntu-release now
<smoser> thanks
<smoser> jamespage, do you happen to know about the other environments default release error ?
<smoser> error: Environments configuration error: /home/ubuntu/.juju/environments.yaml: environments.quantal.default-series: expected 'precise', got 'quantal'
<smoser> cause i'll just fix virtual-maas for that too.
<jamespage> smoser, that's cause juju 0.5 don't know nothing about raring
<jamespage> this does not get reported because no-one actually uses the juju in precise anymore
<smoser> well, yes.
<jcastro> ok we're going to have a charm school on using the LXC/Local provider
<jcastro> starting nowish on http://ubuntuonair.com!
<arosales> looking forward to the local provider charm school, jcastro and marcoceppi
<rick_h> no love at the url?
<jcastro> should be working
<jcastro> investigating
<marcoceppi> ppa:juju/devel or ppa:juju/stable
<rick_h> is it working for others?
<m_3> more beer for thumper
<m_3> can I tell you how excited I am about this provider?
<m_3> marcoceppi: ec2-54-245-207-248.us-west-2.compute.amazonaws.com
<X-warrior`> I'm creating a charm to deploy some git site... What is the best way to git clone it? Create an ssh private key that will be on server and add this private key to git access?
<m_3> X-warrior`: pass that "deployment-user" key into the charm as config
<jcastro> https://juju.ubuntu.com/docs/charms-config.html <-- docs for that
<X-warrior`> jcastro: m_3 looking it, ty
<m_3> jinx
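One hedged way to do what m_3 suggests; the option name deploy-key is hypothetical and would have to be declared in the charm's config.yaml:

    juju set my-site deploy-key="$(cat ~/.ssh/deploy_key)"
    # the install hook can then write the key to disk and git clone over ssh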
<rick_h> not seeing the juju local package in https://launchpad.net/~juju/+archive/devel
<jcastro> http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<jcastro> sudo apt-get install juju-core lxc mongodb-server
<jcastro> I don't think juju-local has landed
<rick_h> ah ok, following along it sounded like the ppa would help me if I wasn't on saucy
<sidnei> it landed on saucy but not on the ppa, jcastro?
<rick_h> sidnei: right
<jcastro> it's not in the stable PPA
<rick_h> jcastro: or the devel
<jcastro> jamespage: do you know if we're putting "juju-local" in the PPA?
<kurt_> Is there a good post-deployment openstack-dashboard (horizon) set up/configuration guide available?
<rick_h> jcastro: is http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage missing an expose command?
<jcastro> you don't need to expose locally
<rick_h> ah, gotcha
<m_3> jcastro: is there a g+ setting that turns off the "mute while I type" thing?
<rick_h> wooo this is sweet! juju-gui deployed in lxc deploying charmworld now.
<m_3> rick_h: nice
<m_3> jcastro: we should look at ingress rules in a vagrant file
<weblife> I'm about to go Saucy.  <--Sounds funny
<m_3> marcoceppi: awesome job man!
<jcastro> marcoceppi: sorry I missed that part but where do I tell lxc/juju to use the squid-deb-proxy?
<jcastro> I know the entire setup and have it, I just need to know where to point to
<marcoceppi> jcastro: you just put in your /etc/apt/apt.conf.d/ with the following
<jcastro> oh and that will migrate down to the containers?
<marcoceppi> http://paste.ubuntu.com/6018714/
<marcoceppi> jcastro: yeah, Juju looks for Acquire::HTTP::Proxy line and copies it via cloud-init on deploy
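In case the paste above goes stale: the stanza is a single apt setting. A sketch, assuming a squid-deb-proxy at 10.0.3.1 on its default port 8000 (the file name and address are examples):

    # /etc/apt/apt.conf.d/42-juju-proxy
    Acquire::HTTP::Proxy "http://10.0.3.1:8000";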
<jcastro> don't forget to document this!
<marcoceppi> jcastro: I AM READY FOR DOCUMENTING
<jamespage> jcastro, it should already be in the stable PPA
<jamespage> jcastro, we need to sync up on packaging in devel
<m_3> jcastro: crap... we forgot to talk about how well the containers survive reboot now
<jcastro> jamespage: doesn't appear to be in either PPA, stable or devel
<jamespage> jcastro, its in stable
<jamespage> https://launchpad.net/~juju/+archive/stable/+packages
<jamespage>     juju-local dependency package for the Juju local provider
<jcastro> jamespage: oh I see, I didn't expand the expander
<jcastro> jamespage: my bad!
<jcastro> jamespage: or jam: should I go ahead and update the local instructions?
<jcastro> to use juju-local?
<kentb> juju-core 1.13.2-4~1703~precise1  If I try and run 'juju bootstrap' against maas 1.2+bzr1373+dfsg-0ubuntu1~12.04.2 I end up with "ERROR juju supercommand.go:282 command failed: gomaasapi: got error back from server: 502 cannotconnect..."
<kentb> what do I need to check?
<weblife> Woohoo.  Official acceptance for MS degree!!! Feels good.
<weblife> Prerequisite waived!!!
<m_3> weblife: congrats!!
<m_3> MS in what?
<weblife> Software Engineering
<weblife> thank you also
<m_3> cool
<kurt_> Can anyone tell me if https://bugs.launchpad.net/ubuntu/+source/quantum/+bug/1211764 affects the overall stability of the quantum-gateway in grizzly and makes it unusable?
<_mup_> Bug #1211764: Grizzly's python-quantumclient wrong dependencies <quantum (Ubuntu):Confirmed> <https://launchpad.net/bugs/1211764>
<kurt_> Should I not be able to run "quantum -v net-list" on the quantum node?
<kurt_> http://pastebin.ubuntu.com/6019060/
 * kurt_ thinks everyone has gone home for the weekend already
 * kurt_ thinks he should too
<weblife> I am just noticing that if I deploy an instance from local 'juju deploy --repository charms local:my-site my1', then trying to launch a second 'juju deploy --repository charms local:my-site my2' after I changed the code launches the same code as the 'my1' instance.  Is this a bug or should I alter my second command?
<henninge> weblife: AFAIK you will need to bootstrap anew if you change the code. Maybe there is a shortcut but I am not sure.
<henninge> weblife: did you bootstrap with --upload-tools?
<weblife> henninge: I could always change the name in the metadata file as a shortcut. Haven't tried upload-tools yet I will look into it. Thank you for the advice.
<henninge> weblife: oh, there is also the revision thing.
<henninge> weblife: you need to increase the revision number of the charm or else it will use the cached version, I believe.
<weblife> henninge: That sounds better and less of a hassle.
<marcoceppi> weblife: You will need to use juju deploy -u
<marcoceppi> -u will upgrade from the repo if it's local, otherwise it'll use the cached version
<marcoceppi> weblife: upload-tools wont do what you need, that's just for versions of juju itself
<marcoceppi> and I don't think just changing the revision file will automatically trigger it for a deploy either
<weblife> marcoceppi:  Okay, will try next time around.  Thank you!
<marcoceppi> weblife: `juju help deploy` for additional details
<marcoceppi> You can also opt to use upgrade-charm to upgrade the service running in place, depends on your workflow
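The workflow marcoceppi outlines, condensed (charm and service names are from weblife's example):

    juju deploy -u --repository charms local:my-site my2   # -u refreshes from the local repo instead of the cache
    juju upgrade-charm --repository charms my1             # or upgrade the already-running service in place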
<kurt_> marcoceppi:  is there some flush that needs to be done when you change IPs in the nova-compute, quantum gateway? ie. floating IPs?
<marcoceppi> kurt_: for juju status?
<kurt_> in the dashboard
<kurt_> the floating IPs for VMs
<marcoceppi> kurt_: not that I know of. The floating ips should just work
<kurt_> I completely changed my topology and when I try to allocate IP, old ones still appear
<kurt_> maybe delete the project?
<marcoceppi> kurt_: possibly, still don't have that much openstack experience. Hoping to deploy it against maas soon and really put it through the works
<kurt_> yes, I've been successful getting it all the way up, but I'm seeing oddities in the interface that make me think I don't have everything right under the covers
<kurt_> in the dashboard rather - doing all the steps necessary to get a VM up, but things aren't working
<weblife> m_3: hold off on that review of node-app.  I just remembered that Chris Lea's PPA installs everything to /etc/bin but the source installs to /etc/local/bin, so it will have a problem when setting up the service configuration.  Will fix this later this evening.  bbl
<weblife> I know what I need to do to correct it, though.
#juju 2013-08-25
<AskUbuntu> Ubuntu Juju firewall expose question | http://askubuntu.com/q/337075
<kurt_> Hi Guys - I saw an earlier reference on how to force deployment of a charm to a particular node, but I no longer see references to that.  Especially under MAAS, is this possible if I want to consolidate processes on nodes, like as shown in the example architecture provided by openstack? (http://docs.openstack.org/trunk/openstack-ops/content/example_architecture.html) ?
#juju 2014-08-18
<ajay_> hi
<ajay_> I'm seeing this error while deploying a juju charm for nova-compute :-
<ajay_> 2014-08-18 03:55:13 ERROR juju apiclient.go:119 state/api: websocket.Dial wss://bootstrap.maas:17070/: x509: certificate has expired or is not yet valid 2014-08-18 03:55:13 ERROR juju runner.go:220 worker: exited "api": websocket.Dial wss://bootstrap.maas:17070/: x509: certificate has expired or is not yet valid
<ajay_> please let me know what I might be missing here
<ayr-ton> If I want, for example, to build a charm for zabbix and want to deploy this inside my units, it will be possible?
<lazyPower> ayr-ton: It is. You can charm up zabbix as a subordinate service - which runs with scope:container
<lazyPower> ayr-ton: https://juju.ubuntu.com/docs/authors-subordinate-services.html - take a look here - feel free to ping me direct if you have any questions
<ayr-ton> hmmmm
<ayr-ton> lazyPower, very interesting. Thank you very much (: That is what I'm looking for.
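For reference, the subordinate wiring lazyPower points to boils down to a few lines in the charm's metadata.yaml; a minimal sketch (the relation name "host" is a placeholder):

    subordinate: true
    requires:
      host:
        interface: juju-info
        scope: container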
<merlin2144> hello folks, has anyone gotten openstack HA to work with LXCs ?
<ayr-ton> merlin2144, http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
<lazyPower> mbruzek, marcoceppi: hey, so sam has a pretty good point/idea here about the docs. He's going through charming his first service and the fact our docs are focused on bash has left him kind of in the dark for how to get moving with python. He's going to document his learning, and submit that to us. If we get this before we head out to the sprint, that'll give us a good base for things to focus on.
<marcoceppi> okay
<lazyPower> thoughts? comments? suggestions?
<marcoceppi> it's been on a long list of things to do
<lazyPower> yeah, we do have a huge todo list
<marcoceppi> the idea behind bash hooks is that it's easy for most anyone to grok
<lazyPower> but this is on the focus list for our sprint.
<marcoceppi> but we have "Create a howto for Python charms and charmhelpers" on a list somewhere
<marcoceppi> not saying it's not
<mbruzek> lazyPower, python is a language that not everyone knows.
<lazyPower> yeah - that was my comment initially - that we don't want to just focus on python even though it's recommended; charms are agnostic.
<marcoceppi> charm author docs in general are lacking compared to user docs
 * lazyPower nods
<lazyPower> thats another good idea for a blog post to start the story
 * arosales has an idea of showing a few charm author workflows
<mbruzek> lazyPower, I would be OK with creating a getting started with python in addition to bash
<lazyPower> getting started with charming in python
<lazyPower> and we can then translate that post into docs
<arosales> 1. bash workflow 2. python workflow w/ charm helpers 3. Multiple language workflow
<mbruzek> arosales, +1
<arosales> then have good flag bearer charms at the end of each workflow with pointers to examples on relations, and config.
 * arosales put that on todo list
<mattyw> stokachu, do you know what I'm doing wrong here: https://github.com/Ubuntu-Solutions-Engineering/cloud-installer/issues/142
<stokachu> mattyw, looking into it now
<stokachu> mattyw, can you post your environments.yaml file to the bug?
<stokachu> scrubbing out your admin-pass
<mattyw> stokachu, I've destroyed that install, I'll give it another go and send over the yaml if I get the same error
<stokachu> mattyw, i also updated the bug with some other info to capture too
<stokachu> mattyw, it could be a service didnt get related
<mattyw> stokachu, ok thanks very much
<stokachu> np
<mattyw> stokachu, just pasted the details on the bug
<stokachu> thanks will take a look
<stokachu> mattyw, looks like the relations aren't completed
<merlin2144> Hi has anyone gotten openstack HA to work with LXCs ? Just wanted to find out if this is something that is currently supported
<mattyw> stokachu, which relations aren't done yet?
<stokachu> mattyw, looks like all of them
<stokachu> mattyw, keystone is the big one that needs to be related to the majority of the openstack services
<mattyw> stokachu, the output of cloud-install seemed to suggest it was ready - maybe I was just being too eager
<mattyw> stokachu, I only waited till I got all the ticks
<stokachu> mattyw, thats probably something we need to address, some way of determining when all relations are complete
<stokachu> that satisfy an openstack deployment
<stokachu> mattyw, yea it usually takes a few minutes after all checkmarks to have the relations finish
<mattyw> stokachu, I'll try it again and give it a bit
<stokachu> i created a card for us to review the relationship status of a deployment
<ezobn> Hi All! When I try to do juju restore I get an error, because a freshly installed (via maas) bootstrap node does not have mongodb-clients installed
<ezobn> so is it possible to have juju install this package automatically when a restore happens ...
<ezobn> The same when doing a backup ... you first need to manually log in to the bootstrap node with the juju state server and install the mongodb-clients package
<marcoceppi> ezobn: that's a bug, could you file it against https://bugs.launchpad.net/juju-core
<ezobn> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1358477
<mup> Bug #1358477: for juju backup/restorre mongodb-clients package missing  <juju-core:New> <https://launchpad.net/bugs/1358477>
<noodles775> hazmat: Hi, I updated the two elasticsearch charm MP's I've got based on your feedback the other week. If you get a spare few mins, can you check? (I've been using the updated charm in a staging service for a few weeks too): https://code.launchpad.net/~michael.nelson/charms/trusty/elasticsearch/add-ufw/+merge/225934
<hazmat> noodles775, looking
<hazmat> noodles775, lgtm
<hazmat> mbruzek, when your up in the review queue could you have a look at merging those ^
<mbruzek> ack
<mbruzek> those as in 2 mps?
<mbruzek> noodles775, What is the other mp?
#juju 2014-08-19
<jrwren_> marcoceppi: we were going to look at that yaml thing tonight. I'm exhausted. Another time?
<marcoceppi> jrwren_: yes, another time indeed
<james_w> hello, how do I upgrade-charm from my locally modified copy when I initially deployed from the charm store?
<marcoceppi> james_w: hello, you can use the --switch flag
<james_w> marcoceppi: thanks
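A hedged example of the --switch usage (service and charm names here are placeholders):

    juju upgrade-charm --switch local:trusty/mycharm mysite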
<benji> dpb1: /quit
<benji> pfft
<hazmat> mbruzek, the other mp is referenced in the bottom of that first's comments
<mbruzek> hazmat, got it.
<aisrael> Using vagrant for work on a charm. I have it and postgresql deployed, and add a relation between the two. If I ssh to my charm's machine and try to use psql to connect to postgresql, using the relation's private-address/user/password, it's failing, saying there's no entry in pg_hba.conf.
<aisrael> I'm connecting from 10.0.3.232, but postgresql is seeing it as 10.0.3.1
<aisrael> Should I be able to connect from machine to machine that way?
<html> well, you've got to first figure out why the IP is not working - then move on to the config file.
<html> aisrael,
<aisrael> thanks html. I'll dig in deeper and see what I can find.
<html> aisrael,  I'm just a noob that is saying the basics
<aisrael> Understood :)
<html> aisrael,  lucky you that you got juju working - I couldn't even get it working
<html> i wish :/
<aisrael> I'm still learning myself, but this is a good place to ask questions if you're stuck on something.
<lazyPower> aisrael: are you executing that from the host, the vagrant virtual environment, or from the postgresql lxc container running in the vagrant environment?
<Flint_> Hi Bruzer
<mbruzek> Hello Flint_
<Flint_> how goes the battle?
<Flint_> ..er work
<mbruzek> round and round.
<Flint_> do you have bluetooth with your linux box?
<Flint_> ...looking for a stable setup
<mbruzek> Yes I have bluetooth working on Ubuntu 14.04
<mbruzek> Flint how would you plan to use it?
<mbruzek> Flint_, although I suspect that depends on the hardware using it.  I am using a Thinkpad with an intel wireless + bluetooth card.
<aisrael> lazyPower: inside vagrant. It fails the same way if I juju ssh mycharm/0 and try to connect, or via debug-hook on db-relation-joined
<lazyPower> aisrael: have you confirmed the postgresql service is running on postgresql/0?
<Flint__> sorry it lagged
<Flint__> I'm suspecting its my headset going bad
<Flint__> although Motorola should be reliable
<mbruzek> Flint__, I use a bluetooth mouse when I travel and it works pretty well.
<mbruzek> Flint__, trouble connecting with your computer or having trouble keeping it connected?
<Flint__> keeping it connected
<Flint__> I have seen a decrease in battery life, so it might be the root failure
<Flint__> I'm setting up Kubuntu 14.04, and had heard that some bluetooth drivers weren't up to speed for a while.
<mbruzek> Flint__, Well that actually is something I noticed.  The bluetooth mouse's battery status is not correctly reported in linux.
<mbruzek> Flint__, when I connect the mouse it reads 0% charged, and they are fresh batteries
<aisrael> lazyPower: Yep. I suspect it might be vagrant at fault, with the way it's routing requests between machines
<Flint__> ok, thanks
<lazyPower> aisrael: hmm.. its all being routed through a local loopback interface though.
<lazyPower> are you running 2 vagrant boxes, with services spread between them?
<aisrael> Just one vagrant box for all the services. I pretty much followed the juju vagrant workflow.
<lazyPower> ok, that's the recommended method to use the vagrant box - interesting that you're having an issue connecting though. If you are stuck in a blocked state, file a bug aganist the pgsql charm and assign it to me please. I'll investigate it at my earliest availability
<lazyPower> (i'm lazypower on launchpad as well)
<aisrael> Will do, thanks.
<marcoceppi> aisrael: did you get your psql figured?
<marcoceppi> aisrael: you need to vagrant ssh, then you'll probably need to `juju ssh` in to the unit of the charm you're writing
<marcoceppi> that should give you the proper access to psql
<marcoceppi> there's also a psql charm you could deploy which will give you console access
<aisrael> marcoceppi: not yet. I did those steps, but it appears that postgresql/0 sees the connect coming from the gateway, not the unit ip
<marcoceppi> aisrael: so you're inside your deployed LXC machine? interesting
<marcoceppi> sounds like a weird artifact of the local provider
<aisrael> Correct.
<aisrael> I'm walking through the config(s) to see if anything sheds some light on it
<Flint_> sorry it cut em out again
<Flint_> getting W: Failed to fetch http://ppa.launchpad.net/ubuntu-audio-dev/ppa/ubuntu/dists/trusty/main/binary-i386/Packages
<marcoceppi> Flint_: I think you're in the wrong room
<Flint_> nah, just tried the load instructions
<mbruzek> Flint_, what operation generated that error?
<Flint_> sudo apt-get update && sudo apt-get install juju-core
<sarnold> it's just a warning
<sarnold> and the failure of a ubuntu-audio-dev PPA is unlikely to have an influence on how well juju works :)
<mbruzek> Flint_, I found that that ppa does not yet have a trusty release.
<Flint_> ah ok
<mbruzek> Flint_, I suspect that was installed for an older version of Ubuntu?
<Flint_> trusty, i think so
<Flint_> although the documents say it should have been final release in april
<lazyPower> marcoceppi: i'm having cranial flatulence. If we exit(0) from a hook due to the environment not being ready - how do we re-execute the hook context? we have to relation-set on the initiator, don't we?
<marcoceppi> lazyPower: well, yes, each relation-set will re-execute the -changed hook for the opposing unit in the relation
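A minimal bash sketch of that pattern, assuming a hypothetical db relation and a "ready" key that the remote unit eventually sets:

    #!/bin/bash
    # hooks/db-relation-changed (hypothetical relation name)
    if [ -z "$(relation-get ready)" ]; then
        # Environment not ready; exit 0 cleanly. The remote unit's next
        # relation-set will re-fire this -changed hook for us.
        exit 0
    fi
    # data is present, proceed with configuration...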
<natefinch> marcoceppi: how do I get help on juju charm subcommands?  juju help charm generate,  juju charm help generate, juju charm-help generate, charm-help generate, juju-charm help generate  all fail
<natefinch> marcoceppi: nevermind... juju charm generate --help works.
<stokachu> marcoceppi, this recent change in mysql http://manage.jujucharms.com/charms/trusty/mysql, pretty much breaks anything that was using juju add-relation nova-compute mysql
<stokachu> since now it is required to set the interface we want
<jrwren_> stokachu: what recent change?
<stokachu> jrwren_, https://bazaar.launchpad.net/~charmers/charms/trusty/mysql/trunk/revision/124
<stokachu> added an additional interface
<jrwren_> stokachu: don't know about that one, sorry.
<hazmat> stokachu, its not about the additional interface surely but the multi-request around shared db
<stokachu> hazmat, either case you used to be able to do juju add-relation nova-compute mysql
<hazmat> stokachu, ack, change broke extant
<hazmat> but diagnosis is the key to recovery
<stokachu> indeed
<hazmat> stokachu, what breaks exactly.. i don't see additional interface of note
<stokachu> it doesn't know whether you want shared-db or nrpe-external-master
<hazmat> ugh
<hazmat> fair enough
<hazmat> stokachu, nova-compute is wrong btw
<hazmat> the only charms that can require on a container scoped relation are subordinates
<hazmat> the hosts are all providers
<stokachu> jamespag`, ^
<hazmat> stokachu, he's on vacay this week
<stokachu> ah
<hazmat> since its broken anyways the simplest fix is remove that line from nova-compute
 * hazmat runs blame out of curiosity
<hazmat> about a year ago it was bundled in an ancillary change
<rharper> is it ok to mix juju 1.20.1 and 1.20.5 ?  the state-server is 1.20.1, the allocated nodes (maas env) are getting 1.20.5 --
#juju 2014-08-20
<jose> QUESTION: Why am I seeing [Review Queue] emails now?
<sarnold> jose: oops, did you do enough good work to earn more work? :)
<jose> sarnold: hehe nope, they're on the juju mailing list
<sarnold> oh :)
<jose> I was just wondering if it had something to do with the new revq that's on the works!
<tvansteenburgh> jose: no that's just a new thing we're doing. we're taking 2-hr time blocks on the review queue and reporting our progress to the list
<jose> tvansteenburgh: oh, it's good to know :)
<jam> I have a question about the appropriate way to write a charm configuration.
<jam> I have in mind a very simple charm that I give my account credentials to, and it issues "juju destroy-environment" after a given timeout.
<jam> So that I can deploy the charm when I have just a testing env, and make sure it dies even if I forget about it.
<jam> But I'd like to have it allow a syntax like "destroy after 45 minutes" or something along those lines
<jam> but it seems odd to have a config setting that would "reset" the state every time config-changed runs.
<marcoceppi> stokachu: what do you mean it breaks it?
<jam> marcoceppi: there you are. I ran into https://bugs.launchpad.net/charm-tools/+bug/1359170 trying to use charm-tools with trunk juju
<mup> Bug #1359170: juju charm create fails in an unhelpful fashion with juju-1.21 in PATH <Juju Charm Tools:New> <https://launchpad.net/bugs/1359170>
<jam> I'm not sure that you're the maintainer, but I thought I'd start with you
<marcoceppi> jam: I am one of them, this is concerning, I'm not sure what I can do to fix it
<marcoceppi> seems like juju-core is not being so nice
<marcoceppi> ?
<jam> marcoceppi: so something is failing in the subprocess starting other things, but its unclear what
<marcoceppi> jam: I'll compile from trunk and punch on it a bit
<marcoceppi> jam: any idea when 1.21 is expected?
<jam> marcoceppi: so I think we set "command.Env"
<jam> and probably juju-create expects PATH to be in there, but we aren't putting it in?
<jam> no, we are appending to os.Environ() so that should be ok.
<jam> ...
<marcoceppi> i mean, we're just using argparse in python
<jam> though the way we are doing that makes me wonder if we are actually mutating os.Environ() in-process (not that it really matters)
<marcoceppi> nothing too crazy
<jam> marcoceppi: you're looking for /usr/bin/charm-create and invoking that
<jam> from juju-chram
<jam> juju-charm
<jam> which then uses setuptools to again look up where main is, etc.
<marcoceppi> oh, right, for "plugin mapping"
<marcoceppi> let me compile, throw in some debug statements, see what I can find
<jam> marcoceppi: doing "print sys.argv" in juju-charm shows me:
<jam> $ juju charm create -h
<jam> args ['/usr/bin/juju-charm']
<jam> marcoceppi: vs 1.18 does:
<jam> $ /usr/bin/juju charm create -h
<jam> args ['/usr/bin/juju-charm', 'create', '-h']
<marcoceppi> ah, so that's what is different
<marcoceppi> bug in core I guess?
<jam> marcoceppi: yeah
<marcoceppi> \o/ less work for me
<marcoceppi> jam: thanks for catching that though
<rm4> could anyone help possibly with charm debug
<rm4> juju debug-hooks wordpress/3 loadbalancer-relation-joined
<rm4> I was hoping to see a special prompt with %
<rm4> but I just get:
<rm4> root@vagrant-local-machine-34:~#
<jrwren_> rm4: did you resolved --retry yet ?
<rm4> don't quite follow sorry
<jrwren_> rm4: something needs to trigger the hook executing again, assuming it is in a failed state. juju resolved --retry <unit> will trigger this.
<jrwren_> rm4: do that in a different terminal and your debug-hook tmux will open a new tmux window with the special prompt
<rm4> vagrant@vagrant-ubuntu-trusty-64:~$ juju resolved --retry wordpress/3 ERROR unit "wordpress/3" is not in an error state
<rm4> it all works but I would like to see what is happening
<rm4> so I can make a charm
<jrwren_> rm4: In that case, you could remove the unit and add it again, and immediately after the add, run debug-hooks and hope you were fast enough to catch the hooks (you likely will be).
<jrwren_> rm4: or if you are only interested in that relation, remove and add that relation
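The two-terminal flow jrwren_ describes, sketched with rm4's unit:

    # terminal 1: attach and wait for the hook context
    juju debug-hooks wordpress/3 loadbalancer-relation-joined
    # terminal 2: re-trigger the failed hook (only valid from an error state)
    juju resolved --retry wordpress/3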
<aisrael> In cloud-images, what's the functional difference supposed to be between the vagrant boxes in http://cloud-images.ubuntu.com/vagrant/trusty/ vs. http://cloud-images.ubuntu.com/vagrant/trusty/current/?
<rm4> thanks jrwren
<rick_h__> aisrael: might be good to ping utlemming on that. He's in CO so might not be on for a bit.
<rm4> I'll try and come back
<rick_h__> adeuring: do you know the answer to aisrael's question ^ ?
<jcastro> aisrael, the first one has all the boxes
<jcastro> the 2nd one only has the latest builds
<jcastro> the boxes get published every few weeks, so we keep those around too
<jcastro> what is in /current will always be the latest one
<aisrael> The reason I'm asking: the image in current/ only sets up one adapter (nat) and doesn't provision juju.
<aisrael> The larger image bootstraps, but sets up two adapters (nat and hostonly)
<jcastro> ok, the ones without juju in the filename
<jcastro> are just the normal vanilla vagrant boxes
<jcastro> the ones with juju in them are supposed to bootstrap/gui, etc. with juju
<adeuring> aisrael: the smaller files are "plain" ubuntu cloud images, while the larger files have juju preinstalled, and the Juju GUI is installed during Vagrant's provisioning
<aisrael> jcastro: Oh, duh.
<aisrael> ok, so that's one doc I should look at getting updated
<jcastro> which one?
<aisrael> https://juju.ubuntu.com/docs/howto-vagrant-workflow.html
<aisrael> points to a non-existent box, in current/
<jcastro> ah yes
<jcastro> damn, I fixed the config-vagrant page
<jcastro> but totally forgot about this one, yeah, fix it in the docs please.
<aisrael> and I'm still troubleshooting the networking problem
<jcastro> https://github.com/juju/docs has the instructions
<jcastro> they're in markdown, very easy to fix
<aisrael> excellent, I'll add to my list today
<jcastro> mbruzek, coreycb: did you guys not review yesterday?
<mbruzek> jcastro What makes you say that?
<coreycb> jcastro, review what?
<jcastro> the review queue, you guys were on the calendar yesterday
<coreycb> jcastro, hmm not familiar with the calendar.. sorry.  I do need to start doing more reviews though.
<jcastro> we each were  supposed to pick 2 hours a week to review, then send the results to the list.
<mbruzek> Yes I reviewed the ufw changes for elasticsearch and I DID email the group about it.
<mbruzek> jcastro, I think you have coreycb mixed up with cory_fu
<jcastro> hah yes I do! sorry about that coreycb
<jcastro> I meant cory_fu
<coreycb> jcastro, ahh, np
<jcastro> oh excellent!
<jcastro> mbruzek, I now see both of your [Review Queue] posts, that is quite excellent!
<mbruzek> thanks for the positive reinforcement jcastro
<mbruzek> and for checking up on me
<jcastro> hah
<jcastro> it is unclear to me if you merged his changes though
<jam> marcoceppi: turns out it only happens if you don't have an environment set
<jam> I *might* be rare in not having a default env
<marcoceppi> jam: that's fun!
<jam> or using juju switch
<jam> marcoceppi: I always run "juju command -e env"
<jam> so that I know what environment it is running in
<ctlaugh> jamespag`: Hi, mwhudson sent me... I'm having a problem with the cinder charm on a system with only 1 drive.  I am trying to have it use a loopback file for storage.  The file is created, the loop device (/dev/loop0) is created, but the volume group is never created.
<rm4> I managed to get charm debug working
<rm4> thanks for that
<rm4> bit of a different prompt from the documentation though
<rm4> https://juju.ubuntu.com/docs/authors-hook-debug.html shows ~
<marcoceppi> rm4: ah, yes, debug-hooks shows unit/hook-context now
<rm4> root@vagrant-local-machine-34:/var/lib/juju/agents/unit-wordpress-3/charm/hooks# relation-get
<rm4> private-address: 10.0.3.181
<rm4> yes I think so
<marcoceppi> rm4: that's the prompt you get for debug-hooks?
<marcoceppi> it should look like wordpress/3@<hook-name>
<rm4> that was the prompt for debug-hooks yes
<jrwren_> rm4: is that window 0 of tmux or window 1 ?
<rm4> 12.04.5  0:- 1:website-relation-departed*
<rm4> I think that is 1
<rm4> I was in /home/ubuntu
<natefinch> marcoceppi: are you and the other charmers on the juju-dev mailing list?
<marcoceppi> natefinch: I am
<natefinch> marcoceppi: ahh, right, I remember you responding to the default-hook thread.
<natefinch> marcoceppi: you offered to survey the current charms and see which ones used the all-symlink model... do you know when you might get that done?  I know it's a big job.
<marcoceppi> natefinch: I can do that now, should take about 15 mins
<natefinch> marcoceppi: that would be fantastic, thank you
<marcoceppi> natefinch: here are precise charms in the store, slightly curated: http://paste.ubuntu.com/8100495/
<marcoceppi> natefinch: and here's the command used to get that, run in the directory into which `charm getall` fetched everything
<marcoceppi> for f in `find */hooks/ -maxdepth 1`; do if [ -L $f ]; then echo "$(echo $f | awk -F/ '{print $1}') `readlink $f`"; fi; done | sort | uniq
<marcoceppi> charm common file
<marcoceppi> well, "charm" "common file" is the output
<natefinch> marcoceppi: nice
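The same one-liner unrolled for readability (behavior unchanged; run it in a directory populated by `charm getall`):

    for f in $(find */hooks/ -maxdepth 1); do
        if [ -L "$f" ]; then
            # print "<charm> <symlink target>" for each symlinked hook
            echo "$(echo "$f" | awk -F/ '{print $1}') $(readlink "$f")"
        fi
    done | sort | uniq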
<natefinch> marcoceppi: so, are there any that symlink only some of the hooks?
<marcoceppi> natefinch: yes
<marcoceppi> natefinch: most all hooks that say relation_common
<marcoceppi> indicates that only some of the hooks of that are symlinked
<marcoceppi> I also removed entries that were just symlinks between relation-joined and relation-changed
<marcoceppi> since that's a stupid pattern and easy enough to clean up
<natefinch> marcoceppi: heh
<natefinch> marcoceppi: so how many charms are there total?  I'd like to know what this is 62 out of :)
<marcoceppi> 162
<marcoceppi> also, there are some duplicates in there
<marcoceppi> as some are inconsistently symlinked (./hooks.py vs hooks.py)
<marcoceppi> that list needs to be curated a bit more
<marcoceppi> also does not include trusty charms
<natefinch> marcoceppi: thanks, I got rid of dupes, easy enough
<mwhudson> ctlaugh: :(
#juju 2014-08-21
<marcoceppi> o/ carlasouza thanks for droping by today
<carlasouza> marcoceppi: thank you for the talk!
<jam> TheMue: I just looked at how the setDoc stuff was tested, and it isn't quite what we want.
<TheMue> jam: Iâm listening
<jam> sorry, meant to be in dev, see you over there
<TheMue> :)
<jcastro> lazyPower, marcoceppi: hey guys don't forget to commit your presentations to github.com/juju/presentations
<marcoceppi> jcastro: uh, dont' forget to merge my request  https://github.com/juju/presentations/pull/4
<marcoceppi> 11 mins ahead of you ;)
<jcastro> oh, I didn't get a mail on that, merged, thanks!
<marcoceppi> Ever since core moved to github my notifications have been a mile long
<mattyw> stokachu, just saw you close those issues, thanks very much - any idea when the latest code will be in the experimental ppa?
<stokachu> mattyw, ill kick off an import on launchpad
<mattyw> stokachu, that fast cool, I'll give it a spin in the morning and see how it goes
<stokachu> mattyw, cool man, yea it'll be there by morning
<mbruzek> ERROR invalid constraint value: arch=i386
<mbruzek> valid values are: [amd64]
<mbruzek> Where does Juju get the valid values from?
<mbruzek> Could I "configure" my local provider to be able to use i386 images?
<noodles785> mbruzek or marcoceppi: I'm no longer able to land charm-helper changes, which is OK, but who can I ping to land approved MP's like this one: https://code.launchpad.net/~bloodearnest/charm-helpers/ansible-change-relations-format/+merge/230710
<marcoceppi> noodles785: we can add you to charm-helper-maintainers
<noodles785> marcoceppi: great, can you add bloodearnest too? (we often review each others ansible-related charm-helper changes) Thanks
<marcoceppi> noodles785: yeah, just keep up the review process :)
<bloodearnest> marcoceppi: always :)
<marcoceppi> okay, have fun guys!
<SP33D> why is juju so hard to install and use?
<SP33D> :D
<SP33D> it looks so well in the demo
<SP33D> can anyone with juju expirence please please do a docker.io compatible container of it
<elarson> SP33D: docker needs special kernel modules in earlier kernels (ie ubuntu precise), so I don't know that it would improve things that much. just my 2 cents (that no one asked for ;) )
<elarson> juju worked for me really easily, that said, I'm on ubuntu and would expect it to be reasonably automatic.
<SP33D> i am on ubuntu too
<SP33D> but I couldn't get it running, maybe I don't get the concept
<SP33D> some juju parts were running and I couldn't deploy the web interface
<SP33D> :D
<SP33D> i am on +
<SP33D> so i have all kernel patches
<SP33D> :D
<SP33D> oh and if you don't know i am running ubuntu 14.10 + 3.17 current nightly
<SP33D> so maybe a trusty image would help as a dockerfile so i understand it
<SP33D> i would simply need local deployment first
<tiger> Hi
<Guest31041> can i set up juju as local but on my public IP ?
<tiger7117> HI
<tiger7117> i am trying to setup JuJu on my cloud server ubuntu
<tiger7117> issue is that .. i want to also set up juju-gui on my public IP eth0.. currently it's using an lxc 10.3.0.x IP.. so how can i run the gui on eth0 ?
<tiger7117> btw.. juju-gui is for accessing it through web interface ?
<tiger7117> i was seeing this doc.
<tiger7117> http://askubuntu.com/questions/376087/install-juju-on-the-main-network-interface
<tiger7117> I'm stuck at the third point… should i put in my public IP? then what will the network and DHCP range etc be ?
<tiger7117> any one ??
#juju 2014-08-22
<finisherr> Can someone explain what a charm does? I'm looking at the demos and I'm seeing someone plug these apps together, but what is happening there?
<finisherr> when you pull an app out on to the canvas and create a relationship, what is happening, are the services being configured?
<finisherr> I guess without knowing what is actually happening, the high level interface is a little confusing
<ashipika> \o/
<jcastro> marcoceppi, do you plan on making the new queue live today?
<jcastro> jose, when the orangebox gets to mhall's place I can G+ you and show you how to use it.
<jose> jcastro: sure thing!
<marcoceppi> jcastro: that's the plan, technically off today
<jcastro> oh ok, nm
<mhall119> jose: why does the box need to be at my place for you to show jose how to use it?
<Odd_Bloke> * mhall119 looks out the window and sees jose peering in.
<mhall119> lol
<jcastro> aisrael, https://juju.ubuntu.com/docs/authors-charm-store.html
<jcastro> scroll down to the reviewer tips, etc.
<aisrael> Looks like it's moved here? https://juju.ubuntu.com/docs/reference-reviewers.html
<jcastro> oh, right
<jcastro> lazyPower, question, why file a bug to push to trusty? Why not just push to trusty?
<lazyPower> jcastro: some of the precise charms simply wont make it
<jcastro> right
<lazyPower> so with a bug, we can track the progress of that jump from precise => trusty and give it a thorough review
<jcastro> ah ok, ta
<lazyPower> its kind of like completing the bits from the audit, in an ad-hoc fashion
<lazyPower> hazmat: juju-deployer understands maas tagging?
<jose> mhall119: I think you wanted to highlight jcastro instead of myself :P
<lazyPower> hazmat: https://bugs.launchpad.net/charm-tools/+bug/1360374 - here's what i'm referencing re: tags
<mup> Bug #1360374: bundle proof is unaware of MAAS tagging <Juju Charm Tools:New> <https://launchpad.net/bugs/1360374>
<hazmat> lazyPower, yes it does
<hazmat> lazyPower, deployer doesn't validate constraints
<hazmat> lazyPower, thats a proof error
<hazmat> proof doesn't understand constraints.. probably quite a lot of them
<hazmat> instance-type, maas-tags, maas-name, and probably others i'd guess
<hazmat> deployer passes constraints through
<hazmat> and bubbles up the juju error if juju doesn't understand them
<lazyPower> hazmat: yeah - i filed the bug against the bundle proofer - and i thought that might be what's going on: since you're shelling out, you can just hand all that off
<lazyPower> thanks for clarification hazmat
<hazmat> np
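For reference, a MAAS tag constraint of the kind deployer passes straight through looks something like this (the tag name is a placeholder, and the exact constraint key can vary by juju version):

    juju deploy mysql --constraints "tags=storage"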
<jcastro> http://askubuntu.com/questions/514728/juju-deploy-issues?noredirect=1#comment697187_514728
<jcastro> this looks to me like his file structure on disk is missing $charmname, thoughts?
<hazmat> jcastro, he's got a hooks directory in precise
<hazmat> jcastro, he can move that and remove the warnings
<marcoceppi> jcastro: yea
<jcastro> right
<jcastro> but I am wondering if it's supposed to be
<jcastro> /home/eduard/charms/precise/stack/hooks
<jcastro> and not
<jcastro> /home/eduard/charms/precise/hooks
<hazmat> jcastro, he has both
<jcastro> ok
<hazmat> well maybe
<hazmat> jcastro, he has a valid charm metadta in charms/precise/stack
<jcastro> oh
<jcastro> because it would error otherwise!
<jcastro> I am wondering how you got to "he has both"
<hazmat> the hooks dir at precise/hooks is confusing juju and causing the warning.. whether he also has them in precise/stack/hooks I dunno
<hazmat> jcastro, i don't know that for sure
<hazmat> jcastro, in which case yes he'd need to move 'hooks' to under precise/stack
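The layout being diagnosed, sketched as a local repository tree (paths from the question):

    charms/                    <- repository root passed to --repository
      precise/
        stack/
          metadata.yaml
          hooks/               <- hooks belong here, not at charms/precise/hooks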
<jcastro> marcoceppi: I would add that to the answer
<jcastro> since you torpedo'ed my rep man!
<lazyPower> jcastro: we're coming for you... one upvote at a time
<aisrael> Is there a good template to follow for filing a bug report?
<lazyPower> aisrael: depends on the context of the bug. Is this against a charm?
<aisrael> juju-core. mysql is broken on local providers
<lazyPower> this is a known bug, it has to do with a memory constraint
<aisrael> Yeah, innodb buffer.
<lazyPower> its being set to consume 80% of the given ram, and it totally breaks when deployed on the local provider.
<lazyPower> afaicr a bug has been filed and marcoceppi has triaged this in the forthcoming mysql re-release
<aisrael> https://bugs.launchpad.net/charms/+source/mysql/+bug/1294334
<mup> Bug #1294334: mysql charm blows up on out of memory error <mysql (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1294334>
<aisrael> That looks like the one. It's undecided, though
<aisrael> There's a couple bugs that are duplicates but not marked as such. Should I tag those as duplicates?
<marcoceppi> aisrael: yeah. There's a duplicate bug link
<jcastro> marcoceppi, I thought your rewrite fixed this?
<aisrael> marcoceppi: ok, I marked them as dupes. It seems like just having the caveat might not be enough.
<marcoceppi> aisrael: it's being patched
<hazmat> marcoceppi, that zone doc is ready for review incidentally
<marcoceppi> hazmat: ack, will take a look in a bit
<aisrael> marcoceppi: rock on, thanks
<mhall119> jose: yeah I did
#juju 2014-08-24
<rtp2144> has anyone gotten neutron ha setup through charms ?
#juju 2015-08-17
<ennoble> does anyone use jujuclient.py?
<lazyPower> ennoble: deployer and a few other tools are based on it
<ennoble> lazyPower: Is it still actively being maintained? It seems like there haven't been many changes to juju client in about six months.
<lazyPower> ennoble: i can't say for certain that we are maintaining it, but as it's a core utility of deployer, i'd default to saying it is under maintenance - but it isn't actively receiving code changes, as it's stabilized for our needs today.
<ennoble> lazyPower: thx, the Actions code in it has gotten more broken as juju-core has iterated under it. I guess I'm one of the few people using that
<lazyPower> ennoble: ah, have you filed a bug against the broken behavior?
<marcoceppi> ennoble: it's still maintained
<beisner> hi coreycb + gnuoy` - fyi, the os next charms will need a c-h sync to pull in the liberty uca charmhelpers/fetch bit from r425   once that is done, we can begin to exercise the charms and packages.
<coreycb> beisner, ok good so that's all ready for a sync?
<beisner> coreycb, afaik, that should do it.  i locally syncd in a couple at random, which got me past "Unsupported cloud: source option trusty-liberty/proposed" and onto bigger/better issues ;-)
<beisner> coreycb, fyi, jp has landed the liberty pkg versions helper bits, and i had already landed liberty amulet origin/source bits.
<coreycb> beisner, great, I can work on the sync
<beisner> coreycb, awesome, thanks!
<coreycb> beisner, np!
<beisner> wolsen, coreycb - while extending/refactoring the rmq amulet tests, i'm running into some issues along the release edges (precise & vivid).  middle ground combos ok.   i'm going to start raising those as bugs, and only enabling amulet tests in my MP for known-passing combos, so I can finally get that refactor bit landed.
<beisner> wolsen, coreycb - first one is bug 1485722 ... will finish getting details on the precise issues and file separately.
<mup> Bug #1485722: rmq on >= vivid has mnesia (no data dir) <amulet> <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1485722>
<ennoble> lazyPower, marcoceppi: Yes, I submitted a bug for 1.23 https://bugs.launchpad.net/python-jujuclient/+bug/1455302 but the Actions() is completely broken for juju 1.24
<mup> Bug #1455302: enqueue_units doesn't correctly pass parameters to action <python-jujuclient:New> <https://launchpad.net/bugs/1455302>
<marcoceppi> ennoble: odd, I was able to use it
<ennoble> marcoceppi: the issue looks like if you have a '-' in the unit name it fails
<ennoble> marcoceppi: and the issue that I mentioned in my bug report prevents parameters from being specified (there is a fix in the bug report though)
<skylerberg> The following appears in the charm testing documentation: "a sub-directory named 'tests' will be scanned by a test runner for executable files matching the glob *.test"
<skylerberg> Is this true? or are all executables run?
<lazyPower> skylerberg: anything with chmod +x is executed
<lazyPower> there is no requirement of the file being [a-zA-z0-9].test
<skylerberg> Okay, I will file a bug against the documentation then. Thanks.
<lazyPower> thanks for finding that and pointing it out skylerberg
<beisner> coreycb, just hit this for the first time in an ol' stale bundle.  woot for good feedback!    ...blocked    Charm has reached end-of-life. Please use neutron-gateway charm.
#juju 2015-08-18
<gnuoy`> Tribaal, I have a more involved ch mp ( https://code.launchpad.net/~gnuoy/charm-helpers/hugepages/+merge/268214 ) if you have a sec that would be great but I understand if you don't :)
<Geetha> hi
<stub> marcoceppi: you dropped
<aisrael> lazyPower: http://www.fastcompany.com/3027907/what-engineers-at-facebook-pinterest-snapchat-airbnb-and-spotify-listen-to-while-coding
<lazyPower> haha the first playlist listed is a bunch o trance
<lazyPower> niiiice
<lazyPower> dece find, thanks aisrael
<aisrael> lazyPower: my pleasure!
<GS_> Hi, I am trying to deploy the mysql charm using the "juju deploy mysql" command on the ubuntu ppc64le platform, and the start hook is failing with exit status 1.. Can anyone please help me out on this?
<GS_> Hi, I am trying to install mysql on the ubuntu ppc64le platform using the "juju deploy mysql" command, but the start hook is failing. Can anyone please help me out on this?
<GS_> Hi, I am trying to install mysql on the ubuntu ppc64le platform using the "juju deploy mysql" command, but the start hook is failing with exit status 1. And it is working fine on the ubuntu x86 platform.
<apuimedo> lazyPower: jamespag`: any idea why `juju ssh` to lxc container is not working
<apuimedo> and also I can't ping or connect to openstack servers running on those machines
<GS_> Can anyone help me out on this? Why is the start hook failing on ppc64le?
<ddellav> GS_, when you run juju debug-log what error messages do you see?
<GS_> 2015-08-18 08:41:44 INFO config-changed Processing triggers for ureadahead (0.100.0-16) ... 2015-08-18 08:41:44 INFO config-changed Setting up mysql-server (5.5.44-0ubuntu0.14.04.1) ... 2015-08-18 08:41:45 INFO juju-log dataset size in bytes: 6845104128 2015-08-18 08:41:46 INFO config-changed mysql stop/waiting 2015-08-18 08:41:49 INFO config-changed start: Job failed to start 2015-08-18 08:41:49 INFO juju-log Restart failed, trying again 
<GS_> 2015-08-18 08:41:49 INFO config-changed stop: Job has already been stopped: mysql 2015-08-18 08:42:19 INFO config-changed mysql start/running 2015-08-18 08:42:20 INFO start mysql stop/waiting 2015-08-18 08:42:22 INFO start start: Job failed to start 2015-08-18 08:42:22 ERROR juju.worker.uniter.operation runhook.go:103 hook "start" failed: exit status 1
<ddellav> GS_, try: juju ssh mysql/0 that should get you onto the container with mysql installed. Then you can debug mysql directly. It looks like it didn't like some config options and couldn't start
<n3m8tz> Hi
<ddellav> i'd check /var/log/mysql or wherever the logs are on the container (it
<ddellav> (it'll be in the my.cnf)
<GS_> 150818  4:42:22 InnoDB: Completed initialization of buffer pool 150818  4:42:22 InnoDB: Fatal error: cannot allocate memory for the buffer pool 150818  4:42:22 [ERROR] Plugin 'InnoDB' init function returned error. 150818  4:42:22 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 150818  4:42:22 [ERROR] Unknown/unsupported storage engine: InnoDB 150818  4:42:22 [ERROR] Aborting
<lazyPower> apuimedo: in a charm school will be with you shortly
<apuimedo> ok, thanks
<rick_h_> GS_: this looks like the defaults issue that eco saw with the memory usage and the default dataset size.
<rick_h_> jcastro: have a link handy? ^
<rick_h_> aisrael: I think you were poking at it as well? ^
<rick_h_> GS_: see https://bugs.launchpad.net/charms/+source/mysql/+bug/1373862 if that matches?
<mup> Bug #1373862: MySQL doesn't deploy due to oversized dataset <mysql (Juju Charms Collection):Fix Committed by marcoceppi> <https://launchpad.net/bugs/1373862>
<aisrael> oh yes.
<rick_h_> GS_: so you might need to deploy with a different config value to get that going.
<aisrael> When I'm running locally, I: juju set mysql dataset-size='256M'
<aisrael> GS_: If you do that immediately after deploy, the charm should work.
<GS_> Thank you all...I will try out this.
<ddellav> :)
<ejat> anyone can help me with this error : http://paste.ubuntu.com/12119236/
<lazyPower> ejat: what substrate, charm, juju version, ubuntu version is this?
<ejat> lazyPower : im trying the openstack-install
<ejat> vivid
<lazyPower> ejat: openstack-install as in autopilot from landscape?
<ejat> yups ..
<ejat> lazyPower : the team in #ubuntu-solution helping me now .. thanks
<lazyPower> allright, if you need anything feel free to ping back :)
<jeand> hi all
<jeand> I pushed a new charm to my namespace on launchpad
<jeand> http://bazaar.launchpad.net/~jean-deruelle/charms/trusty/mobicents-restcomm-charm/trunk/files
<jeand> how long does it take to be indexed and available on the charm store at https://jujucharms.com/q/restcomm?series=trusty&type=charm ?
<rick_h_> jeand: it takes 1-2 hours atm. We've got two systems (legacy and modern) that have to be kept in sync, so one waits for the other to be ready
<jeand> Thanks rick_h_ for the information
<jeand> another thing
<jeand> I submitted a bug
<jeand> to have it officially in the charm store
<jeand> https://bugs.launchpad.net/charms/+bug/1473509
<mup> Bug #1473509: Mobicents RestComm Juju Charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1473509>
<jeand> how long does it usually take to get a review ?
<rick_h_> jeand: hmm, so there's a process for this. I think you have to invite or assign the charmers team to it.
 * rick_h_ looks up the docs around that
<jeand> it seems I can't assign it to anyone else than me
<jeand> "You may only assign yourself because you are not affiliated with this project and do not have any team memberships."
<rick_h_> jeand: https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms
<rick_h_> not assign but subscribe it looks like per the docs
<jeand> Thanks rick_h_ I should have RTFM better
<rick_h_> jeand: looks like charm proof doesn't like your yaml for the tags either atm
<rick_h_> jeand: it throws an error, it should be more like https://api.jujucharms.com/charmstore/v4/trusty/juju-gui-38/archive/metadata.yaml possibly
<rick_h_> or I've got a really old charm-tools (/me checks that next)
<jeand> rick_h_, when I run juju charm proof on my side it doesn't complain
<jeand> juju --version
<jeand> 1.24.4-trusty-amd64
<aisrael> What's `charm version` say?
<jeand> charm version
<jeand> charm-tools 1.5.1
<rick_h_> jeand: yea, fresh install here and no PPA so looks like I've got a really old charm-tools of 1.0.0 :)
<rick_h_> jeand: ok, so yea follow the process for the review queue, I'm not sure what the current times are on that atm though
<jeand> ok cool
<jeand> thanks for the help here
<rick_h_> np, good luck!
<jeand> looks like it's present on the charm store now !
<jeand> https://jujucharms.com/u/jean-deruelle/mobicents-restcomm-charm/trusty/0
<jeand> Thanks !
<rick_h_> jeand: very cool
<rick_h_> jeand: thanks for your patience. Once we kill off the old system we'll be making it a lot faster.
<aisrael> jeand: Your charm should pop up here within a couple hours: http://review.juju.solutions/
<aisrael> There's a bit of a backlog we're working through, unfortunately.
<jeand> no worries, thanks for the notice
<jeand> I'm testing a tentative bundle in the meanwhile
<coreycb> beisner, charm-helpers have been synced to openstack next charms
<beisner> coreycb, ok thanks.  fyi, fired off a trusty-liberty-proposed next deploy to see how we fare.
<lazyPower> mbruzek: do you have a spare cycle? i need a hot review on this for OIL - https://github.com/whitmo/etcd-charm/pull/17
<skylerberg> Is there a way to run amulet tests with a local charm? I want to specify a repository with a path, but I haven't seen anything besides the default resolver that checks the charm store when I use add.
<lazyPower> export JUJU_REPOSITORY=path/to/charm/repo
<lazyPower> ensure the amulet test doesn't specify cs:<series> and instead just declares the service
<lazyPower> for example if you're testing the charm "foo"
<lazyPower> amulet.deploy("foo")
<lazyPower> it should auto-pick the local copy of the charm to deploy
<skylerberg> lazyPower: Can I mix and match? I want to use some charms from the store and then my charm locally.
<lazyPower> well, amulet should only be deploying the local charm thats under load
<lazyPower> if you need multiple
<lazyPower> use the local:series/charm directive
<lazyPower> and make sure when you commit, it's using the proper resource locations that are not local:
<lazyPower> as that will just confuse CI and things will blow up on reporting
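Putting lazyPower's pointers together, a rough shell-side sketch (paths are placeholders; `juju test` assumes the charm-tools test runner is installed):

    export JUJU_REPOSITORY=/path/to/charm/repo
    # in the test itself: deploy the charm under test by bare name ("foo") so
    # the local copy is picked up, use local:trusty/bar for other local charms,
    # and leave charm store services on their cs: URLs
    juju test   # or execute the files under tests/ directly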
<lazyPower> alai:  https://github.com/whitmo/etcd-charm/pull/17 - could use your input on this
<alai> lazyPower, cool i'll take a look
<skylerberg> lazyPower: Thanks for the help, that should get me going on writing these tests.
<alai> lazyPower, +1 thanks for a quick patch
<beisner> wolsen, gnuoy`, coreycb - fyi - in resuming the rmq edge issues @ vivid-kilo, got the 2nd of two bugs filed and I'm calling the VK test disabled-ufn:
<lazyPower> apuimedo: i understand you're having issues with LXC on your cloud?
<lazyPower> which cloud provider is this?
<lazyPower> and how are the services deployed?
<beisner> wolsen, gnuoy`, coreycb :  bug 1486177
<mup> Bug #1486177: vivid-kilo 3-node native cluster race:  cluster-relation-changed Error: unable to connect to nodes ['rabbit@juju-X-machine-N']: nodedown <amulet> <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1486177>
<apuimedo> some OSt
<apuimedo> but it's the same I had with DO
<beisner> wolsen, gnuoy`, coreycb :  bug 1485722
<mup> Bug #1485722: rmq on >= vivid has mnesia (no data dir) <amulet> <openstack> <uosci> <nrpe (Juju Charms Collection):New> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1485722>
<apuimedo> I can't connect to lxc containers
<apuimedo> except from the instance they run on
<apuimedo> my main suspect is the arp filtering that cloud providers usually do
<beisner> wolsen, gnuoy`, coreycb - tldr:  half the time, one unit fails to cluster, and when all 3 do cluster ok, a separate blocker exists.
<lazyPower> apuimedo: this is the host only networking exception
<lazyPower> which version of juju? i was under the impression 1.24 removed this limitation, but i may be wrong
<lazyPower> let me fetch the guide that i did to fix this w/ overlay networking
<apuimedo> 1.24.4-trusty-amd64
<lazyPower> http://blog.dasroot.net/container-networking-with-flannel.html
<lazyPower> yeah it seems like the networking only works on certain substrates (aws, and openstack)
<apuimedo> I'm using OpenStack
<lazyPower> weird, i'll have to re-ping dimiter on it then
<lazyPower> its quite possible i am misinformed
<lazyPower> There are bolt on services you can use to work around this
<lazyPower> the problem is the LXC containers that are being spun up on the host are using 10.0.3.x addressing, and the juju state server has no means to communicate with them
<lazyPower> networking addons like the fan, calico, flannel, et al. are designed to offer an SDN approach to fixing this
<apuimedo> the juju state server works
<lazyPower> but this also requires the LXC containers to be reconfigured to attach to that networking bridge that the state server can connect to
<apuimedo> the relations are established
<lazyPower> so, you can reach the LXC container based service, from the state server
<lazyPower> but not when you juju ssh <service>/<unit> ?
<apuimedo> I think it is most likely a matter of the undercloud having a too strict security group
<apuimedo> yeah, juju ssh does not work
<lazyPower> thats a bug
<lazyPower> do you mind filing it and linking me? i'd like to track this and reference it when i poke dimiter about it
<apuimedo> ok ;-)
<lazyPower> we'll want the output from juju status,  the service attempting to connect to via juju ssh, any verbose debug logging, and which cloud provider if applicable
<apuimedo> ok
<lazyPower> alai: fix is upstream in ~kubernetes namespace, open review item here - https://code.launchpad.net/~kubernetes/charms/trusty/etcd/trunk/+merge/268373
<lazyPower> which is for the charm store copy in ~charmers namespace.
<alai> cool... testing it now
<marcoceppi> rbasak: you around? I have a packaging question
<apuimedo> lazyPower: from what I see in my machines
<apuimedo> there's only eth0 that has the "public ip" and lxcbr0 that gets a 10.0.3.1/24
<apuimedo> but that has no link device
<apuimedo> only veths to the lxc containers
<apuimedo> I can't see how the different lxc containers could be able to talk to each other
<lazyPower> apuimedo: correct
<lazyPower> there's no cross host networking by default in juju w/ those containers
<lazyPower> this is why SDN as a work around exists today
<lazyPower> and juju is slowly growing support for cross-host networking natively w/ the juju networking modules.
<apuimedo> lazyPower: so I would have to run flannel for that?
<apuimedo> which is the simplest solution?
<lazyPower> flannel, calico, the fan,  just to name a few that we have charms for.
<apuimedo> fan?
<lazyPower> https://insights.ubuntu.com/2015/06/22/fan-networking/
<lazyPower> i use flannel quite a bit because it works cross host over an encrypted ip tunnel. it's quite slow, requires etcd to do coordination, but it gets the job done.
<apuimedo> lazyPower: is there one for the flannel integration?
<apuimedo> article, I mean
<lazyPower> apuimedo: the article covers the bases of how to properly get hosts talking
<lazyPower> it requires deploying the flannel charm first, then deploy --to lxc: on that host
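
Roughly, that flow might look like the following; the charm name "flannel", the target machine number, and the mysql payload are illustrative assumptions, not verified charm store URLs:

    import subprocess

    # Deploy the SDN layer onto a host first...
    subprocess.check_call(["juju", "deploy", "flannel"])
    # ...then place container-bound services on that host with the lxc
    # placement directive (machine 1 is a placeholder).
    subprocess.check_call(["juju", "deploy", "mysql", "--to", "lxc:1"])
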
<cholcombe> lazyPower, do you have RSI in your hands?  You seem to type more than anyone i've ever seen :)
<lazyPower> cholcombe: Richardson Space Industries? nah... it would be cool if i had star citizen in my hands :D
<cholcombe> lol
<cholcombe> no repetitive stress injury
<lazyPower> well, to a degree, yeah
<lazyPower> my hands hurt on a regular basis
<lazyPower> you'll see me massaging my hands in hangouts if you're looking
<cholcombe> yeah
<cholcombe> seems to be a common problem with engineers
<lazyPower> years of being a keyboard cowboy i suppose takes its toll
<lazyPower> some day we'll get hazard pay for it
<cholcombe> yep
<cholcombe> haha i wish
<lazyPower> it was terrible cholcombe, there were cheese cakes, pizzas and beers everywhere. Our hands were cramping and we couldn't even hold on to any of the goodies!
<cholcombe> hahah
<cholcombe> lazyPower, well the joints in my hands are starting to sting.  i've been wondering what others are doing to alleviate it
<cholcombe> lazyPower, i didn't mean to side track ya :)
<lazyPower> all good man, i'm on a roll over here. running support and hacking on kubes
<cholcombe> nice!
<skylerberg> Is it standard practice to upload partially complete juju charms to our personal namespaces or should I not do that until I think it would be usable by others?
<skylerberg> I am not sure if I am supposed to use that as version control or as a publishing platform.
<hazmat> skylerberg: publishing not vcs
<hazmat> skylerberg: vcs in git(hub) ;-)
<hazmat> lazyPower: plus, vs the fan, it supports aws and gce native backends, no ip-in-ip
<hazmat> lazyPower: what's the state of the art on monitoring
<lazyPower> hazmat: in the jujuverse?
<lazyPower> hazmat: i'm assuming thats what you're asking for - we haven't had extra cycles to devote to it that i'm aware of. the community may be working on additional componentry like prometheus or carbon.  The question is rather contextless at this point.
<hazmat> lazyPower: fair enough, jujuverse.. best charms for monitoring for context.. say big data
<lazyPower> hazmat: we recently published to the mailing list a rather large story about syslog analytics
<lazyPower> including bundle
<hazmat> oh.. cool.
<lazyPower> thats state of the art w/ big data and monitoring + some interesting ganglia metrics coming
 * hazmat trawls archives
<lazyPower> as it has native integration with the big data components
<skylerberg> exit
<skylerberg> oops, lol
<marcoceppi> o/ skylerberg thanks for the contributions so far
<skylerberg> marcoceppi: No prob. And thank y'all for being really responsive whenever I run into any issues.
<plars> wget: unable to resolve host address ''ubuntu-14.04-server-cloudimg-amd64-root.tar.gz'
<plars> getting this when trying to deploy things to lxc under maas
<plars> here's a more complete log, any ideas? http://paste.ubuntu.com/12121232/
<skylerberg> In amulet I cannot find a way to relate a service that is deployed locally. The problem is that the service name is "local:trusty/cinder-tintri". Then when I try to add the relation I have to provide a string like "local:trusty/cinder-tintri:storage-backend". The extra colon then confuses the relate function.
<marcoceppi> skylerberg: the "local:trusty/cinder-tintri" is the name of the charm, the service is just "cinder-tintri"
<marcoceppi> so, it's just cinder-tintri:storage-backend
<marcoceppi> s/name/url of the charm
<skylerberg> marcoceppi: If I try that I get a ValueError on deployer.py:261 saying that the service is not deployed. I think the whole "local:..." string is being stored as the name of the deployed charm inside amulet.
<marcoceppi> skylerberg: can you share your amulet test file?
<skylerberg> yeah, just a sec
<marcoceppi> something like gist or paste.ubuntu.com should suffice
<skylerberg> http://paste.ubuntu.com/12121295/
<marcoceppi> skylerberg: ah, that's why, don't call the CINDER_TINTRI charm "local:trusty/cinder-tintri" - that should just say cinder-tintri. amulet will detect when the test lives inside a charm whose name matches the deploy line, and use that local copy instead of trying to resolve a charm store address
<marcoceppi> skylerberg: you'll also want to make sure you have amulet 1.11.0 installed, it has some fixes for testing with more recent versions of deployer/juju
<skylerberg> marcoceppi: Changing it to just "cinder-tintri" gives me a 404 message about the charm not being found. I just checked the version and it is 1.11.0.
<marcoceppi> skylerberg: and this test resides within the tests directory in the charm and the charm directory is named "cinder-tintri" ?
<skylerberg> Yes, that is correct.
<marcoceppi> skylerberg: that shouldn't be happening
<skylerberg> If you can point me to the line where it checks if the charm name matches the directory then I should be able to insert a debugging print statement and figure out why that isn't triggering.
<marcoceppi> skylerberg: https://github.com/juju/amulet/blob/master/amulet/charm.py#L54
<marcoceppi> skylerberg: that's coming from https://github.com/juju/amulet/blob/master/amulet/deployer.py#L115
#juju 2015-08-19
<skylerberg> marcoceppi: charm == 'tintri-cinder', self.test_charm == 'deployer'
<marcoceppi> skylerberg: that's so very wrong
<marcoceppi> skylerberg: how are you running the tests?
<marcoceppi> skylerberg: charm_name (and test_charm) are derived from either os.getcwd() or if JUJU_TEST_CHARM is in the environment
<skylerberg> I am running them with python, but I was in the wrong directory I think. I just realized deployer was the directory I was in. However, it doesn't work in tests either.
<marcoceppi> to get around this you can export JUJU_TEST_CHARM as cinder-tintri, but this should work if you're either running bundletester, juju test, or the test file directly (a la tests/test-file-name)
<marcoceppi> skylerberg: all the test runners execute from the charm directory root, not the tests directory
<marcoceppi> python tests/name-of-test should work from the root
<skylerberg> Okay, I will give that a try
<skylerberg> Thanks, that looks like it solved that problem.
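
Putting marcoceppi's pointers together, the working test looks roughly like the sketch below, run from the charm root as python tests/10-deploy; treat the store charm and the exact endpoint names as assumptions:

    import amulet

    d = amulet.Deployment(series="trusty")
    d.add("cinder-tintri")                     # bare name matching the charm directory
    d.add("cinder", charm="cs:trusty/cinder")  # pulled from the charm store
    d.relate("cinder-tintri:storage-backend", "cinder:storage-backend")
    d.setup(timeout=900)
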
<beisner> hi coreycb, looks like ceph/next needs the c-h sync
<beisner> coreycb, also fyi, that bit i mentioned earlier buggified:  T-L deploys are blocked on bug 1486293
<mup> Bug #1486293: trusty-liberty - multiple packages cannot be authenticated <amulet> <openstack> <uosci> <keystone (Juju Charms Collection):New> <https://launchpad.net/bugs/1486293>
<coreycb> beisner, ceph's updated now and I'll take a look at the liberty issue
<stub> bcsaller: https://bugs.launchpad.net/charms/+source/postgresql/+bug/1486257 could be a real life use case for composer and relation stubs.
<mup> Bug #1486257: make port configurable and send port+protocol on the syslog relation <postgresql (Juju Charms Collection):New> <rsyslog (Juju Charms Collection):New> <https://launchpad.net/bugs/1486257>
<magicaltrout> hello chaps
<magicaltrout> quick question
<magicaltrout> if I want to require tomcat, but specifically tomcat 7
<magicaltrout> can I configure that in metadata.yaml?
<plars> wget: unable to resolve host address ''ubuntu-14.04-server-cloudimg-amd64-root.tar.gz'
<plars> getting this when trying to deploy things to lxc under maas
<plars> here's a more complete log, any ideas? http://paste.ubuntu.com/12121232/
<plars> marcoceppi maybe? or any idea who to ask? The only significant difference I can see in this environment vs. the one I have at home (that works) is the working one has juju 1.24.4 and the non-working one has 1.24.5, but I haven't updated the one at home to see if it breaks
<beisner> gnuoy, coreycb - along the lines of liberty uca enablement in the charms, i've selectively synced the fetch bits into mongodb as a merge proposal for review @ https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
<gnuoy> beisner, why a selective sync? Does a full sync break mongo?
<beisner> gnuoy, coreycb - i went minimal since it's not in the next cadence, and didn't want to introduce other potential issues with a full sync.  but i will if you think that'd be best.
<beisner> gnuoy, i'll re-sync all of it now and let tests run.
<marcoceppi> plars: is there any proxy in place in this environment?
<plars> marcoceppi: no
<marcoceppi> plars: it's trying to download the trusty template from the bootstrap node, https://10.101.49.149:17070/environment/4861d530-e810-405c-8f57-4b686db9581e/images/lxc/trusty/amd64/ubuntu-14.04-server-cloudimg-amd64-root.tar.gz but it doesn't appear to be working
<marcoceppi> actually
<marcoceppi> weird
<plars> marcoceppi: yeah, it seems to complain about the certificate?
<marcoceppi> it doesn't have a server name for the wget line
<plars> marcoceppi: then later it's just... yeah, bad url
<marcoceppi> looks like some weird logic tree, possibly an error. what base environment are you using? MAAS?
<plars> marcoceppi: trusty+ppa, maas version is 1.8.0+bzr4001-0ubuntu2~trusty1
<plars> marcoceppi: maas version is the same as the one I have at home, and works fine there
<marcoceppi> plars: if possible I'd try downgrading to 1.24.4 - this might be a regression
<plars> marcoceppi: that was going to be the next thing I tried, would I just need juju and juju-core?
<plars> marcoceppi: and do you have a link to the old version somewhere?
<marcoceppi> plars: just juju-core, juju is a meta package
<marcoceppi> if you're on x86 I could upload a file, I don't think they're just floating around
<marcoceppi> plars: http://ppa.launchpad.net/juju/stable/ubuntu/pool/main/j/juju-core/
<plars> marcoceppi: thanks, downgrading it now, then I'll redeploy everything
<g3naro> how do i juju deploy local and set the bridge device  to use?
<g3naro> without changing /etc/lxc/default.conf ?
<plars> marcoceppi: it hasn't given up yet, but I ran juju status in another session and it still seems to have the same problem after downgrading: http://paste.ubuntu.com/12125553/
<lazyPower> g3naro: to clarify, you want to use a juju local provider, on a different networking bridge, but not edit the default lxc network bridge?
<puzzolo> strange problems with juju. My lxc machines dont get ips, while maas offers them
<g3naro> what is the configuration in /etc/lxc/default.conf ?
<puzzolo> LXC_AUTO="true"
<puzzolo> USE_LXC_BRIDGE="false"  # overridden in lxc-net
<puzzolo> [ -f /etc/default/lxc-net ] && . /etc/default/lxc-net
<puzzolo> LXC_SHUTDOWN_TIMEOUT=120
<puzzolo> even configuring lxc statically wont do the trick
<marcoceppi> plars: this seems like a misconfiguration somewhere, can you confirm that the agent-version for node 0 is 1.24.4?
<puzzolo> lxc-net, which overrides lxc.conf uses lxcbr0
<plars> marcoceppi: hmm, no it seems to be 1.24.5, but I downgraded... does it cache somewhere?
<g3naro> hmm do you have the lxcbr0 made?
<plars> marcoceppi: and I did re-bootstrap after downgrading
<marcoceppi> plars: agent stream will always look for latest tools, you may need to bootstrap with an explicit version
<plars> marcoceppi: of course, I just downgraded juju-core, not juju
<plars> marcoceppi: how do I do that?
<marcoceppi> plars: I'm not entirely certain now
<puzzolo> g3naro: machine was provisioned by juju. lxcbr0 is there. Still in the container's config i get: lxc.container.link = juju-br0
<puzzolo> i changed the config to not use the lxc bridge. lxcbr0 disappeared
<puzzolo> problem is still there
<g3naro> ok
<g3naro> try this
<g3naro> edit your ~/.juju/environments.yaml configuration
<g3naro> you have an option to specify the bridge device there
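
For reference, the stanza being suggested would live in ~/.juju/environments.yaml; a sketch validated with PyYAML, assuming the local provider's "network-bridge" key is the option g3naro means and br2 is your existing bridge:

    import yaml

    snippet = """
    environments:
      local:
        type: local
        container: lxc
        network-bridge: br2
    """
    print(yaml.safe_load(snippet)["environments"]["local"])
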
<puzzolo> i am no expert, as this is my first deployment with lxc containers, and i did not quite get which layer of configuration comes first. For each container I get a config file which points to juju-br0, which would expose all containers to maas dhcp. And that should be ok, making the services reachable. lxcbr0 instead is a natted network for lxc containers. This seems like a bug to me.. still I can't get to understand why it fails.
<puzzolo> juju-br0 is the right bridge for lxc-containers or should services be natted?
<puzzolo> strange is that with the default deployment, containers do point to juju-br0, do ask dhcp .. and maas does offer them. ufw is disabled everywhere. I must use tcpdump at least to see where the udp/tcp breaks
<asanjar> kwmonroe: updated and tested hbase with jujubigdata4
<kwmonroe> excellent asanjar!
<plars> marcoceppi: yeah, sshing to node 0 I see that it had the old version, but also seems to have downloaded 1.24.5
<puzzolo> my problem seems to be related to juju-br0 bridged to eth0, which goes to the maas internal network. juju-br0 does receive dhcp offers from maas for all containers, but won't "redirect" them to the veth interfaces
<lazyPower> puzzolo: is this a LXC networking reachability issue you're encountering? eg: Juju deploy --to lxc:# and then attempting to relate/route to those containers fails?
<jogarret6204> hi all.   I have juju 1.24.3 upgrade to 1.24.5 stuck.  anyone tell me how to "undo" or kick it along to finish?
<plars> marcoceppi: is there some way to force it to downgrade? or to debug where things might be going wrong and why it's getting those errors in 1.24.5?
<lazyPower> jogarret6204: any further details than the upgrade is stuck? has the bootstrap node completed the upgrade cycle and its now stuck pushing out to the agent nodes?
<lazyPower> jogarret6204: also do you have any debug log output during the upgrade that could help point us to a root cause? juju debug-log can assist here, but you might have to specify a length and scroll back through the election spam if its been a long while since you initiated the upgrade   juju debug-log -n <number>
<puzzolo> lazyPower: indeed. I tried deploying a juju charm, landscape-maas-dense. It installs apache2 on phys0, inquiring maas for a new node. On that machine it builds 5 lxc containers. Everything ok, till the containers attached to juju-br0 need an ip. They'll ask for it. Maas will provide them, but the acks all stop at juju-br0.
<jogarret6204> lazyPower: ERROR WatchDebugLog not supported
<jogarret6204> .
<jogarret6204> dont think bootstrap node is upgraded either
<puzzolo> it is truly frustrating.. as i can't get these containers to get an ip, or understand where i am wrong.
<jogarret6204> machine 0: agent-version: 1.24.3.1
<lazyPower> dimitern: ping
<lazyPower> jogarret6204: which provider are you using?
<jogarret6204> I'm in no hurry this is a lab.
<jogarret6204> maas
<lazyPower> hmm, its not giving you a debug log, thats a weird bug
<jogarret6204> I have machine 0 as a VM.  I can cat the machine-0 log there...
<lazyPower> puzzolo: we had some networking changes land in 1.24 that should have addressed that
<lazyPower> but it appears we've missed the TZ window to contact the developer, let me put out some feelers and see what i can turn up about this
<lazyPower> i've run into this with other substrates however, where LXC networking isn't adding the forwarding rule to the machines and therefore the containers are unreachable outside of the host
<dimitern> lazyPower, pong
<lazyPower> dimitern: heyo sorry for the late ping :)
<dimitern> lazyPower, no worries :) what's up?
<puzzolo> lazyPower: it felt like a bug. Anyway to check this is my issue?
<lazyPower> dimitern: i've had a couple questions over the networking in 1.24 wrt lxc containers. Correct me if i'm wrong but on certain substrates the forwarding should "just work" and cross host container communication w/ lxc should be enabled?
<lazyPower> puzzolo: this conversation above is related to your scenario :)
<puzzolo> i guessed that guys :*
<lazyPower> jogarret6204: if you can grab the all-machines log that would be helpful
<lazyPower> jogarret6204: that machine-0 log might have the details of whats happening with the upgrade as well, so thats probably a good place to start
<jogarret6204> relevant log message seems to be this one
<jogarret6204> https://10.20.0.36:17070/environment/b2c12e86-2a25-4d90-886b-e595c500f432/tools/1.24.5-trusty-amd64
<jogarret6204> sorry - was trying it.. all of it
<jogarret6204> failed to fetch tools from "https://10.20.0.36:17070/environment/b2c12e86-2a25-4d90-886b-e595c500f432/tools/1.24.5-trusty-amd64": bad HTTP response: 400 Bad Request
<jogarret6204> I see my lab proxy changed when I try that...  let me go fix that
<puzzolo> lazyPower: can this "forwarding" thingie be put in by hand, or should i just wait for the devs to do the dev's things?
<lazyPower> puzzolo: i'm  not certain what is actually put on the host in terms of forwarding, so i'm pending a response from dimitern
<puzzolo> lazyPower: we'll wait together then. Strange is that those containers do reach maas, but get nothing back.
<puzzolo> req ok. offer ok. never "acks"...
<lazyPower> weird
<puzzolo> but it sure is a forwarding issue, as static ips won't do either
<lazyPower> yeah, i think its just an iptables rulechain or route thats added
<lazyPower> i'm not sure which
<lazyPower> its been a while since i've dealt with this by hand
<jogarret_6204> lazyPower: fixed proxy.  logs still showing upgrade in progress in error logs of juju state VM.  but nothing is upgraded.  can I back out and redo the upgrade?
<jogarret_6204> On puzzolo issue - you guys check ebtables and ufw too?
<lazyPower> marcoceppi: do you know if there is a way to stop an in progress upgrade?
<dimitern> lazyPower, hmm in 1.24 we have this only in a few places
<lazyPower> dimitern: AWS, and MAAS correct?
<puzzolo> lazyPower: sure, this thing affects quite a bunch of charms. I tried landscape just to get a feeling for what the openstack deployment will be like.
<dimitern> lazyPower, a few more - AWS and MAAS with address-allocation feature flag
<lazyPower> ahh so thats behind a feature flag during deployment?
<dimitern> lazyPower, otherwise by default only on MAAS
<lazyPower> puzzolo: you're using maas correct?
<dimitern> lazyPower, yeah, it's possible to lift it for 1.25, but it's not decided yet
<firl> yay, I will be able to come to the juju charm summit
<lazyPower> firl: awesome!
<firl> yeah, I work remote in texas but the office I work with is 2 miles from the summit so it's perfect
<jcastro> marcoceppi: which would you say is a good hello world for explaining actions to people
<lazyPower> jcastro: a great one would be to get troubleshooting information from a system
<lazyPower> dump the versions of software, configuration data, and relevant things from the service
<jcastro> sorry I meant, hello world example charm
<lazyPower> o
<jcastro> like, one I can point to people and then explain
<lazyPower> i got meta there for a second, sorry
<jcastro> no worries
<lazyPower> etcd has a system health action
<lazyPower> very simple to parse
<lazyPower> https://github.com/whitmo/etcd-charm/blob/master/actions/health
<jcastro> cool, any others?
<lazyPower> i have some actions for DroneCI
<lazyPower> https://github.com/chuckbutler/drone-ci-charm/tree/master/actions
<jcastro> perfect, anyone else have some decent actions they want to share?
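
A truly minimal "hello world" action is an actions.yaml entry plus an executable under actions/; a sketch (the action name and payload are made up, but action-set is the hook tool an action uses to report results):

    #!/usr/bin/env python
    # actions/health -- assumes actions.yaml declares:
    #   health:
    #     description: Report basic service health.
    import subprocess

    def main():
        # Gather something trivial to report; a real action would run checks.
        uptime = subprocess.check_output(["uptime"]).decode().strip()
        subprocess.check_call(["action-set", "uptime=%s" % uptime])

    if __name__ == "__main__":
        main()
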
<puzzolo> lazyPower: maas
<puzzolo> jogarret_6204: i checked ufw only. defaulting it to accept all, or even disabling it. From maas down to phys to even lxc containers
<puzzolo> jogarret_6204: didn't try ebtables.. in all these years i've never had to custom-firewall bridge ports
<puzzolo> [    8.062781] IPv6: ADDRCONF(NETDEV_UP): vethQ3EQVL: link is not ready
<puzzolo> [    8.062787] juju-br0: port 2(vethQ3EQVL) entered forwarding state
<puzzolo> [    8.062790] juju-br0: port 2(vethQ3EQVL) entered forwarding state
<puzzolo> [    8.065762] juju-br0: port 2(vethQ3EQVL) entered disabled state
<jogarret_6204> puzzolo:  Those are just some things I have tried.  I'm not experienced with containers.
<plars> marcoceppi: it's worth noting that if I use the --no-check-certificate flag to wget, I can get it to download from the url in the error, but I don't think I can inject that anywhere
<plars> marcoceppi: It downloads ok, but still gives: WARNING: certificate common name '*' doesn't match requested host name '10.101.49.149'
<puzzolo> on maas, everything seems to be working just fine
<puzzolo> Aug 19 20:23:01 maas-1 dhcpd: DHCPDISCOVER from 00:16:3e:43:ce:3f (juju-machine-11-lxc-0) via eth1
<puzzolo> Aug 19 20:23:01 maas-1 dhcpd: DHCPOFFER on 10.1.1.136 to 00:16:3e:43:ce:3f (juju-machine-11-lxc-0) via eth1
<puzzolo> tell me if i'm flooding too much guys. Bridge configuration, seems to be outstanding
<puzzolo> http://pastebin.com/JHvQRs3s
<lazyPower> puzzolo: I'm not positive on where to go from here. I need to talk with dimiter more tomorrow in the AM to get a better view of whats implemented and how to consume it
<lazyPower> puzzolo: once i've had that conversation i should be able to lend a better helping hand. You're the third user this week that's had container networking issues, and we've got work landed that should make any manual intervention moot
<lazyPower> juju should be able to do what is right for that networking component in the stack, and if we do anything by hand it's not as reproducible as juju doing it for you :) so i hesitate to help in the creation of a snowflake
<puzzolo> a small bit different in my setup, is that i am using a kvm machine with libvirt bridged networking.
<lazyPower> puzzolo: i can circle back tomorrow if you're going to be here, i'll also be pinging the list with my findings to help the broader user audience in general.
<plars> marcoceppi: I tried to fake it out by resetting the symlink under /var/lib/juju/machine-0, but it replaced it. Also tried just replacing the 1.24.5 binary with the 1.24.4 one, but it just gets stuck in an endless update loop if I do that, and doesn't let me deploy :(
<puzzolo> thank you for your time. I'll check what those users had on lxc networking, to get a better picture myself
<lazyPower> puzzolo: sorry i didn't have better info today. this is very much new stuff that we've been working on for the last cycle
<lazyPower> so its a known problem, but we've got some tools out there to help, i just need to re-up on that info and we should have you sorted in short order
<puzzolo> no problem. One thing did catch my eye now, on lds-1, the first service unit phys machine, where containers are deployed for the other services in the charm bundle:
<puzzolo> 2015-08-19 18:39:28 INFO juju.networker networker.go:163 networker is disabled - not starting on machine "machine-11"
<plars> ah, hang on, it was way easier. I didn't see that upgrade-juju takes a version parameter, and it doesn't seem to try to circumvent what you specify there :)
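
The pin plars landed on, for reference; a sketch assuming the 1.x flag spelling:

    import subprocess

    # Explicitly request 1.24.4 so the environment stops chasing latest tools.
    subprocess.check_call(["juju", "upgrade-juju", "--version", "1.24.4"])
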
<plars> marcoceppi: ok, I'm getting farther... once it's completely done, I'd like to file a bug on this. Is lp the best place for bug reports? github?
<plars> marcoceppi: I've just about convinced myself it's a regression in 1.24.5
<puzzolo> lazyPower: mumble mumble, can it actually be a problem regarding my managed switch?
<lazyPower> i wouldn't think so, but its possible
<beisner> gnuoy, coreycb - mongodb c-h sync for liberty uca - mp ready for review @ https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
<coreycb> beisner, I'll have to defer to gnuoy.  I can't land to mongodb
<beisner> coreycb, ok np, thanks.   fyi, mbruzek is on the mysql sync review as soon as tests complete.
<beisner> o/ mbruzek :-)   the queue was thick today, but that mysql amulet test is next up.
<puzzolo> upgrading to 1.24.5 over 1.24.4
<mbruzek> in a meeting beisner, I will get to that after that
<beisner> mbruzek, yep np, the test will be +~1hr i imagine.  thanks a ton.
<puzzolo> lazyPower: http://askubuntu.com/questions/615433/landscapes-openstack-installation-fails-due-to-containers-unable-to-obtain-an-i after a couple of hours of diggin' around, the guys in that link seem to have the same issue as me. I am not using esxi, but libvirt alone, with kvm as phys0 where the lxc containers get created. I will try to see if my problem is related to libvirt rather than juju.
<puzzolo> I am starting to think that the problem is related to macvlan on libvirt
<puzzolo> which is a sort of cannibalized bridge, working splendidly for kvm networking... but is not aware of the lxc containers' mac addresses inside kvm.
<lazyPower> puzzolo: that is very possible
<lazyPower> sorry i've been unresponsive, in a community hangout with some users of our k8's bundle
<lazyPower> split brain today :)
<skylerberg> I need to change the configuration in nova-compute (I need to pass in some nfs protocol settings). However, there doesn't seem to be an applicable interface for me to connect to from my charm. What is the way forward? Add an interface to nova-compute? Hijack an interface meant for something else?
<lazyPower> skylerberg: is this just for adding NFS support? or is this to extend into a new region of NFS goodness?
<lazyPower> skylerberg: meaning the NFS charm relation
<skylerberg> lazyPower: I just need to edit a couple of settings in nova.conf (nfs_mount_options and maybe cpu_mode).
<lazyPower> skylerberg: its a bad idea to have something else editing nova.conf outside of the nova charm
<lazyPower> that smells of config race conditions should nova ever receive a hook event firing that will re-write the template
<lazyPower> iirc there is a config manager lib, that you can send data to over the relation. we have a similar construct in the cinder charm
<lazyPower> beisner: does nova have the same config manager that cinder does?
<lazyPower> skylerberg: let me loop in our test master of openstack, if thats the case i should be able to get you a recommended path forward that has a high likelihood of getting accepted as a PR for the charms
<skylerberg> lazyPower: Right, so I am using the cinder storage-backend relation and that is working great. I just don't see an equivalent in nova.
<skylerberg> lazyPower: It might be nice to have a generic interface for editing nova's config. Right now it looks like all the interfaces are very specific (ceph, rabbit, etc.)
<lazyPower> skylerberg: thats by design as its p2p orchestration in that instance :)
<lazyPower> which makes it very clear to anyone thats relating into nova, what data is being sent/received
<lazyPower> and allows nova to respond in kind based on whats incoming, eg: installing any storage drivers, et al.
<beisner> hi lazyPower - they both use the contexts approach with regard to collecting config data and rendering the conf template, which keeps things safe-ish.
<lazyPower> beisner: so is it safe to link to the cinder-vnx charm to illustrate how that should be used?
<lazyPower> or are they different context managers?
<puzzolo> this problem is killing me
<beisner> not sure i'm fully following.  i think the cinder-vnx + cinder charms are a good example of a subordinate affecting a principal's config if that's what you mean.
<lazyPower> beisner: well skylerberg is wanting to add NFS storage backend to nova
<lazyPower> and i'm recommending using the context approach to add that config update, vs a racey interface approach that edits the config outside of the context aware template approach
<beisner> oh neat, for instance image storage?
<beisner> if so, we already have nova-compute plumbed for ceph-backed instance storage, and it may be worth looking at that code
<beisner> caveat there of course is:  all instance disk i/o then traverses the wire, so the network needs to be ROCKING.
<lazyPower> skylerberg: seems like you've got some template code you can consume, and i would recommend adding an interface unless you're going to recycle the same data coming from nfs, then use the NFS provided interface :)
<lazyPower> sorry that was a bit roundabout, i just wanted to make sure you were set up for success vs running into a racing config scenario. I've had my fair share of those and have lost sleep over it.
<beisner> +1000 for race avoidance
<skylerberg> lazyPower, beisner: So the way forward is to add an interface to nova-compute that uses the same type of mechanism as the storage-backend interface in cinder to update the config?
<beisner> lazyPower, skylerberg - so re: design logic in adding new features, i'd prefer to pull in one of the folks like coreycb, gnuoy, wolsen or dosaboy who are primary authors.  i become familiar with the code paths through testing, but i don't generally pave new paths in these charms as such.
<coreycb> skylerberg, sorry, catching up
<beisner> that, and inspecting how we've already done the ceph-backed instance storage in nova-compute.
<coreycb> skylerberg, there's a config-flags option in nova.conf to edit general config options, but be careful
<coreycb> skylerberg, in nova-compute that is
<coreycb> skylerberg, nevermind me, saw you had a question about general nova.conf settings earlier
<skylerberg> coreycb: It seems like I would either need to still use an interface so that users could connect my charm to nova and then my charm would set the config-flags or I would need to have instructions with my charm saying that they need to set this option on nova-compute, which seems less than ideal.
<skylerberg> coreycb: What I need to do with nova-compute is pretty simple, it just needs to set a couple config options and make sure the service is restarted so that the config is loaded. It should be a lot like how cinder-vnx connects to cinder.
<wolsen> skylerberg, this configuration setting needs to be applied after a subsequent charm is related to it?
<skylerberg> wolsen: That is what I am imagining. Someone deploys cinder and nova-compute, then they connect my charm to both to alter their configurations to use the proper storage backend.
<wolsen> skylerberg, yeah unfortunately there's not a generic interface that exists to relate nova and cinder backends, its more specific (e.g. the ceph-client interface which ceph can be related with nova-compute)
<wolsen> skylerberg, though there's the possibility of having a shared-storage relation or something similar that may be more generic
<skylerberg> wolsen: I think my use case is even more generic than relating storage backends, it is just editing the config settings. Could we add a generic config setting interface?
<wolsen> skylerberg, sorry my son & I are both sick today - can I get back to you?
<skylerberg> wolsen: Yeah, no problem. Take it easy.
<beisner> thanks skylerberg, lazyPower - i've also got to run, eod.  i think most of the openstack charmers are also end-of-day.  it may be worth starting a thread on the juju mailing list to state the end goal, gather input, ideas, etc.
<beisner>  (thanks too, coreycb wolsen  )
<wolsen> skylerberg, one of my concerns is that adding a generic config interface for the charms will now compete with the charm config itself
<wolsen> skylerberg, so it becomes ambiguous if both the charm itself and a related charm specify "create this config option with this value" - which one is the right one?
<skylerberg> wolsen: I see. Yeah, I will think about what makes sense for my use case and get back to you.
<puzzolo> lazyPower: nothing. I tried everything i could, from kernel flags on the bridge to promisc and "allmulti" options on all layers of network interfaces. Still nothing. Hope to hear from you and dimiter tomorrow
<lazyPower> puzzolo: i've got you at the top of my list to have that discussion before i switch feet back to k8's :)
<puzzolo> thank you dude :*
#juju 2015-08-20
<tasdomas> hi
<tasdomas> what is the best way to run bundletester on my charm in a "pristine environment" ?
<anastasiamac> tasdomas: m afraid to ask :) what do u consider "pristine"?
<tasdomas> anastasiamac, basically as close to the automated test runner used by the juju charms team
<g3naro> can i search juju modules from the cli ?
<rick_h_> g3naro: juju charms?
<g3naro> yeah
<rick_h_> g3naro: hmm, we're working on a store cli plugin but it's not public yet that does have some search abilities.
<rick_h_> I don't recall if the eco folks had a plugin for that. marcoceppi lazyPower was there something you all had in charm helpers/tools?
<g3naro> ohh k
<marcoceppi> rick_h_: g3naro `juju charm search`
<g3naro> nice 1 bro
<puzzolo> lazyPower: ping
<lazyPower> pong puzzolo
<magicaltrout> help, bzr newbie here
<magicaltrout> https://jujucharms.com/docs/devel/authors-charm-store i completely suck and don't understand how to get my bzr repo into launchpad
<magicaltrout> charm/trusty/saikuanalytics I have a folder structure like that and created a bzr repo in the charm directory
<magicaltrout> but can't figure out how to get it into launchpad, I just get No such source package
<beisner> magicaltrout - still around?
<redelmann> Hi, is there any juju-restart function/method in charmhelper?
<redelmann> or just need to call subprocess.check_call
<redelmann> marcoceppi, ^?
<redelmann> juju-reboot, sorry
<marcoceppi> redelmann: not sure, let me check
<redelmann> marcoceppi, i searched the charmhelpers docs and didn't find anything
<marcoceppi> redelmann: then not yet, you're welcome to submit a patch though
<redelmann> marcoceppi, ok, thank you
<skylerberg> beisner, wolsen, coreycb: To follow up on our conversation yesterday about connecting to nova-compute. I think it will be best to keep it simple and just use the config-flags options in nova-compute. I can use my README to communicate that users need to do this.
<skylerberg> However, I see that config-flags takes a comma separated list. One of the values I need to pass in does itself have a comma in it.
<skylerberg> Without having looked at the code, I am not sure if nova-compute will choke on a config flag that has a comma in it.
<whit> what's the command that shows all the tools available to a charm?
<whit> ah.. juju help-tool
<puzzolo> jo lazyPower any news on these lxc networking thingies?
<lazyPower> puzzolo: hey there
<lazyPower> puzzolo: i dont have any updates unfortunately
<beisner> hi skylerberg - as i understand it, the config-flags config option is there for one-off edge case needs.  what are the config options you're needing to pass in?  if they don't collide with others, that may be a sane approach today.  but there's no way to say there wouldn't be future collisions if it's not in the nova-compute code base for reviewers to know about down the line.  i do think the better / more-official (tm) approach may be to add a charm
<beisner> config option to twiddle the conf bits you need.
<jcastro> jose: ping
<skylerberg> beisner: I would be adding something like "cpu_mode=none,nfs_mount_options=vers=3,proto=tcp". Looking at it, I don't think it is possible to parse that correctly.
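
The parsing trap skylerberg is pointing at, made concrete; this assumes the option is split naively on commas, which is what a flat "k=v,k=v" format implies:

    flags = "cpu_mode=none,nfs_mount_options=vers=3,proto=tcp"
    print(flags.split(","))
    # ['cpu_mode=none', 'nfs_mount_options=vers=3', 'proto=tcp']
    # "proto=tcp" is wrongly read as a third flag instead of as part of
    # the nfs_mount_options value.
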
<beisner> skylerberg, ok so we already have another need to control cpu_mode in ppc64 scenarios, just need to add that.  keep in mind, in kilo and later the cpu_mode directive got moved to a different conf file that the charm currently has no other reason to touch (so that's not plumbed just yet).
<beisner> a lot of conf directives scooted around between icehouse and kilo, so we just handle that via the templates in the charm.
<beisner> makes it transparent to the user.  they just set a charm config option, the charm lands it in the right file
<beisner> so anyway, that kind of makes the case for using charm config options instead of that little backdoor-ish conf hole we have open.  make sense?
<skylerberg> beisner: I need to talk with someone on my team to see if we really need cpu_mode=none. So for the nfs_mount_options we should probably just add that as a nova-compute charm configuration option and then I can just tell users to set it.
<skylerberg> beisner: Okay. All I actually need to do is set nfs_mount_options=vers=3. So we just need to add nfs_mount_options as a config option for the nova-compute charm, right?
<skylerberg> I was going to try to make a pull request to add the configuration option for nfs-mount-options, but the repo on github is two years out of date. Is there somewhere I can get the source and make a pull request?
<lazyPower> skylerberg: https://code.launchpad.net/~openstack-charmers
<lazyPower> any of the /next branches are the current dev focus and where you want to make merge proposals against
<skylerberg> lazyPower: Thanks for pointing me to the repo. I tried deploying nova-compute based on the latest code and it failed http://paste.ubuntu.com/12138183/.
<lazyPower> skylerberg: i would certainly file that as a bug, it seems something changed in /next thats causing a problem. (is this unmodified /next, or was it patched w/ the modifications?)
<skylerberg> Perhaps this is because I am running it in a container and it wants to install kvm.
<lazyPower> thats indeed possible.
<lazyPower> fwiw - you can use a KVM-based local provider and it will be slow (hypervisor in hypervisor) but should complete without issue
<lazyPower> https://jujucharms.com/docs/devel/config-KVM
<skylerberg> lazyPower: Thanks. I think I can get around it by just marking the issue as resolved. I am really just trying to see the effect on the config file.
#juju 2015-08-21
<beisner> skylerberg, indeed nova-compute needs to be on metal or in a vm.
<skylerberg> beisner: Yeah, it worked out though because I didn't need it to run correctly to check if it set the config properly. Now I am just learning how to make a merge request on launchpad.
<skylerberg> beisner: Got it! I now have a merge request proposed for the change I need: https://code.launchpad.net/~sberg-l/charms/trusty/nova-compute/next/+merge/268680
<beisner> skylerberg, i won't be able to take a deep dive, but i can confirm that you have it proposed against the right charm in the right place.  woot!   the testing bot will see that and start to give feedback.  once those tests are done, humans will take a deeper look and provide feedback.  appreciate your work!
<skylerberg> beisner: Cool! Sounds good.
<Egyptian[Home]> hi - when i run juju status or juju debug-log .. nothing happens it just hangs there
<Egyptian[Home]> i installed ntp on the maas server and it worked - it was able to deploy and all
<puzzolo> ping lazyPower
<puzzolo> i am seeing quite some bugs floating around on these lxc networking issues. I'll take a look, but something is definitely wrong between lxc networking and juju-br0
<lazyPower> puzzolo: i had the talk with dimiter this morning
<lazyPower> puzzolo: question about your setup. Are you letting maas manage the networking? or are you using an un-managed network w/ maas?
<puzzolo> maas manages dhcp and dns.
<puzzolo> on dns side i use a forwarder to resolve external hosts
<lazyPower> puzzolo: if maas is managing the network, cross-host container reachability should work ootb
<lazyPower> as the containers would be requesting ip addressing from the DHCP server, and get a proper address for the LAN
<lazyPower> puzzolo: additionally, the containers should be visible in the MAAS UI
<lazyPower> if they are not, its time to start filing bugs, and getting eyeballs on the issue as its probably a localized issue or we've got a gap in our testing coverage
<puzzolo> If your setups are working ... the problem lies in my libvirt setup with macvlan cannibalized bridges
<puzzolo> on top of a kvm vm passed to maas... and used for containerization by juju
<lazyPower> puzzolo: if you could get that configuration posted, along with versions of maas, and juju, and a bundle of the services, we can try to reproduce.
<lazyPower> it might be a configuration detail, without having the details of how things are currently configured, its difficult to discern where the root cause is.
<puzzolo> i will try a deployment on a physical server. If issue remains i'll post a bug. Is it safe using maas on 15.04?
<lazyPower> Certainly
<lazyPower> so long as maas is managing the network it should "just work"
<puzzolo> once i installed maas i had huge problems resolving outside hosts. I had to manually edit the bind9 configuration for outside resolving. Is that normal?
<lazyPower> puzzolo: i haven't had that issue myself, the DNS was forwarded when not found.
<lazyPower> but that might be a recent change. I haven't had my hands in maas in excess of 4 months.
<puzzolo> lost ya after asking
<lazyPower> puzzolo: i haven't had that issue myself, the DNS was forwarded when not found.
<lazyPower> but that might be a recent change. I haven't had my hands in maas in excess of 4 months.
<lazyPower> doh, i should probably re-enable join/part messages
#juju 2015-08-23
<puzzolo> hello people, a development question. Any chance to see drbd9 deployed instead of ceph for quicker two-node shared storage?
<puzzolo> with drbd, two machines' volumes are mirrored together into a single volume. If you divide that volume in two units, the first mastered by server 1 for its vms and as backup of server 2, and the second mastered by server 2 and as backup of machine 1...
<puzzolo> we can achieve faster speeds with less hardware, obtain a two-by-two shared storage.. not suffering from split-brains and abrupt singularities on disks.
<jobot> Hello, is there a command or conditional statement to test if a mysql database is in use by more than one service?
#juju 2016-08-22
<kjackal> Hello Juju World!
<jamespage> o/
<magicaltrout> https://ibin.co/2sQKsBt5MB9f.jpg blind IRC for Lasek people
<magicaltrout> officially the largest IRC client in the world I reckon
<gnuoy> Hi, I'm using Juju 2.0-beta15-0ubuntu1~16.04.1~juju1 with the openstack provider. There are multiple networks defined so I'm using "--config network=<network UUID>" when bootstrapping. However, I don't see a way to set this as the default when deploying applications in the model. "juju model-defaults" does not list 'network' as a configurable option. Trying to set it results in 'key "network" is not defined in the known model configuration'
<gnuoy> I'll report a bug unless I'm doing something obviously wrong?
<gnuoy> ok, so I can set the default when creating the model rather than adding the default afterwards, thats ok. I see network in model-defaults now too. Seems like a bug that you can't add a value to the models defaults after creation
<BlackDex> Hello there. Is it possible to have lxc-deployed charms use a static ip instead of dhcp? This is in case the dhcp is offline and you need to reboot/restart stuff.
<magicaltrout> awww
<magicaltrout> marcoceppi: I think the charm dev aws stuff has run out of room :'(
<magicaltrout> one of those days when I could do with a MAAS server under my desk
<magicaltrout> okay I have no clue
<magicaltrout> every machine in aws and lxd ends up in error state
<magicaltrout> kjackal: where can I look to find some logs regarding why my machines are failing to bootstrap?
<magicaltrout> s/bootstap/get allocated
<kjackal> let's see..
<kjackal> magicaltrout: which provider are you using?
<magicaltrout> I've tried AWS, now trying LXD
<magicaltrout> I get bootstrapped
<magicaltrout> but if I try and deploy something if just lands in an error state
<kjackal> Juju keeps logs for all machines under /var/log/juju/all-machines.log (I think)
<magicaltrout> I thought it was because the AWS for Charm Devs might have been full up
<magicaltrout> on the bootstrap node kjackal ?
<kjackal> yes, on the coordinator
<magicaltrout> how do I ssh to that these days?
<kjackal> wait, wait
<kjackal> on your local client not the coordinator
<magicaltrout> hmm
<kjackal> do you have enything under /var/log/juju ?
<magicaltrout> I don't have a /var/log/juju to start with
<magicaltrout> oooh
<magicaltrout> hold on
<magicaltrout> formatting status to json
<magicaltrout> gives me an error
<magicaltrout> thats a bit $hit
<magicaltrout> surely those errors should be more obvious
<kjackal> what juju version are you using?
<magicaltrout> beta15
<magicaltrout> I was using 9 earlier when my problems began
<kjackal> ls -ld  /var/log/juju*
<magicaltrout> I worked yesterday on AWS fine
<magicaltrout> today it doesn't like me
<magicaltrout> no such file or directory
<kjackal> nothing under var log!
<magicaltrout> no
<magicaltrout> i have no logs
<magicaltrout> I'm old enough to know where to look by default ;)
<magicaltrout> thats on my client
<magicaltrout> on the controller I'm sure there are logs
<kjackal> :) I am sorry, I did not mean any offence
<magicaltrout> hehe I'm only messing
<magicaltrout> i looked there first, clearly on the units etc they exist when stuff breaks
<magicaltrout> but it doesn't seem to aggregate on the client
<magicaltrout> anyway, it claimed there were missing tools, so I've rebootstrapped with --upload-tools
<magicaltrout> see if it unbreaks it
<magicaltrout> I don't understand why the tabular status doesn't tell you the error message though
<magicaltrout> that seems a bit silly
<kjackal> I am still on Juju 1.25. The lxd provider on Juju 2.0 had a bug that must have been fixed by now, but haven't tested it yet
<magicaltrout> you going to pasadena this year kjackal ?
<kjackal> Yeap, will we see you there?
<magicaltrout> I've been using betaX and LXD for 6 months, normally pretty stable
<magicaltrout> hope so kjackal, else someone's wasted a lot of money on flights for me
<magicaltrout> yeah, I submitted a couple of talk proposals
<magicaltrout> so hopefully I'm demoing something
<magicaltrout> I worked most of yesterday on finishing up DC/OS for Mesosphere EU on the 31st
<kjackal> :) The problem we had with juju 2.0 and lxd was that the machine could not immediately resolve its hostname. We had to wait or reboot the container. have you noticed any similar behavior?
<magicaltrout> ah yeah
<magicaltrout> but I get that in AWS as well now
<magicaltrout> I saw that yesterday
<kjackal> magicaltrout: Well done on Mesosphere EU!
<magicaltrout> hehe
<magicaltrout> its pretty crazy this month
<magicaltrout> I'm doing Amsterdam on the 31st for Mesos, London on the 1st for Pentaho, then Pasadena for Charmers summit
<magicaltrout> I've got 2 or 3 more talk submissions to write for this year as well
<magicaltrout> Big Data Spain and ApacheCon EU
<magicaltrout> oh and I'm doing Pentaho Europe Community Meetup in November
<kjackal> BigdataWeek London is too close for you? :)
<magicaltrout> yeah thats a Bigstep thing, Meteorite runs its servers on bigstep but they've lagged on Ubuntu support which is a pain for development
<magicaltrout> they told me the other week they were getting Xenial tested so hopefully I can turn our Bigstep servers into Juju managed DC/OS clusters soon
<kjackal> But overall your schedule is crazy!!!
<magicaltrout> lol
<magicaltrout> thats pretty normal ;)
<magicaltrout> I blame jcastro he said "submit some talks on Juju and I'll help you get to them"
<magicaltrout> so I did
<magicaltrout> and they got accepted ;)
<magicaltrout> hmm the BDW CFP is still open
<magicaltrout> maybe I shall submit a paper ;)
<kjackal> Lol!!!
<kjackal> Oh, I have a challenge for you! http://bigdata.ieee.org/ its for next year!
<magicaltrout> cool
<magicaltrout> I'll think of something
<magicaltrout> I'm happy to talk at conferences though so if people have good ones for a non canonical employee to speak at I'm happy to pitch a talk
<magicaltrout> I'm also giving a presentation to the JPL team when i'm in Pasadena next month
<magicaltrout> so I plan to show off the DC/OS, Kubernetes stuff as I'm getting them all involved in docker stuff
<magicaltrout> but currently the deploy to single hosts
<magicaltrout> and I'm trying to get them down to the summit, but I need jcastro to publish the schedule so I can tempt them
<kjackal> crazy!
<magicaltrout> I get bored when I just work on one thing
<magicaltrout> so doing a bunch of different stuff and going to conferences at least keeps my schedule varied
<andrey-mp> jamespage: Hi! I've asked a question in the https://review.openstack.org/#/c/348336/6/metadata.yaml - how to connect glance-charm to cinder-charm in case of implementing one additional relation with subordinate property when no other configuration is needed.
<jamespage> andrey-mp, hey!
<jamespage> andrey-mp, sorry for the silence - I've been away for the last week or so...
<jamespage> andrey-mp, could you join #openstack-charms please?
<andrey-mp> sure
<bbaqar> hi guys. i need to add an interface to the lxc created by juju and connect it to an underlying bridge
<bbaqar> If i add "lxc.network.type = veth lxc.network.link = br2 lxc.network.flags = up"  to  /var/lib/lxc/juju-machine-8-lxc-8/config
<bbaqar> would that do the trick
<magicaltrout> rick_h_: can you or somebody explain multi series charm publishing please :)
<bbaqar> guys i configured cinder using its juju charms correctly .. i then removed the charms because i wanted to deploy the service somewhere else .. now when i am deploying the charms again keystone is not passing the req relation data to cinder
<bbaqar> Missing required data: admin_user service_port admin_tenant_name auth_port admin_password auth_host service_host
<rick_h_> magicaltrout: sure thing
<rick_h_> magicaltrout: so the only trick is that you claim the charm supports multiple series and when you publish the charmstore stores it under those different urls
<magicaltrout> so, I tell it it supports Wily and Xenial, but then when I "push" I push the Wily one
<magicaltrout> is that correct? or am I going wrong somewhere?
<magicaltrout> I suspect I've ballsed up somewhere
<magicaltrout> charm push . cs:~spicule/dcos-master
<rick_h_> magicaltrout: leave the series out of the push
<rick_h_> magicaltrout: yes, just push like that, without any series and let the charmstore figure out where to put it
<magicaltrout> interesting
<magicaltrout> so then if I want to push xenial
<magicaltrout> I just go to the xenial directory
<magicaltrout> and do the same?
<rick_h_> magicaltrout: if it's a multi-series charm you just push it once. You declare in the metadata.yaml which series you suport
<magicaltrout> or does it happen in one push?
<magicaltrout> okay, cool
<rick_h_> magicaltrout: and then when you push, the charmstore reads that file and puts it in all the right places
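
So the whole flow is: declare every supported series in metadata.yaml, then run charm push . cs:~spicule/dcos-master once from the charm root. A sketch of the metadata, checked with PyYAML (the name matches the charm above; summary and description are placeholders):

    import yaml

    metadata = yaml.safe_load("""
    name: dcos-master
    summary: DC/OS master node (placeholder)
    description: Placeholder description.
    series:
      - wily
      - xenial
    """)
    print(metadata["series"])   # ['wily', 'xenial']
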
<godleon> Hi all, I got following error message when type "juju --debug status"
<godleon> https://www.irccloud.com/pastebin/babPt2OC/
<godleon> is there any way to fix it? I'd be very appreciated for any hint or help. Thanks!
<godleon> It seems the juju bootstrap node is gone forever... ?
<jcastro> magicaltrout: heya, did the jpl guys confirm if they're coming to the summit? If so can you have them register so I have the food count right?
<magicaltrout> jcastro: I don't know yet, have you got a schedule together (even something half done)? I want to blast around an email today or tomorrow but I'd like some content for the sales pitch, not just "turn up it'll be cool" ;)
<jcastro> I'll have a schedule for you today
<jcastro> I just drafted it on friday
<magicaltrout> thanks
<beisner> hi marcoceppi, tvansteenburgh - do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).
<tvansteenburgh> beisner: i defer to marcoceppi on that one
<lazyPower> magicaltrout - i'm here whenever you're ready to talk dee cee oh ess
<magicaltrout> hmm sooooo lazyPower
<magicaltrout> I tweaked some stuff
<magicaltrout> some things just plain don't work yet, but the main subsystems work
<lazyPower> Its a start right? :)
<magicaltrout> I did find they've dropped K8S support for now, which is annoying =/
<lazyPower> oh really?
<lazyPower> thats interesting, there are still mesosphere people attending their sig groups. I wonder what brought that on
<magicaltrout> yeah the more recent versions of DC/OS Mesos have some incompatibilities
<magicaltrout> but they are trying to garner support on the GH project
<lazyPower> must be pending additional work w/ the scheduler
<magicaltrout> so its not gone, its just missing stuff
<magicaltrout> anyway
<magicaltrout> you can spin up dcos-master & dcos-agents
<magicaltrout> they should both be in the CS
<magicaltrout> I lie
<magicaltrout> the agents aren't
<magicaltrout> 2 mins I'll push them
<magicaltrout> don't try it in lxd you won't get very far
<lazyPower> i'm painfully aware of that
 * lazyPower points at the very expensive k8s bundle that wont properly run in lxd today
<magicaltrout> okay agents should be alive
<magicaltrout> you have to have 1, 3 or 5 masters
<magicaltrout> and any number of agents
<lazyPower> magicaltrout - no bundle?
<magicaltrout> not yet I have a day job :P
<lazyPower> not even a minimal "use this to kick the tires" formation?
<magicaltrout> the most mimimal is 1 master - 1 agent and a relation
<magicaltrout> even you can manage that ;)
<lazyPower> You're giving me a lot of credit
<magicaltrout> hehe
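
A guess at that minimal tire-kicking formation as a bundle, checked with PyYAML; the charm URLs follow the ~spicule namespace mentioned above, and letting juju infer the relation endpoints is an assumption:

    import yaml

    bundle = yaml.safe_load("""
    services:
      dcos-master:
        charm: cs:~spicule/dcos-master
        num_units: 1
      dcos-agent:
        charm: cs:~spicule/dcos-agent
        num_units: 1
    relations:
      - [dcos-master, dcos-agent]
    """)
    print(sorted(bundle["services"]))
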
<magicaltrout> anyway, I have a bunch of stuff on my backlog, like, I ripped out a master today to see what happened
<magicaltrout> it locked me out :)
<lazyPower> we face similar issues with the k8s bundle
<lazyPower> nuke a master and you lose PKI
<magicaltrout> that said, if you have masters fail in DC/OS proper, you can't add new ones, so I reckon I'm on feature parity there ;)
<magicaltrout> the fact you can add-unit on the masters is already something you can't do in DC/OS officially
<lazyPower> magicaltrout https://gist.github.com/chuckbutler/ae49b395648a07222b149978c27c5402
<lazyPower> mind pushing that up @ your namespace? :)
<magicaltrout> ta
<lazyPower> feel free to remix the machine constraints
<lazyPower> that might be the slowest dc/os cluster you ever deploy in your life
<lazyPower> seconded only by.... rpiv1's
<magicaltrout> hehe
 * lazyPower gives it a whirl
<lazyPower> here goes something
<lazyPower> magicaltrout - are these in a github repo somewhere?
<magicaltrout> yup
<magicaltrout> https://github.com/buggtb/dcos-master-charm
<magicaltrout> apologies in advance for the crazy hacks and bad code quality, I've not had a chance to tidy it up yet ;)
<magicaltrout> https://github.com/buggtb/dcos-agent-charm
<lazyPower> no stress amigo
<lazyPower> just filing bugs as i find them so we have somewhere to start :)
<magicaltrout> cool
<magicaltrout> I've not tried it outside of wily by the way, so your xenial test is the first blast at something a bit more modern
<lazyPower> heh
<lazyPower> thats an interesting predicament
<magicaltrout> when i started using it Xenial images weren't in EC2 and Trusty doesn't work
<magicaltrout> so technically it should "probably" work :)
<magicaltrout> it was an upstart/systemd thing
<lazyPower> ack
<lazyPower> looks like it needs a bump for pip out the gate
<sra> hii all, I am trying to deploy openstack using the juju "openstack-base bundle"; all services are in pending state
<sra> how much time it will take to deploy openstack?
<sra> please someone help
<rick_h_> sra: some 30+ minutes I think
<Odd_Bloke> Depends on your substrate, as well.
<sra> rick_h: I am deploying juju openstack on vm which has 6 GB RAM and 80 GB Disk
<sra> will it cause any issues?
<rick_h_> sra: 6gb of ram is very light imo
<Odd_Bloke> When I've deployed it using lxd, I've seen it sit at 8-10GB of RAM.
<sra> Odd_Bloke: you deployed on VM?
<Odd_Bloke> This was lxd containers on hardware.
<sra> Odd_Bloke: can we deploy on a VM?
<Odd_Bloke> sra: Are you saying you're trying to deploy it on to one VM?  Or you want to deploy on to multiple VMs with those specifications?
<sra> Odd_Bloke: trying to deploy it on single VM
<Odd_Bloke> sra: With 6GB of RAM, you aren't going to get something that works very well.
<Odd_Bloke> If it works at all.
<sra> Odd_Bloke: I started my deployment 1 hour back
<sra> all the services are still showing agent-state as "pending"
<sra> Odd_Bloke:  are you around
<Odd_Bloke> sra: How are you deploying them?
<sra> Using openstack-base bundle
<sra> from juju-gui
<Odd_Bloke> sra: Right, but what substrate?  EC2?  lxd?
<sra> Odd_Bloke: lxc
<Odd_Bloke> sra: OK, so you should be seeing that machine under a lot of load ATM.
<Odd_Bloke> sra: As I said, I don't think 6GB of RAM is going to work.
<Odd_Bloke> OpenStack is too complex a beast to fit in 6GB of RAM.
<sra> Odd_Bloke: So please provide me with the proper requirements for deploying OpenStack on a single VM using the JUJU OpenStack-base bundle
<Odd_Bloke> sra: What are you actually trying to achieve?  I've had it work on a 16GB NUC, but you wouldn't actually have wanted to use that for anything serious.
<sra> Odd_Bloke: I want to make changes to the cinder charm and need to test whether the changes applied or not
<Odd_Bloke> sra: Ah, OK, I see.
<Odd_Bloke> sra: So I haven't actually used the bundles for development of OpenStack.
<Odd_Bloke> jamespage: Perhaps you would be able to point sra at someone (or some docs) of how to get set up to do OpenStack charm development?
<lazyPower> sra - bare minimum you will need 12GB of ram on that unit
<lazyPower> sra - and likely 4+ cores if you expect it to work with any reasonable efficiency
<lazyPower> sra - additionally, if you're seeing lxd units in 'pending', can you do me a favor? run lxc list and see if the juju templates have been created. they should be listed clearly: with the phrase juju and xenial in the image name
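The check lazyPower is asking for, roughly — look for juju-created templates by name (sketch):

    lxc list
    lxc image list | grep juju    # template/image names should mention juju and xenial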
<sra> my base VM has ubuntu 14.04 OS; will it work?
<lazyPower> sra - i highly recommend you move to xenial so you can use the latest bits for lxd
<Odd_Bloke> lazyPower: Juju<2 would be using lxc not lxd, right?
<lazyPower> Odd_Bloke - i havent tried juju2 on trusty in quite some time
<lazyPower> so, for completeness sake, i recommend xenial
<lazyPower> better to have them on a series thats got more eyes on it, know what i mean? :)
<Odd_Bloke> lazyPower: Right, but if sra is on trusty then I wonder if they are using Juju 1.x (and therefore lxc), rather than Juju 2 (and therefore lxd). :)
<lazyPower> i assume that to be the case
<lazyPower> sra can you confirm? ^
<sra> lazyPower: yes
<sra> i am using ubuntu 14.04
<Odd_Bloke> sra: Which version of juju are you using?
<lazyPower> jamespage thedac wolsen - any known blockers on using juju 1.25 with lxc for openstack-base bundle deployments?
<sra> 1.25.6-trusty-amd64
<jamespage> lazyPower, no, in fact that's what we still verify on
<lazyPower> ok
<lazyPower> sra - no need to upgrade according to the potentate of our charms :)
<jamespage> oh wait - its 16.04 based, not 14.04 based
<lazyPower> ah
<lazyPower> welp
<lazyPower> perhaps upgrade and still install juju-1
<jamespage> sra, openstack-base is not deployable in a single-vm
<jamespage> its very much designed to be deployed on multiple servers using MAAS
<jamespage> https://jujucharms.com/openstack-base/
<jamespage> README has the details for the requirements
<jamespage> if you want to do an all-in-one; https://github.com/openstack-charmers/openstack-on-lxd is your best route
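For reference, that route looks roughly like this — the config.yaml and bundle.yaml names come from commands quoted later in this log, so treat it as a sketch:

    git clone https://github.com/openstack-charmers/openstack-on-lxd
    cd openstack-on-lxd
    juju bootstrap --config config.yaml localhost lxd
    juju deploy bundle.yaml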
<sra> jamespage: can i deploy openstack by dragging individual components from juju gui in ubuntu14.04
<jamespage> sra, well you can, but it's a lot of clicking
<jamespage> a bundle is a much better option
<sra> jamespage: for bundle we need ubuntu16.04?
<jamespage> sra, the latest openstack-base will deploy a 16.04 based openstack cloud
<jamespage> sra, openstack-on-lxd requires a 16.04 host for the deployment
<jamespage> so yeah I guess it does - sorry
<sra> I want a basic openstack deployment for modifying cinder and testing the changes using juju. For this, can I do the OpenStack deployment on ubuntu 14.04?
<sra> jamespage: are you around
<jamespage> sra, I am
<sra> jamespage: I want a basic openstack deployment for modifying cinder and testing the changes using juju. For this, can I do the OpenStack deployment on ubuntu 14.04?
<jamespage> sra, I saw :)
<jamespage> well the charms will support 14.04, its just that the bundles we publish are all baselined on 16.04
<jamespage> sra, https://code.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk
<jamespage> has all of the bundles that most of the openstack-charmers team use for development of charms; they deploy sparse (not using LXD/LXC containers) and are designed to be deployed on top of a cloud; we happen to use OpenStack as the base cloud as well.
<jamespage> if you propose a change against one of the openstack charms, its the same cloud that gets used to verify the changes...
<jamespage> http://docs.openstack.org/developer/charm-guide/ might be useful as a reference as well
<cory_fu> kjackal, arosales: Can you remind me why I needed to email the Bigtop list about ppc64 artifacts again?  The apache-bigtop-base layer already has a repo configuration listed for ppc64el (https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/layer.yaml#L42)
<kjackal> cory_fu: If I remember correctly, when we were writing the ticket we couldn't find the actual *.deb packages.
<kjackal> cory_fu: arosales I can try to deploy something on power and see what happens if you grant me access to any such machine
<arosales> cory_fu: as I recall, the only .debs were xenial and weren't in the latest bigtop release; perhaps that has changed
<arosales> cory_fu: if you have a xenial ppc64el .deb we can use from bigtop today then no need to email
<cory_fu> I'm not sure.  kwmonroe: Did you test this at one point?
<cory_fu> arosales: Do we not have access to Power machines anymore?
<arosales> cory_fu: we do
<cory_fu> arosales: siteox?  stilson?
<kwmonroe> cory_fu: sorry, i can't seem to recall the need for a ppc artifact email.  perhaps it was just to verify where we should be pulling ppc debs from, but you already know where to get them for vivid and xenial.
<kjackal> petevg: I got this error on the namenode: "dpkg-query: package 'openjdk-8-jdk' is not installed" looking into it
<arosales> cory_fu: stilson
<petevg> kjackal: I thought that I had tested things in trusty, but I may have run my Zookeeper tests on xenial.
<petevg> I was just playing w/ stuff on a vagrant vm, and it looks like jdk-8 isn't in trusty by default.
<kjackal> petevg: let me understand something. In case we skip openjdk, what layer should deploy java?
<kjackal> The base layer?
<petevg> kjackal: apache-bigtop-base should install it.
<petevg> kjackal: if you just got rid of the relations, you'd need to make sure that all the charms without the relation were built on top of the updated apache-bigtop-base layer.
<jcastro> magicaltrout: https://docs.google.com/spreadsheets/d/1czOlxejWRkE5tHnX8c04Xo5ZhxVe5auiDoCBqR4mN90/edit
<kjackal> petevg: another question
<kjackal> we set the default value for the bigtop_jdk config param to openjdk 8
<petevg> kjackal: ?
<kjackal> in the bigtop base layer we ask bigtop to install the jdk by doing this: 'bigtop::jdk_preinstalled': not bigtop_jdk
<bdx> good monday morning all
<petevg> kjackal: yes. So ...
<petevg> (Good morning, bdx)
<bdx> I've an implementation question if anyone wants to chime in
<kjackal> when does this "not bigtop_jdk" evaluate to false?
<petevg> kjackal: Bigtop will install openjdk if jdk_preinstalled is *not* true.
<kjackal> sorry, when does this "not bigtop_jdk" evaluate to true?
<petevg> kjackal: When we have no value set there. When we override the value in options with an empty string.
<petevg> By default, it should evaluate to False, which means that we do want Bigtop to install the version of jdk we specify in the config.
<petevg> I don't like the backwards logic a whole lot, but that is how it kind of needs to work.
<petevg> bdx: I will chime in on stuff -- don't mind kjackal and I working through some other stuff at the same time :-)
<cory_fu> bdx: Quest away
<kjackal> petevg: yes, agreed. Let me try to find when we empty the self.options.get('bigtop_jdk')
<bdx> I have 3 private applications, 2 of which are rails apps and 1 is a php app. Each app has its own stack of supporting microservices (e.g. redis, postgres, es, resqueue, etc.), averaging 5 extra supporting services (or 5 extra instances) per app
<bdx> all three apps talk to each other, and all 3 apps have been charmed up
<kjackal> petevg: or is this (setting the bigtop_jdk to '') expected to be a deploy-time decision?
<petevg> kjackal: no. It is an implementation time decision.
<kjackal> petevg: by implementation you mean?
<petevg> kjackal: basically, with this change, by default, your bigtop based charms will not require a relation to openjdk, per cory_fu's request. You can override that in a charm by overriding the option in that charm's layer.yaml.
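A hypothetical override along the lines petevg describes, in a consuming charm's layer.yaml (layer and option names as used in this conversation):

    includes: ['layer:apache-bigtop-base']
    options:
      apache-bigtop-base:
        bigtop_jdk: ''   # empty => jdk_preinstalled: true, so Bigtop skips the JDK and the charm needs a java relation again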
<bdx> I have been experimenting with service placement; deploying everything to lxd minus the database(s) - which works well for my use case for these apps
<bdx> heres the crux
<kjackal> petevg: at build time?
<kjackal> petevg: I see
<bdx> petevg: s/implementation/orchestration/
<petevg> bdx: got it. thx for the correction :-)
<bdx> we currently deploy everything to aws, and use opsworks/docker to get the apps to the containers
<magicaltrout> yeah thanks jcastro i'll blast around an email
<bdx> I have revised these apps to be juju-deployed, but am having trouble determining how to get them similarly deployed to lxd containers at a provider-agnostic level
<bdx> this set of 3 apps need be deployed to rackspace and aws, and soon an openstack cloud per customer requirements
<bdx> so, basically I feel like I've worked myself into a rat hole
<bdx> I've charmed up our apps, but have no way to orchestrate with containers using Juju :-(
<bdx> I've plans to make use of lxd-openstack
<bdx> but that doesn't help when I have to do a KPI comparison for the apps being juju deployed vs. non-juju deployed
<jcastro> I'd write it up and send it to the list
<jcastro> see what other people are doing
<bdx> jcastro: sure thing
<bdx> petevg: do you see where this is going at least?
<petevg> bdx: I got pulled into a meeting. Catching up ...
<petevg> bdx: I agree that posting it to the list makes sense. I'm not sure how to untangle the containers in containers issue :-/
<beisner> marcoceppi, do you have an eta for the next charm-tools pypi release?  there are some fixes in master that we need to unblock osci (virtualenvs).
<marcoceppi> beisner: 10 mins
<beisner> wooo! marcoceppi
<marcoceppi> it's not going to be all of master, it'll be a 2.1.3 patch
<beisner> kk thx
<marcoceppi> #219, #204, and #248 PR included
<marcoceppi> beisner: 2.1.4 is on pypi
<Anita_> Can we do remote_get function during relation_departed time?
<Anita_> sorry get_remote() during relation_departed
<marcoceppi> Anita_: potentially, I can't remember. I know you can't during broken, but departed may still have remote data
<Anita_> ok that means during *relation_departed* we can get the values?
<cory_fu> bdx: Sorry, I also got caught up in something else.  When you say you want to deploy the apps to lxd containers, how is what you're looking for different than using lxc placement directives in a bundle (https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives)?
<Anita_> marcoceppi_: I can call relation_call function during departed and get the values?
<cory_fu> That link again without the parens messing it up: https://jujucharms.com/docs/2.0/charms-bundles#bundle-placement-directives
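For reference, the shape of such a placement directive in a bundle — service and machine names illustrative:

    services:
      mysql:
        charm: cs:trusty/mysql
        num_units: 1
        to:
          - lxc:0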
<Anita_> marcoceppi_: but does the producer program need to set the values for the *relation.departed* state?
<Anita_> marcoceppi_: currently my producer application sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state
<Anita_> marcoceppi_:can you please confirm?
<Anita_> marcoceppi_: currently my provider charm sets the values for *relation_joined/changed* states only... So i am not sure, if I can get the values as a consumer charm during *relation_departed* state
<Anita_> marcoceppi_:can you please confirm?
<marcoceppi> cory_fu: ^?
<marcoceppi> beisner: is 2.1.4 working for you?
<cory_fu> marcoceppi: Anita_ signed off, apparently, but the answer is that you can do get_remote (aka relation-get) during -departed but you probably shouldn't do relation-set or set new states (rather, just remove them)
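At the hook-tool level, cory_fu's point looks roughly like this (shell sketch; the relation key is illustrative):

    # in a *-relation-departed hook: reading remote data is still OK
    remote_host=$(relation-get private-address "$JUJU_REMOTE_UNIT")
    # but avoid relation-set or raising new states here; just remove them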
<marcoceppi> cory_fu: figured, thanks
<beisner> marcoceppi, a couple of manual checks look good.  thank you
<marcoceppi> beisner: I've pressed a charm snap and charm-tools deb update
<magicaltrout> confirm it marcoceppi !!! confirrrrrrrm it!
<magicaltrout> lazyPower: one thing I did want to mull over with you before MesosCon
<magicaltrout> is Logstash and DC/OS
<magicaltrout> to offer some logging
<magicaltrout> and I'll wire up some  nagios stuff hopefully
<magicaltrout> to demo some relations stuff
<bdx> cory_fu: the only provider supporting lxd is openstack
<bdx> and thats not even juju lxd
<bdx> as far as juju is concerned, the only provider to support lxd is maas
<lazyPower> magicaltrout - ack, when is mesoscon?
<cory_fu> bdx: Is that true?  I thought most clouds supported lxc container placement.  Perhaps I'm still not understanding what you mean by "support lxd"
<magicaltrout> 31st lazyPower ;)
<bdx> cory_fu: try placing a lxd on an aws instance
<bdx> :-(
<cory_fu> bdx: http://pastebin.ubuntu.com/23079266/
<cory_fu> That's lxc and not lxd, though.  I think we may be talking about different things
<lazyPower> magicaltrout - ack, we have a bit of wiggle room then. let me finish up this weeks demo prep and i can context switch over to getting you hooked up with the elastic stack
<magicaltrout> cool lazyPower i should have a few spare evenings this week to sort a bunch of the backlog out
<beisner> marcoceppi, fleet of :boats: - thanks again!
<lazyPower> magicaltrout \oo/,  rock on man
<magicaltrout> not really
<magicaltrout> i'm wiring up a CAS server for web app authentication
<magicaltrout> its very tedious :O
<ahasenack> hi, have you guys seen this error in a reactive charm? http://pastebin.ubuntu.com/23079428/
<ahasenack> just wondering if it's a bug in how the charm is using that layer, or in the layer itself
<ahasenack> I filed a bug against the postgresql charm for now
<lazyPower> ahasenack - i've seen that when i've rebuilt a charm using local layers, and i didn't keep my clone in sync with whats upstream
<lazyPower> namely, it didn't pull in a new interface it expected to have
<ahasenack> I see
<ahasenack> yeah, looks like a "bzr add" was forgotten or something
<lazyPower> ahasenack - what i suggest is peek at the interface archive, give the charm a build locally using the following switches:  `charm build -r --no-local-layers` and see if that interface pops up in the assembled charm
<lazyPower> the archive peek is to verify the interface exists and implements the missing class
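Roughly, assuming the built charm lands under $JUJU_REPOSITORY (output paths vary by charm-tools version):

    charm build -r --no-local-layers
    # check that the assembled charm picked up the interface
    find $JUJU_REPOSITORY -path '*hooks/relations*'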
<marcoceppi> beisner: good, because it :boat:'d a while ago
#juju 2016-08-23
<veebers> menn0: You mentioned earlier that you have a fix for the password/macaroon parts for migration? What's the error message I should look for on expected failure?
<veebers> The current incorrect one is "empty target password not valid"
<menn0> veebers: you should see a permission denied
 * menn0 checks if the fix has merged
<menn0> veebers: it hasn't merged yet. looks like it's next in the queue.
<menn0> veebers: but you should see "permission denied"
<menn0> (when it has merged)
<veebers> menn0: Cool, I'll make sure the test matches exactly on the error (otherwise we would have missed something like this)
<veebers> Cheers
<menn0> veebers: it occurred to me last night that it's worth having a CI test for a superuser that isn't the bootstrap user.
<menn0> such a user should be able to start a migration, but the authentication path is a bit different to the bootstrap user (uses macaroons instead of passwords)
<veebers> menn0: Similar to the test I've just proposed but with the proper permissions and thus it should work
<menn0> veebers: exactly. so add a user to both controllers with the superuser controller permission and run a migration. it should work.
<menn0> veebers: it won't work until this current change lands.
<veebers> menn0: Cool, I'll get on that after I've cleaned up this current one.
<suresh_> hii all i am deploying openstack bundle in juju
<suresh_> but in "juju status" it is showing all services in error state
<suresh_> please someone help
<suresh_> i am using this link to install https://github.com/openstack-charmers/openstack-on-lxd
<suresh_> after this command "juju bootstrap --config config.yaml localhost lxd"
<suresh_> it is saying deployed but the containers created are showing error state
<suresh_> please someone help
<kjackal> hey cory_fu are you around?
<bbaqar_> hey guys I upgraded the rabbitmq server units and now seeing  Unit has peers, but RabbitMQ not clustered
<bbaqar_> any thoughts
<bbaqar_> someone must have worked with rabbitmq here
<suresh_> hii all, i am installing openstack with juju
<suresh_> while installing nova-compute it is giving error state
<suresh_> please someone help
<suresh_> hii all, I installed juju on ubuntu 16.04 and while running "juju quickstart" command
<suresh_> i am getting this error
<suresh_> interactive session closed juju quickstart v2.2.4 bootstrapping the local environment sudo privileges will be required to bootstrap the environment juju-quickstart: error: error: flag provided but not defined: -e
<suresh_> my juju version is  "2.0-beta12-xenial-amd64"
<rick_h_> suresh_: hmm, you shouldn't have a juju-quickstart command in 16.04 with the juju there.
<suresh_> please someone help
<rick_h_> suresh_: did you install juju-quickstart? can you remove it?
<suresh_> rick_h: yes i do
<rick_h_> suresh_: please check out https://jujucharms.com/docs/stable/getting-started for getting started
<suresh_> ok thank you
<suresh_> rick_h: Another problem i am facing is while deploying the "nova-compute" charm from the store
<rick_h_> suresh_: what is the error? have you looked at the logs of the charm? you can get there by running a juju ssh to the unit and then looking at the log in /var/log/juju/unit-xxxxx where xxxx looks like the novaa-compute unit
<suresh_> i am getting "E: Sub-process /usr/bin/dpkg returned an error code (1)"
<suresh_> it is trying to install nova-compute and some packages but it is failing; the last log it is showing is
<suresh_> subprocess.CalledProcessError: Command '['apt-get', '--assume-yes', '--option=Dpkg::Options::=--force-confold', 'install', 'nova-compute', 'genisoimage', 'librbd1', 'python-six', 'python-psutil', 'nova-compute-kvm']' returned non-zero exit status 100
<suresh_> rick_h: are you around
<rick_h_> suresh_: sorry, in and out on meetings/etc
<suresh_> rick_h: have you seen the error i pasted
<axino> suresh_: try running the command on the unit and see why it fails
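i.e., something like this, with the unit name illustrative:

    juju ssh nova-compute/0
    sudo apt-get --option=Dpkg::Options::=--force-confold install nova-compute nova-compute-kvm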
<suresh_> i tried to run the command on the unit too
<suresh_> it is giving the same error
<suresh_> invoke-rc.d: initscript nova-compute, action "start" failed. dpkg: error processing package nova-compute (--configure):  subprocess installed post-installation script returned error exit status 1 E: Sub-process /usr/bin/dpkg returned an error code (1)
<suresh_> axino: other components i am able to deploy without any error
<axino> suresh_: apparently it's failing on nova-compute start, try : sudo /etc/init.d/nova-compute start
<suresh_> axino: I will try and let you know the status
<suresh_> axino: i ran that command but it is saying "sudo: /etc/init.d/nova-compute: command not found"
<axino> ugh
<axino> suresh_: sudo start nova-compute
<suresh_> yeah it is giving "start: Job failed to start"
<suresh_> what i need to do
<suresh_> axino: are you around
<axino> suresh_: not really, and not for long I'm afraid
<axino> suresh_: you can look at /var/log/upstart/nova-compute.log and /var/log/nova/*.log
<axino> suresh_: good luck !
<cory_fu> kjackal: Welcome back.  I'm here now
<cory_fu> Sorry I missed you earlier
<kjackal> Hey cory_fu, I wanted some help with some python dependency hell with cwr
<cory_fu> kjackal: Hrm.  I didn't run in to any issues that tox didn't handle for me.  Jump on daily?
<kjackal> managed to deploy cwr on a clean container but i guess i also need the juju-core env
<kjackal> yes, daily
<beisner> bdx, rick_h_ - traveling a similar path :) ... https://bugs.launchpad.net/juju/+bug/1614364
<mup> Bug #1614364: manual provider lxc units are behind NAT, fail by default <amd64> <manual-provider> <s390x> <uosci> <juju:Triaged> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1614364>
<beisner> and https://bugs.launchpad.net/juju/+bug/1615917
<mup> Bug #1615917: juju openstack provider --to lxd results in unit behind NAT (unreachable) <openstack-provider> <uosci> <juju:Triaged> <https://launchpad.net/bugs/1615917>
<rick_h_> beisner: heh :)
<suresh_> hii all i am getting error in  "/var/log/upstart/nova-compute.log" file
<suresh_> like "modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-32-generic/modules.dep.bin'"
<suresh_> i deployed nova-compute using juju
<suresh_> start nova-compute giving above error
<beisner> hi suresh_ - can you give us a pastebin of the `juju status` output so we can get a sense of the topology?
<suresh_> beisner: here is my juju status
<suresh_> http://paste.openstack.org/show/562483/
<suresh_> beisner: are you around
<beisner> yep, one sec
<beisner> hi suresh_ can you tell us about machine 12?  is it a container?
<suresh_> beisner: yes it is a container
<beisner> suresh_, generally-speaking, nova-compute and neutron-gateway units must be on metal.  it is possible to deploy the whole stack into containers using this approach:  https://github.com/openstack-charmers/openstack-on-lxd
<suresh_> I followed this also
<suresh_> but after this "juju deploy bundle.yaml" command it is saying deployment completed
<suresh_> beisner: but in juju status all of them are in "error" state
<beisner> suresh_, it looks like your deployment is using juju 1.25.6, where the openstack-on-lxd example requires juju 2.0 (currently in beta).
<suresh_> beisner: not this environment
<suresh_> I deployed on ubuntu 16.04 and followed that github repo
<suresh_> there i am getting all the states are "error"
<beisner> suresh_, the pastebin shows Juju 1.25.6 is in use
<suresh_> beisner: actually i have two environments
<suresh_> and in another i have juju 2.0
<beisner> suresh_, that's the one i would focus on, as expected-to-work.
<suresh_> beisner: here i am deploying that in a vm which has 12 GB RAM, 80 GB DISK and 5 cpu cores
<suresh_> is it enough to "Deploy OpenStack on LXD"?
<beisner> suresh_, if you see failures with Juju 2 current beta, and the openstack-on-lxd procedure, please provide details on that.  thanks!
<suresh_> beisner: I am deploying this one and will let you know where I get stuck
<suresh_> besiner: how much time will you be here?
<beisner> hi suresh_ - ~6 more hrs
<suresh_> beisner: thanks i will report the errors i will get
 * D4RKS1D3 Hi 
<suresh_> beisner: while running this command "sudo lxd init"
<suresh_> it is asking for Name of the storage backend to use (dir or zfs):
<suresh_> what do i need to give
<marcoceppi> suresh_: dir, unless you have ZFS set up
<suresh_> and this Address to bind LXD to (not including port)
<suresh_> can i leave it empty
<marcoceppi> suresh_: again, that's up to you, 0.0.0.0 is generally okay
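A typical interactive run for this all-in-one setup, with the answers suggested in this discussion (sketch):

    $ sudo lxd init
    Name of the storage backend to use (dir or zfs): dir
    Would you like LXD to be available over the network (yes/no)? no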
<bdx> beisner: excellent. I put some heat on there for ya'
<bdx> beisner: you may as well just make a general bug for all providers != MAAS, ya?
<beisner> hi bdx, i'll leave that up to juju core triage, but i suspect each will be tracked separately as each provider would likely be addressed separately in dev efforts.
<bdx> beisner: gotcha, thanks for filing those!
<lazyPower> rick_h_ - do you recall the command to remove a controller from your $JUJU_DATA?
<lazyPower> i'm pretty sure i mailed the list about it, but i'm having a dandy time trying to find it
<rick_h_> lazyPower: unregister
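For reference, with a hypothetical controller name:

    juju unregister mycontroller    # drops the entry from $JUJU_DATA without touching the controller itself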
<lazyPower> thank you!
<rick_h_> lazyPower: yes, mailed the list and filed a bug and we updated the help docs in response to the bug
<rick_h_> lazyPower: np
<beisner> bdx, yw.  thanks for the input
<suresh_> besiner: when do i need to run this command "sudo ppc64_cpu --smt=off"
<suresh_> marcoceppi: are you around
<lazyPower> I've noticed a lot of bugs getting moved to the /juju project in launchpad (from juju-core), should we start opening bugs against /juju? or continue filing them against juju-core?
<marcoceppi> lazyPower: check the mailing list (yes)
<marcoceppi> suresh_: yes?
<lazyPower> haha 4 minutes ago
<lazyPower> \o/
<rick_h_> lazyPower: hey, there was an email to the cloud list warning of this weeks ago :P
<rick_h_> lazyPower: but yea, it's done today
<marcoceppi> YEAH lazyPower READ YOUR EMAILS
<lazyPower> ah, well, i just noticed in my bug-mail-feed its been a slew of project moving, soooo
<rick_h_> :)
<rick_h_> we just wanted to flood your inbox
<rick_h_> and flooding my own to no end was so worth it!
<lazyPower> i make no apologies for missing information in this black hole of messaging
 * lazyPower points @ his inbox
<lazyPower> its nicknamed e-fail for a reason
<suresh_> marcoceppi: how much time will be taken by this command "juju bootstrap --config config.yaml localhost lxd"
<marcoceppi> suresh_: depends, but at most 10 mins?
<suresh_> marcoceppi: can we monitor logs regarding this command
<marcoceppi> suresh_: if you issue the --debug flag when you run the command it will be more verbose
<suresh_> marcoceppi: i ran this command 20 minutes ago
<suresh_> and it is still waiting at apt-get update; here is the output i pasted http://paste.openstack.org/show/562516/
<suresh_> marcoceppi: can i interrupt this command to rerun with --debug
<marcoceppi> yes
<suresh_> marcoceppi: i ran that command with the --debug option
<suresh_> and the log pasted here http://paste.openstack.org/show/562519/
<suresh_> it is waiting at "Running apt-get update"
<suresh_> I enabled ipv6. Is this a problem?
<suresh_> beisner: are you around
<beisner> hi suresh_
<suresh_> beisner: yeah i am deploying openstack with juju by following this link https://github.com/openstack-charmers/openstack-on-lxd
<suresh_> while executing this command "sudo lxd init"
<suresh_> i enabled Ipv6 also
<mattrae> hi, when i did 'juju create-backup' it wanted me to switch to the controller model. after doing 'juju switch controller' i was able to run create-backup. does the backup also contain the default model? if i grep through the backup file for a name of my service, i can see some matches.. but i just want to make sure i've backed up everything
<suresh_> besiner: and my problem is that while "Bootstrapping a Juju controller" it is waiting at "Running apt-get update"
<suresh_> and the log of that bootstrap command is pasted here http://paste.openstack.org/show/562519/
<suresh_> beisner: have you seen my log
<beisner> suresh_, i see fd7d:b856:c794:1a4:216:3eff:fea1:5c73 port 22: Connection refused.  i've not personally validated this with ipv6.  my suggestion would be to first run through the example pretty much verbatim (ipv4), make sure everything works as expected.
<suresh_> beisner: I will try only enabling ipv4
<suresh_> and will update if any issues
<suresh_> sudo lxd init is asking Address to bind LXD to (not including port)
<suresh_> beisner: can i give localhost here
<beisner> suresh_, i believe so, but for the all-on-one deploy, i usually answer 'no' to 'Would you like LXD to be available over the network (yes/no)?'
<suresh_> beisner: i have given ip as 0.0.0.0
<suresh_> and Would you like LXD to be available over the network is 'yes'
<suresh_> and i enabled only ipv4
<suresh_> and i ran juju bootstrap --config config.yaml localhost lxd command
<suresh_> http://paste.openstack.org/show/562529/
<suresh_> the above is the log and i got stuck at Running apt-get update
<beisner> suresh_, it seems like your containers may not have internet access?
<beisner> suresh_, i just bootstrapped successfully against a fresh xenial install, after doing sudo lxd init, yes to network, localhost as the binding.
<suresh_> beisner: containers are getting internet
<suresh_> and how much time will it take to bootstrap
<suresh_> and also, did you select zfs or dir?
<beisner> suresh_, outside of juju, and unrelated to openstack, this should all succeed:  can you try http://pastebin.ubuntu.com/23082706/ to confirm that is the case?
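Judging by the follow-up discussion, that pastebin is along these lines — exercising the network from a plain LXD container, independent of Juju (test123 is the container name used below):

    lxc launch ubuntu:xenial test123
    lxc exec test123 apt-get update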
<mattrae> hi, i'm trying juju restore-backup, but I can't find a syntax that doesn't give an error. any idea where i'm going wrong? https://gist.github.com/raema/4b70b3593f84e852a9fd22c4ab3f139f
<suresh_> beisner: yeah it is waiting at bootstrap command
<suresh_> beisner: can you paste your logs how you executed the bootstrap
<suresh_> beisner: sorry, i am looking at your pastebin
<suresh_> and let you know the output
<beisner> suresh_, sure: http://pastebin.ubuntu.com/23082729/
<beisner> suresh_, but if anything in that first pastebin **2706 fails, there is a config or network issue on the host or network
<suresh_> beisner: you installed on baremetal or VM
<suresh_> here i am trying on VM
<beisner> suresh_, this is inside a vm
<suresh_> beisner: the commands in the first pastebin **2706 are working properly, but apt-get update is giving some errors
<suresh_> http://paste.openstack.org/show/562535/
<beisner> suresh_, if you do that a few times in a row, do you get the exact same failure?
<suresh_> beisner: oh, can i redeploy the setup again
<beisner> suresh_, i mean just the `lxc exec test123 apt-get update` command
<suresh_> again it is giving same result
<beisner> suresh_, identical to http://paste.openstack.org/show/562535/?
<suresh_> beisner: yes
<suresh_> my host machine is also giving the same result for apt-get update
<beisner> suresh_, perhaps that is a transient issue with a mirror
<suresh_> beisner: can i redeploy it again
<beisner> suresh_, if the host can't do an apt-get update, i wouldn't try a redeploy yet.  Err:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages   Hash Sum mismatch
<beisner> that needs to work from the host in your network before juju bootstrap will succeed
<suresh_> beisner: Name of the storage backend to use (dir or zfs):
<suresh_> what you used
<suresh_> beisner: are you around
<beisner> suresh_, i used zfs but dir should be fine too
<suresh_> beisner: i used dir
<lazyPower> mattrae - thats rough :( I haven't used the plugin myself so i'm not certain how to guide you other than to file a bug and that will get the proper eyes on the issue at hand.
<suresh_> beisner: i am following your paste-bin http://pastebin.ubuntu.com/23082729/
<beisner> suresh_, is `apt-get update` working on the host, in the vm, and in a lxc container?
<bdx> postgresql-peeps: say I have an application that needs to connect to 2 separate postgresql database instances, is there a way to react to the states of the same service under a different name? Is this done by providing service sensitive interface names?
<suresh_> yes, it is working; after a few apt-get updates on the host it got output like http://paste.openstack.org/show/562540/
<suresh_> beisner: now it is making some progress on the bootstrap command
<petevg> cory_fu: you were right that the hadoop processing test might unearth an issue w/ dropping the openjdk relation. It seems to mess up the namenode relation for the slave machines, though I'm not 100% clear why
<petevg> (either something is firing too soon, or some java lib isn't getting installed)
<petevg> cory_fu: error from the logs on the slave machine: http://paste.ubuntu.com/23082956/
<petevg> (That error happens if I tell bigtop to install java, whether or not I then go to add the openjdk relation, btw.)
<petevg> cc kwmonroe ^
<cory_fu> petevg: Hrm.  The UnboundLocalError is from an out-of-date jujubigdata
<cory_fu> But it's only covering up a timeout error anyway
<cory_fu> I honestly did not expect it to actually fail
<petevg> cory_fu: yep. The more relevant part of the log is probably the connection refused bits.
<cory_fu> petevg: Right.  Should probably check the NameNode log to see if it failed to start and why
<petevg> cory_fu: it has failed. All my slaves say "hook failed: "namenode-relation-changed" for namenode:datanode"
<cory_fu> petevg: I know that it *did* fail.  I'm saying that the java change *shouldn't* have caused that, according to my understanding
<petevg> cory_fu: I don't see anything obvious in the reactive handlers that would cause it to fail, or even have different timings :-/
<petevg> The openjdk charm does set JAVA_HOME to be inside of the jre directory, while we set JAVA_HOME to be one level up. Everything is symlinked from that level, though, so unless something isn't following a symlink, that should be fine ...
<petevg> cory_fu: I rebased my apache-bigtop-base branch, and I'm going to redeploy; maybe there's something interesting in the error that it's chomping ...
<firl> lazyPower you around?
<lazyPower> firl - i am, whats up
<firl> I will have some time over the next couple days if you wanted me to try getting the kubernetes bundle working
<firl> ( inside openstack )
<lazyPower> firl - sure! We verified it works inside openstack yesterday, but i'm more than happy to get additional feedback on what worked well for you vs what was rough around the edges
<firl> oh sweet
<firl> juju2 only?
<lazyPower> juju 1 actually, we had to gut the 2.0 features so we could get a clean weather report on the bundles
<lazyPower> so, either/or works swimmingly
<lazyPower> http://status.juju.solutions/test/9f58fe960c8b4216ac93c1b71aefdb07  -- latest test results with the observable bundle
<lazyPower> http://status.juju.solutions/test/fb39dcbd7f90454aa494fe6a6e6a5129 -- latest results with the core bundle
<lazyPower> i'm thinking we will get an openstack provider enabled on this at some point in the not so distant future. but public cloud results are a decent litmus
<firl> nice
<firl> You were mentioning about having ingress working with traefik?
<cholcombe> in the layer.yaml can you point it at interfaces that are local to your machine for testing?
<mattrae> hi, how do i remove a machine from the controller model after using enable-ha to add additional controller machines? now destroy-machine is telling me that the machines are required by the model https://gist.github.com/raema/a8b8f9ab6c33572fc0ac263e91e6025e
<kwmonroe> petevg: did you get your namenode:datanode issues resolved?
<petevg> kwmonroe: nope. I'm still poking at it.
<kwmonroe> so one thing i've learned petevg, is not to trust the hook that actually failed.  like cory_fu said, check the namenode logs (/var/log/hadoop*).  i'd bet money you have an OOM or something that's not quite java related.
<petevg> kwmonroe: I did. There's nothing obviously broken in the logs (the one error I saw, I wasn't able to reproduce more than once).
<kwmonroe> petevg: if you have a broken env, check 'hdfs dfsadmin -report' to see if hdfs is there
<kwmonroe> also petevg, is this aws or lxd?
<petevg> kwmonroe: I'm just re-setting up a broken environment right now. I was trying to setup two environments in parallel, but amazon was unhappy about that (I suspect I might have a machine limit on my account).
<petevg> kwmonroe: aws. lxd fails for other reasons.
<kwmonroe> roger that petevg.. lxd failures (though concerning) would be more explainable with container hostname resolvability
<petevg> Yeah. I'm pretty certain that's the lxd issue.
<kwmonroe> i guess all that's left is to blame your code ;)
<kwmonroe> i +1 your suspicion that there's an account limit preventing you from multi aws deployments.. though i think those are region limits.. you should be able to setup an aws-east and aws-west and make gravy.
<petevg> kwmonroe: yep. At least it's not an obvious mistake. I can deploy with revised bigtop base layer, with bigtop_jdk turned off, and everything works.
<petevg> kwmonroe: Cool. I will try that next.
<kwmonroe> oh, well poop.  if bigtop_jdk changes your life, that's on us.
<petevg> kwmonroe: it does look like it might be a problem talking to hdfs: http://paste.ubuntu.com/23083287/
<petevg> (I get that error both on the namenode and the slave)
<kwmonroe> petevg: can you get on the namenode and verify there's a java process running?  (ps -ef | grep java)
<kwmonroe> petevg: and if so, verfiy the NN is listening (sudo netstat -nlp | grep 8020)
<petevg> kwmonroe: interesting. There isn't one running. (Java is installed, and setup in /etc/alternatives).
<kwmonroe> ok petevg, /var/log/hadoop-hdfs* must tell you something
<kwmonroe> if it doesn't, i'll give you a coors light in pasadena
<petevg> kwmonroe: aha. There are errors, there.
<petevg> "java.io.IOException: NameNode is not formatted."
<kwmonroe> oh ffs
<petevg> kwmonroe: http://paste.ubuntu.com/23083464/
<petevg> (context)
<kwmonroe> petevg: this is kindof a big deal.. why isn't https://github.com/juju-solutions/jujubigdata/blob/master/jujubigdata/handlers.py#L478 being run?
<petevg> grepping code ...
<petevg> kwmonroe: hmmm ... we don't call that function explicitly in layer-hadoop-namenode
<petevg> kwmonroe: it's dinner time for me. Tomorrow morning, I am going to grab all the relevant layers and interfaces and libs, and trace how that function gets called. My guess is that something is relying on a status set by the openjdk layer, but it's not trivially greppable, in the bigtop repo, or in the bigtop base layer.
<kwmonroe> ack petevg
<kwmonroe> fwiw, jbd handlers "format_namenode" might be a red herring.. i don't see where that's called at all.  which makes it true, but not right.
<petevg> Heh.
<petevg> Maybe the next thing to do is to read the openjdk charm, to see what its doing that bigtop isn't.
<petevg> (I skimmed it, but might be time for a deep dive.)
<kwmonroe> no, openjdk is my charm.  there's nothing wrong with that.
<petevg> :-)
<kwmonroe> :)
<petevg> Anyway ... going to go get noms. Thx for all the help, kwmonroe. I'll poke at it more in the morning, and bug you about it if I'm still stuck.
<kwmonroe> word.  nom for us all.
<kwmonroe> hey petevg, i see this on a normal deployment of hadoop-processing:
<kwmonroe> unit-namenode-0: 2016-08-23 21:03:51 INFO unit.namenode/0.java-relation-changed logger.go:40 Debug: Executing '/bin/bash -c 'hdfs namenode -format -nonInteractive >> /var/lib/hadoop-hdfs/nn.format.log 2>&1''
<kwmonroe> will you check your /var/lib/hadoop-hdfs/nn.format.log to see if there are details there?
<kwmonroe> (i know you're nom'ing, just leaving here for when you get back.. petevg petevg petevg)
<petevg> kwmonroe: interesting. I do see the call to format namenode, but the only line in the log is an error about JAVA_HOME not being set.
<petevg> kwmonroe: my revised code does attempt to set JAVA_HOME, though (and I can see it successfully writing it to the bigtop defaults).
<petevg> Maybe it winds up getting set later on, in a way that works for things that I've tested to work like Zookeeper, but doesn't work here.
<petevg> kwmonroe: that's a concrete thing that I can actually go and see about fixing. Thank you :-)
<petevg> In other news, the chickens appear to have been slacking off, and I have to run to the store for eggs lest dessert and breakfast plans get spoilt. Catch ya later :-)
#juju 2016-08-24
<cory_fu> kwmonroe: Pretty sure the Puppet scripts should handle the formatting of the namenode.  https://github.com/apache/bigtop/blob/master/bigtop-deploy/puppet/modules/hadoop/manifests/init.pp#L639
<lazyPower> firl sorry i dropped off there. ping me when you're back around and i'm happy to resume where we left off
<lazyPower> firl - there's 2 ingress options, traefik and nginx. i have rc defs for both, i'm undecided which is a better option at this point
<lazyPower> both have some problems with session affinity i've noticed
<firl> lazyPower - no worries
<firl> I just remember that juju left the security groups blocked and no ports opened last time I tried.
<firl> Is there a way to have it accessible externally now?
<firl> ( svc equiv )
<lazyPower> I've got a todo item to work on a daemon to read the ingress and open ports accordingly
<lazyPower> it hasn't been completed yet, it's still very much a juju run open-port operation at the moment :(
<lazyPower> but, we're aware and moving towards fixing it. I suspect we'll have something for you to look at there within the next month or so assuming we dont get reprioritized
<firl> :)
<firl> haha ok
<firl> I thought you were saying you were planning on having traefik be the ingress object like the GCE load balancer implementation for juju
<firl> Is the thought just to get nodeport working?
<lazyPower> well we have nodeport basically working minus the firewalling
<lazyPower> if you open port 443/80, and stuff in the nginx/traefik  ingress controllers, that handles a good chunk of the workloads
<firl> ya
<lazyPower> what that leaves out in the cold however, is socket based services like irc bouncers, rabbitmq, and workloads like that. Where odd ports may need connectivity. NGINX isn't the best middle man for those workloads. I've been considering building a socat container to handle some of those middlewares
<lazyPower> socat is pretty good at proxying connections...
<firl> I havenât tested traefik load balancing / ws connections
<lazyPower> the LB works, WSS seemed to fall down if it required session affinity
<lazyPower> but the nginx ingress controller handled it beautifully
<firl> gotcha
<lazyPower> even though traefik says they support it, i am unconvinced
<lazyPower> and its likely PEBKAC
<lazyPower> or picnic, take your pick :(
<firl> I am using nginx proxy RCs right now.  ployst/nginx-ssl-proxy
<firl> however I still put a svc in front so that I can keep the same ip effortlessly
<lazyPower> right, thats how you do it
<lazyPower> the svc gives you that iptables forward rule that no matter where you enter into the cluster it routes accordingly, which is somewhat nice even if overly complex
<firl> yeah, from a production standpoint it's a non-starter for me
<firl> ( if itâs not there that is )
<lazyPower> so, we should sync in the very very near future again, so we can check the checklist together
<lazyPower> did you get that ss i sent over?
<lazyPower> brb
<firl> just saw it ( requested permission to see it )
<thumper> lazyPower: ping
<lazyPower> thumper pong
<thumper> lazyPower: hey, looking to test migration of a unit with payloads
<thumper> do you know of any?
<thumper> even fake ones?
<lazyPower> thumper - we gutted it, older versions of etcd have payloads though
<lazyPower> let me check my namespace
<lazyPower> thumper  charm show cs:~lazypower/etcd-21
<lazyPower> there's one for ya, including the payload(s)
<thumper> ta
<lazyPower> np
<lazyPower> firl :| i'm not the doc owner. so whenever matt gets that mail :P
<firl> no worries
<lazyPower> ah but i can edit the sharing perms. give it another go
<lazyPower> should be able to get in now
<firl> i can see it
<thumper> lazyPower: can I use that charm in the lxd provider?
<thumper> and how do I get it to register some payloads?
<lazyPower> thumper  yes
<lazyPower> and what do you mean register some payloads?
<lazyPower> OH! i misread that as resources... really sorry chap
<lazyPower> my mistake
<lazyPower> it is after all after 10pm
<thumper> :)
<thumper> lazyPower: all I need is for it to register one payload
<thumper> so I can migrate it to another controller
<thumper> and make sure the payloads are still there :)
<thumper> lazyPower: so does that charm use resources or payloads?
<lazyPower> it uses resources
<thumper> :(
<lazyPower> https://jujucharms.com/u/lazypower/idlerpg
<thumper> fwiw install hook failed
<lazyPower> but it wont run in lxd
<thumper> it doesn't have to "run" but does need to register payload
<thumper> I'm guessing it wont
<lazyPower> its going to have to "run" to register a payload
<lazyPower> its pulling in a docker image
<lazyPower> thats what it would register as the payload
<thumper> hmmm...
<thumper> I suppose I could deploy anything then just juju run the register command right?
<rts-sander> is there an example of how to use: https://github.com/juju/juju/tree/master/api in a charm?
<kjackal> rts-sander: Hi there, not sure if this helps, but there is this python library https://launchpad.net/python-jujuclient
<kjackal> rts-sander: if you are trying to make a charm talk to juju you can look at what the juju gui is doing
<rts-sander> kjackal, I tried https://github.com/kapilt/python-jujuclient but it only works for juju 1, I'm using 2
<rts-sander> I'm reading through the juju-gui charm now to see if I can find how they do it
<kjackal> I thought jujuclient lib also supports juju2 since it has this "juju2" path...
<rts-sander> yeah the code looks more up to date than the code on github
<suresh_> hii all, I installed juju 2.0 on ubuntu 16.04 by following this link https://jujucharms.com/docs/stable/getting-started
<suresh_> I deployed two charms; they are still in pending state. here is the juju status i pasted http://paste.openstack.org/show/562911/
<suresh_> and in the log i am getting this http://paste.openstack.org/show/562912/
<suresh_> please someone help?
<rts-sander> is there an error in the juju go project? http://pastie.org/10939526
<rts-sander> did go get and trying to use api but it doesn't even compile
<jrwren> rts-sander: there is a makefile and godeps which must be used.
<rts-sander> cheers jrwren godeps did it
<jcastro> balloons: ok so we just need them to push a new snap-confine with our LXD fixes and we should be good to go wrt that fix for running lxd with snappy juju right?
<ram____> Hi all. For writing a JUJU Charm, which language is preferred by the community? Actually I want to develop a charm in Shell script.
<marcoceppi> ram____: shell script is OK, python seems to be what everyone uses
<ram____> marcoceppi: Ok. Thank you. How can I enable a customized charm as part of an Ubuntu Autopilot installation? I mean, how can we integrate our customized juju charm with an autopilot openstack deployment
<marcoceppi> ram____: what charm are you customizing?
<ram____> marcoceppi: I want to configure cinder to change the backend to one of our storage drivers. So now we are developing a charm to handle that configuration.
<marcoceppi> ram____: so you don't need to modify cinder, your charm would be like the other cinder backends. However, to get into the autopilot the charm must first join our OIL program: http://partners.ubuntu.com/programmes/openstack
<marcoceppi> here are a few examples of charms that are similar to what you describe: https://jujucharms.com/u/marcoceppi/cinder-xtremio https://jujucharms.com/u/marcoceppi/cinder-vnx
<SimonKLB> hey! trying to deploy the openstack bundle on aws using juju 2.0 beta15, looks like it is panicking because it's using lxc instead of lxd containers - does this need to be updated in the bundle configuration?
<marcoceppi> SimonKLB: that's odd, juju 2.0 should translate lxc -> lxd automatically
<marcoceppi> if it's not, it's a bug. Though, a quick fix would be to update the bundle to include lxd: instead of lxc
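i.e., in the bundle's placement directives (sketch):

    # before
    to: ["lxc:1"]
    # after
    to: ["lxd:1"]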
<SimonKLB> 2016-08-24 11:32:23 INFO juju.provisioner container_initialisation.go:98 initial container setup with ids: [6/lxc/0 6/lxc/1 6/lxc/2]
<SimonKLB> 2016-08-24 11:32:23 INFO juju.worker runner.go:262 stopped "6-container-watcher", err: worker "6-container-watcher" exited: panic resulted in: runtime error: invalid memory address or nil pointer dereferencey
<marcoceppi> SimonKLB: I think that lxc line is a red herring
<marcoceppi> the second line is definitely interesting
<SimonKLB> yea i might be mistaken, but i thought it was caused by: https://github.com/juju/juju/blob/master/worker/provisioner/container_initialisation.go#L102
<marcoceppi> SimonKLB: possibly, I'd poke the developers in #juju-dev about that one
<marcoceppi> it might be quite a serious bug
<SimonKLB> will do!
<ram____> marcoceppi: Thank you .
<SimonKLB> marcoceppi: is it possible to remove a whole bundle or do you have to remove the applications individually?
<marcoceppi> SimonKLB: each application individually
<SimonKLB> okok!
<ram____> marcoceppi: how can we certify a juju charm? How much time will it take to certify?
<marcoceppi> ram____: that's something that you should inquire with OIL folks
<ram____> marcoceppi: Ok. How can I get in contact with the OIL folks?
<marcoceppi> ram____: http://partners.ubuntu.com/programmes/openstack
<ram____> marcoceppi : OK. Thank you.
<petevg> kwmonroe, cory_fu: With kwmonroe's help last night, I think that I know why I'm seeing namenode failures: When we setup the openjdk relation, we set JAVA_HOME in /etc/environment. This happens before puppet runs. It looks like Bigtop doesn't do this when it installs java by itself, which means that hadoop_java_home never gets set by the puppet script, and
<petevg> hdfs fails to start. (We don't setup JAVA_HOME in /etc/defaults/bigtop-utils until after puppet runs, so even if puppet has a fallback to that value if it can't find it in /etc/environment, it won't have that fallback until after it has tried and failed to start hdfs.)
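In other words, the openjdk-relation path effectively does something like this before puppet runs — the JDK path shown is the usual Xenial openjdk-8 location, an assumption here:

    echo 'JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' | sudo tee -a /etc/environment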
<cory_fu> petevg: I'm not sure I understand.  You're saying it fails because we *do* set up JAVA_HOME correctly?
<petevg> cory_fu: nope. I'm saying it fails because bigtop *doesn't*
<cory_fu> Also, pretty sure Bigtop & Puppet ignore /etc/environment entirely
<cory_fu> petevg: We've been using the java relation with Bigtop Hadoop this entire time.  Why is it only failing now?
<petevg> cory_fu: it's not failing when we use the openjdk charm.
<petevg> It's failing when we don't use it.
<petevg> This is me testing the "make java relation optional" stuff.
<ram____> marcoceppi: Which version of Autopilot should we use to deploy Liberty OpenStack? Do you have any idea?
<petevg> cory_fu: I know that bigtop is *supposed* to ignore /etc/environment, but I don't think that it is doing so.
<cory_fu> I really don't understand.  The puppet scripts and Bigtop don't look at or care about /etc/environment, AFAIK.  The fact that we update that is just an artifact of how we were doing Java handling prior to Bigtop
<marcoceppi> ram____: I'm not sure, but Mitaka is what we currently deploy.
<petevg> cory_fu: I suspect that it's an artifact that was masking a bug in bigtop (or a bug in the way that we're asking Bigtop to setup namenode).
<cory_fu> Also, if you're saying that it fails when we *don't* use the java relation, then how does /etc/environment come in to it at all?  It should just be using the built-in Puppet installation of java at that point
<cory_fu> I know we have deployed it with that built-in java management before, because we used that prior to adding the java relation
<petevg> cory_fu: it does use it. And hdfs fails to start, complaining that JAVA_HOME is not set.
<petevg> cory_fu: there's some context that you're missing -- see kwmonroe and my convo from yesterday evening, around 18:20, Eastern time.
<ram____> marcoceppi: For Mitaka, which autopilot version is used?
<marcoceppi> ram____: the latest? I'm not 100% sure
<ram____> marcoceppi : OK. Thank you.
<petevg> cory_fu: basically, namenode is failing to start hdfs, and the clue to what's happening lives in nn.format.log, which is very short: http://paste.ubuntu.com/23085142/
<cory_fu> petevg: We have definitely run this w/o the java relation successfully before.
<petevg> cory_fu: Maybe I need to pass a third value in to puppet, beyond jdk_preinstalled and jdk_package_name? (If so, I don't see where -- I'm looking at the relevant puppet script now.)
<petevg> cory_fu: does passing in hadoop_java_home sound familiar?
<cory_fu> No
<petevg> This is the relevant line from hadoop-env.sh:
<petevg> http://paste.ubuntu.com/
<petevg> That's set to undef in puppet/modules/hadoop/manifests/init.pp
<petevg> initialized to undef, I should say.
<cory_fu> petevg: I'm pretty sure that the *only* thing we did prior to adding support for the java relation was set bigtop::jdk_package_name.  So, that should be what we do if the relation is attached.
<cory_fu> Sorry, if the relation is *not* attached
<cory_fu> petevg: Also, that link isn't what you meant to send
<ram____> marcoceppi: How can we test the newly created cinder-storage driver charm locally? Any idea?
<marcoceppi> ram____: you can juju deploy openstack onto LXD then deploy your cinder-storage charm and relate it to cinder
<marcoceppi> ram____: https://github.com/openstack-charmers/openstack-on-lxd
<petevg> cory_fu: whoops. http://paste.ubuntu.com/23085177/
<kjackal> cory_fu: I have a couple of mini-fixes on cwr, should I submit a PR just to show them to you?
<kjackal> petevg: I wonder if we could tell bigtop to go and install whatever java it sees fit, instead of us setting the java package name on the config.
<petevg> kjackal: that would be nice. It did not do so when I tried, though. I initially just set  jdk_preinstalled to false, and didn't discover jdk_package_name until I was trying to figure out why that failed.
<petevg> I think that it just tries to do "apt install jdk" in that case, which isn't a valid package name.
<kjackal> that would be an easy fix upstream
<ram____> marcoceppi: I followed the provided link https://github.com/openstack-charmers/openstack-on-lxd. I am getting an error. I pasted juju status error log. http://paste.openstack.org/show/563013/.
<marcoceppi> ram____: at this point, you should probably join #openstack-charms for support
<petevg> kjackal: possibly. The script is in the distro agnostic bits of Bigtop; I'm not sure that you can drop in a string that will make a good default across distros.
<ram____> marcoceppi: OK. Thank you.
<cory_fu> petevg: Link from HO: https://github.com/puppetlabs/puppetlabs-java
<petevg> cory_fu: thx
<kjackal> cory_fu: I also looked at why cwr does not work with mongodb test plan
<valeech> Running a fresh install of juju 2.0beta15 bootstrapped to a MAAS 2.0 rc4. Via the juju gui, I added the openstack base bundle to the canvas. When I do this, it sets all the application names to xenial-X where X is the next letter available. Is there something I need to do to get it to keep the application name (ie ceph-mon, neutron-gateway)?
<kjackal> cory_fu: it seems that the issue is not with cwr. Mongodb fails/hangs when tested through bundle tester (apt-get install permissions)
<cory_fu> kjackal: Good to know.  We should change the example to something that actually works
<cholcombe> cory_fu, i have a ceph relation that sets a parameter that has dashes in it.  It's a param in a dict.  auto accessors won't work for me.  Is there a workaround for that?  I'd like to avoid adding another relation if i can
<ram_____> Hi. I followed https://jujucharms.com/docs/stable/getting-started and deployed the wiki charm. It was giving an error; pasted error log: http://paste.openstack.org/show/563091/. Please provide me the solution.
<cory_fu> cholcombe: You can always use conversation.get_remote() directly, but auto-accessors also translate hyphens to underscores, so you should be able to access prop-foo as rel.prop_foo()
<cholcombe> cory_fu, oh ok.  i'll try that underscores first
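A minimal sketch of the behaviour cory_fu describes, assuming an interface layer with a hyphenated relation field (the class and field names here are illustrative, not the ceph interface's real ones):

    # requires.py of a hypothetical interface layer
    from charms.reactive import RelationBase, scopes

    class StorageRequires(RelationBase):
        scope = scopes.GLOBAL
        # each entry grows a generated method; 'prop-foo' becomes self.prop_foo()
        auto_accessors = ['prop-foo']

        def prop_foo_raw(self):
            # equivalent direct read, bypassing the auto-accessor
            return self.conversation().get_remote('prop-foo')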
<sunnnny> Hi all,
<marcoceppi> ram_____: all of your containers are stuck in pending
<marcoceppi> ram_____: can you paste the output of `juju status --format yaml`
<sunny> Hi all, I have deployed Openstack Liberty with github openstack charms- branch 16.07/stable in HA. After deployment I am hitting one issue with Nova cloud controller.
<sunny> Is this right place to ask questions  regarding openstack charm ?
<jcastro> sunny: yeah, ask away!
<ram_____> marcoceppi: paste output of # juju status --format yaml    http://paste.openstack.org/show/563096/
<marcoceppi> ram_____: "Failed to get device attributes: no such file or directory" that's an interesting error
<sunny> I have an HA deployment of Liberty with Openstack charms branch 16.07/stable. After the deployment I am hitting an issue which I think is with the HA Nova cloud controller (NCC). My compute service (nova service-list) is flapping UP/DOWN depending on which HA unit of Nova cloud controller is up. If the request goes to unit NCC/0 (when this unit is UP) then it says nova-compute state is UP, but when NCC/1 services are UP then it reports as DOWN
<sunny> This is causing VM spin up to fail as it complains "no valid host found" ( at times then nova-compute state is down). Can you please point me why I am seeing that issue ?
<sunny> jcastro: http://paste.ubuntu.com/23085632/ take a look at this link
<jcastro> I'm not an openstack charmer but one of them should be able to take a look
<jcastro> cargonza: any of you fellas around to take a look?
<sunny> Thanks a lot, and please let me know if you guys need any other details as well.
<catbus1> Hi, after I bootstrapped juju, can I ssh to the juju controller?
<cory_fu> bdx: Hey.  Did we discuss changing the license of the puppet layer to Apache or similar instead of AGPL?  https://github.com/jamesbeedy/layer-puppet-agent/blob/master/LICENSE
<jhobbs> catbus1: juju ssh -m controller 0
<petevg> cory_fu, kjackal: do you have any updates for last week's review queue doc? Just realized that I never sent it out, but I still only see my stuff in it.
<cory_fu> petevg: I do not
<cory_fu> petevg: I got caught up in other things (I think working on the new RQ) and didn't get the charm I was looking at finished
<catbus1> jhobbs: thank you
<beisner> hi sunny can you hop on #openstack-charms ?   also first questions will be:  can you pastebin a juju status output?  and are there 3 units of each service which is in HA?
<jesse__> Good evening :)
<jesse__> If someone can point me into the right direction on the following error that would be awesome!
<jesse__> juju add-relation neutron-gateway mysql
<jesse__> ERROR no relations found
<lazyPower> welp, hard to answer when you leave :(
<Randleman> i didnt
<Randleman> changed my nick :D
<Randleman> lazyPower:
<lazyPower> ah
<lazyPower> Randleman - neutron-gateway doesn't implement a mysql relation
<lazyPower> https://jujucharms.com/neutron-gateway/
<lazyPower> the listed relations are on the right side of the store listing above the file list
<lazyPower> it only implements hacluster, and neutron-plugin
<Randleman> so, i have an old canonical workbook?
<lazyPower> beisner thedac - do we know if neutron-gateway had at one time, a mysql relation?
<lazyPower> Randleman - sorry i'm not an openstack charmer so i'm not terribly familiar with the history of the charms
<Randleman> alright, well thanks anyway :) now i can at least continue my deployment.
<beisner> lazyPower, it did, prior to the 16.04 charm versions.  https://github.com/openstack/charm-neutron-gateway/commit/00f0edc70d68ce846db928ec2304d79fc6d1a5ae
<lazyPower> Randleman - ah, seems like thats the case. The latest revisions of the charms changed. so there's a few options. Use the older charms, or see if there's newer documentation
<lazyPower> beisner - thanks for taking a look
<beisner> yw lazyPower
<beisner> lazyPower, Randleman - it looks like the neutron-gateway readme didn't get a necessary update on that.  i'll be proposing a readme change shortly.  tldr; it's now safe to just not relate neutron-gateway to the database, as db ops now happen via rpc.
<beisner> i'd recommend using the latest stable charm release
<petevg> cory_fu, kwmonroe: are either of you available to jump into the hangout? I want to point at something and see if it makes sense to you.
<kwmonroe> yup petevg, omw
<petevg> thx :-)
<Randleman> thanks beisner
<Randleman> Got another issue :D yay
<Randleman> i deployed the neutron-gateway charm and it's up and running
<Randleman> except for the fact that i can't see it anywhere in the openstack services list.
<Randleman> Nor is there a neutron user created...
<Randleman> all the other services are running fine.
<Randleman> it looks like the environment doesn't know about the existence of neutron-gateway
<beisner> hi Randleman - please have a look at this reference bundle and its relations to check against your neutron* relations and config options:  https://jujucharms.com/openstack-base/
<beisner> https://api.jujucharms.com/charmstore/v5/openstack-base/archive/bundle.yaml
<Randleman> Thanks beisner. Weird, the bundle file shows me that neutron-gateway has a relation with mysql
<Randleman> But that shouldn't matter.
<Randleman> I got all relations it needs... but the neutron/network doesnt show up anywhere.
<Randleman> This could be a thing..
<Randleman> AMQP server on 127.0.0.1:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 32 seconds.
<Randleman> also not very nice
<Randleman> l3-agent cannot contact neutron server to retrieve service plugins enabled.
<beisner> hi Randleman - i don't see that there is a shared-db neutron-gateway relation in the current bundle.  neutron-api, yes.
<beisner> hi marcoceppi - do you know when the fixes in https://github.com/juju/charm-tools/issues/220 will hit pypi?
<beisner> tinwood, looks like that merged into master ~1hr ago.  you could confirm manually by using git-foo for charm-tools master in requirements as a check.
<tinwood> beisner, yes, I'll do that to start with.  I've got a review up for interface-keystone with unit tests on WIP at the moment too.
<beisner> tinwood, feel free to temporarily flip charm-tools to master in that gerrit review just to exercise :)
<tinwood> beisner, I think we also need to do some project-foo on it too, to enable a testing gate - it's only got pep8.  I'll take a look at that too.
<beisner> tinwood, oh yes, likely so.
<marcoceppi> beisner: it's in the snap in edge channel ;)
<tinwood> beisner, np, will do.  going back to #openstack-charms now.
<beisner> marcoceppi, test runners live on trusty (!snap) until jenkins-slave gains xenial foo.
<beisner> marcoceppi, otherwise :cat2:
<beisner> :)
<marcoceppi> beisner: it's not a patch release, it'll be a 2.2, which isn't scheduled until October
<marcoceppi> beisner: but, I'm sure we can drop 2.2 sooner
<beisner> marcoceppi, ok. it does seem like a legit bugfix patch, as the existing ignore logic is unusable in that it makes ignores from any 1 layer apply to all layers globally.
<kwmonroe> lazyPower: i'm looking at https://github.com/juju-solutions/charmbox/issues/37, but local builds are working fine for me (docker build -t charmbox .).  how can i reproduce the env causing failures on docker hub? (https://hub.docker.com/r/jujusolutions/charmbox/builds/bwfmghxnj8xbj85fptqnhw9/)
<magicaltrout> kwmonroe i hope you're priming your liver
<kwmonroe> :)
<Dougi> hi all, im new to openstack, but im very interested in learning, so i have a ubuntu maas setup with a controller and 4 nodes deployed, but i dont know what's next... and i can't really find any good documentation. can someone tell me where to find good documentation on how to get juju installed correctly?
<bdx> tls-peeps: https://gist.github.com/jamesbeedy/c20d91bd0087b32dbc0aa0956cde5ed8
<bdx> does that^ look legit?
<bdx> lazyPower, mbruzek, ^^
<bdx> I'm getting this error -> http://paste.ubuntu.com/23086229/
<mbruzek> looking
<mbruzek> bdx: hrmm that is strange
<bdx> right
<bdx> here is all of feed.py, shouldn't really matter though
<bdx> https://gist.github.com/jamesbeedy/4ce0224642ae11df473771b83c5e3506
<mbruzek> https://github.com/juju-solutions/layer-tls/blob/master/lib/tlslib.py#L60
<mbruzek> bdx That tells me that your cred is not there
<bdx> mbruzek: http://paste.ubuntu.com/23086275/
<mbruzek> bdx: can you do an ls /var/lib/juju/agents/unit-feed-14/charm/easy-rsa/easyrsa3/pki/private/
<bdx> empty
<mbruzek> bdx: I need more context on how this is deployed. There are 13 other feed peers?
<mbruzek> bdx: run "is-leader"
<bdx> mbruzek: lol, no. I've been iterating
<bdx> theres only one
<mbruzek> OK so that must be the leader.
<bdx> is-leader returns 'true'
<mbruzek> Can you gist a "tree" command in:  /var/lib/juju/agents/unit-feed-14/charm/easy-rsa/easyrsa3
<mbruzek> bdx: So it has been a while since I used the tls layer. I remember the leader is the CA and the signer
<mbruzek> bdx: so maybe there is another Error earlier on?
<bdx> http://paste.ubuntu.com/23086284/
<bdx> totally .. I think I should try a barebones top layer that includes tls and just simply writes out the keys so I can isolate that to being the issue
<mbruzek> bdx it looks to me that you don't have a CA unless it did not show the stuff in the private directory.
<bdx> yeah, I def don't
<mbruzek> In the log are there any earlier errors
<bdx> not that I can see -> http://paste.ubuntu.com/23086294/
<bdx> oooo
<bdx> line 2124
<mbruzek> Yeah
<bdx> I wonder if it has something to do with nginx-passenger
<bdx> or the phusion repo being enabled
<mbruzek> It looks like you get an error there on the cnf file, have not seen that one before.
<mbruzek> For the latest uses of tls layer, checkout this https://github.com/mbruzek/layer-k8s/blob/master/reactive/k8s.py#L98
<bdx> thanks
<bdx> I've thoroughly looked over that though
<bdx> lol
<bdx> I'm doing nothing different
<mbruzek> I only use the user password parameters when you want non root
<bdx> oooh
<bdx> tru
<mbruzek> But yeah other than that
<bdx> e
<mbruzek> bdx as you suggested build a simple layer with just tls and if you find that it is a bug in my code please create an issue against layer-tls and I will fix it asap
<bdx> totally, thanks for your insight here
<mbruzek> bdx: but I totally think the earlier error is giving you problem down the line
<bdx> totally
<mbruzek> But again I have not seen that error before. The tree output shows the file exists, but the error says it is not there.
<mbruzek> I don't know what is going on
<bdx> alright
<bdx> thx
<mbruzek> bdx: Chuck is using the tls layer in swarm https://github.com/juju-solutions/layer-swarm/blob/master/reactive/swarm.py#L219
<mbruzek> But it looks like you are using it correctly
<mbruzek> so I don't know, I suspect those earlier error. If that cnf file is not there or readable I guess that would be a problem.
#juju 2016-08-25
<rts-sander> I've build juju using the github project and ran "./juju switch" on it
<rts-sander> now my existing juju environment lost all its configuration
<rts-sander> is this recoverable?
<kjackal> Hello Juju World!
<KpuCko> hello guys, im trying to play with juju, im an absolute beginner with this, i have installed juju and juju-quickstart, and now im running juju-quickstart in -i (interactive) mode trying to set up an lxc (local) environment, but juju-quickstart fails with juju-quickstart: error: error: flag provided but not defined: -e
<KpuCko> im using juju 2.0
<evilnick_> KpuCko, quickstart doesn't work with Juju 2.0. You should follow the documentation here: https://jujucharms.com/docs/stable/getting-started
<KpuCko> thanks evilnick_
<xnox> are there examples of people using layers/reactive and interface:juju-info ?
<ram_____> marcoceppi: Hi. I have a question. While selecting components from the landscape UI for an autopilot openstack deployment, can we set external configuration parameters for a particular component?
<ram_____> marcoceppi: any idea?
<kjackal> cory_fu: are you there?
<cory_fu> Yep
<cory_fu> kjackal: What's up?
<kjackal> Hey I wanted to ask about boto on cwr
<kjackal> should it really be optional?
<kjackal> cory_fu: ^
<cory_fu> Well, it's optional to use it, so it seems unnecessary to require it, but making it a required dep probably isn't that big of a deal and would save potential headache
<kjackal> yeap agreed!
<BjornT_> tvansteenburgh: hi. is it possible to have bundletester not destroy and recreate the environment when it runs the tests?
<tvansteenburgh> BjornT_: reset: false in tests.yaml
<tvansteenburgh> BjornT_: see options here https://github.com/juju-solutions/bundletester#testsyaml
<BjornT_> tvansteenburgh: thanks
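For reference, a minimal tests.yaml with that option set (reset is documented in the README linked above; the other keys shown are bundletester's usual defaults, included only for orientation):

    # tests.yaml alongside the charm/bundle tests
    reset: false        # keep the environment between tests
    virtualenv: true
    makefile:
      - lint
      - test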
<petevg> bradm: I'm leaving a comment for you on those two outstanding PRs against bip (https://code.launchpad.net/~josvaz/charms/trusty/bip/charmhelpers-cleanup/+merge/301499, https://code.launchpad.net/~josvaz/charms/trusty/bip/client_side_ssl-with_helper-lp1604894/+merge/301802)
<petevg> bradm: since you're the maintainer, I think that the correct thing to do is for you to pull all the latest code into your namespace (including merging those PRs), publish the charm, and then get a charmer to promulgate it to the store for you.
<petevg> bradm: that should leave you in a place where you can do most of the things that you'd want to do as a maintainer, and it should close the circle for the community member who is trying to get his changes into the charm :-)
<cory_fu> cholcombe: Hey, I'm working on the review queue and saw that the gluster charm was +1'd a while back and that you pushed it to the store.  I just wanted to confirm that https://jujucharms.com/u/xfactor973/gluster/xenial/4 is correct and I'll go ahead and finish the promulgation
<cholcombe> cory_fu, let me just double check
<cholcombe> cory_fu, yeah xenial/4 is the one
<cholcombe> cory_fu, or trusty/16
<cory_fu> cholcombe: https://jujucharms.com/gluster/
<cholcombe> cory_fu, yup
<cholcombe> cory_fu, sweet!
<cory_fu> Sorry it took so long.  :)
<cholcombe> cory_fu, \o/  woo!  thanks :)
<ram_____> marcoceppi: Hi. For testing purposes I developed a simple charm using a shell script to modify the cinder configuration file post-deployment of OpenStack. The cinder configuration was modified, but I saw some errors in the charm log. Pasted information on my issue: http://paste.openstack.org/show/563408/
<ram_____> marcoceppi: I tried to deploy 'cinder-xtremio " charm in our local Juju openstack environment like $juju deploy cinder-xtremio. I was facing errors. Error log : http://paste.openstack.org/show/563416/
<ram_____> Can you please provide me  some solution for this. Thanks.
<cmars> hey ram_____ I don't know anything about openstack, but is your install executable and does it have a shebang at the top?
<cmars> ^install hook, i mean
<ram_____> cmars: Hi. shebang? sry I didn't get it. What does it mean?
<cmars> ram_____, the `#!/bin/bash -e`, i mean
<cmars> ram_____, the error in your paste, looked like the kind of thing you'd see if you tried to run a script without it
<cmars> (that's usually how i start my scripts... you might prefer `#!/bin/sh` or something
<ram_____> cmars: Ok thank you. I will try with that.
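A minimal sketch of what cmars means by an executable install hook (the package name is a placeholder):

    #!/bin/bash -e
    # hooks/install -- the shebang tells the unit agent how to run the script;
    # the file must also be executable: chmod +x hooks/install
    apt-get update
    apt-get install -y some-package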
<ram_____> Hi. I tried to deploy the "cinder-xtremio" charm in our local Juju openstack environment like $juju deploy cinder-xtremio. I was facing errors. Pasted error log: http://paste.openstack.org/show/563432/. Please can anyone provide me some solution for this.
<Guest79992> How to get a FQDN for a juju container?
<bjf> how do i set the security group for my AWS instances to use? url pointer to the appropriate juju 2.0 doc appreciated
<ram_____>  Hi. I want to develop a cinder-storagedriver  charm. And i want to integrate it with Ubuntu-autopilot . SO can I give input parameters like san IP, san user and san password from landscape autopilot UI. Otherwise everything we have to hardcode into the charm. And different users for the same storage array have different credentials.
<ram_____> any idea?
<kwmonroe> bjf: not sure if this helps, but i let juju create its own security groups.. i did run into an account limit where i had to go remove old security groups that weren't removed from previous deployments... but i haven't had to do that with recent juju-2 betas.
<kwmonroe> ram_____: sounds like you could make the san connection parameters as charm config options.. different users that deployed your charm would "juju set" appropriate credentials.
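A sketch of the config options kwmonroe suggests, assuming option names like san-ip (these would live in the charm's config.yaml, and each deployer would then supply their own credentials with juju set):

    options:
      san-ip:
        type: string
        default: ""
        description: Management IP of the storage array.
      san-login:
        type: string
        default: ""
        description: Username for the storage array.
      san-password:
        type: string
        default: ""
        description: Password for the storage array.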
<bjf> kwmonroe, thanks
<bradm> petevg: sounds good, will try and look at doing that soon
#juju 2016-08-26
<bdx> something is super fishy ....
<bdx> http://paste.ubuntu.com/23091222/
<bdx> show 'feed/19'
<bdx> but ssh into the instance, and cat the .juju-charm file -> http://paste.ubuntu.com/23091226/
<bdx> shows 'local:xenial/feed-24'
<bdx> is there some kind of weird version mismatch going on here
<bdx> oooh my bad
<bdx> 19th deploy, 24th build
<bdx> lazyPower, mbruzek, stokachu: here's one for ya'
<bdx> lazyPower, mbruzek, stokachu: when I include layer-ruby, and layer-tls, I get this error -> http://paste.ubuntu.com/23091435/
<bdx> lazyPower, mbruzek, stokachu: if I don't include layer-ruby, my certs are there and I don't get the error
<bdx> lazyPower, mbruzek, stokachu: I feel like the inclusion of layer-ruby is preventing layer-tls from generating the certs somehow
<bdx> lazyPower, mbruzek, stokachu: I've been stuck on this for a few days, can one of you guys build something with layer-ruby AND layer-tls and verify this for me
<bdx> lazyPower, mbruzek, stokachu: figured it out -> https://github.com/battlemidget/juju-layer-ruby/pull/6/files
<bdx> lazyPower, mbruzek, stokachu: os.chdir was changing the directory context for all
<bdx> needed the context manager
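The fix bdx links to boils down to restoring the working directory once the ruby layer is done, so sibling layers like layer-tls aren't affected; a minimal sketch of that context-manager pattern (the helper name is illustrative):

    import os
    from contextlib import contextmanager

    @contextmanager
    def chdir(path):
        # change directory only for the duration of the block,
        # restoring the old cwd even if the block raises
        old = os.getcwd()
        os.chdir(path)
        try:
            yield
        finally:
            os.chdir(old)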
<Randleman> Hey guys, what did quantum-security-groups change into?
<Randleman> ERROR unknown option "quantum-security-groups"
<axino> quantum is neutron now
<Randleman> Got that, but it throws me an error even when using neutron-security-groups in the nova-cloud-controller yaml
<Randleman> though neutron-security-groups works in the neutron-api yaml
<Randleman> so i guess nova-cloud-controller does not need that option anymore?
<Randleman> as it has a relation to the neutron-api?
<admcleod> it looks like juju overwrites /home/ubuntu/.ssh/authorized_keys - is that expected behaviour?
<admcleod> (1.25)
<josvaz> having some issues with rackspace auth https://pastebin.canonical.com/164061/
<josvaz> do you know what the problem could be? the user & password work when using curl or python pyrax
<kjackal> admcleod: Well done on the accepted presentation on openstack and bigdata!
<admcleod> kjackal: thanks :)
<kjackal> When/where are you presenting?
<admcleod> openstack summit barcelona october
<kjackal> Nice!
<lazyPower> kjackal admcleod - not sure if you two used charmbox, but i've just unblocked the builders. Apparently there's a diff in the shipping packages in FROM ubuntu on the hub and FROM ubuntu locally (which makes no sense)
<lazyPower> if you use charmbox/charmbox:devel and encounter any issues please lmk so i can address accordingly
<ram_> marcoceppi: Hi. Based on yesterday's discussion with <andrey-mp>, I created a cinder-storageDriver charm using a shell script which passes external config data over the relation instead of writing cinder.conf directly. This is the standard way, and it is working fine for us. We want to certify and integrate our charm with Autopilot, but I have a question: how compatible is our cinder-storageDriver charm with Autopilot? Our conversation log:
<kjackal> ok lazyPower, thank you. I havent used charmbox recently
<ram_> marcoceppi:  http://paste.openstack.org/show/563919/.
<Anita> Hi, can containers have FQDNs?
<Guest51400> can containers have fqdns?
<kjackal> Hello cory_fu, running tests on cwr all day today, I seem to be hitting this behavior: http://pastebin.ubuntu.com/23093330/ have you seen this before?
<cory_fu> kjackal: I hadn't run in to that before, but that makes sense.  I didn't realize that jujuclient was using SIGALRM or that it didn't work in threads.  That's annoying.
<cory_fu> I guess I'll have to pull out the threading logic and either make it multi-process or not parallel
<kjackal> cory_fu: that's for resetting the environment. What about the error at the end, where the bundle tester output does not have a "test"? Is this expected?
<cory_fu> kjackal: I'm pretty sure that's caused by the other exception.  Because of the first exception, it doesn't get the output its expecting
<kjackal> ok, let me see where the threading is done
<kjackal> cory_fu: just to confirm that removing Threads fixes the issue I was seeing
<cory_fu> kjackal: Are you saying that you confirmed that?
<kjackal> cory_fu: yes.
<cory_fu> Ok, cool
<kjackal> cory_fu: I removed the threads and it worked
<cory_fu> Yeah, I went with threads optimistically without much confidence that it was actually threadsafe.  TBH, I'm not sure why it's not just using subprocess and the CLI for bundletester anyway.  If we do that, threads would be fine
<kjackal> let me try to see what happens if we are to use Processes... It should work
<kjackal> cory_fu: replacing threads with processes seems to work, let me get the observable kubernetes test plan to pass and will update the PR to cwr
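The swap kjackal describes is nearly mechanical, since multiprocessing mirrors the threading API, and it sidesteps the SIGALRM problem because signals fire in each process's own main thread; a minimal sketch (the per-bundle function is a placeholder for cwr's real work):

    from multiprocessing import Process

    def run_bundle_test(bundle):
        # placeholder: in cwr this would drive bundletester for one bundle
        print('testing', bundle)

    if __name__ == '__main__':
        procs = [Process(target=run_bundle_test, args=(b,))
                 for b in ['bundle-a', 'bundle-b']]
        for p in procs:
            p.start()
        for p in procs:
            p.join()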
<lazyPower> kwmonroe - moving you here, i hear you had questions about the hub builders?
<kwmonroe> yeah lazyPower, related to https://github.com/juju-solutions/charmbox/issues/37, what do you make of the charmbox success in ci.containers?  you reckon it was inclusion of libssl-dev?  any insight as to why docker hub wasn't able to build right?
<lazyPower> kwmonroe - i added libssl-dev to the install manifests
<lazyPower> so, i have no idea still why it was failing, the only thing i can consider is that the base images are different somehow
<kwmonroe> hmph, roger that lazyPower.  happy to see a new build.
<lazyPower> kwmonroe - we're pulling them into a CI solution instead of relying on the hub to do our builds as well
<lazyPower> ideally we'll get some smoke tests around the containers and actually verify they work
<kwmonroe> yup, understood lazyPower.  what's the mechanism between ci builds and docker hub?  can i still docker pull jujusolutions/charmbox to get the latest from ci?
<lazyPower> little do you know, thats already been rolled over ;)
<lazyPower> as of about 7am this morning
<Anita_> How to get the fqdn of a container?
<Anita_> Hi
<kwmonroe> Anita_: does 'hostname -f' give you what you want?
<Anita_> How to get the FQDN of a container or is there any way to set the FQDN of containers
<Anita_> kwmonroe_:yes
<Anita_> kwmonroe_:it gives just juju-2cf5ba-19, without any domain name
<lazyPower> Anita_ - thats a great question to relay to the juju mailing list - juju@lists.ubuntu.com
<Anita_> lazyPower_:ok
<Anita_> lazyPower_: when I ping from one container to another container it resolves with the ".lxd" suffix, like "juju-2cf5ba-19.lxd", but that does not qualify as an FQDN I think.
<Anita_> lazyPower_:Will mail to juju@lists.ubuntu.com
<lazyPower> Anita_ - Juju doesn't manage DNS at present, this is why i said to ask on the list, as there may be workable solutions that i'm not aware of, but the plain answer is not today.
<Anita_> lazyPower_: ok . Thank you...
<kwmonroe> Anita_: you may also want to add your thoughts/requirements to this bug:  https://bugs.launchpad.net/juju/+bug/1590961
<mup> Bug #1590961: Need consistent resolvable name for units <juju:Triaged> <https://launchpad.net/bugs/1590961>
<kwmonroe> sounds like juju will have some sort of dns provision in 2.1.0
<Anita_> kwmonroe_:ok
<Anita_> kwmonroe_:Thank you
<cholcombe> when i charm publish a new charm to the store how long typically does it take to show up?  I published to the stable channel
<kjackal> cory_fu: do you have time for a quick chat on the daily regarding cwr?
<cory_fu> Sure
<kjackal> thanks
<kwmonroe> cholcombe: usually less than a minute.  did you also "charm grant cs:~blah/foo everyone"
<kwmonroe> ^^ that gives everyone read perms, which is required to be visible in the store
<cholcombe> kwmonroe, oh.. no i didn't .  that's prob it
<mattrae> hi, when i deploy lxd containers to a host with juju deploy --to lxd:X, i see there is a file created on the host /var/lib/lxd/zfs.img which is 100G. What happens if my physical disk is smaller than 100G? I think i may have run into a situation where my containers filled up the disk. i'm not sure the best way to recover
<mattrae> right now zpool status says one or more devices are faulted in response to IO failures, https://gist.github.com/raema/337581a825687bf7774715dc925fff31
<mbruzek> bdx: Can you have a look at my change for tls ?   https://github.com/juju-solutions/layer-tls/pull/47
<beisner> hi mattrae - i recall a recent convo about that being the case, and not ideal for that very reason.  here's where that logic lives.  bugworthy imo.  https://github.com/juju/juju/blob/11682c54646fbe625120c0368b41b3349f04df77/container/lxd/initialisation.go#L124
<beisner> mattrae, bug 1617460
<mup> Bug #1617460: zfs.img sparse file size is fixed, assumes at least 100GB free space on host <lxd-provider> <uosci> <juju:New> <https://launchpad.net/bugs/1617460>
<kwmonroe> bdx: i'm in a pickle.  puppetlabs doesn't have ppc64le support at http://apt.puppetlabs.com/dists/, so including layer-puppet-agent fails like an absolute madman on that arch.
<kwmonroe> bdx: apt update eventually fails like this "W: Failed to fetch http://apt.puppetlabs.com/dists/trusty/Release  Unable to find expected entry 'dependencies/binary-ppc64el/Packages' in Release file"
<kwmonroe> bdx: so what's a good fix for layer-puppet-agent?  i was thinking an easy fix would be to check cpu arch here:  https://github.com/jamesbeedy/layer-puppet-agent/blob/master/lib/charms/layer/puppet.py#L117  and if ppc64le, don't bother with apt.add_source since we know it'll fail.
<kwmonroe> but that means we'll fall back to the archive to handle self.puppet_pkgs, and if somebody wants v4 on ppc64le, the archive wont have it.
<kwmonroe> i'll open an issue on the layer, but wanted to give you a heads up so you can spend your entire weekend thinking about this.  kthx.
<kwmonroe> bdx: in your free time ;) https://github.com/jamesbeedy/layer-puppet-agent/issues/9
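A sketch of the guard kwmonroe proposes (the function name, source string, and package names are illustrative, not layer-puppet-agent's real API):

    import platform
    from charmhelpers.fetch import add_source, apt_update, apt_install

    PUPPETLABS_SRC = 'deb http://apt.puppetlabs.com trusty PC1'  # hypothetical

    def install_puppet():
        if platform.machine() == 'ppc64le':
            # apt.puppetlabs.com publishes no ppc64el packages, so skip the
            # repo and fall back to the archive (which won't carry puppet 4)
            apt_install(['puppet'])
            return
        add_source(PUPPETLABS_SRC)
        apt_update()
        apt_install(['puppet-agent'])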
<Ababsi> exit
#juju 2016-08-27
<admcleod> trying to deploy to lxc and getting this:
<admcleod> 2016-08-27 11:27:30 DEBUG juju.container image.go:89 lxc image for xenial (s390x) is https://cloud-images.ubuntu.com/server/releases/xenial/release-20160826/xenial-server-cloudimg-s390x.tar.gz
<admcleod> ^ 404
<junaidali> Hi everyone, "juju status <service-name>" hangs, although "juju status" works fine. Has anyone faced this issue, or is this a bug?
<junaidali> version of Juju that I'm using is 1.25.6
<beisner> hi junaidali - i'd recommend comparing the output of `juju status --debug` with `juju status nova-cloud-controller --debug` to see if something sticks out as a red flag.
<junaidali> Thanks beisner, let me compare the output
#juju 2016-08-28
<PCdude> I have problems with installing openstack on ubuntu, it fails when the installer uses JUJU an node to bootstrap
<PCdude> anyone here who can help?
<PCdude> http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju
<magicaltrout> anybody tried xenial maas on virtualbox?
#juju 2017-08-21
<stormmore> o/ juju world
<skay> I may have screwed up my local juju install. When I try to bootstrap a local environment I get https://paste.ubuntu.com/25363628/
#juju 2017-08-22
<digv> hi.. is https://review.jujucharms.com/ is down? I am not able to access it
<hloeung> digv: it's been shut down, see https://lists.ubuntu.com/archives/juju/2017-August/009294.html
<digv> hloeung: thanks
<digv> hloeung: one small help needed, I want to submit an updated charm which is already there in charm store.. can you guide me please
<digv> is there any change in process of submitting updated patch?
<hloeung> digv: if you own what's published in the charm store, it should just be this process here to update/publish a new/updated version - https://jujucharms.com/docs/2.2/authors-charm-store#pushing-to-the-store
<digv> hloeung: this is my first attempt, some member from "ibmcharmers" was maintaining it... I've recently joined that Group.
<hloeung> digv: right, so should just be charm build, cd <built_charm_dir>, charm push . cs:~ibmcharmers/xenial/mycharm, charm release cs:~ibmcharmers/xenial/mycharm-91 (replace 91 with what's output from the push command), charm grant cs:~ibmcharmers/xenial/mycharm-91 everyone
<digv> hloeung: thanks for help.. able to submit a charm :)
<kjackal> Hello AntonDan
<kjackal> magicaltrout: are you around?
<magicaltrout> kjackal__: kjackal_  whichever
<kjackal__> Hey magicaltrout how is it going? I wanted to introduce you to AntonDan, who would be interested in working on Mesos+LXD
<kjackal__> AntonDan: are you there?
<AntonDan> yup
<magicaltrout> thanks kjackal__ !
<magicaltrout> hey AntonDan
<AntonDan> Hello
<magicaltrout> someone with half a clue about C and wanting to offer some sage advice or help extending Mesos for the LXD container spec would be great
<kjackal__> I am sure AntonDan can help
<magicaltrout> mesos added lxd support to their container roadmap AntonDan but they certainly won't prioritise it, so I'd like a community effort to drive it, I don't think there is a massive amount of work as it's just swapping the docker interface for lxd pretty much
<kjackal__> AntonDan: would need some guidance in terms of where to meet the Mesos people interested in lxd, what meetings to attend, and any advice on technical aspects would be appreciated. But he seems to be a strong coder so...
<AntonDan> I'll try my best, my knowledge in C should be sufficient for the task. The unfamiliar terrain that is large projects is slightly obstructive but I'm treading said terrain slowly but surely
<AntonDan> Ye I did take a look at the roadmap
<AntonDan> There doesn't seem to be anyone else working on that task
<magicaltrout> nope there wont be, I asked months ago about it and their response was "I don't really see a usecase for it"
<magicaltrout> still people
<magicaltrout> okay AntonDan what can I do to help facilitate your development? code, servers, tests, docs? anything specific?
<magicaltrout> beer? food? caffeine?
<AntonDan> The main thing that gives me trouble right now is my familiarity with the mesos project itself. Currently I'm fine testing on my local machine and on a virtual machine that was given to me by kjackal. I might need some server later on though for testing.
<AntonDan> my familiarity or lack thereof, that is
<magicaltrout> no problem AntonDan I'm happy to supply you with some server resource when you need it. For Mesos, it really only has 2 components, Mesos (Master/Slave) and Marathon. Don't worry about Marathon though for now, thats mostly just a UI wrapper around the Mesos service
<AntonDan> I did set up marathon since I wanted to have a platform for me to communicate with mesos
<AntonDan> I had someone recommend me Chronos as an alternative but I found setting marathon up an easier task
<magicaltrout> I think Chronos is retired these days
<magicaltrout> mostly
<magicaltrout> yeah last update was months ago
<AntonDan> I see
<magicaltrout> AntonDan: if you can't find me in here and need anything, feel free to ping tom@spicule.co.uk
<AntonDan> will add to contacts. I'll just pm you here if I do find you online in order to reduce spam in the juju channel
<stormmore> o/ juju world, what madness is happening
 * stormmore is having fun putting OpenStack on LXD on his poor little laptop
<stormmore> hey rick_h hows the 2.3 testing going?
<hallyn> so...   i seem to have juju working against a remote vsphere server.  it's downloading an .ova file.  That's good.  BUT it's downloading it to my laptop.
<hallyn> Why do that?  Bc vsphere's folders won't work somehow?  I could see that... but...
<hallyn> well *this* won't work for me.  a one time experiment is fine, but who the heck thought it would be a good idea.
<hallyn> guess i'll try a vm closer to the datacenter.  but i'll do so under protest, noting how stupid and shortsighted it is...
<hallyn> stokachu: \o  do you know if that's normal^ or if i misconfigured something?
<stokachu> hallyn: I need to ask axw about that
<stokachu> That doesn't sound right
<hallyn> ok - hopefully there's just a setting I can fix on my local (macos) juju install, but for now I'm going to try and setup a base juju admin VM on the same vsphere.
<hallyn> (will take an hour to push an iso there and another hour to install probably :)
<rick_h> hallyn: so it looks like the docs state it works that way
<rick_h> hallyn: I'm not sure on the limitations/etc that caused it to be that way. Would be worth a bug and maybe an email to the mailing list or something. hml do you have any insight? ^
<stokachu> hallyn: yea that sounds like the opposite of what we want it to do :)
<rick_h> stormmore: hallyn that's from the note box in https://jujucharms.com/docs/2.2/help-vmware
<stokachu> hallyn: axw will be on in a few hours i think, i can ask him when he comes online
<hml> rick_h: hallyn: no extra insight, sounds like something with the vmware provider.  perhaps an email is best, so other can see and chime in
<hml> or even a bug
<hallyn> bug against juju project?
<rick_h> hallyn: yes
<hallyn> hml: rick_h: stokachu: https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1712431     Thanks.
<mup> Bug #1712431: vsphere controller stores OVA files locally <juju (Ubuntu):New> <https://launchpad.net/bugs/1712431>
#juju 2017-08-23
<axw> stokachu hallyn: yep I agree, it's horrible (predates my involvement), but I couldn't see a way around it. the OVA is fetched locally, split into OVF & VMDK, then uploaded to vsphere. AFAICT there's no way to get vsphere to do that for you
<stokachu> axw: interesting
<stokachu> axw: and it happens prior to bootstrap so we couldnt cache it anyway
<axw> stokachu: right. one thing we *can* do, which I replied to an email about the other day, is reuse the OVF/VMDK cached in the datastore after bootstrapping
<axw> to avoid having the controller download the OVA for each new machine
<axw> that bit's doable, but doesn't affect UX nearly as much
<stokachu> axw: ah ok
<stormmore> I wonder if I can get k8s to run on top of openstack on lxd
<Ting_> Does some have use juju on aws with federated user (short-live uesr) ?
<D4RKS1D3> Hi, does someone know why, when I add a unit to a machine whose state is ready, the machine takes around 200 seconds to come up?
<D4RKS1D3> If I start more machines, the waiting time between ready and deploying is usually more than 35 minutes
<magicaltrout> internet bandwidth?
<D4RKS1D3> magicaltrout, this answer is for me?
<magicaltrout> well, its a guess for you
<magicaltrout> i dunno, but of course when you spin up new nodes, they all download a bunch of stuff. juju, updates etc
<D4RKS1D3> But I am not talking about preparing the node... I am talking about turn on the node
<orf> hey, is there any way to use the JuJu VSphere provider with a specified Resource Pool?
<orf> we have a namespaced system and need to create everything within a 'Development' resource pool
<orf> I had a browse of the source, and it seems to be able to: https://github.com/juju/juju/blob/d28f92df4051a02602347b73335467a29f5e7327/provider/vsphere/internal/vsphereclient/createvm.go#L139
<orf> there just is no documentation as to how to configure that
<hallyn> the docs are in createvm.go...  around line 140 iirc
<hallyn> j/k
<hallyn> yeah i'm hoping the vsphere controller gets some love
<orf> I gave up hallyn
<orf> Cant work out how to define `ComputeResource.ResourcePool`
<hallyn> :(
<stormmore> o/
#juju 2017-08-24
<stormmore> o/ juju world
<rick_h> bdx: around?
<bdx> rick_h: sup
<rick_h> bdx: email sent your way about models and upgrading. I was going to see if you could tell me more about them and see if there's stuff to help with there.
<bdx> yeah... just reading the email now
<bdx> is it all of the models, or just common-infrastructure ?
<rick_h> bdx: there's a list in the email of the ones that failed to upgrade
<rick_h> bdx: seemed like a lot of them, some 12ish? /me has to check notes
<bdx> oh wow, all of those eh
<bdx> yeah
<rick_h> so I got curious what's up. Some of the models that failed to upgrade I think are folks that removed their credentials, or have things in an error state and so juju didn't want to upgrade.
 * bdx crying uncontrollably 
<rick_h> bdx: but you have a lot and I wanted to see what's interesting on those that might make the upgrades fail
<bdx> rick_h: yeah I've filed some recent bugs around failing upgrades
<bdx> I think the fixes are in master ... but a lot of good that does, right
<rick_h> bdx: so basically reply to me. Let me know what's up. What we can do. Depending on what's up with them we'll go and file some stuff with the support account and such.
<bdx> will do, thx
<rick_h> bdx: well, there's a point release coming if you think that'll have fixes for your issues.
<rick_h> bdx: but I want to make sure we're aware/have a grasp across jaas-land
<bdx> yeah ... I'll link the bugs in the email
<rick_h> bdx: appreciate it
#juju 2017-08-25
<armaan> jamespage: hello, could i ask your opinion on https://github.com/juju/1.25-upgrade ? Can i use this for upgrading OpenStack environments?
<Ting_> Hi, could people use other authentication-type for juju aws instead of access-key? We are using aws federation account seems very hard to get access key...
<rick_h> Ting_: we need to support that. There's an open bug that could use some real user pressure. Let me see if I can find it.
<Ting_> rick_h: thx :)
<rick_h> Ting_: on my phone ATM and not able to find it. Let me look when I get to the office.
<Ting_> rick_h: sure, no hurry
<tychicus> just deployed the microbot application in CDK 1.7 to get a better understanding of how the ingress controller and load balancer work.
<tychicus> kubectl get ingress does not return an address
<tychicus> the only values returned are NAME, HOSTS, PORTS, AGE
<tychicus> I can validate everything is working correctly by creating a /etc/hosts entry for microbot.10.148.0.105.xip.io and using the IP address provided in the hostname,
<SimonKLB> tychicus: did you create it using the microbot action?
<tychicus> SimonKLB: yes juju run-action kubernetes-worker/0 microbot replicas=3
<SimonKLB> did you check the action status and/or output?
<SimonKLB> also, is it only the ingress that is missing or is the deployment also not there?
<tychicus> https://gist.github.com/roll4life/2cbaad645379f3db6998d2e08f70f664
<tychicus> here is the output of kubectl get services,endpoints and kubectl get ingress
<SimonKLB> ah my bad, i thought you meant the ingress was never created
<tychicus> sorry, ingress is created, but the address is not displayed
<SimonKLB> is the kubernetes-worker exposed?
<tychicus> yes
<tychicus> kubernetes-worker      1.7.0    active      3  kubernetes-worker      jujucharms   40  ubuntu  exposed
<SimonKLB> hmm, can you check the logs of the ingress-controller?
<tychicus> SimonKLB: updated the gist with the ingress-controller log output https://gist.github.com/roll4life/2cbaad645379f3db6998d2e08f70f664
<tvansteenburgh> tychicus: what's the output of kubectl get pods
<tvansteenburgh> is there an ingress controller running?
<tychicus> yes 3
<SimonKLB> tychicus: just tried it out on my end and it works fine, weird..
<tychicus> ok gist updated
<SimonKLB> can you check which version of the ingress controller youre running? `kubectl get rc nginx-ingress-controller -o yaml | grep image:`
<tychicus> image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
<SimonKLB> same here
<SimonKLB> so it's definitely that "has no active endpoints" that's the issue
<SimonKLB> or at least that is what looks different from mine
<tychicus> ok thanks I'll look into that and report back what I find, thank you for your help
<SimonKLB> tychicus: which provider are you running on btw?
<tychicus> local openstack juju/maas deployment
<tychicus> updated the gist with the output of kubectl get ing microbot-ingress -o yaml
<SimonKLB> that's weird, it looks like it's adding ips, but they are empty
<SimonKLB> actually, looking at the log again it says "Updating loadbalancer default/microbot-ingress with IP *nothing*"
<tychicus> right, just noticed that as well
<SimonKLB> tychicus: can you paste your juju status output?
<SimonKLB> if the workers doesn't have public ips that might be it
<tychicus> updated, they do have "public addresses"
<tychicus> they are rfc 1918 addresses, but they are "public" internally
<SimonKLB> yea
<SimonKLB> tychicus: check if kube-proxy is running ok on the workers
<SimonKLB> you can run this for example: `juju run --unit kubernetes-worker/0 systemctl status snap.kube-proxy.daemon.service`
<tychicus> ok, thanks, I was just getting ready to ask how to check the status :)
<tychicus> they all show as running all have these errors with ID 1,5,7,9,11
<tychicus> Unable to decode an event from the watch stream: stream error: stream ID 1; INTERNAL_ERROR
<SimonKLB> tychicus: you can get the full log if you run `juju run --unit kubernetes-worker/0 "journalctl -u snap.kube-proxy.daemon.service"`
<SimonKLB> or ssh into the machine
<tychicus> gladly
<Cynerva> tychicus: i think the behavior you're seeing with microbot is pretty typical
<Cynerva> seems like ingress only gets an address assigned to it if you have external loadbalancer support
<Cynerva> which AFAIK requires cloud integration
<Cynerva> we only get that on AWS when deployed via conjure-up, i think
<Cynerva> but, the ingress is still usable without a loadbalancer
<tychicus> cynerva: ok, thanks
<SimonKLB> Cynerva: weird, because it's working fine here and im running it on LXD
<Cynerva> huh
<Cynerva> that is weird
<SimonKLB> yea :D
<Cynerva> i just had a go on AWS w/o native integration, got no address but it's working the way i'd expect
<SimonKLB> % juju status | head -n 2
<SimonKLB> Model                        Controller           Cloud/Region         Version       SLA
<SimonKLB> conjure-kubernetes-core-531  localhost-localhost  localhost/localhost  2.3-alpha1.1  unsupported
<SimonKLB> % kubectl get ing
<SimonKLB> NAME               HOSTS                           ADDRESS            PORTS     AGE
<SimonKLB> microbot-ingress   microbot.10.212.38.141.xip.io   10.212.38.141...   80        1h
<SimonKLB> :o
<tychicus> yeah, that is pretty much what I expected to see :)
<SimonKLB> whats even more interesting is that my nodes does not have external ips:
<SimonKLB> % kubectl get no -o wide
<SimonKLB> NAME            STATUS    AGE       VERSION   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION
<SimonKLB> juju-cf3e22-3   Ready     16h       v1.7.0    <none>        Ubuntu 16.04.3 LTS   4.10.0-21-generic
<SimonKLB> juju-cf3e22-4   Ready     16h       v1.7.0    <none>        Ubuntu 16.04.3 LTS   4.10.0-21-generic
<SimonKLB> Cynerva: do you know if kubernetes or the ingress-controller is somehow aware of its environment and acknowledges the internal ips as external when running inside LXD or something?
<SimonKLB> since the ingress controller is actually adding the interal ip of the node:
<SimonKLB> % kubectl get no juju-cf3e22-3 -o jsonpath='{.status.addresses}'
<SimonKLB> [map[type:InternalIP address:10.212.38.141] map[type:Hostname address:juju-cf3e22-3]]
<Cynerva> SimonKLB: not that i'm aware of, but could be
<bdx> rick_h: sup
<bdx> rick_h: is there anything on the roadmap for JAAS to support team owned models?
<rick_h> bdx: not direction at the moment. There's some precursor work going on though
<rick_h> bdx: e.g. not in the next release of work but it's on the radar and some groundwork is landing/going on
<bdx> nice nice
<bdx> thats good to know
<bdx> I'm leaving a trail of models owned by myself across the technology community :)
<rick_h> bdx: lol
<rick_h> bdx: it's not been pushed because you can add/remove users and such so there's some level of control.
<rick_h> so it's longer to do since you have to add each user vs a single team
<bdx> I see
<kwmonroe> bdx: just share your creds with people you trust.  instant team!  we can put them in relevant irc /title bars if it makes life easier for you.
<rick_h> bdx: and juju doesn't really have solid "group" idea on a self-operated controller so it needs some smarts
<rick_h> hah
<rick_h> kwmonroe: always here to save the day!
<kwmonroe> :)
<bdx> ^ perfect example of what not to do
<bdx> :)
#juju 2018-08-20
<wallyworld> vinodhini: what's the status of this PR? the bug associated with it has been marked fix committed but the PR itself is still open https://github.com/juju/juju/pull/9027
<vinodhini> i should close this wallyworld
<vinodhini> done.
<wallyworld> ok, ty
<wallyworld> kelvinliu_: let me know if comments make sense
<wallyworld> babbageclunk: did you see there's a state.go conflict that's been flagged?
<babbageclunk> wallyworld: ooh, no - must be new. Fixing now
<kelvinliu_> wallyworld, yup, looking now, thx
<veebers> wallyworld: You where saying you weren't keen on cloudContainer storing a reference to State, is that right?
<wallyworld> yeah
<veebers> ack
<kelvinliu_> wallyworld, i'm wondering if it's possible for charms to have only a crd but no containers
<wallyworld> kelvinliu_: oh right, that's possible
<wallyworld> that breaks the pod->unit model
<wallyworld> we'd need to figure out how to map that betweem k8s and juju
<wallyworld> i guess for now we can keep the EnsureCustomResourcDefinition method in the broker interface
<kelvinliu_> wallyworld, yup,
<kelvinliu_> wallyworld, just renamed the method and did some cleanup, would you take a look again?
<wallyworld> sure
<wallyworld> kelvinliu_: lgtm with a couple of small fixes
<kelvinliu_> wallyworld, thanks
<vinodhini> wallyworld: could you please take a look at PR - https://github.com/juju/juju/pull/9080
<wallyworld> will do after i finished xtian's
<vinodhini> sure.
<veebers> wallyworld: we don't expose a global key for cloud container (we currently use unit global key when storing cloudContainer in the cc doc). Storing the status for the cloud container we'll either need to expose one or have it overwrite the unit status (i.e. use the unit global key)
<wallyworld> we can't overwrite
<wallyworld> we'll need a cloudContainerGlobalKey
<veebers> oh true duh. Ok can do
<wallyworld> maybe cc# prefix
<wallyworld> inctead of u#
<wallyworld> or
<wallyworld> as we do with elsewhere, add a suffix to the unit global key
<wallyworld> we append #charm for the unit global key to distinguish from agent
<wallyworld> or vice versa
<wallyworld> so append #container or something
<veebers> ack, that's doable
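A sketch of the key scheme being settled on here, following the existing unit/agent pattern in state (illustrative, not the final code):

    package state

    // unitGlobalKey carries a #charm suffix to distinguish the unit's status
    // doc from its agent's; the cloud container key adds its own suffix.
    func unitGlobalKey(name string) string {
        return "u#" + name + "#charm"
    }

    func globalCloudContainerKey(name string) string {
        return unitGlobalKey(name) + "#container"
    }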
<thumper> https://github.com/juju/testing/pull/140 anyone
 * anastasiamac looking
<anastasiamac> thumper: lgtm'ed (hoping that the dependency revving is all good too)...
<wallyworld> babbageclunk: sorry about delay, got caught in other things, there's a few questions there. i have a meeting now, maybe discuss after if you are not EOD?
<wallyworld> or first thing tomorrow
<babbageclunk> wallyworld: no worries, you've got lots of other stuff happening - I'll have a look at the questions anyway. ping me when you're out of the meeting.
<wallyworld> ok
<wallyworld> vinodhini: i have a meeting now but i see tim looked at your pr
<veebers> wallyworld: the caas unit provisioner, via updateStateUnits, uses an UpdateUnitOperation to set the status info, which in turn uses statusSetOps to set the status info bits, it feels like it should just use setStatus there (to handle history); am I missing something?
<vinodhini> wallyworld: just had lunch. i think tim reviewed it.
<vinodhini> its ok.
<wallyworld> veebers: i'll look after meeting
<veebers> ack
<wallyworld> veebers: but unit update op thing which currently sets status on agent etc does do history
<veebers> wallyworld: ah right, in Done
<wallyworld> yup, that's it
<veebers> wallyworld: sorry, my bad for not reading properly
<wallyworld> nw
<veebers> I might need to go to that Derek Zoolander center
<veebers> wallyworld: when you have a quick moment could you eyeball this, see if I'm on the right path: https://github.com/juju/juju/pull/9081
<wallyworld> veebers: looking
<veebers> thanks!
<wallyworld> veebers: left comment - just one main issue, hopefully it's clear
<veebers> wallyworld: ack thanks, will hit it
<wallyworld> unit status will ultimately be set from the charm via hook commands
<wallyworld> what we get from k8s will be stored as container status
<thumper> anyone... https://github.com/juju/utils/pull/303
<jam> stickupkid: as a follow up to my patch against juju/txn, this is the one that brings it into juju 2.3: https://github.com/juju/juju/pull/9083 can you take a look at it?
<stickupkid> sure can :)
<stickupkid> jam: LGTM
<jam> stickupkid: I realize I needed 1 more quick change to juju/txn, the printing of PruneOptions is ugly because uint64 defaults to printing in Hex, which isn't very nice
<jam> stickupkid: http://github.com/juju/txn/pull/44
<stickupkid> ah, yeah, I've been bitten by that before :|
<stickupkid> jam: done
<jam> ah. yet one more, just a moment. Forgot the printing was in juju/txn. needs a small debugging tweak
<jam> stickupkid: can you look at http://github.com/juju/txn/pull/44 again ? I changed the type and added the logging
<stickupkid> jam: approved
<hatch> magicaltrout are you still having issues as mentioned on discourse on Friday?
<thumper> veebers: where do we store the charms themselves used for log rotation?
<thumper> veebers: actually if you have a few minutes it would be helpful
<anastasiamac> thumper: that's an interesting question coz looks like some of our test charms have no hooks directory
<anastasiamac> thumper: or have one which is empty
<thumper> anastasiamac: some of them for sure
<anastasiamac> thumper: hence we r seeing the 'invalid charm" error that hml documented
<anastasiamac> thumper: we have restricted our definition of what is a valid charm recently... hence the error now
 * thumper nods
<anastasiamac> thumper: k wallyworld says he will look into this
<thumper> the charms are in the acceptence tests dir
<thumper> found the line I need to change
<thumper> I now need to relearn how to run the tests locally
<wallyworld> thumper: not so simple, the python generates the charms. we have it under control
<thumper> wallyworld: I'm looking at the log file rotation one
<wallyworld> thumper: sgtm, we are covering some too, we will mark the ones we are doing
<thumper> it seems like the README in the tree could do with some love
<thumper> it talks about environments.yaml
#juju 2018-08-21
<veebers> thumper: It should talk about how it's used by the tests (not juju) to store the details for substrates to test against
<thumper> veebers: do you have a few minutes
<veebers> thumper: I do
<thumper> veebers: HO?
<veebers> thumper: sounds good, up release-call?
<thumper> sure
<wallyworld> anastasiamac: small one https://github.com/juju/juju/pull/9085
 * anastasiamac looking
<anastasiamac> wallyworld: lgtm as long as the 'hooks' dir is created.. i'm guessing it is, otherwise it would not have worked for u :D
<wallyworld> the python code does all that
<wallyworld> you just need to assign the hooks
<anastasiamac> yep. i assumed so :) thanks for a quick fix!!
<wallyworld> np, i should do the same fix for the other charm in the same pr
<wallyworld> the constraints one
<wallyworld> will be the same thing
<anastasiamac> yes, all charms should have it now...
<anastasiamac> unless they are testing the actual failure to deploy invalid charms, and I do not think we have ci tests for that, only unit ones
<veebers> wallyworld, thumper it seems the upgrade test failure for 2.4 is legit, the upgrade command states to use proposed, controller logs show it's looking in released: https://pastebin.canonical.com/p/NG8PRCg26f/
<wallyworld> shit eh
<wallyworld> lucky we have tests
<veebers> I've noted in the doc, moving on to the next one
<veebers> team, what's the haps with https://bugs.launchpad.net/juju/+bug/1782803 (just noticed it as I was filing a bug)
<mup> Bug #1782803: juju 2.4.1: juju status failure <cdo-qa> <cdo-qa-blocker> <foundation-engine> <juju:New> <Juju Wait Plugin:Invalid> <https://launchpad.net/bugs/1782803>
<veebers> just noticed it was critical*
<wallyworld> veebers: from memory we told them it wasn't for juju to retry
<wallyworld> if it's the one i'm thinking of
<wallyworld> anastasiamac: i pushed a couple of small fixes for the other 2 CI failures
<anastasiamac> veebers: it was marked as New overnight but it should not be critical
<veebers> wallyworld: ok, the bug has been updated 6 hours ago. It might not be clear there as it's still marked crit
<veebers> ack anastasiamac ^^
<anastasiamac> veebers: it's something with their setup and yes, wallyworld is right - not on us
<wallyworld> looks like they've reopened it will logs attached
<wallyworld> *with
<wallyworld> it can be looked at but IMO we'll push back as not a release stopper
<anastasiamac> wallyworld: i'm not convinced that the api change is needed but i'm not too attached to it :D so my +1 still stands unless u want multiple +1 from me :D
<wallyworld> anastasiamac: what api change?
<anastasiamac> "zip file spec 4.4.17.1 says that separators are always "/" even on Windows."
<wallyworld> ok that. that's why the unit tests are failing on windows
<wallyworld> we were looking for a hooks\install
<anastasiamac> wallyworld: oh ic... good to know
<wallyworld> i have not rerun the windows unit tests but that *has* to be the reason i think
<wallyworld> we'll see soon enough
<anastasiamac> :)
<veebers> kelvinliu__: nice work with the enable-condition
<wallyworld> kelvinliu__: has that fix above landed? if so i'll strike out the issue in the doc
 * thumper groans
<thumper> veebers: the test failed with an unrelated failure AFAICT
<kelvinliu__> veebers, wallyworld just in 1:1 meeting with Tim. going to land it now,
<wallyworld> gr8 ty
<veebers> thumper: which test
<kelvinliu__> veebers, I deployed the RunFunctionaltests-amd64 job with the fix, and tested
<veebers> thumper: ah log rotation right
<thumper> yeah...
<veebers> kelvinliu__: sweetbix
<thumper> I'm just deploying the charm myself locally and testing that way
<kelvinliu__> wallyworld, landed and tested. going to re-test crd now
<wallyworld> ty
<veebers> thumper: what was the new failure?
<thumper> https://pastebin.canonical.com/p/rQfKs22C4d/
<veebers> thumper: you weren't expecting "ERROR:root:Wrong unit name: Expected: /var/log/juju/machine-0.log, actual: /var/log/juju/machine-lock.log" ?
<thumper> veebers: oh, I was just looking at the last error...
<veebers> thumper: ack, that last failure is probably jujupy choking because you used --existing and it screwed up and got confused :-|
<thumper> ah
<thumper> wallyworld: quick call?
<wallyworld> ok
<thumper> wallyworld: release call HO?
<thumper> wallyworld: https://github.com/juju/juju/pull/9086
<veebers> lxc list
<veebers> lol, wrong window
<anastasiamac> veebers: lolo :) at least no password... we've all putour password into irc chat at least once :)
<veebers> hah, I have done that too :-P
<veebers> or perhaps 'lxc list' *is* my password >_>
<anastasiamac> hmmm k that would be a pretty sad pass phrase :) altho i'm not better - i usually use song lyrics as my pass phrases :)
<anastasiamac> like a variation on 'a spoonful of sugar' :D
<veebers> ^_^
<veebers> ok, I'm redoing how we do the manual provider test, it's silly how we're currently doing it
<babbageclunk> Did someone clean up the GCE addresses? The quota is saying 4/23 in use.
<veebers> babbageclunk: I didn't, is it split by region?
<babbageclunk> veebers: I think so, but this is for the us-central1 region that's in the error.
<babbageclunk> huh, curiouser and curiouser.
<veebers> babbageclunk: hmm odd. Perhaps it was a perfect storm and there were heaps of jobs running in that region at the time and we got unlucky and ran out
<babbageclunk> Yeah, maybe.
<veebers> babbageclunk: could be worth checking what regions are used in tests and perhaps manually distributing them out a bit?
<babbageclunk> veebers: ok, just looking at the job config to understand what it's doing.
<veebers> babbageclunk: heh, let me know if you need anything clarified :-) Most of the job configs are setup, the test run is a single build step
<babbageclunk> Thanks, I'll have a go at working it out first before roping you in! :)
<veebers> is it possible to set a UserKnownHostsFile option for juju (i.e. ssh option)?
<wallyworld> veebers: juju help ssh says yes
<wallyworld> i assume you are talking about for running juju ssh
<veebers> wallyworld: I meant for everything ssh that juju does (i.e. with a manual provider how it gets into the machine)
<wallyworld> oh, juju use of ssh internally. i think that's all fixed
<veebers> it's ok I've gone with a different approach that'll work. It's just not so fancy
<wallyworld> fixed as in hard coded
<veebers> wallyworld: ack, thanks for confirming. I've got something working though
<veebers> (the reason was: I was 'lxc copy'-ing new machines from a base, but need to auth them to ssh in, using a generated known_hosts key would work, but need to set which file that actually is).
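A minimal sketch of the workaround described: point ssh at the generated known_hosts file via UserKnownHostsFile. The path and address are illustrative:

    import subprocess

    # use the known_hosts file generated for the copied machine
    subprocess.check_call([
        'ssh',
        '-o', 'UserKnownHostsFile=/tmp/generated_known_hosts',
        'ubuntu@10.0.8.2',
        'true',  # any trivial command, just to verify access
    ])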
<wallyworld> veebers: i left a comment on that upgrade bug - not something we can fix quickly / easily sadly IIANM
<veebers> I've since created manual tests for the different clouds and locked down which IPs they start with. The lxd network management seems pretty nifty https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/
<veebers> wallyworld:  oh, :-(
<veebers> wallyworld: it worked previously though right?
<wallyworld> not that i can see
<wallyworld> i can't have
<wallyworld> it
<veebers> ah ok
<veebers> oh, it works in develop though
<wallyworld> simple controller model works
<wallyworld> but not agents on machines
<wallyworld> probably broken in devel too, or not?
<wallyworld> need to check but if it works in develop my theory is wrong
<wallyworld> veebers: is the pexpect() stuff a substring match? eg does child.expect('(?i)password') match "some text here password:"
<veebers> wallyworld: that test is green for develop branch (upgrades)
<veebers> wallyworld: re: pexpect, should just be regex IIRC
<wallyworld> it could be green because the agent binaries get cached
<wallyworld> so my theory could be wrong. the code looks correct though
<wallyworld> for the controller we use the supplied agent stream
<veebers> wallyworld: FYI https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.expect
<wallyworld> hmmm, that test should work then
<wallyworld> unless it needs ^.* etc
<veebers> we should make it as promiscuous as possible, we only care if it's asking for a password
<veebers> wallyworld: FYI I found a 2.4 branch run of the upgrade test that passed: http://10.125.0.203:8080/job/nw-upgrade-juju-amd64-lxd/199/console (2.4-rc2)
<wallyworld> veebers: it could be the error is misleading then
<wallyworld> the agents will only look in release streams
<wallyworld> but if the controller has been done successfully first, the agents will be cached
<veebers> although this one fails as we're seeing now: http://10.125.0.203:8080/job/nw-upgrade-juju-amd64-lxd/233/console
<thumper> I can't seem to get the dbLog feature tests that fail intermittently to fail on my machine at all
<wallyworld> veebers: if my reading of the pexpect doc is correct, our test is broken. http://pexpect.sourceforge.net/pexpect.html#spawn-expect seems to say that expect("bar") will not match "foobar". so our expect("password") will not match "Enter a password:"
<veebers> wallyworld: huh, that seems to be the case if we're just passing in the string. We could pass in a compiled regex instead
<veebers> is the test really just using ("password")? that sucks
<wallyworld> child.expect('(?i)password')
<wallyworld> which hopefully is treated as an uncompiled regex
<wallyworld> all other usages seem to do the right thing and use the whole prompt
<wallyworld> eg
<wallyworld> child.expect('Enter client-email:')
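A short pexpect sketch of the exact-prompt convention wallyworld settles on; the command and prompt strings are illustrative, not the actual test code:

    import pexpect

    # spawn an interactive juju command (a hypothetical credential flow)
    child = pexpect.spawn('juju add-credential google', encoding='utf-8')
    # match the whole prompt rather than a fragment like 'password',
    # so a change to the prompt text breaks the test loudly
    child.expect('Enter credential name:')
    child.sendline('test-cred')
    child.expect('Enter client-email:')
    child.sendline('someone@example.com')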
<babbageclunk> did someone delete that GCE quota?
<wallyworld> not me said the duck
<veebers> babbageclunk: I haven't touched it
<babbageclunk> Weird, it's not listed on the quota page anymore. :/
<veebers> wallyworld: "Strings will be compiled to re types"
<veebers> babbageclunk: that's really odd
<kelvinliu__> wallyworld, the crd works as expected.
<wallyworld> veebers: ok, i'll look to follow convention elsewhere and use the exact prompt
<wallyworld> kelvinliu__: awesome ty
<kelvinliu__> wallyworld, np
<veebers> wallyworld: a regex would be better surely? so we don't get tripped up by minor text changes
<wallyworld> our preferred convention elsewhere (in juju also) is to use exact text
<wallyworld> so we get breakages
<wallyworld> so we think about the consequences of changing
<wallyworld> and also so we can see when error messages are dumb
<wallyworld> if you just match on a small regexp, you miss things like "could not do this because: could not do this: because could not do it" etc
<veebers> wallyworld: ack
<veebers> good point
<veebers> thumper: it seems like the commands in that job are failing which feeds bad input into the next command. one sec I'll line something up
 * thumper nods
<veebers> vinodhini: looks like the timeout extension worked, it needed an extra 10 minutes apparently
<veebers> 100 minutes is a long time for that test though, maybe there is an issue with azure-arm. Did you try a different region too? Perhaps the default we use is slow etc.
<vinodhini> i didn't try a diff region.
<vinodhini> it's just the timeout period i increased first, in the default region
<vinodhini> http://localhost:18080/job/nw-model-migration-amd64-azure-arm/647/console
<veebers> vinodhini: I would attempt trying a different region to see if that goes faster; having a test take 1hr 40 min is a bit gross :-)
<vinodhini> i will try with actual time period and diff region
<vinodhini> i mean the orig time period
<vinodhini> veebers: just a quick clarification plz correct me if i am wrong here - ENV=parallel-azure-arm -- i am setting this to a different region. and i am listing out the regions from juju list-region azure
<veebers> vinodhini: no, that env stays the same (it's the part that says run this test in azure-arm). just below that should be the assess_<blah> call, that should take a --region arg
<veebers> one sec, let me check
<wallyworld> veebers: a small PR for the pexpect fix
<wallyworld> https://github.com/juju/juju/pull/9087
<vinodhini> ok. i am seeing it in the acceptance test assess_model_migration
<vinodhini> i got that.
<vinodhini> --region is the option which overrides it.
<vinodhini> it's alright thanks veebers
<veebers> vinodhini: yeah --region should be there for the model migration test
<veebers> sweet :-_
<veebers> wallyworld: ack, looking
<veebers> wallyworld: you've used a json query CLI tool before? something like jq or so?
<wallyworld> i have
<wallyworld> can't remember the syntax though
<wallyworld> been a while but very useful
<vinodhini> its ok. i verified in py script
<veebers> wallyworld: ack cool I'll look it up, should be able to replace this 5-piped grep/sed/head command. ^_^
<vinodhini> now i have set time 90 and diff region and started it
<vinodhini> lets see
<wallyworld> veebers: yep, i piped from stdin etc when i used it
<wallyworld> veebers: i thought about controller_name but that is the one bit we don't really care about that could change
<veebers> wallyworld: ack, fair enough
<wallyworld> and it may not be controller_name
<wallyworld> the test should be using a different controller
<wallyworld> for true multi-controller cmr
<babbageclunk> Is anyone else getting gocomplaints from gometalinter about gomocks-generated files not being goimported?
<babbageclunk> wallyworld: ^
<veebers> I've updated the nw-bootstrap-constraints-maas-2-2 job so it should get the right input for the test, going to have tea will check back in later on.
<wallyworld> babbageclunk: i haven't so far
<wallyworld> kelvin added some new mocks yesterday
<wallyworld> but they are all committed in tree
<babbageclunk> wallyworld: I tried running it again and it went away, so I don't know what was happening there.
 * wallyworld shrugs
 * babbageclunk also
<veebers> wallyworld: don't forget to propose your fixes to develop too :-)
<vinodhini> veebers: are u still ard. i did revert back the time and changed the region and it's all good. Success.
<vinodhini>  http://localhost:18080/job/nw-model-migration-amd64-azure-arm/648/console
<vinodhini> wallyworld: looks like veebers not ard
<vinodhini> i would like to know abt this azure failure which is actually fine if we change the region.
<stub> go go gadget gometalinter
<vinodhini> so what shd be the solution? i have made the modification directly in the Web UI
<vinodhini> I have updated the doc.
<wallyworld> vinodhini: not sure, i'll have to read the failure, i am not familiar with it
<wallyworld> vinodhini: wouldn't it be better to increase the test timeout? that's what i seem to recall may have been discussed this morning
<vinodhini> wallyworld: i was away to get some dinner.
<vinodhini> Yes. initially i increased the timeout period and it was successful.
<vinodhini> but veebers was asking me not to do that way
<wallyworld> vinodhini: ok, i'm surprised at that. i'll talk to him tomorrow. just changing the region is quite fragile as that could slow down also
<wallyworld> thanks for looking into it
<vinodhini> its ok.
<vinodhini> i was working on the credentials part, it was just running side by side.
<wallyworld> good plan
<vinodhini> this is not a potential failure. it's slow, that's why it's an issue
<wallyworld> yeah, azure is very slow at instance creation/destroy
<vinodhini> So we arent doing release today ?
<vinodhini> I am sure veebers will look into the status a bit later :-)
<wallyworld> maybe, maybe not, depends on how the other guys go with the remaining issues. i'd say not today but tomorrow if i had to guess
<vinodhini> In this case i am not sure how to target the solution. Modifying a config option is not a fix.
<vinodhini> So we should focus on solution.
<vinodhini> ok. wallyworld. I am drafting a mail to you. I won't be there tomorrow morning as i have an appointment with the Indian consulate.
<wallyworld> it depends on the root cause. if the substrate is slow, then increasing a timeout seems reasonable to me
<veebers> vinodhini, wallyworld: The timeout is already 90 minutes, any more seems like a huge amount. My suggestion was to try a different region in case the original is having troubles etc.
<wallyworld> wow 90 minutes!!!
<wallyworld> fark
<veebers> wallyworld: if it's still taking ages in another region there is an issue there
<wallyworld> yeah, let's see
<veebers> yeah, it times out after 90 :-) Takes about 1hr 45 min for a successful run
<wallyworld> veebers: do you know the gce quota status? was that sorted?
<veebers> wallyworld: no idea sorry. I know babbageclunk was looking. We thought perhaps it was bad timing and we had a bunch of stuff all running in the same region etc. Not sure if the suggestion to check which region is used across tests (with the thought to share it out a bit) went anywhere
<wallyworld> ok, np
<veebers> wallyworld: the jq way is much better: https://github.com/CanonicalLtd/juju-qa-jenkins/pull/81/files
<wallyworld> veebers: looks good
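As a rough idea of the structured-query approach replacing the grep/sed/head pipeline, a hedged Python sketch; the JSON shape and key names are hypothetical, not the actual job output:

    import json, sys

    # read JSON from stdin, e.g. piped from a `juju ... --format json` command
    doc = json.load(sys.stdin)
    # pull the wanted field out directly instead of grep/sed-ing for it
    print(doc['controllers']['test-controller']['api-endpoints'][0])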
<veebers> wallyworld: this is an easy one: https://github.com/juju/juju/pull/9088
<wallyworld> looking
<wallyworld> lgtm ty
<stickupkid> manadart: you got 5 minutes for a quick HO?
 * stickupkid gone for lunch
<rick_h_> morning party folks
<rick_h_> stickupkid: morning
<rick_h_> stickupkid: can I ask you to pause WIP and grab an issue from the release blocking doc please?
<stickupkid> sure can
<rick_h_> stickupkid: ty, the other side of the world cranked out a lot of notes/fixes and we need to help move forward today.
<stickupkid> rick_h_: just reading up on the doc
<rick_h_> stickupkid: k, let me or hml know if you have any questions/issues
<manadart> externalreality: Approved #9084
<externalreality> manadart, cool. I spotted that the removal of the Id field I attempted to push did not make it in. Gonna add that before attempting to land.
<manadart> externalreality: Didn't quite get all of my PR done before EoD, but I've put it up as a WIP, if you are able to review: https://github.com/juju/juju/pull/9090
<hml> stickupkid: quick pr pls: https://github.com/juju/juju/pull/9091
<stickupkid> hml: done
<hml> stickupkid: ty
<hml> stickupkid: i'm off to long lunch shortly.  do you have anything for me to review?
<stickupkid> hml: nope, nothing atm, just digging
<stickupkid> pretty sure I'm just making the hole deeper
<hml> stickupkid: ha!
<externalreality> manadart, reviewing now
<rick_h_> stickupkid: "I'm gonna need a bigger shovel!"
<stickupkid> rick_h_: true
<stickupkid> has anyone seen this recently "16:51:11 DEBUG juju.provider.common bootstrap.go:575 connection attempt for 10.156.96.10 failed: /var/lib/juju/nonce.txt does not exist" - it's been happening a couple of times today
<stickupkid> ?
<stickupkid> Just doing a "juju bootstrap localhost --debug" on the 2.4 branch
<stickupkid> it works in the end, but really takes its time...
<rick_h_> stickupkid: looks like some history https://bugs.launchpad.net/juju-core/+bug/1314682
<mup> Bug #1314682: Bootstrap fails, missing /var/lib/juju/nonce.txt (containing 'user-admin:bootstrap') <bootstrap> <juju> <maas-provider> <juju:Expired> <juju-core:Won't Fix> <https://launchpad.net/bugs/1314682>
<stickupkid> rick_h_: nice, i'll give that a read
<stickupkid> rick_h_: so i guess the retry that's implemented to fix this, does work... maybe my computer was just being slow...
<rick_h_> stickupkid: yea, not sure.
 * stickupkid back to digging...
<veebers> Morning o/
<rick_h_> wheeeee
<cory_fu> wallyworld, kelvinliu_, knobby: This call reminded me of this, if you haven't seen it: https://www.youtube.com/watch?v=JMOOG7rWTPg  :p
<kelvinliu_> cory_fu, ^.@
<babbageclunk> wallyworld, veebers: I had a look at the GCE quota thing. As far as I could see the quota was now fine - IP addresses in use was fluctuating between 4 and 0 when the test was running. I couldn't change the region tests were using because it's defined as us-central1 in environments.yaml. Maybe I could duplicate parallel-gce as parallel-gce-us-east1 and move some jobs to use that instead?
<veebers> babbageclunk: using --region with an assess script should overwrite that IIRC
<hml> babbageclunk: it looks like we may hit it when there are two ci-runs going at the same time
<hml> babbageclunk: that's what was going on when it was hit again in run 1089
<babbageclunk> veebers: ah, thanks - so if I change the jobs to use different regions that might avoid it? It definitely looks like a per-region quota.
<babbageclunk> veebers: ok, I'm going to do that now.
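For context, a hedged sketch of how an assess script's --region override might be wired up; names and defaults here are illustrative, not the actual jujupy code:

    import argparse

    # parse an assess-script style command line
    parser = argparse.ArgumentParser()
    parser.add_argument('env', help='e.g. parallel-gce')
    parser.add_argument('--region', help='override the region from environments.yaml')

    args = parser.parse_args(['parallel-gce', '--region', 'us-east1'])
    print(args.region)  # us-east1, instead of the default us-central1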
<babbageclunk> (dumb question, but what does the nw- prefix mean?)
<babbageclunk> gah, my brain's stopped accepting "likelihood" as a real word.
<babbageclunk> likeli
<rick_h_> it does look strange written out
<babbageclunk> veebers: can you take a look at https://github.com/CanonicalLtd/juju-qa-jenkins/pull/83 ? I've checked there are no errors from jenkins-jobs.
<veebers> babbageclunk: can do
<babbageclunk> ta
<babbageclunk> After it's deployed I'll make sure to run each of the changed jobs, just in case I missed a \
<veebers> babbageclunk: LGTM. a redeploy should be just doing nw-* so it redeploys all the functional jobs (no need to screw around cherry picking names etc.)
<babbageclunk> veebers: more detail? Hang on, I'll read more of the readme.
<veebers> babbageclunk: oh, your question earlier re: nw- prefix; hah it's because while we were spinning up the new CI run bits we continued to run the original jobs; You couldn't run both at the same time as they stomped on each other (workspace/$JOBNAME is the working dir for a job). So I added nw- (new world), it was supposed to be changed when we did the roll over but never was
<veebers> babbageclunk: ah sorry, hah yeah the arg for jenkins-job . . . . -r jobs/ci-run nw-*
<babbageclunk> Ah, ok - so running `jenkins-jobs update` like in the deploying jobs section, but with a wildcard to do all the new-world jobs.
<babbageclunk> veebers: cool, thanks!
<veebers> babbageclunk: yep that's the one
<babbageclunk> veebers: ok, having a go at deploying them now.
<veebers> babbageclunk: sweet, let me know when it's done as I'm deploying and testing some changes I'm making
<babbageclunk> how do you add a private key interactively (in juju add-credential)? Remove all the linebreaks?
#juju 2018-08-22
<thumper> ugh...
<thumper> my update clock branch now conflicts...
<thumper> when you touch that many files, I suppose it shouldn't be surprising
<wallyworld> veebers: what's the best way for me to ssh into a machine to run windows unit tests?
<wallyworld> thumper: here's a small charm.v6 tweak to hopefully fix the windows unit tests https://github.com/juju/charm/pull/256
 * thumper looks
<wallyworld> why the fark does windows insist on screwed up path separators
<thumper> wallyworld: approved
<wallyworld> ty
<thumper> wallyworld: does it fix the bug?
<wallyworld> thumper: nfi, i don't have a windows box. but reading the code i think so
<thumper> wallyworld: why can't you log into the windows box?
<wallyworld> is it documented somewhere?
<thumper> yes...
<thumper> somewhere
<wallyworld> change is good regardless as it makes the zips generated standards compliant
<veebers> wallyworld: (soz was at lunch) there is no automated way, you would scp and ssh into the windows machine and do it that way (you could crib off the windows unit test job)
<veebers> wallyworld: I had issues ssh-ing into the machine, you may have more luck rdp-ing into the machine after scp-ing the code onto it
<wallyworld> gawd
<wallyworld> easier to land the fix and see. it's a good fix regardless
<veebers> hah ok. We have plans to add windows unit test to the pr experience
<veebers> where is JUJU_DATA for windows?
<wallyworld> veebers: i think under the User home dir
<veebers> wallyworld: ack thanks
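A small sketch of resolving JUJU_DATA with a per-user fallback; the fallback path is the Linux default, and the Windows location is only assumed to sit under the user home dir, per the discussion above:

    import os

    def juju_data_dir():
        # an explicit JUJU_DATA env var wins; otherwise assume the
        # per-user default (a hedged guess, not a confirmed location)
        return os.environ.get('JUJU_DATA') or os.path.expanduser(
            os.path.join('~', '.local', 'share', 'juju'))

    print(juju_data_dir())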
<wallyworld> thumper: here's the juju bit of that fix https://github.com/juju/juju/pull/9094
<thumper> wallyworld: that is a lot more than just the charms update
<wallyworld> tell me about it
<thumper> wallyworld: and there are two test failures
<wallyworld> yeah, fixing now
<wallyworld> due to new hooks
<wallyworld> the sands shifted underneath the deps
<wallyworld> so bringing in tip of charm.v6 also needed latest of other things
<wallyworld> the other things are oci-image and devices support in charm metadata
<wallyworld> which are unused in 2.4
<wallyworld> and also the new update-series hooks, so tests needs fixing
 * thumper nods
<wallyworld> thumper: tests pass now
 * thumper looks again
<thumper> wallyworld: approved
<wallyworld> ty
<thumper> wallyworld: I'm going to go and make a coffee before our chat
<thumper> wallyworld: are you wanting to meet this afternoon?
<wallyworld> thumper: sgtm, i'll do the same
<wallyworld> yeah let's briefly
<thumper> ok
 * thumper goes to make coffee
<wallyworld> thumper: in HO
<wallyworld> babbageclunk: still on target to land the big mother branch?
<babbageclunk> Hopefully - have been chasing the openstack s390x failures with veebers
<veebers> sorry to distract you babbageclunk :-P
<babbageclunk> and dumb gce problem
<veebers> babbageclunk: hah, this commit introduced that test failure: https://github.com/juju/juju/commit/48471e3bac7cd694540067dc5fa823c3a28f52c2
<babbageclunk> bums
<veebers> it's super complicated :-P I'm not sure why yet but it does. I suspect that the error message is wrong and it should be something different
<veebers> we didn't see it come up earlier because I broke the ci-run and we missed a handful of commits
<veebers> wallyworld, thumper: I updated the doc, the invalid url thing isn't related to proxy issues (at least for the unit test) but was introed with the lts name change (not sure of the root cause of the failure, potentially a masked error?)
 * wallyworld is otp with IS
<veebers> ack
<veebers> I need to go sort tea, I'll be back to push a PR for the assess_recovery failure after that (just testing a run now).
<anastasiamac> veebers: just out of curiosity..
<anastasiamac> veebers: is bionic in /usr/share/distro-info/ubuntu.csv on the ci machines that run our tests?...
<veebers> anastasiamac: I'll have a look for you
<veebers> anastasiamac: yep, including cosmic
<anastasiamac> veebers: thnx... ok..
<anastasiamac> veebers: coz we also have hardcoded mapping (m rolling my eyes here) but m hoping that we do not use it
<veebers> the s390x machine does say it needs a reboot, perhaps I'll do that after dinner
<anastasiamac> veebers: ack
<veebers> anastasiamac: ack, I need to confirm but I think this might just happen on s390x
 * anastasiamac sighs
<anastasiamac> veebers: i was under impression we no longer supported s390x
<thumper> no... we do support s390x
<anastasiamac> thumper: \o/
<anastasiamac> veebers: m not sure if it'll help but it looks like the juju/os commit in dependencies.tsv needs to be updated... 2.4 branch points to the one I did in July and misses the update from uros to support new mac os (done 14 days ago)... thumper <<
<thumper> agreed
<thumper> if we are rolling a new hash for testing anyway, we should update it
<anastasiamac> k. i'll pause what m doing and will propose now
<anastasiamac> just juju/os or can u think of anything else that needs to be updated?
<anastasiamac> thumper: veebers: https://github.com/juju/juju/pull/9095
<thumper> anastasiamac: approved
<anastasiamac> \o/
<anastasiamac> a simple review plz https://github.com/juju/juju/pull/9097
<stickupkid> anastasiamac: I'll give a look
<stickupkid> anastasiamac: AND done... :D
<rick_h_> stickupkid: can you please make sure to keep an eye out on that OS PR and land it once it's good?
<manadart> externalreality: Do you have any objection to suffixing the status constants in the model package, with "UpgradeSeries" or "Status"?
<manadart> Disambiguates them from other exports.
<stickupkid> rick_h_: yeap sure can
<rick_h_> stickupkid: ty!
<hml> stickupkid: rick_h_ : the ci-run for 2.4.2 doc is updated with the latest and greatest.
<cory_fu> stub: You around still?
<stickupkid> hml: i'll have a quick look at this one - Export bundle test shouldn't be running in 2.4 ci
<rick_h_> hml: awesome
<rick_h_> stickupkid: that was moved behind a feature flag. Might just need to make sure the tests are engaging the flag?
<rick_h_> stickupkid: or since it's behind a flag skip the tests for the moment in 2.4
<rick_h_> and we'll sort it in 2.5/backport in the future
<hml> rick_h_: stickupkid: the export bundle test wasnât being run in 2.4 ci for a bitâ¦ then appeared overnight
<rick_h_> hml: like last night overnight?
<hml> rick_h_: yes… it wasn't run in the 1089 run, but is in 1093
<rick_h_> lol
<stickupkid> haha
<stickupkid> tada
<rick_h_> well, should make the git bisect easy :)
<stickupkid> https://github.com/CanonicalLtd/juju-qa-jenkins/blob/master/jobs/ci-run/functional/functionaltests.yml#L197
<stickupkid> so it seems this was changed, to run again
<stickupkid> I'll make a PR
<rick_h_> well that reads ! doesn't it?
<rick_h_> I'm confused
<rick_h_> stickupkid: time for our 1-1 chat?
<stickupkid> one sec, on a phone :S
<rick_h_> stickupkid: lol ok
<stickupkid> wife broke her new phone :|
<rick_h_> oops, did you tell her that's not how you're supposed to use it?
<rick_h_> I find that if I tell my wife that she shouldn't have done that it helps a lot and she's grateful for my advice :)
 * hml snickering :-D
<kwmonroe> cory_fu: will config.changed.foo always be set on initial deployment?  i seem to recall config-changed always runs, but am unsure if the c.c.foo flag will be set at that time, or if it's only set if foo is not the default, or if it's not set at all until a 'juju config' operation.
<cory_fu> kwmonroe: Yes, it will always be set on the first run
<cory_fu> kwmonroe: Specifically, it uses `config.changed(opt)` (see https://github.com/juju-solutions/layer-basic/blob/master/lib/charms/layer/basic.py#L211) which will evaluate to True the first time it's called (https://github.com/juju/charm-helpers/blob/master/charmhelpers/core/hookenv.py#L351)
<kwmonroe> excellent cory_fu.  thx
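A minimal reactive-charm sketch of the behaviour cory_fu describes: the config.changed.<option> flag is set on the very first hook run too, because no prior value has been recorded. 'foo' is a hypothetical option:

    from charms.reactive import when

    @when('config.changed.foo')
    def handle_foo_changed():
        # fires on the first hook run (no previous value recorded) and
        # again whenever `juju config <app> foo=...` changes the value
        pass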
<cory_fu> Cynerva_: I think I've addressed all of your comments on https://github.com/juju-solutions/kubernetes/pull/148 if you don't mind taking another look when you have some time
<Cynerva> cory_fu: thanks, will look soon
<externalreality> rick_h_, (upgrade series feature) - As far as allowing retries when hooks fail. My assumption is that retry on failure should be implicit from a CLI perspective. A user can simply run the command again after hook failure and not need to explicitly specify a flag (--retry).
<rick_h_> externalreality: right, currently though it says you can't because it's already started, for instance
<externalreality> rick_h_, understood
<rick_h_> externalreality: so there's a question then on if we want to warn the user it's already done and if they want to try again make them express that they're aware of that
<rick_h_> externalreality: so I guess that was just a "hmm, how do we want to handle this?"
<externalreality> rick_h_, where would that warning come from. `juju status` doesn't display that kind of information (?I think?) and the `prepare` command, of course, runs async with respect to the CLI (so cannot warn as command output). The only option would be to log that warning - correct?
<externalreality> rick_h_, Where would the user get that warning info from? that is.
<rick_h_> externalreality: so currently if I run prepare after I've already run it the command outputs that the lock document already exists (or something to that effect)
<rick_h_> externalreality: maybe I'm confused. let me get the test thing setup again.
<externalreality> rick_h_, Yes, if you run again the command says the lock already exists.
<externalreality> rick_h_, do you mean that if you should run it again and the hook is in error, the command then notifies you that "The hook has failed on a previous run, would you like to try again..." kind of thing?
<rick_h_> externalreality: thinking
<externalreality> rick_h_, ack
<rick_h_> sorry, yea my first thought was along that line but I recall a conversation with jam we had around existing methods of dealing with failing hooks and our existing resolved --retry mechanisms
<rick_h_> externalreality: so thinking through how that might work
<rick_h_> and any cases where it might fail/abort but not be tied to a pre/post hook execution
<externalreality> rick_h_, cool
<rick_h_> externalreality: ok, let's hold off on that for now then. Let's just make sure we can resolved/--retry the prepare hooks
<externalreality> rick_h_, ok
<rick_h_> externalreality: and we should validate that once those hooks are successful the process will still move forward doing the machine-related steps
<externalreality> grumpig is out of disk space
<rick_h_> wheeee
<rick_h_> tfw you forget to turn on the feature flag before you bootstrap and deploy the test charm...
<rick_h_> oh dammit, and I also forgot to run make install...wheeee take 4
<thumper> morning
<rick_h_> morning thumper
<veebers> Morning o/
<rick_h_> morning veebers
<veebers> Morning rick_h_ o/
<babbageclunk> morning veebers and rick_h_
<babbageclunk> and thumper
<rick_h_> wheeee
<babbageclunk> wallyworld: just doing the last bit of changes, flipping the sense of that feature flag. Do you think I should call it disable-raft-leases or something else?
<wallyworld> babbageclunk: what about use-legacy-leases or something?
<wallyworld> keep the sense of the flag what to use rather than what not to use
<babbageclunk> wallyworld: yeah, that's nicer thanks. Also, do you want to look through my other responses in case there's something I'm being silly about?
<wallyworld> babbageclunk: should be good, i can take a quick look
<babbageclunk> wallyworld: awesome thanks
<babbageclunk> wallyworld: ping me if you want to argue about anything :)
<veebers> wallyworld, babbageclunk: yay https://github.com/juju/juju/blob/develop/acceptancetests/jujupy/client.py#L227
<babbageclunk> veebers yay, delete it!
<anastasiamac> babbageclunk: PTAL https://github.com/juju/juju/pull/9096
<babbageclunk> anastasiamac: wilco - are you blocked on it? Do you mind if I finish off my raftlease featureflag change first?
<anastasiamac> babbageclunk: m kind of blocked on it but i can also try to exercise my patience... ;) wanted to land it within 24hr of proposing...
<anastasiamac> babbageclunk: i can always beg wallyworld or thumper for a review :D
<wallyworld> i can look soonish
<wallyworld> add it to the queue
<veebers> maas spaces and lxd, if I'm seeing the error: (1/lxd/0', 'no obvious space for container "1/lxd/0", host machine has spaces: "ha-space", "space-0") how can I add an obvious space? (this is maas 2.3, juju 2.4)
<wallyworld> babbageclunk: changes/comments look ok to me - maybe add a todo or a trello card for the fsm optimisation for expired lease removal
<anastasiamac> thanks, wallyworld \o/
<babbageclunk> wallyworld: great thanks
<wallyworld> babbageclunk: when it's landed i expect a video of you doing a happy dance
<wallyworld> and a discourse post :-)
<babbageclunk> anastasiamac: I had to run outside to get washing off the line - I'll take a look now!
<anastasiamac> babbageclunk: :)
<anastasiamac> babbageclunk: it's not that urgent.. i'll need to take my 5yo to do a blood test
<babbageclunk> oh no!
<anastasiamac> babbageclunk: which apparently requires several specialists to hold him down
<babbageclunk> :(
<babbageclunk> poor wee guy
<anastasiamac> babbageclunk: so when i said 24 hrs... i meant we have until 10pm bne time
<anastasiamac> babbageclunk: yeah :(
#juju 2018-08-23
<wallyworld> vinodhini: pr looks good so far
<vinodhini> thank u. wallyworld: i shd just add in all call points to verify if it's credentials
<vinodhini> that's all the work.
<veebers> sigh, I just wrote something down then went to select it and copy it :-|
<babbageclunk> ha
<anastasiamac> wallyworld: thumper: PTAL https://github.com/juju/juju/pull/9101 - a fix for that cmd/plugins issue with no controllers (bug linked in PR)
<wallyworld> lgtm, thanks for fixing
<wallyworld> can backport to 2.4 after release
<anastasiamac> nws, i will :)
<anastasiamac> thnx for such a quick review \o/ m speechless
<wallyworld> wasn't a big PR :-)
<wallyworld> and the logic made sense
<anastasiamac> babbageclunk: thnx for review \o/
<anastasiamac> babbageclunk: i like to name fail() within a func but this one was a variable that was package-visible.. i was naming it to avoid conflicts with other potential fail methods...
<babbageclunk> anastasiamac: yeahhhh, but it's a new package right?
<anastasiamac> babbageclunk: no, it has other stuff there...
<anastasiamac> babbageclunk: i can put a fail in both funcs but it'll b just a copy.. i think i'll do it anyway.. might be neater
<babbageclunk> anastasiamac: oh, ok then - I'd get rid of the function in that case, `return empty, errors.Trace(err)` is better than `return veryLongFunctionThatDoesntDoAnything(errors.Trace(err))` ;)
<anastasiamac> babbageclunk: +1
<babbageclunk> (exaggerating, obvs)
<anastasiamac> :)
<veebers> Is there a way to discover bundles on the charmstore? I'm looking for one that doesn't use trusty to test something
<externalreality_> veebers, whats a good strategy for freeing up some space on grumpig?
<externalreality_> veebers, du says there are a lot of old jobs living on grumpig at 1G apiece.
<veebers> externalreality_: hey o/ I just cleared it out :-)
<externalreality_> veebers, thank you veebers! :-D
<veebers> externalreality_: this is a known issue that I'm working on, I have a PR in the works for fixing how we run the pr jobs
<externalreality_> veebers, understood, thank you sir.
<veebers> externalreality_: long-short: we pull the source locally and attach that dir to the lxd container, if it fails we keep the container for debugging, but the next job comes along, tries to clean up the build dir (can't as it's being used by lxd) and so moves it aside
<veebers> my fixes make everything self contained in the lxd container (as well as simplify the config file, 1 20-line file encompassing all jobs instead of 20 many-line files)
<externalreality_> veebers, when you say "moves it aside" where does it move it aside to?
<veebers> in the workspace dir and adds _ws-cleanup-<timestamp>
<veebers> (those are what I deleted just now)
<externalreality_> veebers, gotcha - cool. Yes and those are the old jobs that du was claiming occupy 1G+ of disk space apiece. They must build up quick.
<externalreality_> thank you veebers!
<wallyworld> veebers: just lifting my head out of the fog of cmr stuff, what's the status of the maas tests etc?
<veebers> wallyworld: ugh
<wallyworld> oh, that good huh
<veebers> wallyworld: Maas deploy test works (that happened earlier), I'm debugging the container network one, I've made a little progress
<veebers> wallyworld: re: vsphere test I've made progress there, can bootstrap etc. The test itself is pretty, uh, pants though. I'm just looking at what it's actually trying to do. I think we can pare it back to a 'sensible' deploy for now
<wallyworld> sadly i don't have much advice to offer
<wallyworld> yeah, minimal smoketest is probably ok for now
<veebers> wallyworld: s390x unit test, still unknown, I'm rebooting that machine at the mo (oh I hope it comes back up)
<veebers> (oh yay it did)
<wallyworld> win
 * veebers checks his notes
<veebers> got a couple things on the run, um, oh it's possible the charm-store failure is transient, re-run in progress now
<veebers> the maas and vsphere stuff takes *ages*, not sure if that's expected
<wallyworld> yeah, i think it is
<veebers> wallyworld: the network health test is all over the place, I'm proposing a deploy test for vsphere for the meantime to unblock us, then we can untangle the network health mess
<wallyworld> sgtm
<wallyworld> imo we could still ship even with network test not working
<wallyworld> the deployment works and maas 1.9 is only transitional
<veebers> wallyworld: I'm also sceptical that the assess_container_networking test worked, it does a 'juju run reboot', which will *always* result in a CalledProcessError as the session is terminated by the reboot, but it's not handled
<wallyworld> ugh
<wallyworld> we need to look at who wrote that test and perhaps ask about the history of it
<wallyworld> i wouldn't waste any more time on it
<veebers> wallyworld: I have fixes for the issue I think. I'll plonk them in and re-run to see. (leave it running in the background etc.)
<wallyworld> ok
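A hedged sketch of the fix veebers describes, assuming the test drives juju via subprocess; the machine id is illustrative:

    import subprocess

    try:
        # `juju run` executes the command over ssh; the reboot tears the
        # session down mid-command, so a non-zero exit is expected here
        subprocess.check_call(['juju', 'run', '--machine', '0', 'sudo reboot'])
    except subprocess.CalledProcessError:
        pass  # treat the dropped session as expected, not a test failure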
<veebers> wallyworld: the unit test on s390x is still unknown :-| Looking at that now
<wallyworld> ty
<wallyworld> anastasiamac: https://github.com/juju/juju/pull/8350 breaks aspects of status updates, we will need to discuss how best to fix
<wallyworld> i think we could achieve the intent using a check in setStatusOps
<anastasiamac> wallyworld: k.. let's discuss it... when?
<wallyworld> anytime suits me
<anastasiamac> ho? standup?
<wallyworld> sure
<veebers> wallyworld: FYI charm-storage got a success, I used a different region, I suspect perhaps storage quota used up in the other one? (looks like teardown on error results in storage left lying around)
<wallyworld> plausible
<wallyworld> it did seem that it could be transient
<veebers> babbageclunk: FYI,, s390x unit test, a more helpful error message: no "bionic" images in some-region with arches [s390x]
<babbageclunk> veebers: bloody red herrings!
<veebers> it's looking but not finding, not yet sure why. I do know that https://github.com/juju/juju/blob/develop/environs/simplestreams/simplestreams.go#L392 is returning the herring
<babbageclunk> Is it because the test isn't seeding the local server with bionic images?
<babbageclunk> (might be a stupid question)
<veebers> babbageclunk: a possibility, need to dig in. (now I have more info I can do so locally, instead of on a distant machine via vanilla vi :-)
<babbageclunk> oh nice
<veebers> hmm, but it works on other arches, maybe it's going bung for s390x
<veebers> ok, vsphere deploy job passes (have the work, needs a pr), charm-storage passes (changed region, have the work, needs a pr), container networking passing (have the work, needs a pr), s390x unit test still on going
<veebers> PRs will probably come after dinner at this rate
<veebers> (oh and we nuke network-health vsphere as that test is pants, have a deploy instead)
<anastasiamac> veebers: what was the problem/fix for container networking? (well done btw!!)
<anastasiamac> we have tests that are pants?
<veebers> anastasiamac: unf-ing the test a bit ;-) will have a pr up shortly. but 1. change a string comparison 2. change a reboot command to handle the immediate termination of the ssh session that it's issued on
<anastasiamac> veebers: niice
<veebers> FYI https://github.com/juju/juju/pull/9102
 * anastasiamac looking
<babbageclunk> JujuConnSuite is an abomination and I am making it worse
<anastasiamac> :(
<veebers> babbageclunk: yay \o/ burn baby burn
<veebers> jam: re: the s390x test, I was going to look at provider/openstack/local_test.go localServerSuite.SetUpTest, perhaps the uploadfaketools or UseTestImageData isn't prepping things correctly
<jam> veebers: thanks for the pointer. This line seems particularly interesting
<jam> [LOG] 0:00.722 DEBUG juju.environs.instances matching constraints {region: some-region, series: bionic, arches: [s390x], constraints: mem=3584M, storage: []} against possible image metadata [{Id:1 Arch:amd64 VirtType:pv} {Id:id-1604arm64 Arch:arm64 VirtType:pv} {Id:id-1604ppc64el Arch:ppc64el VirtType:pv}]
<jam> It seems to find "ppc64el" binaries, but not s390x ones.
<thumper> well fuck...
<thumper> my brilliant thought on why my tests are failing was wrong
<thumper> now I'm back to not knowing why they are failing
 * thumper digs more in the 10m before the meeting
<wallyworld> if anyone can do a small review that would be gr8 https://github.com/juju/juju/pull/9103
<manadart> wallyworld: Looking.
<wallyworld> yay ty
<manadart> wallyworld: Approved with comments.
<wallyworld> manadart: thanks, i will fix the error logging. i had it in my head that Wait() returns a non-nil error, which is bogus
<babbageclunk> wallyworld: ping?
<babbageclunk> wallyworld: just in case you're around later on: I've updated all the tests that were claiming leases through state to use the dummy provider lease manager. The only package that still has failing tests is cmd/jujud/agent, which looks like it starts a full agent with dependency engine and raft workers...
<babbageclunk> wallyworld: it seems like it's falling prey to the startup/bouncing-apiserver issue I was planning on tackling next.
<babbageclunk> wallyworld: I'm tempted to set the legacy-leases flag for that test and land it, then fix the startup issue and that test at the same time. What do you think?
<babbageclunk> wallyworld: Actually, I'll do that now but not land it, I'll check with you in the morning.
<wallyworld> babbageclunk: heyu
<wallyworld> your plan sgtm
<babbageclunk> wallyworld: sweet
<BlackDex_> Hello :). I wonder if it is possible to have lxd 3.0.x installed on xenial during a juju deployment instead of having the default 2.0.x version
<stickupkid> BlackDex_: yes that's possible
<stickupkid> BlackDex_: you can follow this video, which does the same thing https://www.youtube.com/watch?v=RnBu7t2wD4U
<stickupkid> rick_h_: how backwards compatible should we be with lxd 2.x?
<stickupkid> manadart: we never read that file, if the bridge name is "lxdbr0", I need to work out if that's been changed or not
<stickupkid> manadart: ignore me... think i got that wrong
<rick_h_> stickupkid: sorry, so what do you mean? :)
<stickupkid> HO?
<rick_h_> stickupkid: k, omw
<stickupkid> manadart: we're missing this function https://github.com/juju/juju/blob/2.2/container/lxd/initialisation_linux.go#L179
<manadart> stickupkid: Missing?
<stickupkid> manadart: it's not there inside the container, which causes it to error out
<manadart> externalreality: I tacked on a fix for the tag conversion panic to https://github.com/juju/juju/pull/9105. I know you conditionally approved it, but if you could take a look on account of the commits added since...
<manadart> externalreality: I really have to sign off now, but if it goes green, merge it to get the fix in. I am happy to take on defence of the other changes there.
<externalreality> manadart, ack
<rick_h_> hml: for those posts I've got a ci job category under development. I've moved those two over.
<hml> rick_h_: cool.  i've figured out the discourse categories, but not the nested ones.  :-)
<rick_h_> externalreality: QA note inbound on your PR. Let me know if I missed something
<veebers> Morning all
<rick_h_> morning veebers
<veebers> How are things today rick_h_ ?
<rick_h_> veebers: wheeeeee
<rick_h_> veebers: once you get settled can you please check in with hml and make sure she carried through the WIP you had going last night?
<veebers> can do
<rick_h_> veebers: we tried to put together what the status was from the PRs and IRC backlog we had to go off of
<rick_h_> but good to make sure we figured it out right
<veebers> rick_h_, hml any idea where we landed with the s390x unit test, I believe jam took a bit of a look?
<hml> veebers: i didn't look at it today, so we're in the same place
<veebers> ack
<thumper> babbageclunk: morning, got a few minutes?
<babbageclunk> thumper: sure!
<babbageclunk> in 1:1?
<thumper> ack
<babbageclunk> wallyworld: do you think I should squash up the commits before I land the megabranch?
<wallyworld> i think so
<babbageclunk> ok, but it's going to be one huge commit
 * thumper sighs
<thumper> there is a test in state that isn't timing out...
<thumper> hmm...
<thumper> WatchPodSpec
<thumper> hmm maybe not
 * thumper digs more
<thumper> nope, it is state pool tests
#juju 2018-08-24
<wallyworld> kelvinliu_: forgot to ask, this PR can be closed right? https://github.com/juju/juju/pull/8936
<kelvinliu_> wallyworld, yeah, i just closed this for now. we can solve it later.
<wallyworld> kelvinliu_: can you make sure there's a card on the caas trello board for it so we don't forget
<kelvinliu_> wallyworld, sure, done.
<wallyworld> ty
<kelvinliu_> np
<thumper> bollocks
<veebers> wallyworld: am I going crazy? using a fresh develop build I deploy a caas charm, juju status never changes the workload or message from active/Started Container
<thumper> intermittent allwatcher test failures
<veebers> I thought my changes introduced that hence why trying develop. I built a fresh operator image and set that too incase there was some caching or something
<thumper> I think we have had them for a while,
 * thumper digs
<wallyworld> veebers: i don't quite follow. status should not start out as active
<veebers> wallyworld: sorry, this is status after ages: https://pastebin.canonical.com/p/3p5yYBFnKf/ I see with kubectl -n failing-message log -f juju-operator-mariadb that everything is happy etc. and the unit is fine
<veebers> I assumed that status should have been updated right?
<wallyworld> veebers: sorry, what's wrong with status? it looks correct? what are you expecting to see?
<veebers> wallyworld: ah shit,  you're right; sorry I was thinking that 'started container' was the first message and something else should replace that.
<veebers> wallyworld: I blame Friday + lack of sleep
<veebers> I'll continue on my way ^_^
<wallyworld> no worries. note that the charm is broken a bit
<wallyworld> it will sometimes send the wrong status after a restart
<veebers> ack
<anastasiamac> babbageclunk: \o/
<babbageclunk> anastasiamac: hey hey
<anastasiamac> just celebrating ur PR's merge...
<babbageclunk> anastasiamac: oh yay!
 * babbageclunk dances
<thumper> i need a teddybear... babbageclunk you're soft...
 * babbageclunk sighs
<babbageclunk> ok!
<babbageclunk> in 1:1
<veebers> ah man, wallyworld I'm all of sudden seeing this error, any thoughts on debugging it? (I use make operator-image and docker push to publish it: Failed to pull image "veebers/caas-operator@sha256:fc83ad5cbba1247daa1623d9b102201e56a655abd3d3b680d1ca3d456645ec5d": rpc error: code = Unknown desc = Error response from daemon: repository veebers/caas-operator not found: does not exist or no pull access
<veebers> as far as I'm aware it's public and should have access
<wallyworld> um
<wallyworld> you sure you are using --config to set the docker username when bootstrapping
<wallyworld> caas-operator-image-path
<wallyworld> not user name
<wallyworld> other than that, not sure. i've bootstrapped today using the official image with no problems
<wallyworld> do you need a non-released operator image?
<veebers> wallyworld: ah shoot, maybe using wrong username will check
<veebers> wallyworld: hmm, I am setting caas-operator-image-path as I was previously.
<veebers> yeah bootstrapping without setting caas-operator-image-path works, I'll keep diggin
<wallyworld> ok
<thumper> babbageclunk: I think the race is still there just a lot smaller than it was...
<babbageclunk> thumper: stink
<thumper> babbageclunk: https://paste.ubuntu.com/p/DwCgvSNYfC/
<thumper> babbageclunk: getting there...
<thumper> babbageclunk: I can't remember if the squishing of events is done in the txn watcher or the hub watcher
<thumper> if it is the former, then there is no race
<babbageclunk> I'm pretty sure it's the former
<thumper> then, yay?
 * babbageclunk yays?
<thumper> I sorta hadta do a refactoring to move NewModel on to the controller object
<thumper> from state
<thumper> it made a bunch of things easier
<thumper> and more correct
<babbageclunk> yeah, it makes more sense being there now that you say it.
<thumper> I was going to say all the tests are passing...
<thumper> but just noticed a few watcher failures in migration tests
<thumper> so almost there...
<thumper> all failures so far just in modelmigration_test.go
<thumper> I am getting hassled about finishing today though from Rachel
<thumper> I had said I'd finish early because of the extra hours this week
<thumper> but I'm trying to get these tests passing
 * thumper sigh
<thumper> never ending bollocks
<thumper> and likely merge conflicts with devel now...
<thumper> ... and StateSuite.TestWatchAllModels
<thumper> and another...
 * thumper sighs again
<thumper> where is my wine
<thumper> OOPS: 2263 passed, 3 skipped, 9 FAILED
<thumper> better than before...
<babbageclunk> yeah, I'm pretty sure it's beer oclock
<thumper> huh
<thumper> found that six of those failures are due to a replacement of the state clock
<thumper> which would screw up the watchers
<thumper> 6 fixed
<babbageclunk> thumper: juju-engine-report isn't working for me on a freshly bootstrapped controller machine?
<thumper> babbageclunk: underscores now
<thumper> juju_engine_report
<babbageclunk> thumper: ah, of course - thanks!
<wallyworld> babbageclunk: you having luck with the bounce fix?
<wallyworld> kelvinliu_: here's a small k8s storage fix https://github.com/juju/juju/pull/9108
<kelvinliu_> wallyworld, looking now
<wallyworld> ty
<babbageclunk> wallyworld: yeah, I think so - just fixing some tests that needed a lot of thinking about, but the basic change is pretty simple. Will try it with the cmd/jujud/agent tests next.
<wallyworld> ok
<wallyworld> good that the big mother farker landed :-)
<kelvinliu_> wallyworld, LGTM, thanks
<wallyworld> thanks!
<stickupkid> manadart: you got a second?
<manadart> HO? Let me put a shirt on.
<stickupkid> haha
<stickupkid> manadart: I got it working
<stickupkid> YESSSSS!
<stickupkid> manadart: we can't do this here for older lxd -- https://github.com/juju/juju/blob/develop/provider/lxd/server.go#L245
<manadart> Ahh.
<stickupkid> I could check the version I guess, because it does work for new LXD, or do I follow the todo and move it?
<stickupkid> manadart: also I don't think we can move it prior to instance creation?
<manadart> I did some work on that on account of the frequency of the logging messages, but backed it out because the bridge must be there before enabling the HTTP listener.
<stickupkid> what do you think the best course of action is? it looks like we don't need it for 2.0.x, but it's fine with 2.3.x >
<manadart> stickupkid: I think we still need it called once on the host at the outset, we just don't want it called by model provisioning...
<stickupkid> manadart: so we just need it hoisting further up the code flow?
<manadart> stickupkid: Somehow, but where to put the conditional is the tricky part. This is the factory created with the environ...
<stickupkid> manadart: so it doesn't know what's launching it, only the site of execution does (juju vs jujud)
<manadart> stickupkid: I found the PR for the TODO that was closed: https://github.com/juju/juju/pull/8964/files
<stickupkid> manadart: why did we close it?
<manadart> stickupkid: I mentioned it ^^. If you don't call it, local bridge name is not set, then there is no IP for enableHTTPSListener.
<manadart> It was one of the last LXD things I worked on. At the time I had to get an Openstack bug fixed, so it was parked. Didn't realise it was an actual bug at the time - I was addressing logging frequency.
<stickupkid> ha
<stickupkid> so I'll have a look at the PR and try and see what we can do with it
<stickupkid> manadart: if we move that call up the code, we then don't end up with a local bridge name :sigh:
<manadart> stickupkid: Exactly.
<stickupkid> haha
<stickupkid> right, i'm getting your comment before now, i'm just re-living it
<stickupkid> manadart: right it seems we still need to ensure the bridge for local setups see: https://github.com/juju/juju/blob/2.2/tools/lxdclient/client.go#L158-L167
<manadart> externalreality: It seems to be working now. Refreshed from develop, installed deps and rebuilt.
<manadart> manadart: Panic still. Different error.
<manadart> externalreality: ^
<externalreality> manadart, ack
<externalreality> manadart, looking
<manadart> externalreality: I think I've sorted it. One minute.
<externalreality> manadart, ok
<manadart> externalreality: Yep. Works end-to-end. Will put a PR up now.
<externalreality> manadart, cool
<externalreality> manadart, I see your comment on the PR
<manadart> externalreality: The lock cleanup? Yes.
<externalreality> manadart, I feel dumb, not sure how I left that bit out, I was sure I put it in there... thus the code review process works.
<externalreality> manadart, In any case what I see is actually somewhat different from what you suggest on the pr
<externalreality> manadart, as in what the main loop of the worker is doing... it seems to be switching on the retrieved state. It's quite nice.
<externalreality> manadart, anyway I updated the PR, if you could have a quick look I would appreciate it. It's only a 3+ line change.
<manadart> externalreality: Sure.
<manadart> Also the PR that gets end-to-end working again is: https://github.com/juju/juju/pull/9109
<stickupkid> manadart: https://github.com/juju/juju/pull/9110
<manadart> stickupkid: Will look.
<stickupkid> manadart: i need to do some more manual testing, around snap 2.0.x and apt-get 3.0.x
<stickupkid> so many combinations :(
<stickupkid> hml: updated the description https://github.com/juju/juju/pull/9110
<stickupkid> note this needs backporting to 2.4.x - I need to check that to be sure.
<hml> stickupkid: ack, ty
<externalreality> manadart, if you are still around do you know why I get the error "CRIU is not installed" when trying to make a stateful snapshot with lxc - this when CRIU is in fact installed.
<hml> rick_h_: you prefer this: https://paste.ubuntu.com/p/mhJQKCFPcZ/ with the newline?  at first glance the Cloud line blends into the Enter questions to me.  either way
<rick_h_> hml: sure, let's try it thanks.
<hml> rick_h_: k
<hml> rick_h_: on the PEM-encoded… i think a few more words would help too.
<hml> rick_h_: Enter the LXD client certificate, name of PEM-encoded file (optional):
<hml> rick_h_: or
<hml> rick_h_: certificate filename, PEM-encoded (optional):
<rick_h_> hml: plus one to the second one. I like filename
<hml> rick_h_: ack
<hml> rick_h_: i have 2 quick PRs up for your viewing pleasure: https://github.com/juju/juju/pull/9112 and https://github.com/juju/juju/pull/9111
<rick_h_> hml: cool will look. Thanks.
<rick_h_> hml: question back to you on one of them
<hml> rick_h_: option 2 matches more closely to the o7k cert file request
<rick_h_> hml: which one is option 2?
<hml> rick_h_: the path to the PEM-encoded LXD server certificate file
<rick_h_> hml: cool, let's do that then. Thank you for adjusting it!
<hml> rick_h_: 9112 is updated
<rick_h_> hml: cool, will look once done with this call ty
#juju 2018-08-26
<thumper> wallyworld: I may need to talk to you shortly
<thumper> I say may... because I'm running the unit tests again
<wallyworld> ok
<thumper> ok
<thumper> now I need to
<thumper> wallyworld:  CAASUnitSuite.TestWatchContainerAddresses is failing for me and I'd like to talk it through
<wallyworld> ok
<wallyworld> did you want a HO?
<thumper> yeah
<thumper> 1:1?
<thumper> wallyworld: I'm at that stage where I know why it is failing, and now wondering how it ever passed
<wallyworld> thumper: finished SU, HO again?
<anastasiamac> thumper: :D
<thumper> wallyworld: sure
#juju 2019-08-19
<timClicks> anastasiamac: did you want to update those links within juju help bootstrap?
<anastasiamac> timClicks: i would have liked to change that bootstrap help but it looks like rick_h wanted to deal with this bug so i left it to him
<timClicks> ok
<anastasiamac> timClicks: i commented on it with my suggestion and m happy to take over but did not want to step on anyone's toes... i'll wait until he wakes up :)
<timClicks> looks like I didn't get all of the emails
<anastasiamac> oh mayb need to update/fine tune ur subscriptions?
<timClicks> thumper: updated https://github.com/timClicks/juju/tree/develop-docs--readme-upgrade
<thumper> timClicks: cool. I'm just trying to focus on another problem just now
<thumper> with you later
<timClicks> babbageclunk: that edit to the release notes implies that you tracked down the missing vmdk issue?
<babbageclunk> timClicks: yeah, the culprit was a jenkins cleanup job. D:
<timClicks> babbageclunk: :/
<timClicks> phew, I guess?
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10533
<timClicks> wallyworld: hey with your Azure SDK updates in the azure provider code, is this line in the release notes still current "Note that not all Azure subscriptions support bootstrapping to all regions."? Is it possible to be more specific?
<timClicks> have expanded the release notes - appreciate any feedback https://discourse.jujucharms.com/t/1889
<stickupkid> manadart, did you give ineffassign a glance, so we can attempt to merge?
<manadart> stickupkid: Yes.
<stickupkid> manadart, ta
<manadart> stickupkid: Approved it. I'm not such a fan of using "var (" for a couple of vars in methods. It adds lines for no real gain in readability. But it's taste I guess.
<stickupkid> blame ian
<stickupkid> i like them in dense code, as it causes them to indent in one, so they stand out when reading
<stickupkid> manadart, "FAIL: model_test.go:238: ModelSuite.TestUnitReturnsCopy" <- it no worky
<manadart> stickupkid: Let me see.
<manadart> stickupkid: Gah. Line 255 should be "c.Assert(u2.Ports(), gc.DeepEquals, ch.Ports)"
<stickupkid> manadart, i'll fix
<stickupkid> manadart, ta
 * manadart nods.
<stickupkid> achilleasa, approved your pylibjuju changes
<achilleasa> stickupkid: do we have a plan for the integration test timeouts? Also, should I upgrade my python and regenerate the clients?
<stickupkid> achilleasa, "Also, should I upgrade my python and regenerate the clients" no need
<stickupkid> achilleasa, "do we have a plan for the integration test timeouts" - travis is flakey for us, so i'm not sure atm
<achilleasa> stickupkid: so I can go ahead and land the PR then, right?
<stickupkid> achilleasa, yeap, gogogogogogogo
<stickupkid> fixed the final ineffassign failing tests, let's see if it merges!
<stickupkid> anyone know how to rename the default model name on bootstrap?
<stickupkid> hostedModelName - "-d" "--default-model"
<rick_h> happy monday
<rick_h> stickupkid:  yea, isn't it a bootstrap argument?
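For reference, the flag stickupkid dug up maps onto bootstrap like this; a minimal sketch, assuming an AWS cloud and invented controller/model names:

    # name the initial hosted model at bootstrap time ("-d" is the short form)
    juju bootstrap aws mycontroller --default-model my-model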
<rick_h> stickupkid:  achilleasa can you please check out https://github.com/juju/python-libjuju/issues/341 while you're in the space?
<achilleasa> rick_h: sure thing
<stickupkid> achilleasa, it's when bootstrapping to aws
<stickupkid> achilleasa, fails pretty easily
<hml> did github settings get replaced with actions, or am i just blind?
<hml> settings for a project
<rick_h> hml:  settings are over to the right
<rick_h> actions are in the middle it looks like
<rick_h> hml:  but if you just have write access you might not have access to settings
<rick_h> hml:  but given that the webhook was already setup you shouldn't need to update it? or maybe if the key isn't right I guess
<hml> rick_h: i can't see anything called settings.  and the links on the discourse page give me a 404, so must not have permissions
<rick_h> hml:  probably then
<manadart> hml: Approved your merge patch.
<hml> manadart:  ty!
<stickupkid> achilleasa, https://github.com/juju/python-libjuju/pull/342
<achilleasa> stickupkid: looking in a few min
<stickupkid> rick_h, what do you think about doing the following - https://github.com/juju/python-libjuju/pull/342
<stickupkid> rick_h, like https://github.com/juju/python-libjuju/pull/342/commits/56435ab446889b16b3b7a448d74e07e21e9a6b25#diff-f1d141997568394b3d1de502a0ce201eR71
<rick_h> stickupkid:  let me look sec
<rick_h> stickupkid:  wfm
<stickupkid> rick_h, right, i'll clean up the tests and get it passing
<rick_h> stickupkid:  just one note on the wording of the error, I'd just say "must be an integer"
<rick_h> stickupkid:  vs "of type integer" or whatnot
<rick_h> but that's just to make my brain happy
<stickupkid> sure, i'll change it now
<hml> review?  https://github.com/juju/names/pull/95
<stickupkid> hml done
<rick_h> hml:  hah, it never even had a makefile?
<hml> stickupkid: ty - adventures in merge jobs on old…
<hml> rick_h: forget makefile… no dependency mgmt
<rick_h> hml:  hah
<rick_h> dark corners of juju land
<hml> rick_h: yup… juju spelunking :-D
<hml> stickupkid: there's some cleanup jobs missing for some of the stuff you and achilleasa are working on… found a bunch of old containers on goodra
#juju 2019-08-20
<stickupkid> "ERROR failed to bootstrap model: cannot start bootstrap instance in availability zone "travis-job-28b2fcc6-d80d-4e62-8594-4e186717b5f0": not found"
<stickupkid> this seems very strange
<stickupkid> 2.7 branch...
<stickupkid> there is really something weird in 2.7 branch, it has random lock ups in travis when attempting to bootstrap, I can't replicate it locally though
<stickupkid> manadart, got a sec
<manadart> stickupkid, achilleasa: OTP with Atos. Gimme a few.
<stickupkid> manadart, sure, take your time, just want to pick your brains
<stickupkid> achilleasa, CR this one https://github.com/juju/python-libjuju/pull/342 ?
<achilleasa> stickupkid: looking
<achilleasa> I actually started going through that late yest but hit EOD...
<manadart> stickupkid: HO?
<stickupkid> manadart, i can't replicate locally, but i have seen it happen :|
<stickupkid> manadart, should we be hitting this "juju.core.cache programming error, unit removed before being added, application name not found"
<achilleasa> stickupkid: I am getting that consistently in the logs when I run 'juju remove-application'
<manadart> stickupkid: No. That's an issue with the LXD profile watcher.
<stickupkid> manadart, we should fix
<manadart> stickupkid: Yes.
<stickupkid> manadart, i fix
<stickupkid> it's not really a programming error if we hit it all the time :|
<hml> morning
<stickupkid> manadart, https://github.com/juju/juju/pull/10541
<manadart> stickupkid: Looking.
<stickupkid> manadart, ta
<stickupkid> manadart, i agree, i've cleaned up the method call
<manadart> stickupkid: Cool.
<manadart> stickupkid: I got the AZ error bootstrapping localhost.
<stickupkid> manadart, https://github.com/juju/juju/pull/10424
<manadart> stickupkid: With develop, that didn't fix it for me.
<stickupkid> manadart, something else has changed then :(
<manadart> stickupkid: I think I see it.
<stickupkid> manadart, nice, if i could replicate it, I'd have a nice fix up :(
<manadart> stickupkid: Looks like the issue achilleasa was having too.
<manadart> stickupkid: Looks like a change in LXD. Getting the image alias returns a not found instead of just a nil alias now.
<stickupkid> manadart, ah really, but we've not bumped the client, that's unfortunate
<manadart> stickupkid: I will check 2.6 too.
<manadart> stickupkid: LOL; ineffassign fixes busted it.
<stickupkid> manadart, haha where?
<stickupkid> ho?
<manadart> stickupkid: Sure.
<manadart> stickupkid: https://github.com/juju/juju/pull/10542
<rick_h> hml:  recovering ok?
<hml> rick_h: coffee is my friend ;_D
<hml> :-D
<stickupkid> manadart, just doing a final Q&A on the PR
<stickupkid> rick_h, part 1 of updating deployment https://github.com/juju/os/pull/9
<rick_h> stickupkid:  cool ty
<stickupkid> manadart, done
<manadart> stickupkid: Ack.
<stickupkid> is there any reason why model names can't have underscores, seems weird
<achilleasa> stickupkid: got a few min to help me with a bit of lxd black magic?
<stickupkid> achilleasa, sure
<stickupkid> i like black magic
<stickupkid> being juju and all
<aisrael> cory_fu: Found a breaking bug in layer:basic with `use_venv: False`: https://github.com/juju-solutions/layer-basic/issues/146
<aisrael> tl;dr, any charm not using venv is broken
<cory_fu> aisrael: Hrm.  That's been like that for 4 years.  Why has it only come up now, I wonder?
<aisrael> cory_fu: I wonder if it has to do with the recent changes around setuptools? I guess I could pull the previous revision and test against that.
<aisrael> or something more recent, I mean. Maybe something in setuptools itself changed.
<cory_fu> aisrael: Well, it may just be the newer pip
<cory_fu> Seems like it would be safe enough to just add --ignore-installed to that line?
<aisrael> cory_fu: I think that's a reasonable change
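The fix being agreed on, roughly; the real invocation lives in layer-basic's bootstrap code, so this only shows the shape of the flag, with the package name as a stand-in:

    # force pip to install even when a system-level copy is already present
    pip3 install --ignore-installed setuptools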
<cory_fu> aisrael: I'm working on some OpenStack integration testing ATM.  Would you mind throwing up a PR?
<aisrael> cory_fu: sure, not a problem
<cory_fu> Thanks
<cory_fu> aisrael: flake8 is complaining about your line length.  :p
<aisrael> cory_fu: just noticed that :/
<aisrael> cory_fu: lint fixed :)
<cory_fu> aisrael: Thanks, merged
#juju 2019-08-21
<babbageclunk> thumper, wallyworld: can I get a review of a description change plz? https://github.com/juju/description/pull/61
<wallyworld> sure
<babbageclunk> thanks!
<babbageclunk> looks like there's a connectivity problem from CI to github though?
<babbageclunk> (for the description repo bot at least)
<anastasiamac> babbageclunk: yes m getting fatal: unable to access 'https://github.com/juju/juju.git/': Failed to connect to github.com port 443: Connection timed out
<thumper> https://github.com/juju/juju/pull/10544 for the mongo memory profile restart problem
 * thumper going to walk the dog
 * thumper wonders if it is the https proxy problem
<thumper> it is possible we are now expected to go via the proxy
<thumper> whereas before we weren't
<thumper> just thinking of what it could be
<babbageclunk> thumper: yeah, it does sound like that
<manadart> achilleasa: With a freshly built develop, I created a nested container. I am now trying after apt purging LXD from a parent.
<achilleasa> manadart: I am getting the same error with 2.6.6 from snap
<timClicks> Hi all, if you have a few minutes spare - you might like to look at our docs page. I've updated the navigation structure. What do you think? https://jaas.ai/docs
<manadart> achilleasa: And it worked after deleting LXD, but this is all on Bionic.
<achilleasa> manadart: this makes it break on my dev box: https://pastebin.canonical.com/p/N9H3PBkdjY/
<manadart> achilleasa: Is it the same if you just add-machine lxd:0?
<achilleasa> manadart: one sec; I am trying to deploy apache2 (everything is using bionic now). I will try the above command next
<achilleasa> manadart: same result with the above command
<achilleasa> should I try to clean up all lxd profiles on the host and bootstrap again?
<stickupkid> achilleasa, what's the issue?
<achilleasa> stickupkid: running https://pastebin.canonical.com/p/N9H3PBkdjY/ makes the lxd container instance error with "Create LXC container: LXD doesn't have a uid/gid allocation. In this mode, only privileged containers are supported"
<stickupkid> i've seen this in lxd issues somewhere
<stickupkid> achilleasa, locally or inside a container?
<achilleasa> stickupkid: so basically it's bootstrap, add machine and create container in that machine (last bit fails)
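A reconstruction of that repro from the description above (the pastebin itself is private; the "localhost" cloud and controller name are assumptions):

    juju bootstrap localhost test   # bootstrap on the local LXD cloud
    juju add-machine                # machine 0
    juju add-machine lxd:0          # nested container on machine 0 -- the step that errors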
<stickupkid> achilleasa,  those commands worked for me
<manadart> Need a review: https://github.com/juju/juju/pull/10545
<achilleasa> stickupkid: I still get the same error after cleaning up everything and removing all lxd profiles from the host
<stickupkid> achilleasa, including default?
<achilleasa> stickupkid: yes
<stickupkid> achilleasa, can you try this and see it works "lxc profile set default raw.lxc lxc.apparmor.profile=unconfined"
<stickupkid> manadart, done, took some time.
<manadart> stickupkid: Thanks.
<achilleasa> stickupkid: bootstrap cannot complete after running ^^^
<stickupkid> achilleasa, that's interesting, as that removes apparmor, so i'm unsure - now
<achilleasa> it's currently stuck trying to ssh
<stickupkid> i'm not sure how disabling apparmor can prevent that
<achilleasa> let me try again
<stickupkid> if you deleted default profile, i'm unsure how it knows about the device - i.e. lxdbr0
<achilleasa> this is what the default profile looks like on the host machine: https://pastebin.canonical.com/p/HyrrRvRQFC/
<stickupkid> everything else looks legit except for the raw.lxc, which we've just added
<achilleasa> stickupkid: oh wait... it seems to proceed now... odd
<achilleasa> stickupkid: same error :-(
<achilleasa> while I am trying to figure out what's going on can someone please take a look at https://github.com/juju/packaging/pull/4?
<manadart> achilleasa: I can look in a mo'.
<achilleasa> manadart: I am trying the same experiment on aws now
<achilleasa> manadart: it works with aws so it must be something broken with my local lxd setup...
<rick_h> stickupkid:  did you find out anything with the lxd issue you were hitting?
<stickupkid> rick_h, yeah, so my ineffassign caught a bug, which we needed to fix, both me and manadart located and landed
<rick_h> stickupkid:  ah, gotcha
<drul> Hi.  I'm installing openstack-on-lxd and got stuck for a while with rabbitmq endlessly trying to install.  SSHing into the container and adding 127.0.1.1 with the container's hostname seems to have fixed it.  Looks similar to https://bugs.launchpad.net/charms/+source/rabbitmq-server/+bug/1401830 although that's apparently fixed.  Is this a regression, or have I missed something?
<mup> Bug #1401830: rabbitmq hostname resolution error while deploying  <sts> <rabbitmq-server (Juju Charms Collection):Fix Released by niedbalski> <https://launchpad.net/bugs/1401830>
<drul> * adding to /etc/hosts, that is
<stickupkid> rick_h, 2.6.1 release notes https://github.com/juju/python-libjuju/pull/343
<hml> timClicks: quick pr review?  https://github.com/juju/juju/pull/10546  just had to rerun mockgen for a bunch of files
<hml> thanks timClicks
<timClicks> hml: sorry for the delay, missed your message at first
<hml> not a problem
<thumper> babbageclunk: easy review? https://github.com/juju/juju/pull/10547
<babbageclunk> thumper: ha, got confused seeing the python changes first
<wallyworld> kelvinliu: i've responded to review comments and made updates, had to resolve merge conflicts due to names.v3 landing to develop
<babbageclunk> thumper: approved
<thumper> babbageclunk: thanks
<timClicks> which is the earliest version of MAAS that we support?
<babbageclunk> timClicks: I think 1.9?
<anastasiamac> babbageclunk: timClicks i thought we just recently said that we do not support 1.9?
<anastasiamac> thumper: did we not remove it from our ci?
<thumper> timClicks: there is the technically, and really
<timClicks> anastasiamac: that's what I recall (vaguely)
<thumper> as of 2.7 we no longer claim to support 1.9
<anastasiamac> woot woot \o/
<timClicks> thumper: okay, well I'll leave the note in the docs
<anastasiamac> but m guessing since the code is still there, we "kind" of support it
<timClicks> (which commits to supporting MAAS 1.x for the whole of Juju 2.x fwiw)
<kelvinliu> wallyworld: thanks!
#juju 2019-08-22
<babbageclunk> anastasiamac: ah, I missed that
<thumper> timClicks: well, since we are stopping testing against 1.9 there is a chance we'll miss something
<thumper> I'd rather say people need to go through juju 2.5 or 2.6 in order to upgrade their maas
<wallyworld> babbageclunk: if you have a moment at some point https://github.com/juju/juju/pull/10548
<babbageclunk> wallyworld: taking a look
<wallyworld> ty, no rush
<magicaltrout> random trivia question
<magicaltrout> if you do a juju remote backup
<magicaltrout> does it persist it somewhere on the filesystem on the server?
<magicaltrout> if so, where is it?
<wallyworld> magicaltrout: you talking about the juju create-backup command?
<magicaltrout> yeah
<wallyworld> by default it will download it to your local client, but you can specify --keep-copy and it (I think) is stored as a blob in mongo
<wallyworld> you can list what ones are stored in the controller via list-backups
<magicaltrout> yeah, fair enough, plan b then
<wallyworld> you can choose to 1. controller copy only, 2. local copy only, 3. both
<wallyworld> via a combination of --no-download and --keep-copy
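Spelled out as commands, the three combinations wallyworld lists would be (a sketch based on the flags named above):

    juju create-backup --keep-copy --no-download   # 1. controller copy only
    juju create-backup                             # 2. local copy only (default download)
    juju create-backup --keep-copy                 # 3. both controller and local copies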
<magicaltrout> ironically if you just want a pool and run a backup script over them, 1, 2 and 3 aren't ideal ;)
<magicaltrout> i'll stick juju on another box and hook up some cronjob against it
<wallyworld> if there's a use case you want, you could post a question on discourse and we can consider it
<wallyworld> backups do need some love and attention
<magicaltrout> ha, i won't waste your cycles on it, i just hoped the backups would land somewhere on the controller and I could just dump them to backblaze rather than the client downloading the file then doing it
<magicaltrout> but its not a big deal
<wallyworld> kelvinliu: got a minute for a HO?
<kelvinliu> wallyworld: sure
<stickupkid> CR anyone https://github.com/juju/os/pull/10?
<manadart> stickupkid: Avin' a butcher's.
<manadart> stickupkid: I approved it, but then made a suggestion.
<stickupkid> manadart, i agree with said comment
<stickupkid> rick_h, this now correctly handles the Any type https://github.com/juju/python-libjuju/pull/345
<rick_h> stickupkid:  k, gave it a look but I have a case of not trusting my review eyes and wanting to see a test/code prove it works out.
<stickupkid> rick_h, 100% agree... this isn't pretty as I don't have the context to why it was wired up originally like that - it seems weird
<stickupkid> rick_h, i wonder if the assumption was that an `interface{}` would always be `map[string]interface{}`
<rick_h> stickupkid:  so an interface an map walk into a bar...
<rick_h> stickupkid:  no idea
<stickupkid> rick_h, got a sec?
<rick_h> stickupkid:  definitely
<stickupkid> rick_h, ho
<rick_h> omw
<magicaltrout> hello folks
<magicaltrout> lazy relation question
<magicaltrout> on the far end of a connection, how do I get the network addressable name/ip at the other end?
<magicaltrout> unit.blah
<timClicks> magicaltrout: sorry that we haven't gotten an answer to you yet
<timClicks> magicaltrout: would you mind asking in discourse?
<timClicks> have updated our tutorials page to be more accessible to new users and to surface up community-contributed how tos https://jaas.ai/docs/tutorials
<wallyworld> magicaltrout: i think that's normally info that the other end is expected to put in relation data. the remote unit uses the network-get hook command to get the address info for a given binding/endpoint and shoves that in relation data for the other unit to read
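A minimal sketch of that pattern as it might appear inside a charm hook; the endpoint name "db" and the relation key "host" are invented, and the hook-tool flags are assumed per juju 2.x:

    # on the remote unit, e.g. in a -relation-joined hook:
    ADDR=$(network-get db --ingress-address)   # address for the "db" binding
    relation-set host="$ADDR"                  # publish it on the relation
    # the unit at the other end then reads it:
    relation-get host <remote-unit>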
#juju 2019-08-23
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10550
<wallyworld> ok
<anastasiamac> \o/
<timClicks> feedback please! https://discourse.jujucharms.com/t/feedback-please-wecloming-new-users-to-juju/1976
<wallyworld> timClicks: starting with a few well chosen bullet points is (to me) preferable over a block of text first up
 * timClicks nods
<timClicks> yeah it certainly lowers the cognitive barrier to entry
<magicaltrout> timClicks: the first bullet point is an inside joke, right?
<magicaltrout> ;)
<timClicks> magicaltrout: shh
<magicaltrout> I think it reads fine... but
<magicaltrout> there's a couple of bits in V2 I don't think make sense to people who don't know
<timClicks> magicaltrout: I gave a talk at a meetup this week and when asked to explain Juju described it as "add terraform and ansible together, then remove about 90% of the complexity"
<timClicks> that off the cuff comment has sort of stuck in my mind a bit
<magicaltrout> "Juju focuses on the applications that your deployment defines and how they are related." that sorta makes sense, but i think could possibly be reworked into something thats a little easier to understand
<magicaltrout> "Extending your product should be as simple as deploying its first prototype." I'd argue that thats impossible... don't @ me
<timClicks> the downside of being pithy is that you end up looking naive
<magicaltrout> tbh the tweet of V1 I liked
<timClicks> yeah I'm quite happy with v1, tbh
<magicaltrout> you might want to keep the v1 structure, put the bullets after the first sentence and remove a couple
<timClicks> wallyworld: one of the problems with bullet points is that they're very easy to bikeshed about and obtaining perfection is haaard
<wallyworld> worth the effort though. most people will not look at a wall of text
<timClicks> +1
<magicaltrout> do the trendy thing and bold key words ;)
<magicaltrout> ...
<magicaltrout> don't
<timClicks> ha
<timClicks> magicaltrout: is there anything in v1 that you slightly disagree with, in the sense of that prototype sentence
<magicaltrout> nope i like v2
<magicaltrout> er
<magicaltrout> 1
<timClicks> that's good to hear
<timClicks> magicaltrout: I also added a glossary yesterday https://jaas.ai/docs/glossary
<timClicks> and restructured the docs heavily to make them more digestible
<timClicks> I'm looking at the "getting started" page next (it's the most important page to get right, imo) and then I'll see if we can look into relations/interfaces in more depth
<magicaltrout> looks good timClicks. I know you've been helping my non-technical colleague with the getting started docs this week, or so he claimed. He tries! But he doesn't have much development experience, and I'm in the US and he's in the office in the UK... that said, I did tell him to keep going, because if he can't get through the getting started docs, then they probably don't explain stuff simply/correctly or obviously enough for new users
<magicaltrout> he told me it was broke, so when I get back to the office I'll find out where he's got stuck, but there's certainly stuff like old Xenial series tags and stuff being built against Bionic and things that confuse matters a bit
<magicaltrout> also, i'm very happy jaas.ai is getting its tutorials sections, but you all also need to make sure the old tutorials on ubuntu.com are removed because they'll be uber stale
<magicaltrout> oh also, one other thing about newbies and relation interfaces... I've argued for years and never got very far that it would be cool to have a bunch of generic interfaces for stuff... the example I raise plenty of times, and I'm guilty of never following through on building it like I said I would is, as well as a MySQL relation and a PostgreSQL relation etc have a JDBC relation that sends all the common properties over the relation
<magicaltrout> why? Cause most stuff you relate to a database is going to speak JDBC or similar, so have a generic relation that allows the developer to just implement a single endpoint instead of one for each DB
<magicaltrout> but similar examples exist for LDAP, Backup etc etc
<timClicks> that's a v good point about the old tutorials, there is quite a lot of stale content around actually.. askubuntu.com has very good SEO, for example, and quite a few answers related to juju 1
<magicaltrout> yup
<timClicks> creating some standard foundational interfaces seems really sensible
<timClicks> ODBC and JDBC make perfect sense conceptually
<timClicks> I would like to see generic TCP and perhaps UDP interfaces for streaming bytes around
<timClicks> they could be very simple ("hi I'm listening on this port")
<magicaltrout> yeah i could see that being handy
<timClicks> but that could conflate the difference between the underlying transport and juju's primitives, which could muddy the waters
<timClicks> the pgsql:db interface also includes a dance for creating a unique username + password
<magicaltrout> it does, mysql does the same, thats useful, but i don't see why you can't do that in a generic interface
<timClicks> so part of the discussion about those foundational interfaces would be to decide on what extras are needed
<timClicks> sure
<magicaltrout> also timClicks on the relations topic... here's a question.... say I have a very simple relation... jdbc for example
<magicaltrout> sends a app name one way
<magicaltrout> and username, password, ip address, database type the other
<magicaltrout> why, as a developer, do I need to write any code at all to generate that relation?
<magicaltrout> it seems almost like a really digestible and developer friendly way to deal with simple relations would be to have 2 yaml files to define the interface on either end, and some python lib does the rest of the work
<magicaltrout> it also becomes understandable for people wanting to implement the relation because the metadata is defined in a yaml definition
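Purely to illustrate the idea being floated (no such mechanism exists in juju; every file and field name here is invented), the two declarative definitions might look like:

    # provider side (hypothetical jdbc-provides.yaml)
    cat > jdbc-provides.yaml <<'EOF'
    interface: jdbc
    sends: [username, password, ip-address, database-type]
    receives: [app-name]
    EOF
    # consumer side mirrors it (hypothetical jdbc-requires.yaml)
    cat > jdbc-requires.yaml <<'EOF'
    interface: jdbc
    sends: [app-name]
    receives: [username, password, ip-address, database-type]
    EOF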
<timClicks> I don't know enough of the history to know why interfaces and charm development emerged as it did
<magicaltrout> from an implementation point of view of course I have no idea what work that entails, but most of my relations over the years have just shuttled data around and left it to the charms to figure it out, surely there's a really simple template to build that out in a way that just involves 2 files! ;)
<timClicks> my suspicion is that charming methodologies were developed by people who had a very thorough understanding of what they intended (because they created it)
<magicaltrout> cory_fu rewrites the interface patterns as a hobby... ;)
<magicaltrout> just to mess with me
<timClicks> autogenerated relations seems like a pretty neat idea though, just a matter of implementing it!
<magicaltrout> ha
<magicaltrout> well i've asked on the forums
<magicaltrout> i barely have any bandwidth but it'd be something i'd like to investigate purely from a sanity point of view
<magicaltrout> also if it built out the definition into boilerplate, if you wanted it to do something more useful you could tweak it before pushing it upstream
<timClicks> I like the idea, very hopeful that it will spur some productive discussion
<timClicks> I need to shoot off, nice to chat
<wallyworld> kelvinliu_: juju-run support https://github.com/juju/juju/pull/10551
<wallyworld> still need to work on setting things up before running an action so you don't have to first run an action for it to work
<kelvinliu_> yeah, we can add a cmd to check if the files/dirs/symlinks are ready to run
<hpidcock> wallyworld: looks like that should work perfectly fine with the TLS stuff
<wallyworld> good
<wallyworld> kelvinliu_: it's not so much from the operator side of things. the workload expects to have juju-run available
<kelvinliu_> ah, right. we probably can have a hook on the operator side to ensure/copy all stuff in place after the workload pod is up.
<wallyworld> i was thinking we could use an init container which has the same jujud embedded in it as the operator
<wallyworld> maybe a shared emptyDir volume mount or something, not sure
<kelvinliu_> and it could be included in the new worker that will be responsible for keeping certs etc up to date.
<wallyworld> let's discuss with hpidcock on monday as i want to define the approach we need to use
<kelvinliu_> sure
<hpidcock> sounds good
<kelvinliu_> wallyworld: just left a few questions on ur PR, and also a draft pr for podspec v2 https://github.com/juju/juju/pull/10552/files, we can discuss on Mon together.
<wallyworld> no worries ty
<kelvinliu_> thx, have a good weekend!
<wallyworld> u2
<drul> Hi.  Now trying to get 'openstack-base' bundle up and running, and almost there... but keystone is constantly alternating between 'Migrating the keystone database' and 'hook failed: "shared-db-relation-changed" '.  On the unit, /var/log/keystone/keystone.log shows 'DBMigrationError: "Database schema file with version 61 doesn't exist." '.  Any thoughts welcome!
<atdprhs> Hello everyone, do any know how can I do this >> https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues
<atdprhs> using conjure-up, which uses juju; coredns doesn't connect to the internet
<atdprhs> but since I'm on ubuntu, i am doubting that this is the issue here >> https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/#known-issues
<hml> manadart: review pls https://github.com/juju/juju/pull/10549
<manadart> hml: Sure; need a few mins.
<rick_h> atdprhs:  hmm, not sure. I'd check with the k8s folks like kwmonroe and knobby
<atdprhs> ok, thanks rick_h
<rick_h> atdprhs:  you might also hit up https://discourse.jujucharms.com/ as there's a bunch of k8s folks that watch there and might have some hints
<knobby> atdprhs: I don't think that is your problem. CK has done this for some time: https://github.com/charmed-kubernetes/charm-kubernetes-worker/blob/master/reactive/kubernetes_worker.py#L694
<knobby> atdprhs: I agree with rick. I think the best option is to open a discourse post describing your environment and problem.
<atdprhs> knobby, i think we spoke before on Kubernetes slack
<atdprhs> it's the same problem :(
<atdprhs> I am using ubuntu 18.04, i'm guessing since ubuntu 18.04 is using netplan, maybe there is a difference?
<atdprhs> I am trying to customize the coredns to use 8.8.8.8 as per https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/ but i don't know how to get it done, any changes eventually gets reverted back
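What the linked page describes boils down to editing coredns's Corefile in its ConfigMap; a sketch (upstream IP per the discussion, and a charm-managed deployment may well revert manual edits, which matches the behaviour described):

    kubectl -n kube-system edit configmap coredns
    # then, in the Corefile, point the forward plugin at the desired upstream:
    #     forward . 8.8.8.8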
<knobby> atdprhs: netplan is in bionic and we deploy bionic all the time. Something has to be going on here. I would hit discourse and be sure to describe it fully.
<atdprhs> @knobby https://discourse.jujucharms.com/t/encountering-network-error-while-attempting-to-install-kubernetes-via-conjure-up/1894/5
<knobby> atdprhs: why would you want to inject dns into coredns? why not just let the default dns that you hand out handle it. pods will ask coredns, which resolves cluster things and then asks your default dns servers about things. Why inject 8.8.8.8 into your coredns when you can just have it outside?
<atdprhs> because my pods are not able to connect to internet
<atdprhs> as per the ticket
<atdprhs> as per the issue/question on discourse
<atdprhs> using your kubectl command, i can't dig
<atdprhs> command not found
<knobby> atdprhs: when you ssh into a worker node does dns work from that node itself?
<atdprhs> when i juju ssh kubernetes worker, i noted dns issue, fixed it
<atdprhs> using netplan, did a reboot for the entire physical machine
<atdprhs> but no luck
<knobby> atdprhs: are you able to `dig google.com @8.8.8.8`
<knobby> from the pod
<atdprhs> no, dig command not found
<atdprhs> https://paste.ubuntu.com/p/bryPTkm8pq/
<knobby> atdprhs: apt install dnsutils
<atdprhs> https://paste.ubuntu.com/p/fPD8RptNhs/
<atdprhs> the pod can't connect to internet
<knobby> atdprhs: yes, there are some networking issues. Can you please describe your environment in that discourse post? How many machines are you using? Are you using aws? Is this bare metal? Are you using lxd?
<atdprhs> i have worker and kubeapi lb exposed as per https://jaas.ai/canonical-kubernetes which means that they should be able to connect to internet
<atdprhs> baremetal
<atdprhs> kvm
<atdprhs> maas
<atdprhs> conjure-up
<knobby> atdprhs: kvm how? maas pods?
<atdprhs> yes
<atdprhs> any machine commissioned by maas's pod (kvm) has connection to internet just fine
<knobby> ok, and juju ssh into a machine works without trouble to reach all the things? I'm concerned that you had to fix a netplan issue on a generated pod. It makes me think something is wrong with maas or something.
<knobby> wrong in maas I mean. Like a configuration issue there
<atdprhs> I updated netplan cuz I destroyed k8s and redeployed it
<atdprhs> the reason i updated netplan, is because i modified maas's subnet network to use a different dns ip hoping it might help
<atdprhs> but generally for all other machines commissioned by maas are working nicely and just fine
<knobby> atdprhs: that doesn't jive with "I'm using pods in maas". Is conjure-up/juju creating the vm for you when you deploy?
<atdprhs> yes
<atdprhs> conjure-up >> cdk >> maas
<atdprhs> >> rest of the process
<knobby> so it makes them with a bad netplan that you have to manually poke each time?
<knobby> or have you kept the same cdk deploy and you're just trying to get existing to work?
<atdprhs> nope, I just poked manually this time because I reconfigured something on maas's end but that issue is not related to this specific poke, since I synced with you on kubernetes slack, I tried 20 deployments of k8s
<atdprhs> all of them, never made any changes to netplan
<knobby> ok, so a `juju run --unit kubernetes-worker/0 -- dig google.com @8.8.8.8` works for you, correct?
<atdprhs> yes
<atdprhs> correct
<knobby> do the pods come up? kubelet is able to reach the net?
<knobby> atdprhs: what cni are you using?
<atdprhs> what do you mean by cni? and how can I know if kubelet is able to reach the net?
<knobby> atdprhs: cni is kubernetes networking plugins. Things like flannel or calico. if kubelet can't reach the net, things like images wouldn't download and pods couldn't start.
<achilleasa> can I get a quick CR on https://github.com/juju/packaging/pull/5? manadart since you reviewed the previous one can you take a look at this one too?
<atdprhs> thanks a lot knobby, you rock!!!!
#juju 2020-08-17
<wallyworld> kelvinliu: i have a small fix for the juju shell for 2.8 https://github.com/juju/juju/pull/11907
<kelvinliu> looking
<kelvinliu> wallyworld: lgtm, just need to update the prompt string
<wallyworld> ty, looking
<wallyworld> kelvinliu: yeah, i thought i'd leave the string simple and only advertise the easiest approach to quitting
<wallyworld> and then if people did type "exit" it would also just work
<kelvinliu> ok, that makes sense
<wallyworld> otherwise it's a lot of options for them to process
<kelvinliu> right
<stickupkid_> achilleasa, manadart -> https://github.com/juju/charm/pull/314 CR please
<achilleasa> stickupkid_: done. small question about spew
<stickupkid_> achilleasa, no idea, was looking myself
<stickupkid_> achilleasa, go mod why tells me nothing
<achilleasa> stickupkid_: hope nobody landed something with a spew.Dump in non-test code... :D
<stickupkid_> achilleasa, hopefully not
<stickupkid_> $ grep -ir spew .
<stickupkid_> ./go.sum:github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
<stickupkid_> ./go.sum:github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
<stickupkid_> ./go.sum:github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
<stickupkid_> that's all it reports back, which is annoying
<achilleasa> stickupkid_: quick CR please: https://github.com/juju/juju/pull/11909
<stickupkid_> achilleasa, where did the "@" syntax come from
<achilleasa> juju config apparently...
<stickupkid_> achilleasa, https://github.com/juju/charmrepo/pull/163
<stickupkid_> manadart, :point_up:
<achilleasa> stickupkid_: sorry; lunching ATM; can look in ~10' if manadart hasn't reviewed it
<manadart> stickupkid_: Deep in something ATM.
<stickupkid_> achilleasa, manadart i can wait
<stickupkid_> just wondered if I was sending to dead letters
<achilleasa> stickupkid_: did you put the comments next to the imports or did your editor do that for you?
<stickupkid_> was already there
<stickupkid_> i did a sed
<achilleasa> 'package testing_test' :D
<stickupkid_> achilleasa, it gets worse, testing/package_test.go package testing_test
<stickupkid_> TEST TEST TEST TEST
<stickupkid_> this is a test
<achilleasa> test that the test-suite is not lying to you?
<stickupkid_> haha
<stickupkid_> you never know!
<achilleasa> but then your test-suite will anticipate that and trick you
<stickupkid_> lies
<achilleasa> btw, PR LGTM; there are some empty lines between imports that look redundant
<stickupkid_> achilleasa, I'll fix
<stickupkid_> ah we get XDG_* specification wrong - interesting
<stickupkid_> for the acceptance tests
<stickupkid_> typical
<stickupkid_> achilleasa, the empty lines follow the 3 stanzas
<achilleasa> ah... we do that everywhere then?
<stickupkid_> dunno, juju and charm repo it seems
<stickupkid_> note i hate it, but it's a standard
<achilleasa> thought it was a juju-only thing
#juju 2020-08-18
<stickupkid> I think bundlechanges shouldn't be a separate repo, I'm starting to think that we should internalise some of our libraries as they're really painful to update.
<stickupkid> juju/pkg would make sense
<achilleasa> stickupkid: I think bundlechanges should be exposed as an API by the controller; this way we don't have to re-implement it in both the cli and pylibjuju
<stickupkid> achilleasa, without doubt, the library part shouldn't be an external package though. It's causing a graph of dependencies
<achilleasa> stickupkid: do we know if anyone else is using it?
<stickupkid> achilleasa, I wonder if we care?
<stickupkid> achilleasa, https://github.com/juju/bundlechanges/pull/65
<stickupkid> or manadart :point_up:
<achilleasa> stickupkid: got CI errors
<stickupkid> achilleasa, fixed em
<achilleasa> ah, nvm
<achilleasa> done
<stickupkid> ta
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/11912?
<manadart> achilleasa: I can look.
<achilleasa> manadart: there might be an issue with my solution. It seems that patching the option list injects the defaults
<stickupkid> hml, here is my PR https://github.com/juju/juju/pull/11911
<hml> stickupkid: rgr
<hml> stickupkid: when can we use charm.v8?  i need schema stuff.  :-)
<hml> stickupkid: are you trying to get even for my 5K lines with 450 files?
<hml> ha
<stickupkid> hml, brings it in with this https://github.com/juju/juju/pull/11911
<hml> should have looked at the pr before asking my question
<hml> stickupkid: can we use the replace functionality in go.mod to change these to charm and not have to keep updating v7, v8 v9 etc?
<stickupkid> hml, I think it's bad practice to do that
<hml> stickupkid: rgr
<stickupkid> hml, but I agree, it's terrible to just update a charm package
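What hml floats would look roughly like this; a sketch only, and note stickupkid's point that it is generally considered bad practice. A replace directive redirects where a module is fetched from, but source files still import the major-version path, so it would not remove the v7/v8/v9 churn:

    # redirect the charm module to a local checkout (paths assumed)
    go mod edit -replace github.com/juju/charm/v8=../charm
    # equivalently, in go.mod itself:
    #     replace github.com/juju/charm/v8 => ../charm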
#juju 2020-08-19
<wallyworld> kelvinliu: if you get a moment at some stage, there's a mostly mechanical PR to abstract CLI filesystem access  https://github.com/juju/juju/pull/11914
<kelvinliu> sure
<wallyworld> kelvinliu: trivial help text update https://github.com/juju/juju/pull/11915
<wallyworld> i've also added comments to the k8s doc and created some lanes in trello we can start to populate
<wallyworld> ty
<kelvinliu> lgtm, ty
<sou> Hey good people! I am having an issue with nova-compute application (charm=nova-compute). All nova-compute is in error state : hook failed: "secrets-storage-relation-changed"
<sou> Juju unit logs say: hvac.exceptions.InvalidRequest: wrapping token is not valid or does not exist
<sou> This was a working cluster, with 3 openstack controller machines. I had to do maintenance on all 3 controller machines. So I removed one controller at a time from the cluster, reprovisioned it, and added all the necessary openstack units
<sou> readdition of 2 controller machines went smooth
<sou> But third compute machine threw the above error
<sou> Appreciate suggestion
<sou> Thanks!
<thedac> sou: Can you try refreshing secrets? `juju run-action --wait vault/0 refresh-secrets` and then `juju resovled nova-compute/<unit>`
<thedac> Sorry `juju resolved nova-compute/<unit>`
<thedac> Have to spell correctly ;)
<sou> Thanks a lot @thedac . That did the trick :)
<thedac> \o/
<thedac> Why that worked: the secrets relation gets a one time use token. If anything goes wrong, you have to refresh-secrets to get a new one.
<stickupkid> achilleasa, hml, can I get a juju tick https://github.com/juju/juju/pull/11913
<hml> stickupkid: looking
<stickupkid> hml, hatch approved it, but just need someone from juju to at least have seen it
<sou> Thanks for the explanation thedac !
<hml> stickupkid: just have a question before I give a tick… in the pr
<stickupkid> hml, done
<stickupkid> hml, by that, I mean I added a comment
<hml> stickupkid: answer for the question?
<stickupkid> hml, https://github.com/juju/juju/pull/11913#discussion_r473149538
<stickupkid> want a comment to state that?
<hml> stickupkid: my browser wasn't updating i guess
<stickupkid> hml, in the code I mean?
<hml> stickupkid: i think it'll be fine… there are bigger issues if we get different details based on name/rev combo
<hml> stickupkid: ticked
<stickupkid> ta
#juju 2020-08-20
<sou> Hey good people, I am trying to enable TLS for openstack endpoints. For this purpose I am using vault as the CA, and trying to use https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-certificate-management.html . But the signing the CSR step is confusing. It does not say how to generate the .pem file. Do we have any other document which I can use to set up TLS for keystone using vault as the CA?
<sou> Appreciate any tips/steps/documents.
<sou> Thanks!
<wallyworld> sou: you will most likely have more luck asking in #openstack
<sou> Thanks wallyworld. As the document mentions charms, I added my concern here. Let me check in #openstack!
<wallyworld> sou: yes, fair call. the folks who write the openstack charms tend not to hang out here so much. if you don't get any luck in #openstack, there's a discourse forum you can try (i'd need to look up the address)
<sou> Sure. Many thanks!
<stickupkid> manadart, achilleasa SUPER EASY ONE https://github.com/juju/juju/pull/11919
<soutr> .
<stickupkid> manadart, achilleasa another one
<stickupkid>  https://github.com/juju/juju/pull/11920
<achilleasa> stickupkid: aren't we installing mongo via snap?
<stickupkid> achilleasa, you wish
<stickupkid> achilleasa, this is for testing
<achilleasa> why not? shouldn't we install the version actually installed when bootstrapping?
<stickupkid> achilleasa, but that's dependant on version
<stickupkid> achilleasa, also I'm unsure how snaps will behave on github actions, i.e. in a container/chroot thing
<stickupkid> achilleasa, I'm not sure I want to go down that rabbit hole
<stickupkid> achilleasa, keep in mind we should always work with the lowest common mongo, as we never upgrade our mongos
<stickupkid> achilleasa, (unless you do upgrade series maybe?)
<achilleasa> since we maintain our own juju-db, can't we tar.gz it and curl | tar it?
<stickupkid> life
<stickupkid> achilleasa, but we don't really
<achilleasa> but this is just the client tests right?
<stickupkid> yeah
<achilleasa> ok, cool then
<stickupkid> we use the packaged one with the os
<stickupkid> i.e. the apt one and make install-dependencies gets that latest one for us
<stickupkid> we just have to blast away what ever is there
<achilleasa> but latest is not really latest now is it?
<stickupkid> because github "decided" that it will install a FUCK TON of crap software when what I really want is ubuntu
<achilleasa> I mean due to the licensing issues
<stickupkid> HA
<stickupkid> yeah, yeah
<stickupkid> you get all this bullshit when you request ubuntu https://github.com/actions/virtual-environments/tree/main/images/linux/scripts/installers
<stickupkid> I already brought this up with the powers that be, that if I requested ubuntu, I should just get ubuntu, not that fucking shit show
<stickupkid> why the fuck do I want php
<stickupkid> and node
<stickupkid> they don't even have maas, juju, etc and they're canonical products jeez
<SpecialK|Canon> As a GitHub Actions user I'm more likely to want PHP in my environment than I am MAAS
<SpecialK|Canon> but some of the versions can sure be surprising
<stickupkid> but you should ask for it, not be given it
<stickupkid> SpecialK|Canon, I'd rather have this https://paste.ubuntu.com/p/f55QkrbsWH/
<SpecialK|Canon> stickupkid: I know which one I'd rather implement the caching for ;)
<stickupkid> I know why they do it though, so they can cache the hell out of the image
<SpecialK|Canon> (But I see your point as a user)
<stickupkid> SpecialK|Canon, yeah, but ubuntu is a brand, it's expected to perform exactly the same for every installation, this changes that. I'm now not getting "ubuntu", I'm getting ubuntu with stuff that I have to horse around with to get me to a more stock ubuntu
<stickupkid> that's my main issue... caching is a github issue, not a user one
<manadart> stickupkid achilleasa: This is the patch I have been discussing at standup: https://github.com/juju/juju/pull/11921
<stickupkid> manadart, will look
<stickupkid> manadart, ho? got questions
<manadart> stickupkid: Yep, gimme a couple.
<stickupkid> manadart, Q&A went well, although slow
<stickupkid> manadart, another question though
<manadart> stickupkid: Yup?
<stickupkid> in the database I'm looking at and the PR description, there is now a "type", is this a migration step, or do we not care?
<stickupkid> ah no wait, I'm blind
<stickupkid> manadart, tick
<manadart> stickupkid: Ta.
<achilleasa> stickupkid: or hml quick CR for a help text change? https://github.com/juju/juju/pull/11922
<hml> achilleasa:  looking
<hml> achilleasa:  one suggestion for the change.
<achilleasa> hml: It was easier to just rewrite the help text. Can you take another look?
<hml> achilleasa:  sure
<hml> stickupkid: review please: https://github.com/juju/juju/pull/11923. it's not really 1k lines, there are mock files and json schema changes.
<stickupkid> hml, k
<hml> stickupkid: I'll do the TODO once i get the whole thing wired up
<hml> in the next pr
<hml> achilleasa:  I like the new write up,  any concerns that this does not follow the other help msgs with a specific examples section?
<stickupkid> hml, do we need Resolve for charm hub?
<hml> stickupkid: we will, need to figure out where to put it and what it should contain.  thatâs my next step
<hml> i'm thinking just the charmhub package
<stickupkid> hml, done
<hml> stickupkid: ho?
<stickupkid> sure
<achilleasa> hml: no idea :D
<achilleasa> petevg: ^^^ thoughts on https://github.com/juju/juju/pull/11922 re: Heather's comment?
<achilleasa> hml: actually I should change the redis example and use apache2 everywhere
<stickupkid> esp. because redis isn't even updated
#juju 2020-08-21
<wallyworld> kelvinliu_: if you get a chance sometime https://github.com/juju/juju/pull/11917 it's mainly code deletion and some tweaks to refine imports
<kelvinliu_> looking
<kelvinliu_> lgtm thanks
<wallyworld> tyvm
<wallyworld> hpidcock: small fix https://github.com/juju/juju/pull/11925
<wallyworld> or kelvinliu_^^^^
<kelvinliu_> looking
<kelvinliu_> wallyworld: lgtm ty
<wallyworld> gr8 tyvm
<soutr> Hey good people, As per my understanding when 'juju remove-relation keystone:certificates vault:certificates' is executed, 'juju export-bundle' or 'juju status keystone --relations' should not show any relation between keystone and vault. But for some reason it is still present. And every time I try to re-add the relation it says the relation already exists. Is this the default behaviour?
<soutr> # juju remove-relation keystone:certificates vault:certificates
<soutr> # juju add-relation keystone:certificates vault:certificates
<soutr> cannot add relation "keystone:certificates vault:certificates": relation keystone:certificates vault:certificates already exists (already exists)
<soutr> Appreciate any tips/suggestions. Thanks!
<manadart> achilleasa: Can you look at https://github.com/juju/juju/pull/11926 and see what I have got wrong with merging your patches?
<manadart> On leave, NVM.
<stickupkid> manadart, are you allowed to import something from a core package from another core package
<manadart> stickupkid: I think so.
<manadart> stickupkid hml: Here's the patch https://github.com/juju/juju/pull/11927. I was hand-wavy with the QA steps. I will get something together and update with concrete steps, but the code-review can be done now.
<stickupkid> manadart, not sure the error handling is correct
<stickupkid> sigh: 200 passed, 1 skipped, 43 FAILED
<stickupkid> manadart, https://github.com/juju/juju/pull/11927#discussion_r474735254
<stickupkid> hml, I've put the PR up, but *may* require more Q&A https://github.com/juju/juju/pull/11928
#juju 2020-08-23
<voltbit> hello, I have a local setup with juju and lxc, I bootstrapped juju and made a controller, then I installed a charm (rabbitmq), but the juju commands for seeing cluster state seem stuck: if I run 'juju status' or 'juju clouds' I do not get any result, the command line is stuck waiting for a result
<voltbit> I can see the controller being up with 'juju list-controllers' but that is all
<voltbit> how do I restart the controller/juju?
<voltbit> nvm, I eventually removed all containers and profiles with lxc and cleaned controllers.yaml...
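For anyone hitting the same wall: the juju-level equivalent of that manual teardown would be something like the following (controller name assumed; the lxc route is what voltbit actually did):

    juju kill-controller mycontroller   # force-destroy a wedged controller
    # or clean up at the LXD level directly:
    lxc list
    lxc delete --force <juju-container-name>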
