#juju 2012-06-11
<twobottux> aujuju: How do I add and call a helper script in a Juju charm? <http://askubuntu.com/questions/149240/how-do-i-add-and-call-a-helper-script-in-a-juju-charm>
<bkerensa> SpamapS: I want to know what you're doing!
<hazmat> rafa wins :-)
<m_3> mornin
<hazmat> m_3, ping
<m_3> hazmat: yo
<m_3> what's up?
<m_3> saw proof-errors... we should talk about how to incorporate other test results too
<m_3> we were talking about the browser showing green/red results for each charm... perhaps broken down by test
<hazmat> m_3, back
<hazmat> m_3, so wanted to ask about getting revision numbers into the job info
<hazmat> m_3, and also which is the canonical url i should be using for jenkins instances
<hazmat> m_3, to date i've just been using the one off your subdomain
<hazmat> m_3, i was also wondering how much work it would be to just offer a jenkins publishing endpoint
<hazmat> m_3, the testing infrastructure seems a bit broken though which is also of some concern
<hazmat> i wanted to brainstorm/talk with you about alternatives
<hazmat> just to be clear for now i'm fine with jenkins testing, but i'd like to discuss what's on your plate as regards it this cycle, and the overall feature set we're aiming for
<gary_poster> juju folks, we're seeing a problem that is a big deal for us.  As of sometime late last week, we are seeing machines (not services/charms) that never move from the pending state.  We initially saw this late Friday afternoon and figured it was transient or something to do with our charm, but now we are seeing it again and repeatedly.  Here's an example of my status (see line 21): http://pastebin.ubuntu.com/1035478/ .  Here's
<gary_poster> an example of benji's status (see line 8).  benji started his with the fairly innocuous http://pastebin.ubuntu.com/1035483/ while I start with http://pastebin.ubuntu.com/1035462/ . Something suspicious is that this seems to happen with our slave charm and not with our master charm--but the charm shouldn't be able to affect this stage of things, right?  I can't even use `juju ssh 2` (I get "2012-06-11 09:35:41,881 ERROR None
<gary_poster> object cannot be quoted")
<m_3> hazmat: sounds good... lemme finish breakfast and take taylor to work.  ping you in an hour or so... g+ might be easier
<benji> another bit of data on the above: the EC2 console does not show the machine as running
<gary_poster> ah, yeah, thanks benji
<benji> (or even existing for that matter)
<gary_poster> huh
<hazmat> m_3, sounds good
<gary_poster> benji, it is shown as existing in mine, but stopped.  "State Transition Reason: Client.InstanceInitiatedShutdown: Instance initiated shutdown")
<hazmat> benji, can you pastebin the provisioning agent log
<hazmat> er. gary_poster
<gary_poster> :-) sure
<hazmat> gary_poster, which version are you running?
<hazmat> ppa or distro?
<benji> hazmat: sure; I assume that is somewhere on machine 0.  What is the path?
<gary_poster> hazmat ppa  0.5+bzr540-1juju5~precise1
<hazmat> benji, /var/log/juju
<hazmat> benji, it's provisioning-agent.log
<gary_poster> hazmat, http://pastebin.ubuntu.com/1035503/
<benji> hazmat: ooh, it looks interesting: http://paste.ubuntu.com/1035504/
<mgz> ha.
<hazmat> gary_poster, thanks
<benji> I /think/ this is because I had an existing, but stopped instance.
<hazmat> benji, yup
<hazmat> if it was a previous juju instance that you had stopped, ie managed behind juju.. then this makes sense
<mgz> hazmat: related question, currently juju leaves stopped machines alone on destroy-environment
<mgz> this seems... bad to me.
<hazmat> mgz, agreed :-)
<gary_poster> hazmat, ah!  yes, we had both done a poweroff that we thought was in an lxc but turned out to be in the host :-P
<hazmat> mgz, there are additional issues with the security group interaction
<gary_poster> thanks hazmat
<hazmat> mgz, so instead of killing the security group when it exists already, we should just clear it out
<mgz> yeah, I'm trying to resolve some of these teardown issues right now
<hazmat> mgz, this also saves from the mostly pointless wait at destroy
<hazmat> mgz, since that almost never succeeds
<mgz> right, that's also painful.
<hazmat> mgz, so instead, on teardown just kill the instances, on bootstrap, clear out sec groups of rules as we allocate machines in them
<hazmat> mgz, we can leave sec group cleanup in its entirety to a jitsu tool, or opportunistic provisioning agent level thing
<hazmat> mgz, the ostack branch looks like its shaping up... any chance i can start reviews on it
<mgz> so, I've got a neat way around that for openstack, but seems like a good plan for ec2
<mgz> right, I'm nearly there with all the moving parts so should be reviewable this week
<mgz> some things can land later.
<hazmat> mgz, re way around... what's that?
<mgz> you can just disassociate a security group from a server in openstack
<hazmat> oh.. nice
<SpamapS> bkerensa: huh? You asked what I'm doing?
<mgz> that avoids the hole where opening ports in a pre-existing sec group means you need to be sure you killed any previously associated machines
<hazmat> mgz, are you testing on hpcloud btw?
<mgz> (not that this firewalling is really security critical)
<mgz> hazmat: not yet with this branch
<hazmat> mgz, well considering the per machine group, any pre-existing associated machines are really user error imo or juju not properly handling stopped instances (which also amounts to the same given the lack of support for manual machine management atm)
<hazmat> ie. we shouldn't care about pre-existing sec groups per se
<mgz> right.
<m_3> hazmat: back
<m_3> SpamapS: promulquestion if you're still around...
<m_3> should we make promulgate _change_ the owner of a branch or require a separate push to a ~charmers branch?
<hazmat> m_3, invite out
<SpamapS> m_3: around now, wassup?
<m_3> SpamapS: hey
<m_3> SpamapS: just wondering if we should make promulgate do the extra work of pushing to ~charmers first for you
<m_3> SpamapS: and if so, should it just change the ownership of the lp branch or would it require another push
<m_3> right now I'm just barfing if it's not ~charmers (and --owner-branch isn't set)
<SpamapS> m_3: you can't "just change ownership" of a branch.
<m_3> SpamapS: ok... well that answers that then :)
<SpamapS> m_3: so are you saying, just have promulgate look at metadata.yaml, and if no explicit branch destination is set, have it just try to push to lp:~charmers/charms/<current-series>/<name>/trunk ?
<SpamapS> m_3: I'd be fine with that
<SpamapS> m_3: bzr will protect us from doing anything super stupid in that case
<m_3> yeah, essentially
<SpamapS> m_3: should be *loud* about what we're doing
<m_3> it seems like lots of extra work to push to ~charmers every time
<SpamapS> promulgate is currently way too quiet
<SpamapS> m_3: automation for the win!
<m_3> SpamapS: cool one thing at a time... I'll mp the restriction to only promulgate ~charmers branches unless explicitly excepted first
<SpamapS> m_3: werd
<m_3> SpamapS: thanks man
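The convention SpamapS sketches above — when no explicit destination is set, push to a ~charmers-owned trunk branch — boils down to composing a Launchpad branch URL. A minimal sketch, assuming the lp:~charmers/charms/<series>/<name>/trunk layout discussed; the actual push and any metadata.yaml handling stay promulgate's job:

```shell
# Build the ~charmers trunk branch URL for a charm. Only the URL
# construction is shown; pushing (and validation) is left to promulgate.
charmers_branch() {
    local series="$1" charm="$2"
    echo "lp:~charmers/charms/${series}/${charm}/trunk"
}

charmers_branch precise mysql
# -> lp:~charmers/charms/precise/mysql/trunk
```

The series/name pair would come from the charm's repository layout, and `bzr push "$(charmers_branch precise mysql)"` would then be the loud, explicit step SpamapS asks for.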
<hazmat> m_3, what sort of charms are you pushing that shouldn't be viewable in the browser?
<m_3> hazmat: promultest for instance
<m_3> hazmat: and looking at creating launchpadlib-based tests that might temporarily stick stuff in locations like .../charms/<series>/<name>/trunk...
<m_3> hazmat: it'd probably be best if they don't get cached in the charm browser / store
<hazmat> m_3, funky..
<hazmat> m_3, if they're publishing junk, why are they publishing it ;-)
<hazmat> m_3, seems like it should be kept local then
<m_3> hazmat: testing the publishing mechanism itself
<hazmat> m_3, i'm not opposed to the idea incidentally, just trying to understand if it's necessary.
<m_3> i.e., tests for 'charm promulgate'
<hazmat> m_3, so how about a .no_store file in the root of the charm
<hazmat> niemeyer, ^
<hazmat> m_3, or you do want it in the store, but not the browser?
<m_3> no biggie... sort of a one-off thing... though those almost always eventually turn into bigger things :)
<m_3> hazmat: in this case, I want it in neither the store nor browser cache
<SpamapS> m_3: just let it be in the store
<SpamapS> m_3: "This charm is not useful. It is only for testing the charm store itself."
<m_3> SpamapS: ok
<SpamapS> Don't make it special at all
<SpamapS> at < 100 charms it might be found.. but when we get to the many hundreds and then thousands of charms.. nobody will ever see it
<m_3> hazmat: I'll kill that bug
<hazmat> except when it pops for search..
<hazmat> m_3, no.. i think its a good idea still to have that capability..
<hazmat> m_3, pls leave the bug, i'll hit up on the next charm browser round
<m_3> hazmat: ok... cool, thanks
 * SpamapS wonders if it's worth our time is all
<m_3> ok, enough charm-tools for now... back to charmtester
<FunnyLookinHat> Here's the question of the day - can I use a charm to image a machine ( i.e. real hardware ) ?  I can't think of a way other than just re-purposing the charm's config code.
<FunnyLookinHat> I think that would essentially be rewriting juju more than rewriting charms - yes?
<marcoceppi_> FunnyLookinHat: Sounds like you're looking for MaaS?
<FunnyLookinHat> yes ?
<FunnyLookinHat> The real question is - I've "heard" a lot about maas - but I haven't seen a way to implement it yet - am I missing something?
 * FunnyLookinHat googles.
<marcoceppi_> FunnyLookinHat: So MaaS is a way to coordinate and manage bare metal, it's also a provider for Juju. So Juju talks to MaaS to deploy charms to bare metal (among other things) https://wiki.ubuntu.com/ServerTeam/MAAS
<FunnyLookinHat> Right right - let's say I'm creating a zimbra charm to help install/setup zimbra mail servers that will be used for totally different clients though
<FunnyLookinHat> and they will just plug the server in at their place after I've config'd it
<FunnyLookinHat> Does this still provide a valid use-case?  or would I be better off taking the charm for zimbra and just hacking it into a setup script that I run after imaging the server.
<marcoceppi_> hum, I mean you _can_ use Juju, but it sounds like you'd be better just creating a setup script
<FunnyLookinHat> marcoceppi_, yeah agreed.
<FunnyLookinHat> hey congrats on the ultrabook from UDS btw
<FunnyLookinHat> Did you end up submitting the git charm ?
<marcoceppi_> FunnyLookinHat: I did gitolite and gluster. Gitlab proved to be too difficult in the amount of time I had. So polishing those up now for the store
<FunnyLookinHat> Ah ok - cool  :D
<marcoceppi_> Now that we deploy gitlab in production it's easier to finish the charm
<SpamapS> FunnyLookinHat: btw you could definitely use MaaS+juju to create those machines..
<SpamapS> FunnyLookinHat: you'd just want to remove juju before shipping them out.
<FunnyLookinHat> SpamapS, you think so?  The machines won't be connected to the MaaS/Juju server after the initial image.
<FunnyLookinHat> Ah.
<SpamapS> FunnyLookinHat: probably enough to just 'juju destroy-service customerfoo-zimbra' before shutting it down
<SpamapS> FunnyLookinHat: its not what juju is specifically designed for, but it should work
<SpamapS> FunnyLookinHat: (the real question is why are you selling them a mail server instead of a whole cloud. ;)
<SpamapS> because now that they have mail, they'll want a status.net microblog and a wiki to distill the billions of words of email that nobody has time to read ;)
<FunnyLookinHat> hahaha
<FunnyLookinHat> I'm just trying to speed up my own process.
<FunnyLookinHat> Seems silly to essentially "copy" a charm and not use juju
<twobottux> aujuju: How do I gracefully shutdown a Juju Charm? <http://askubuntu.com/questions/149550/how-do-i-gracefully-shutdown-a-juju-charm>
<twobottux> aujuju: How do I make a Juju Charm's revision match the Bazaar revision of its repo? <http://askubuntu.com/questions/149553/how-do-i-make-a-juju-charms-revision-match-the-bazaar-revision-of-its-repo>
<SpamapS> m_3: https://code.launchpad.net/~mark-mims/charm-tools/fix-list-commands/+merge/109697 .. dude, just commit that
<m_3> SpamapS: ok
<m_3> done
<FunnyLookinHat> quick bash help anyone?  I need to make a be the substr of b - from length of TEST + 2 - instead of just test... but +2 evals as a string obviously: a=${b:${#TEST}};
<FunnyLookinHat> i.e. a=${b:${#TEST}+2};
<FunnyLookinHat> ah - never mind: a=${b:$((${#TEST}+2))};
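For reference, the reason the final form works: the offset in bash's `${var:offset}` expansion is an arithmetic context, so `$(( ))` is accepted there (and in bash is even optional). A small sketch with made-up values:

```shell
# Strip the first ${#TEST}+2 characters of $b (the prefix plus ": "
# in this invented example).
TEST="foo"
b="foo: bar baz"
a=${b:$((${#TEST}+2))}   # explicit arithmetic expansion
echo "$a"                # -> bar baz
a=${b:${#TEST}+2}        # bash also evaluates the offset arithmetically
echo "$a"                # -> bar baz
```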
<negronjl> m_3: ping
<hazmat> FunnyLookinHat, re zimbra to me it really depends on whether you plan on taking it multi-node
<hazmat> SpamapS,  a JBOM provider :-)
<m_3> negronjl: yo
<negronjl> m_3: I just followed the instructions ( README ) for MongoDB ( precise ) and, provided that I follow them, they work on the replica-set
<negronjl> m_3:  Would you give me more details on how to reproduce ?
<m_3> negronjl: oh cool... yeah, I was just triaging a few
<m_3> didn't look too carefully
<m_3> we have a lot of bugs that aren't getting caught in the review process
<negronjl> m_3:  no worries ...  marking bug as Invalid
<m_3> I hate to add them to the review queue though... maybe just 'triage general charm bugs' as a reviewer task
<m_3> dude... review-queue rocks btw... how did we ever get along without it?
<negronjl> m_3:  lol ... it does if I say so myself :)
<FunnyLookinHat> hazmat, single node
 * SpamapS re-uploads Juju to Debian after it was rejected for debian/copyright niggles :-P
<roy-feldman> Does anybody have a few minutes about Juju with Maas?
<roy-feldman> I have read all of the docs I can find online and I still have some issues. BTW, I am using KVM for most of my MaaS nodes I want to use with Juju.
<marcoceppi_> roy-feldman: I can try to help
<roy-feldman> Thanks
<roy-feldman> For starters, have you used KVM with MaaS?
<marcoceppi_> I have briefly, I ended up having quite a few problems with it though :)
<roy-feldman> Is there a better hypervisor in your opinion for Maas + Juju at this point?
<roy-feldman> I don't have enough physical nodes to do anything very interesting with MaaS
<bkerensa> SpamapS: http://www.omgubuntu.co.uk/2012/06/ubuntu-12-10-development-update-1 <-- your interview
<marcoceppi_> roy-feldman: I've tried virtual-box which wasn't any more fun to run. So far the only hypervisor I've had any real-life usage out of is Xen
<roy-feldman> But have you used it successfully with MaaS?
<roy-feldman> Searching the forums, the one person on the Juju team that seems to use KVM is Jorge.
<roy-feldman> What problems did you run in with KVM?
<marcoceppi_> roy-feldman: I couldn't get them to properly PXE boot, and when I finally got around that they wouldn't register with the maas pxe-boot server
<marcoceppi_> so there were a lot of networking foul-ups
<roy-feldman> I have had similar problems
<SpamapS> bkerensa: nice! thanks!
<roy-feldman> Following the instructions, I was able to PXE boot a KVM
<roy-feldman> Configured for Wake up lan
<roy-feldman> It would find the MaaS server and get the "initial" image and shutdown.
<roy-feldman> The MaaS interface would then show that it was ready.
<roy-feldman> However, when I would try to perform a juju bootstrap, I would get the following error
<marcoceppi_> Is your MaaS server running in a KVM or is it on another box (or the host machine)
<roy-feldman> ERROR Invalid host for SSH forwarding: ssh: Could not resolve hostname node-525400edd759.local:
<marcoceppi_> roy-feldman: that's a DNS issue
<roy-feldman> I have installed maas-dhcp and set it correctly, as far as I can tell.
<marcoceppi_> it should be pretty easy to overcome, if you've installed maas-dhcp on the maas machine, then set /etc/resolve on your juju machine to point to the maas server
<roy-feldman> I have gone over the Maas setup instruction several times
<marcoceppi_> juju machine, being the machine you're running juju from
<marcoceppi_> MaaS is kind of designed to take over a network, so it's supposed to act as your internal DNS server
<roy-feldman> what would the directive look like in /etc/resolve to do that
<marcoceppi_> roy-feldman:  /etc/resolve.conf *
<roy-feldman> BTW, MaaS is not my gateway, instead I pointed it at my router in the maas-dhcp setup
<roy-feldman> Would that be a problem?
<marcoceppi_> It's nameserver IP_ADDRESS - I would recommend commenting out (#) your current entries and adding one pointing at the MaaS server (you may need to restart networking)
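What that edit would look like, sketched against a scratch copy so nothing real is touched. `192.0.2.1` is a placeholder for the MAAS server's address and the original entries are invented; the real edit would target /etc/resolv.conf:

```shell
# Work on a scratch copy of the resolver config.
f=$(mktemp)
printf 'nameserver 10.0.0.1\nsearch example.com\n' > "$f"  # pretend current contents
sed -i 's/^nameserver/# nameserver/' "$f"   # comment out current entries
echo 'nameserver 192.0.2.1' >> "$f"         # placeholder MAAS server address
cat "$f"
```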
<marcoceppi_> not sure, that's better suited for the #maas room
<marcoceppi_> It *shouldn't* be a problem
<roy-feldman> Ok, I will go to the maas channel, but are you saying the gateway setting is not a problem?
<marcoceppi_> AFAIK, it's not
<roy-feldman> And maas will refer to my gateway for external address resolution after /etc/resolve is configured correctly?
<marcoceppi_> roy-feldman: it should
<roy-feldman> One last question
<marcoceppi_> shoot
<roy-feldman> Once I have Maas properly configured
<roy-feldman> I can deploy charms to maas registered machines, whether they are currently running or not?
<roy-feldman> Can I ...
<marcoceppi_> roy-feldman: correct, MaaS will ping it with a WOL call to turn the machine on, then provision it
<roy-feldman> great
<roy-feldman> Is the maas channel open to outsiders?
<roy-feldman> Some ubuntu channels are members only
<marcoceppi_> roy-feldman: it's open
<roy-feldman> great
<marcoceppi_> If after you get that all sorted, juju doesn't work, feel free to pop back in here
<roy-feldman> Ok... one more clarification of your answer, please
<roy-feldman> When you said comment out my current entries, did you mean my entries in /etc/reslove?
<roy-feldman> resolve
<marcoceppi_> yeah, the ones in /etc/resolv.conf
<roy-feldman> What did you mean by " /etc/resolve.conf *"
<roy-feldman> ?
<marcoceppi_> the file is /etc/resolv.conf - mis-spelled it several times :)
<roy-feldman> Ok, thanks a lot!
<roy-feldman> Wish me luck
<roy-feldman> When I finally succeed I will write up a little cookbook for Maas + KVM, I think it is sorely needed.
<roy-feldman> Looking at the Maas forums, I am not the only person having problems with Maas
<roy-feldman> I do have a quick Juju specific question
<marcoceppi_> sure, go for it
<roy-feldman> Is there a juju ppa repo I should be using for juju on 12.04, or am I better off sticking with normal updates?
<SpamapS> roy-feldman: the PPA is extremely stable (for now) but may get volatile later.
<roy-feldman> Assuming that all I am doing is charm development?
<SpamapS> roy-feldman: there are a few updates pending in precise-proposed ..
<SpamapS> which reminds me I need to get those back on track. :-P
<SpamapS> we really need a 'juju-origin: ppa://....' so we can have a more stable PPA than the one that builds from trunk. :-P
<roy-feldman> In the meantime, are there any major advantages for a charm developer to be using the current ppa repo?
<roy-feldman> Also, would it be helpful to the project if I used the ppa repo?
#juju 2012-06-12
<roy-feldman> Hey Marco - juju bootstrap is working now with my Maas node, without any changes!
<roy-feldman> in KVM
<roy-feldman> But I had to do something a little odd, which may be a MaaS problem
<roy-feldman> Basically I had to PXE boot it twice, and I had to select a boot option on the second boot
<marcoceppi_> roy-feldman: that's...odd - but I'm glad it works!
<roy-feldman> After the initial PXE boot, I got the network error I described
<roy-feldman> Then I started the guest again, and hit F12 and selected the virtio boot option, not the default boot option which PXE boots
<roy-feldman> It seems that MaaS is not completing the PXE boot in one step
<roy-feldman> Also, the start button in the MaaS interface is a "noop"
<roy-feldman> Now I do have a Juju question
<roy-feldman> I can see my MaaS node when I do juju status
<roy-feldman> I have done a juju deploy and juju expose of mysql
<roy-feldman> How long should it take for local mysql charm to transition from pending to running?
<marcoceppi_> roy-feldman: depends on how beefy your machine is. Think of it as having to install the Ubuntu OS, install the juju working parts, then deploy the charm and install the charm.
<roy-feldman> I am running a beefy i7 laptop .. I wouldn't think it would take very long
<roy-feldman> It doesn't matter if the machine is already running with an OS?
<roy-feldman> This was not a cold deploy
<marcoceppi_> roy-feldman: not sure, I've typically enlisted machines from the PXE boot screen
<roy-feldman> I have done it the other way
<roy-feldman> I entered the Mac address to Maas
<roy-feldman> Then I booted the machine which registered it with MaaS
<roy-feldman> In my case, I had to boot it twice
<marcoceppi_> roy-feldman: but did the machine have the Ubuntu OS yet?
<roy-feldman> Yes
<marcoceppi_> See, I'm not sure what happens here since MaaS builds its own images to use
<marcoceppi_> So I'm not sure if it's going to wipe/re-install or what
<roy-feldman> During the second boot it appeared to install the latest server packages
<roy-feldman> Perhaps.  In any case I hope to soon have a better model of how MaaS + Juju work than I have now. ;-)
<roy-feldman> Any suggestions on how I can see what is going on, what logs I can be looking at?
<roy-feldman> Looking at the output of juju debug-log, I see a series of messages starting with "ProviderInteractionError: Unexpected Error interacting with provider: 409 CONFLICT"
<roy-feldman> I think that would explain why my mysql instance is not coming up
<roy-feldman> Should I file a bug report?
<marcoceppi_> huh, 409 CONFLICT means MaaS doesn't have any nodes for Juju to use
<roy-feldman> Maybe I shouldn't have my only maas node running if I want to deploy a charm
<marcoceppi_> roy-feldman: you'll need at least two MaaS nodes to bootstrap and deploy with Juju
<marcoceppi_> One for the Bootstrap node and one for the charm you wish to deploy :)
<roy-feldman> I do have two nodes
<marcoceppi_> Oh
<marcoceppi_> What does the MaaS dashboard show?
<roy-feldman> If you mean I have a node for the MaaS server and another for deployment
<roy-feldman> Hold on
<roy-feldman> It shows that I have one node which has been allocated
<marcoceppi_> And that's it?
<roy-feldman> There were 0 nodes when I installed MaaS
<marcoceppi_> How many nodes do you have available?
<roy-feldman> 1
<roy-feldman> Does juju require its own MaaS node?
<roy-feldman> I assumed that I could run Juju at the native level to interact with MaaS.  Am I wrong?
<lifeless> roy-feldman: you can indeed
<lifeless> you'll need a MaaS controller, and then a Juju control node, running on a MaaS provisioned node
<roy-feldman> I missed that in the configuration steps
<roy-feldman> The setup of the Juju control node
<roy-feldman> Where is that documented?
<lifeless> it's what juju bootstrap does
<roy-feldman> So I need a free MaaS node when I run Juju bootstrap
<roy-feldman> ?
<roy-feldman> i.e. So are you saying that every time I run juju bootstrap with a maas environment, I need an available Maas Node for the juju controller?
<roy-feldman> What is the best way to rollback my juju deployment and go back to adding additional MaaS nodes?
<roy-feldman> Should I do a destroy-environment?
<lifeless> yes, thats what I'm saying
<roy-feldman> thanks
<lifeless> uhm, you shouldn't need to destroy it, just add more nodes so it can get them when you go to deploy a charm
<roy-feldman> And it's OK to simply cntrl-c out of my original juju deploy and expose?
<roy-feldman> No need to do any housecleaning?
<roy-feldman> The one that never completed because there wasn
<roy-feldman> wasn't an available node
<roy-feldman> BTW, shouldn't juju expose give some kind of message if there aren't sufficient nodes to complete the action?
<lifeless> please file a bug about that
<roy-feldman> will do
<lifeless> I agree it would be good to do so, I don't know why it didn't... may be a MaaS bug, for instance.
<roy-feldman> Looking at the trace, it looks like the loop is in juju
<roy-feldman> Specifically juju/agents/provision.py
<roy-feldman> It just keeps retrying
<lifeless> you could leave it where it is and add nodes, if it keeps retrying when a node comes available it will succeed ;)
<roy-feldman> Thanks again for all the help, I will try again with more nodes and see what happens and I will file a bug report about juju provision
 * negronjl is done for the night
<bkerensa> SpamapS, marcoceppi: http://www.omgubuntu.co.uk/2012/06/ubuntu-12-10-development-update-1
<bkerensa> you got questions about juju ^ :P
<jimbaker> at usenix config mgmt summit today in boston, will be talking later on "service orchestration with juju"
<niemeyer> jcastro: ping
<niemeyer> jimbaker: Sweet
<jimbaker> niemeyer, it's a good lineup of speakers from chef, bcfg2, cfengine, vmware
<themiwi> Hi all. Is there an easy way to get essentially the output of `getconf _NPROCESSORS_ONLN` of the remote host in a *-relation-changed hook? Or should I use `n=$(ssh $(relation-get hostname) getconf _NPROCESSORS_ONLN)`?
<m_3> themiwi: you could pass that as a relation variable... one side would do `relation-set num-cpus=$(...)` and the other side does `relation-get num-cpus`
<m_3> themiwi: that can usually be inferred by instance-type though.. depends on provider.  Also you might take a look at constraints in the juju docs.
<m_3> all depends on what you're trying to do... passing relation variables works fine though
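A minimal simulation of the relation-variable pattern m_3 describes. `relation-set` and `relation-get` are real juju hook tools, but they only exist inside a running hook, so underscore-named stand-ins backed by a temp file are used here purely to show the data flow:

```shell
# Stand-ins for the juju hook tools (real names: relation-set,
# relation-get), backed by a temp file so this runs outside a hook.
state=$(mktemp)
relation_set() { echo "$1" >> "$state"; }
relation_get() { grep "^$1=" "$state" | cut -d= -f2; }

# compute-node side, e.g. in its *-relation-joined hook:
relation_set "num-cpus=$(getconf _NPROCESSORS_ONLN)"

# head-node side, in its *-relation-changed hook:
slots=$(relation_get num-cpus)
echo "slots reported by remote node: $slots"
```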
<negronjl> 'morning all
<imbrandon> morn
<negronjl> hi imbrandon
<m_3> negronjl: yo
<negronjl> 'morning m_3
<twobottux> aujuju: Is juju specific to ubuntu OS on EC2 <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2>
<SpamapS> I love the askubuntu bot
<SpamapS> really.. fantastic idea
<m_3> SpamapS: yeah, me too... can we answer it here?
<jcastro> imbrandon: you don't need to add "new-charm" anymore, just setting the status Does The Right Thing(tm)
<jcastro> also dude, awesome job on the RPMs!
<jcastro> has anyone tried them yet?
<imbrandon> umm not sure
<imbrandon> i just put the word out a few hours ago
<imbrandon> so no feedback yet
<jcastro> you blog it or need me to?
<imbrandon> i'm about to blog about that and my new "download for ubuntu" button soon, so should get some
<imbrandon> soonish
<jcastro> k, poke me and I'll syndicate it on cloud.u.c
<imbrandon> you're more than welcome to, more ppl read yours i think
<imbrandon> ll
<imbrandon> kk
<imbrandon> i'm pretty sure my whole juju category is syndicated already
<imbrandon> on cloud.u.c
<imbrandon> i'll make sure tho
<jcastro> no I need to post it, it doesn't automatically publish
<imbrandon> ahh, kk
<SpamapS> m_3: no, how would it give you credit? ;)
<imbrandon> SpamapS: logging in to the bot via api :)
<SpamapS> imbrandon: good luck with that
<SpamapS> sounds like a yak to be shaved later
<m_3> hmmm... traveling to nepal lately?
<imbrandon> SpamapS: heh
<adam_g> hazmat: ping
<hazmat> adam_g, pong
<adam_g> hazmat: looking at snapshot.py of charm runner for the first time... is clean_juju_state() something that can be easily adapted to work with a non-local environment?
<hazmat> adam_g, about to get into a meeting..
<hazmat> adam_g, the state cleaning yes, the storage cleaning no
<hazmat> adam_g, we don't have a provider storage method for killing files
<adam_g> hazmat: if i wanted to just script around it via ssh/paramiko, i'd just be deleting the related files from the web storage right?
<hazmat> adam_g, what provider?
<adam_g> MAAS
<adam_g> actually, i wouldn't even need to do that. ive got local access to the MAAS server
<hazmat> adam_g, if it's maas, i'd check if they have an api for deleting files and just sniff ~/.environments.yaml by hand for the creds to delete the files
<adam_g> hazmat: ill start there, thanks
<m_3> negronjl: yo... gotta sec?
<negronjl> m_3: sure
<m_3> g+?
<negronjl> m_3: sure .. give me a sec.  I'll invite you when I'm there
<m_3> ok
<negronjl> m_3: started invite sent
<themiwi> m_3: Sorry, got interrupted and then had to dash away to catch the train. Yes, passing this information as a relation variable to override the default would be great. However, I'd like to also provide a sensible default choice which is not just a pessimistic 1.
<m_3> whoohoo... jim's talk/demo went well
<SpamapS> sweet
<jcastro> nice
<SpamapS> themiwi: interesting problem you're trying to solve. In what case is it important to know the *remote* CPU count for service configuration?
<SpamapS> themiwi: either way, you don't have to provide a default. Just keep running your changed hook until the other side has *set* that value.
<SpamapS> themiwi: the changed hooks can ping-pong back and forth a few times.
<themiwi> SpamapS: I'm trying to cook up a charm for the Sun Grid Engine (SGE), where the master/head node needs to know the number of slots it can allocate on the compute node.
<themiwi> SpamapS: yes, that's what I'm going to do now. I set it in the *-relation-joined hook of the compute charm and the keep querying it in the *-relation-changed of the head charm
<SpamapS> themiwi: perfect :)
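The ping-pong SpamapS describes relies on a guard in the -changed hook: exit cleanly while the remote value is unset, and let juju re-invoke the hook once it appears. A sketch using a stand-in for the real relation-get tool; the `num-cpus=8` value is illustrative:

```shell
state=$(mktemp)
relation_get() { grep "^$1=" "$state" | cut -d= -f2; }  # stand-in for relation-get

changed_hook() {
    local slots
    slots=$(relation_get num-cpus)
    if [ -z "$slots" ]; then
        echo "num-cpus not set yet; waiting for the next -changed run"
        return 0   # clean exit: juju will fire the hook again later
    fi
    echo "registering $slots slots"
}

changed_hook                    # first run: remote side hasn't set it
echo "num-cpus=8" > "$state"    # remote side sets it (8 is illustrative)
changed_hook                    # second run: value present, proceed
```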
<SpamapS> themiwi: I would have thought grid engine would have its own RPC to talk to its nodes and figure things like CPU's out
<SpamapS> anyway.. time to find nourishment
<themiwi> SpamapS: I suppose it could, but the thing with cluster administration is, that administrators want to be able to adjust every tiny detail and would probably hate such automatisms...
<SpamapS> hah
<SpamapS> tweakers
<themiwi> SpamapS: ;-) yep. they take pride in squeezing every single flop from their clusters.
 * m_3 is a card carrying tweaker
<themiwi> Another question: Whenever a relation is added, the variable nodecpus is set to the number of processors/cores of the slave. so far so good. however, the master needs to maintain a *sum* of all node cpus. Where would I store this persistent information?
<m_3> themiwi: juju doesn't choose this for you... I'd stick it in a map or db on the filesystem (we often use /var/lib/juju/ to house such things).  some folks like facter.. there're lots of options.  key is that this map will update over time as nodes die and/or join
<m_3> themiwi: hooks xxx-relation-changed, xxx-relation-departed, and xxx-relation-broken will have to recalculate that number over the lifetime of the cluster
<themiwi> m_3: was just wondering whether i could do something like "unit-set cpucount $ncpus"...
<themiwi> m_3: but maintaining a simple text file containing that figure is also no problem.
<m_3> themiwi: relation-list is useful here
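Putting m_3's advice together — recompute the total from relation-list in each relevant hook and persist it on the filesystem — might look like the sketch below. The relation_list/relation_get functions are stand-ins for the real relation-list and relation-get hook tools, and the unit data is invented:

```shell
# Fake relation data: two compute units reporting 4 and 8 CPUs.
data=$(mktemp -d)
printf 'compute/0 4\ncompute/1 8\n' > "$data/units"

relation_list() { cut -d' ' -f1 "$data/units"; }                       # stand-in
relation_get() { awk -v u="$2" '$1 == u { print $2 }' "$data/units"; } # relation_get nodecpus <unit>

total=0
for unit in $(relation_list); do
    n=$(relation_get nodecpus "$unit")
    total=$((total + ${n:-0}))   # treat a missing value as 0
done
echo "$total" > "$data/total-cpus"   # persisted state (/var/lib/juju/... in a real charm)
echo "total cpus: $total"            # -> total cpus: 12
```

The same recomputation would run from xxx-relation-changed, -departed, and -broken, so the stored total tracks nodes joining and dying.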
<m_3> themiwi: oh, so config allows you to set stuff from the command line too...
<m_3> there's a config-changed hook
<themiwi> m_3: sure. writing these charms is quite intricate and the documentation rather scarce, it seems :-)
<m_3> and then `juju set var=value` commands
<m_3> yeah... some are really easy... some really hard... depends on the service
<themiwi> especially keeping things straight with all that ping-ponging between xxx-relation-changed hooks is a bit daunting
<m_3> yes
<m_3> it's super-powerful though
<m_3> it's auto-negotiation for config params that you usually have to set manually or in config scripts
<themiwi> sure, but you also have to be super-careful
<imbrandon> its a one time investment though :)
<imbrandon> or should be ... heh
<m_3> maybe once per long-term release :)
<imbrandon> right
<imbrandon> :)
<jcastro> imbrandon: heya
<jcastro> hazmat never got your ubuntufication of the sphinx template
<jcastro> can you hook him up so our docs look awesome?
<jcastro> hazmat: this is handy btw, http://jujucharms.com/tools/store-missing
<jcastro> nice one on that one
<imbrandon> jcastro: sure
<imbrandon> jcastro: yea i hadn't sent it to anyone really yet
<imbrandon> like the download button ? heh u should pimp it a lil for me :)
<jcastro> no, you said you styled the stuff for juju.ubuntu.com/docs
<jcastro> the sphinx stuff
<imbrandon> doing it now, is there a reason not to just do a merge req , i mean i dont mind sending them directly to hazmat , but just curious
<imbrandon> and yes thats what i said "sure" to, and then added the bit about the button
<imbrandon> :)
<imbrandon> but yea, let me dig that branch up, it's buried in my ~/Projects folder and then i'll pass it around, give me ~30ish min
<jcastro> well remember I was like "show daniel too a bunch of other ubuntu stuff uses sphinx"
<imbrandon> yup yup
<jcastro> no rush or whatever
<hazmat> imbrandon, just do a merge request
<jcastro> I just happened to be talking about it with kapil
<hazmat> imbrandon, my only concern is if it requires updates to the installation
<imbrandon> hazmat: cool cool , yea i dont mind, was just like ummm ? heh
<imbrandon> nah
<imbrandon> its just some new files in _templates
<hazmat> imbrandon, the build should cron update and run automatically, so as long as there aren't any new sphinx extensions it should just work as a mp proposal
<imbrandon> thats about it, dont even think the biuld rules need updates
<imbrandon> if i rember right
<imbrandon> yup yup
<imbrandon> btw it does use the html build target right ?
<imbrandon> as far as whats published
<imbrandon> not like the all-in-one html or something weird
<jcastro> SpamapS: ok so like brandon is winning with his packages. :)
<SpamapS> jcastro: I can't do anything to thwart the Debian NEW queue
<imbrandon> hahah i'm sure SpamapS will mucho win in the end, i actually had alot of help unknowingly from hazmat doing the footwork over at pypi.python.com a while back
<imbrandon> so i cheated :)
<imbrandon> SpamapS: btw your interview over at omg is on fire, there are over 200 ppl current viewing it and a ton of positive twitter comments ( despite some of the "ohh its not desktop" idiots )
<imbrandon> been like that for like 3 or 4 hours now
<SpamapS> imbrandon: heh cool. :) #2 this week
<koolhead17> SpamapS, :)
<imbrandon> SpamapS: ugh , can we ( read: you ) rename jitsu binary to juju-jitsu ? hehe ( i know i know probably not ) ... there is a conflict with another deployment tool named jitsu and both dont like to live in bin/jitsu at the some time heh
<imbrandon> no biggie though, lets just hope that I'm not most ppl and have both installed , or want to at least
<imbrandon> ( the other jitsu is the Joyent's tool to deploy and manage http://no.de and http://jit.su accounts
<imbrandon> for nodejs )
<SpamapS> imbrandon: oh there's another jitsu?
<imbrandon> yea
<imbrandon> i found out the hard way
<SpamapS> why does node.js take all the best names
<imbrandon> heh
<imbrandon> i went to install juju-jitsu and it was like ummm there is somethign there already
<imbrandon> i looked and rembered
<imbrandon> heh
<imbrandon> i was like damn!
<SpamapS> node also conflicts withs ome broke ass old AX25 packet thing caled node
<imbrandon> but yea its probably a corner case
<imbrandon> heh
<SpamapS> not really a corner case
<SpamapS> I can see them conflicting
<SpamapS> there ought to be a single registry for bin names
<imbrandon> really its used as a wrapper anyhow, so if it was just named juju-jitsu for the bin
<imbrandon> it would probably be fine
<imbrandon> mostly* a wrapper
<SpamapS> I want it called jitsu
<SpamapS> typing juju-jitsu sucks
<imbrandon> cuz once youve done juju-jitsu wrap-juju , it wont matter tho :)
<imbrandon> and only rename the bin not everything :)
<imbrandon> i dont really care, but i did run into it today heh
<SpamapS> I'd be willing to relent and do jjitsu
<imbrandon> heh
<imbrandon> jj :) that would make a good alias, /me adds to .bash_profile
<imbrandon> i wanna add real alias support too btw, like hub has for git , you just "alias git=hub" in bash_profile and go on about your business
<imbrandon> its sweet
<imbrandon> thats one of those $sometimes things
<imbrandon> actually its more like .... eval "$(hub alias -s)"
<imbrandon> but still, same thing, just works if you use zsh or any shell
<negronjl> SpamapS, jcastro, m_3:  Do we know which day is the Charm School planned for during Velocity ?
<SpamapS> negronjl: did we decide to do one w/o the conference blessing?
<negronjl> SpamapS: not sure yet, hence the question :)
<SpamapS> negronjl: IIRC, it was rejected by them
<negronjl> SpamapS: ah.  thx
<SpamapS> http://packages.qa.debian.org/j/juju.html
<m_3> negronjl: nope... dunno
<negronjl> m_3: thx
<imbrandon> SpamapS: saweet
<imbrandon> :P
<imbrandon> SpamapS: here, this one is all for you :) http://www.brandonholtsclaw.com/blog/2012/juju-everywhere#comment-555572949
<imbrandon> Mr Spaleta
<SpamapS> Our biggest fan
#juju 2012-06-13
<surgemcgee> Can I specify the instance type in the charm/config? Can I use the free micro one? Is this only specifiable on the command line when deploying?
<surgemcgee> Settle for 33% answer or better. :)
<imbrandon> no, yes yes
<imbrandon> surgemcgee: ^^
<surgemcgee> Got it figured out. Gotta stop withn the research lazyness.
<al-maisan> Hola! Just looking at the haproxy/metadata.yaml -- it says: "provides: website: interface: http"
<al-maisan> should that not be a requires? i.e. the other way around?
 * al-maisan reads the haproxy manual
<_mup_> Bug #1012497 was filed: Juju should only give security warnings on bootstrap <juju:New> < https://launchpad.net/bugs/1012497 >
<al-maisan> hello there! I ran "juju bootstrap" with "INFO 'bootstrap' command finished successfully" but juju status gives me the dreaded "ERROR Invalid SSH key" message
<al-maisan> any ideas how to resolve this?
<al-maisan> FWIW: I have id_rsa and id_rsa.pub in ~/.ssh and the permissions are set correctly
<al-maisan> I am running juju on an ubuntu 12.04 server
<al-maisan> Version: 0.5+bzr531-0ubuntu1
<al-maisan> BTW, adding a "authorized-keys-path:" entry to environments.yaml makes no difference
<al-maisan> hmm .. after boot-straping juju I can ssh into the created ec2 instance as follows: "ssh -i /home/muharem/.ssh/id_rsa ec2-x-x-x-x.compute-1.amazonaws.com" -- however juju status fails with "ERROR Invalid SSH key"
<al-maisan> any help would be greatly appreciated!
<al-maisan> the "ERROR Invalid SSH key" problems I was having earlier occurred inside an ubuntu 12.04 server kvm image .. when trying juju from a desktop everything works flawlessly
<al-maisan> any idea why juju would have issues when utilized inside a kvm image ..?
<m_3> imbrandon: getting 'The requested URL /jquery-1.6.3.min.js was not found on this server' on your blog
<m_3> imbrandon: awesome job getting the rpms in!
<imbrandon> m_3: kk i'll take a look in a sec, thanks ( its a temp theme anyhow untill i got my good one done :)
<imbrandon> and ty
<imbrandon> working on the juju doc theme atm
<imbrandon> m_3: http://api.websitedevops.com/juju-docs/index.html
<imbrandon> there we go, all fixed up, yea my theme is a mess, something about a mechanic never works on his own car ?? heh
<imbrandon> ty
<imbrandon> hazmat: got a bit more here in a while, i've been busy on it since i did the merge req, figured i had the code out
<imbrandon> heh
<imbrandon> but yea i dident intend to target trunk, ty ty
<hazmat> imbrandon, thank you
<hazmat> looks great
<m_3> imbrandon: yeah, blog header looks good now
<tedg> So I did a "juju debug-hook" and in that shell I can't do a "config-get" ?
<tedg> How do I get the JUJU_AGENT_SOCKET ?
<avoine> tedg: I think JUJU_AGENT_SOCKET should be set for you
<avoine> in the debug-hook shell at least
<tedg> Hmm, it's not :-/
<tedg> This is a subordinate hook, does that make a difference?
<tedg> subordinate charm actually.
<negronjl> 'morning all
<pavolzetor> hello,
<pavolzetor> I am curiosu how much does costs some tiny instance for experiments
<pavolzetor> and how to setup twisted and python
<pavolzetor> thanks :)
<avoine> pavolzetor: what do you want to experiement?
<pavolzetor> I wanted to use U1DB for sync
<pavolzetor> but after some tests
<pavolzetor> it would use a about 10 megs a day, which is a lot for phone
<pavolzetor> s
<pavolzetor> so I want to use AWS (free tier) for experimenting with Twisted and PUSH (data in JSON)
<pavolzetor> it should massively reduce data usage on phone and I can get almost realtime results
<pavolzetor> I will still use U1db for local caching on pc
<SpamapS> pavolzetor: you can use the free tier if you're careful
<SpamapS> pavolzetor: you'll need to use the (undocumented) option 'placement: local' .. and then the charms will all end up on the "juju server" .. since the free tier only gives you one t1.micro. You'll run out of RAM really fast.
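A sketch of where those settings would live in environments.yaml — the `placement` option is undocumented, and the environment name here is hypothetical:

```yaml
environments:
  sample:
    type: ec2
    default-instance-type: t1.micro   # the free-tier instance size
    placement: local                  # pack all units onto the bootstrap node
```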
<pavolzetor> I do not get it, I am going to read docs again
<pavolzetor> and I will ask :)
<pavolzetor> is there any way to stop it charging me?
<pavolzetor> if I somehow exceed
<pavolzetor> or just enter fake carD?
<SpamapS> pavolzetor: no
<pavolzetor> I see
<SpamapS> pavolzetor: its $0.02 per hour and you always get 1GB of data transfer for free
<pavolzetor> I see
<SpamapS> pavolzetor: So The *worst* case is you pay maybe $20 in one month
<pavolzetor> that's okay that :)
<SpamapS> unless you do something dumb like put up a free porn site
<pavolzetor> n
<pavolzetor> I do not plan to do so :)
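The worst-case figure quoted above roughly checks out: one t1.micro left running around the clock at $0.02/hour is about $14–15 for a 30-day month, so "maybe $20" once EBS storage and any transfer overage are added.

```shell
# Hours in a 30-day month at $0.02/hour:
awk 'BEGIN { printf "%.2f\n", 0.02 * 24 * 30 }'   # prints 14.40
```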
<al-maisan> I was having "ERROR Invalid SSH key" problems with juju inside an ubuntu 12.04 server kvm image .. when trying juju from a ubuntu desktop everything works flawlessly.  Any idea why juju would have issues when utilized inside a kvm image ..?
<SpamapS> al-maisan: can you maybe paste the whole error?
<SpamapS> al-maisan: also if you bootstrap from one machine, you'll need the ssh private key from that machine to access the environment from any other machine
<SpamapS> jcastro: https://code.launchpad.net/~clint-fewbar/juju/docs-add-operating-systems/+merge/110135
<SpamapS> wait.. ruh roh
<SpamapS> something went wrong there
<jcastro> ouch, old branch?
<SpamapS> no.. hm
<SpamapS> we need to resolve this
<SpamapS> lp:juju needs to *not* have a /docs
<SpamapS> if we're also going to have a docs series
<SpamapS> lp:~juju/juju/docs
<marcoceppi> SpamapS: I think I had a merge proposal for that
<SpamapS> slightly wrong target
<marcoceppi> SpamapS: https://code.launchpad.net/~marcoceppi/juju/remove-trunk-docs/+merge/106850
<SpamapS> https://code.launchpad.net/~clint-fewbar/juju/docs-add-operating-systems/+merge/110138
<marcoceppi> but I somehow missed hazmat's message
<marcoceppi> I don't think we're missing much, but I'm not going to mill through all those files
<SpamapS> I wonder can't you just merge?
<marcoceppi> merge old docs folder with current doc tranch?
<marcoceppi> branch, even.
<SpamapS> bzr merge -r 432.. ../trunk/docs
<SpamapS> I did that..
<SpamapS> produces I think the diff
<SpamapS> hrm n/m
<slank> I'm having trouble with 'juju debug-hooks'. My environment isn't getting set up on the unit (e.g. values like JUJU_AGENT_SOCKET aren't set)
<SpamapS> slank: they won't be set until a hook is actually executed b
<SpamapS> slank: the prompt you get at first is just a root prompt on the box
<SpamapS> slank: you need to cause a hook to execute so you get the full context
<SpamapS> slank: so, add a relationship, or change a config setting.
<slank> SpamapS: OK, so I've tried: cd /var/lib/juju/units/<unit>/charm/hooks; ./db-relation-changed, and I get an error:
<slank> No JUJU_AGENT_SOCKET/-s option found (when running juju-log)
<SpamapS> slank: no you can't just run it
<SpamapS> slank: you need to tell juju to run it
<SpamapS> slank: the idea is to run the hook *in the context of when it will actually be run*
<avoine> tedg: still there?
<slank> SpamapS: Right, that's what the tmux session is for, isn't it?
<al-maisan> SpamapS: please see http://paste.ubuntu.com/1039536/
<avoine> slank: have the same problem that you had earlier
<avoine> tedg: ^
<SpamapS> slank: the tmux session will pop up a new window when the hook is executed by the agent
<avoine> wrong ping sry
<SpamapS> al-maisan: its possible something went wrong during the installation of your key
<SpamapS> al-maisan: ec2-get-console-output i-36d0414f might give you some clues
<al-maisan> SpamapS: as you could see at the end of the log I was able to ssh manually in the juju control instance
 * al-maisan tries ec2-get-console-output
<slank> SpamapS: aha!
<SpamapS> al-maisan: ah, try juju status again
<SpamapS> al-maisan: I bet it works. It may have been a timing issue
<slank> SpamapS: got it. thanks so much
 * al-maisan tries
<SpamapS> slank: no worries. Its not the most straight forward thing int he world (but it is really really helpful ;)
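The workflow SpamapS describes, as a two-terminal sketch (service and option names hypothetical):

```shell
# Terminal 1: attach to the unit. This opens a tmux session whose first
# window is just a root shell on the box -- no hook context yet.
juju debug-hooks mysql/0

# Terminal 2: cause a hook to fire. juju then runs the hook inside the
# tmux session, in a new window where JUJU_AGENT_SOCKET and the rest of
# the hook environment are set.
juju set mysql some-option=newvalue     # fires config-changed
# or:
juju add-relation mysql wordpress       # fires the relation hooks
```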
<al-maisan> SpamapS: unfortunately the problem is persistent: http://paste.ubuntu.com/1039547/
<al-maisan> SpamapS: here's the ec2-get-console-output BTW: http://paste.ubuntu.com/1039549/
<SpamapS> al-maisan: can you ssh without adding the '-i ~/.ssh/id_rsa' ?
<SpamapS> al-maisan: perhaps your ssh configuration is not picking up that default key location and trying it?
<al-maisan> SpamapS: Permission denied (publickey).
<SpamapS> al-maisan: check ~/.ssh/config and /etc/ssh/ssh_config  ... they are likely doing something weird
<al-maisan> SpamapS: will do .. thanks!
<SpamapS> al-maisan: you will need to fix that, as juju just does 'ssh <whatever the hostname is>'
<al-maisan> SpamapS: hmm .. I just removed ~/.ssh/config altogether .. no effect
<al-maisan> Oh wait .. now "ssh -i ~/.ssh/id_rsa ec2-107-21-179-132.compute-1.amazonaws.com" does not work either
<al-maisan> SpamapS: hmm .. when I specify the "ubuntu" user on the ec2 machine then things work e.g. "ssh ubuntu@ec2-107-21-179-132.compute-1.amazonaws.com"
<al-maisan> juju seems to be using port 2181 on the remote machine .. is that right?
<al-maisan> Spawning SSH process with remote_user="ubuntu" remote_host="ec2-107-21-179-132.compute-1.amazonaws.com" remote_port="2181" local_port="53059".
<SpamapS> al-maisan: right, it forwards communication through that ssh forwarding
<SpamapS> al-maisan: so, does juju status work yet?
 * al-maisan tries
<al-maisan> YES
<al-maisan> Oh, wow!
<al-maisan> That's funny, it just seems to be taking a very long time.
<al-maisan> SpamapS: thanks for looking into this!
 * al-maisan tries timing this..
<al-maisan> hmm .. now it just works nicely .. must have been something in my ssh configuration
<al-maisan> Thanks again!
 * al-maisan bows out
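For the record, the kind of ~/.ssh/config stanza that makes a bare `ssh <ec2-hostname>` behave the way the manual tests above expect (host pattern and key path hypothetical; as the debug line shows, juju itself passes the ubuntu user explicitly):

```
# ~/.ssh/config
Host *.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/id_rsa
```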
<kapilt> almaisan-away: all commands wait till the env finishes bootstrapping fwiw
<kapilt> Jcastro ping
<jcastro> kapilt: hi
<kapilt> Jcastro 8:30am fwiw
<jcastro> kapilt: is when we start?
<jcastro> kapilt: nice, I guess I'll leave at 4am, heh. :)
<kapilt> I forwarded you what I know
<jcastro> kapilt: ack'ed
<bmullan> there seems to be a lot of Charms on the MIA page?  http://jujucharms.com/tools/store-missing
<bmullan> is that the extent of status for them?
<SpamapS> bmullan: a few of those are because of weird branch churn that we had for a while
<SpamapS> bmullan: that report is new, and we're not really able to dive into those issues because the charm store isn't very chatty about why it does or does not have something in it
<SpamapS> hopefully that will change soon
<bmullan> its been frustrating experimenting with the likes of lxc or nova and find 1/2 way through that some of the charms are missing
<SpamapS> bmullan: indeed. We'll get better at that as we learn. :)
<SpamapS> bmullan: they're always available via bzr ats 'bzr branch lp:charms/foo'
<SpamapS> bmullan: you just have to build a local repo of them.. and use '--repository ~/charms local:foo'
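Spelled out as a sketch (charm name hypothetical; depending on your juju version the charm may need to live under a series subdirectory such as ~/charms/precise/foo):

```shell
mkdir -p ~/charms
bzr branch lp:charms/foo ~/charms/foo        # fetch the charm from Launchpad
juju deploy --repository ~/charms local:foo
```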
#juju 2012-06-14
<phe_13> marcoceppi, here am i
<marcoceppi> phe_13: are you running this command from your machine, or the AWS machine?
<phe_13> my machine
<marcoceppi> Okay, do you have your juju environment setup to connect to AWS?
<marcoceppi> https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment-using-ec2
<phe_13> sorry man, thats my AWS machine, but it in my own DMZ
<phe_13> # juju bootstrap
<phe_13> Could not find AWS_ACCESS_KEY_ID
<phe_13> 2012-06-13 21:16:24,605 ERROR Could not find AWS_ACCESS_KEY_ID
<marcoceppi> phe_13: what version of Ubuntu are you running?
<phe_13> debian wheezy
<pkimber> #join pocoo
<zyga> hey, https://juju.ubuntu.com/ says "If you are testing Ubuntu 12.04"... I think we should apply s/testing/using/
<zyga> how can I fix that?
<zyga> juju does not work well when co-installed with virtualbox
<zyga> juju bootstrap fails because virbr0 is already created
<twobottux> aujuju: Invalid SSH key error in juju when using it with MAAS <http://askubuntu.com/questions/147714/invalid-ssh-key-error-in-juju-when-using-it-with-maas>
<bbcmicrocomputer> I'm guessing it is, but is the transmission of config data between the juju client to the cloud (and subsequently to the deployed instances) encrypted?
<marcoceppi> bbcmicrocomputer: data between client and bootstrap node is encrypted to my understanding, however I'm not sure about node -> node communication (though I would assume it is)
<imbrandon> via ssh from you to the bootstrap node yes, between clients no idea
<imbrandon> heya marcoceppi
<marcoceppi> o/
<bbcmicrocomputer> imbrandon, marcoceppi: thanks
<marcoceppi> imbrandon: I've got fixes for the wordpress session issue *maniacal laugh*
<imbrandon> ann yea i figured out what it was tooo
<imbrandon> ahh*
<imbrandon> the secret keys were diffrent on the nodes
<marcoceppi> yup
<marcoceppi> \o/ soo easy.
<imbrandon> yea , when i noticed that i was like yay
<imbrandon> heh
<marcoceppi> I should have generic wp charm ready end of next week
<imbrandon> sweet, i am hoping to have the drupal one tomarrow ( drupal 7 not 6 )
<imbrandon> 6 is in the store
<imbrandon> but not "great"
<marcoceppi> imbrandon: is there an nginx proxy charm yet?
<imbrandon> thats comming with the 7 charm
<marcoceppi> cool
<imbrandon> its really a group of charms
<imbrandon> nginx and nginx proxy and drupal and drupal-site
<SpamapS> imbrandon: we need a way to define a strong primary<->subordinate relationship.
<SpamapS> imbrandon: I like the way you're going w/ drupal/nginx .. but its going to make the setup pretty non-intuitive.. we need "stacks"
<imbrandon> yea i was working on a dependancy hack
<imbrandon> but yea we need a real way
<imbrandon> the problem i came accross in the depend hack was it made it hard to use the charm outside of that stack
<imbrandon> e.g. nginx-proxy dont HAVE to use nginx as the server etc
<imbrandon> SpamapS ( or hazmat ) yall know what the deal is with the docs build, i see the error but no idea why it dident build
<SpamapS> error?
<imbrandon> yea something about the makefile conflict, i am guessing thats it
<imbrandon> one sec
<imbrandon> erm cant fin it in LP right now
<imbrandon> i was checking earlier that the merge went ok after hazmat approved it yesterday
<imbrandon> and noticed it dident build , some page listed a makefile conflict but now i cant find it , heh
<cheez0r> hrm, I'm running a juju bootstrap in a maas environment, and when I run juju -v status, it says it's trying to SSH connect to remote port 2181; why would that be?
<cheez0r> SSH is running on port 22 of the targeted nodes, leading to the connection for juju status timing out.
<imbrandon> zookeeper
<cheez0r> nothing on the targeted node is listening to port 2181 however.
<imbrandon> SpamapS: +1000 on the missing-hook idea
<m_3> imbrandon: yeah, that sounds awesome
<lucian> hello. i'm trying to find out about the status of rackspace support
<lucian> there is a ticket from 2011, but that's all i could find
<m_3> lucian: no native openstack provider yet... you still need to have the ec2 api enabled.  so no love on the rackspace cloud proper yet
<m_3> (afaik)
<lucian> m_3: ok, thanks
<m_3> np
<negronjl> 'morning all
<m_3> negronjl: mornin
<negronjl> 'morning m_3
<imbrandon> heya
<negronjl> 'morning imbrandon
<hspencer> congrats on getting juju into Debian unstable..good work guys
<m_3> SpamapS: whoohoo!!  ^
<koolhead17> hspencer, :)
<koolhead17> SpamapS, siir
<SpamapS> hspencer: thanks :)
<negronjl> jcastro: ping
<koolhead17> SpamapS, hello sirr
<SpamapS> koolhead17: howdy
<shazzner> I'm curious, does juju on debian bootstrap debian nodes?
<SpamapS> no
<SpamapS> debian lacks a few things
<shazzner> ah ok, just curious
<SpamapS> actually its possible that the local provider could be made to do it
<SpamapS> I haven't looked at the lxc debian template to see
<shazzner> huh
<SpamapS> But the code itself calls 'lxc-create -t ubuntu' so.. no ;)
<shazzner> ah ok cool
<shazzner> having to rewrite charms would be a waste of effort anyway
<SpamapS> shazzner: I think at least some charms will work fine crossing over from debian and ubuntu
<SpamapS> shazzner: but yeah, I don't see much point honestly
<SpamapS> Maybe if somebody wants to spin up on architectures that ubuntu doesn't have
<bkerensa> jcastro: any update on the HP thing? I got a call from someone in their engineering team yesterday randomly
<m_3> jcastro's at a conference today
<robbiew> bkerensa: I can confirm you are on the list
<bkerensa> robbiew: thanks
<robbiew> and we received confirmation from HP that they have you
<bkerensa> robbiew: So I can start using a instance now?
<robbiew> as to why someone from engineering would call...no idea...job offer? :)
<robbiew> hmm
<robbiew> one sec
<robbiew> bkerensa: now THAT I don't know...let me check with our internal liason...one sec
<robbiew> bkerensa: not getting a response, I shoot him an email and let you know
<bkerensa> kk
<robbiew> bkerensa: our internal HP contact just responded and said he'll follow up...translation, no one knows :/
<bkerensa> :)
<mars> SpamapS, around?
<SpamapS> mars: I am, wassup?
<mars> Hey SpamapS, I replied to bug 1006553, and I have a live runaway process on my system right now.  I was wondering if you needed to gather any other feedback while I have it?
<_mup_> Bug #1006553: Juju uses 100% CPU after host reboot <juju:New> < https://launchpad.net/bugs/1006553 >
<mars> SpamapS, it isn't hard to reproduce, takes about a day, but I thought if you needed more info, a live discussion would speed things up.  But if you prefer to keep it in the bug, that's cool too.
<SpamapS> mars: yeah hm
<SpamapS> mars: can you strace -f -p $thepid -o /tmp/foo.txt .. wait about 5 seconds, then pastebin that file?
<SpamapS> oh now I see your reply, reading
<mars> just for fun, the 5 second tracelog is 8.2M :)
<SpamapS> mars: takes a day is a bit weird
<mars> SpamapS, well, it doesn't start as soon as the system is booted.  I have to wait for the process to go nuts.  I haven't measured exactly, but 24 hours is enough.
<SpamapS> thats very weird
<mars> SpamapS, fwiw, zookeeper has a cron entry in cron.daily
<SpamapS> mars: zookeeper or zookeeperd ? (meaning, the package names)
<mars> zookeeper
 * SpamapS checks that out
<SpamapS> mars: well that doesn't seem to cause the issue
<SpamapS> mars: in fact that just exits immediately
<mars> SpamapS, what limits the machine agent connection loop?  You said yours tries every few seconds, whereas mine is in a busy-wait loop
<SpamapS> mars: I think mine is blocked on something else
<SpamapS> mars: is anything landing in $datadir/machine-agent.output ?
<mars> SpamapS, nope
<SpamapS> mars: actually it might even be /tmp/juju-$user-$envname-machine-agent.output
<mars> the only file I have is machine-agent.log, which I posted to the original bug report
<SpamapS> mars: do you have the file in /tmp tho?
<mars> SpamapS, you mean, my data directory? Yes, that is: /tmp/local-juju/mars-local/machine-agent.log
<SpamapS> mars: no the upstart job seems to redirect output to a special file
<SpamapS> mars: check /etc/init/juju-mars-local-machine-agent.conf
<SpamapS> mars: it should be redirecting output somewhere. Check that file.
<SpamapS> mars: I'm trying to get a way for you to run the agent in the python debugger
<SpamapS> mars: hopefully you can run it that way, and when it goes wack-o again, ctrl-c will drop you wherever it is polling
<SpamapS> hazmat: ^^ your expertise in python debugging would be helpful here :)
<SpamapS> jimbaker: ^^
<mars> SpamapS, found it: machine-agent.output is empty.  file-storage.output has Python exceptions in it, but it isn't growing.
<SpamapS> hrm.. debugger doesn't actually help that much because of twisted
<mars> You need to embed one of those SIGUSR1 "dump trace and exit" hooks :)
<SpamapS> mars: can you pastebin 'sudo lsof -n -p ...' whatever the pidof is
<SpamapS> mars: yeah
<mars> SpamapS, http://pastebin.ubuntu.com/1041631/
<mars> This looks promising: http://stackoverflow.com/questions/132058/showing-the-stack-trace-from-a-running-python-application
<SpamapS> mars: the second answer looks helpful
<SpamapS> I think threading may be getting int he way here too
<SpamapS> mars: can you attach to the process with gdb -p $thepid and do a 'bt' then 'thread 2' then 'bt' ?
<SpamapS> mars: In mine, there are three threads (1 2 3) and 2 are inside libzookeeper
<SpamapS> mars: thanks for going through this btw
<mars> SpamapS, np
<mars> SpamapS, same here, three threads, Py_Main, and two in libzookeeper
<mars> SpamapS, one of them is in setsockopt, called from zookeeper_interest in libzookeeper.  Is that what you see?
<SpamapS> mars: no actually
<SpamapS> #0  0x00007ffc5c18db03 in __GI___poll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
<SpamapS> mars: I'm beginning to think this is some weird libzookeeper bug
<SpamapS> mars: either way, I think we should actually just take out the 'start on ...' from the local provider agents until zookeeper is started as well
<mars> SpamapS, if it is Python, I can just hack a fix in there to test it out
<SpamapS> mars: anyway, I think the right fix is to not start the agents on reboot
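The inspection recipe used in this exchange, collected as a session sketch ($PID stands for the spinning machine-agent pid):

```
strace -f -p $PID -o /tmp/foo.txt   # syscall trace; stop after ~5s, pastebin it
sudo lsof -n -p $PID                # open files and sockets of the process

gdb -p $PID                         # attach to the live process
(gdb) bt                            # backtrace of the current thread
(gdb) info threads                  # here: Py_Main plus two libzookeeper threads
(gdb) thread 2                      # switch threads
(gdb) bt
```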
#juju 2012-06-15
<mars> SpamapS, alright.  I need to sign off.  Thanks for looking at my problem.  I'll watch the bug for a fix that I can test.
<SpamapS> mars: should be soon I think. :)
<hazmat> SpamapS, back
<hazmat> there are some pyzk bugs with fixes in trunk that haven't been backported / packaged but nothing relatable
<SpamapS> hazmat: seems odd that after a while it would go from lazy polling to furious reconnecting
<hazmat> mars, is zookeeper running?
<SpamapS> hazmat: anyway, I say we just stop trying to start agents that will never work
<hazmat> ps aux | grep java
<SpamapS> hazmat: the whole point is that zk is not running
<hazmat> SpamapS, so two options make local envs survive restarts, or remove upstarts for all of them
<SpamapS> hazmat: well I'd like them to survieve
<SpamapS> hazmat: seems that we'd need to re-start zookeeper and it would work
<SpamapS> hazmat: oh, and the containers. ;)
<SpamapS> hazmat: either way, the half-done bit needs to go
<SpamapS> hazmat: I say remove the start stuff until we can get around to finishing the feature
<hazmat> SpamapS, sounds good to me
<surgemcgee> Well, running 6 m1.small instances got a little pricey. I have two large amazon bills. Ahhhhh =:s
<surgemcgee> I blame that on you guys. Need to put the handy constraints options more close to the front of the documentation.
<SpamapS> surgemcgee: I topped $200 this past month, and thats just from testing stuff for 1 - 2 days at a time ;)
<SpamapS> surgemcgee: constraints only landed about 2 months ago.. so.. before that you just had to eat it. ;)
<_mup_> Bug #1013469 was filed: Reactor issues (should switch to solar) <juju:New> < https://launchpad.net/bugs/1013469 >
<twobottux> aujuju: What is the correct way to set config options to a juju service unit with a file? <http://askubuntu.com/questions/151075/what-is-the-correct-way-to-set-config-options-to-a-juju-service-unit-with-a-file>
<twobottux> aujuju: Can't get juju to deploy, problem with ec2 keypair? <http://askubuntu.com/questions/151088/cant-get-juju-to-deploy-problem-with-ec2-keypair>
 * negronjl is done for the night
<imbrandon> night
 * imbrandon just got done with the second round of updates for the doc theme
<rogpeppe> what's a good idiom for detecting when all my relations have been joined? store the joined status in a file?
<imbrandon> relations can be joined at any time, even weeks after a deployment when new nodes or services are added
<rogpeppe> imbrandon: what if i depend on those relations, so i need them to be fulfilled before i can proceed?
<rogpeppe> imbrandon: i guess i'm wondering if there's some way of querying the status of the other relations while within a relation-joined hook
<imbrandon> then dont do anything until relation-joined
<imbrandon> almsot every charm has something like that, e.g wordpress cant work without a db
<imbrandon> so it waits for the mysql to join relation
<rogpeppe> imbrandon: that's fine. but in this case i want to wait for *both* relations to be joined
<rogpeppe> imbrandon: and i'm wondering what the idiomatic way to do that is
<imbrandon> ok so do some setup in one and some in the other and make them inter depend
<rogpeppe> imbrandon: by storing state in a file?
<rogpeppe> imbrandon: or is there some other way to track local state?
<imbrandon> could be that or checking for a file that only exists on setup, like a settings.php
<imbrandon> config-set would be an option but it has not be coded yet :)
<rogpeppe> imbrandon: i'm not sure that config-set is quite what i'm after, as that's global to the service, but this is something local to the unit
<rogpeppe> config-set --local, perhaps
<rogpeppe> imbrandon: BTW i'm not quite sure what you mean by "checking for a file that only exists on setup, like a settings.php"
<imbrandon> umm so i am wordpress, i need a webserver and i need a db
<rogpeppe> imbrandon: the other end of the relation is remote, right? so there won't be any associated files.
<imbrandon> so i setup myself on www when i get the ww relation
<imbrandon> but ther eis no settings file as that is made by the db relation
<imbrandon> so i know its not be setup yet by [[ -f /path/to/settings.php ]]
<imbrandon> and can wait till next relation fires
<rogpeppe> imbrandon: ah, so you're tracking which relations have been joined by which files you've created. i see.
<rogpeppe> imbrandon: and then in both relation-joined hooks you test for both files and start the service if they exist.
<imbrandon> not really cuz very rearely do i reeally care, i'll set up the relation regaurdless of the other state
<imbrandon> the charm wont work untill they all come about anyhow :)
<imbrandon> but if i did care that would be the simplest way
<rogpeppe> imbrandon: thanks a lot
<imbrandon> np heh :)
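The marker-file pattern rogpeppe and imbrandon settle on, as a runnable sketch: each relation-joined hook touches a file, and both hooks end with the same check so whichever relation arrives last starts the service. The paths and `start_service` stand-in are hypothetical.

```shell
#!/bin/bash
STATE_DIR=/tmp/mycharm-state      # a real charm would use somewhere persistent
mkdir -p "$STATE_DIR"
start_service() { echo "starting service"; }  # stand-in for the real start cmd

# In db-relation-joined:
touch "$STATE_DIR/db-joined"
# In cache-relation-joined:
touch "$STATE_DIR/cache-joined"

# Shared tail of BOTH hooks: proceed only once every relation is present.
if [ -f "$STATE_DIR/db-joined" ] && [ -f "$STATE_DIR/cache-joined" ]; then
    start_service
fi
```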
<imbrandon> SpamapS: btw did mention i LOVE sphinx doc system, like it has some funky parts to work arround but for real, i even installed the PHP and JavaScript domain plugings to document my personal libs and such
<m_3> imbrandon: do you use trello mobile?  jorge's recent change to ubuntu group borked my mobile boards
<imbrandon> yea, but i havent logged into it in a few days
<imbrandon> lemme check
<imbrandon> ios right ?
<m_3> not sure if I'm just being dumb... yeah ios
<imbrandon> ahh crap ipad is dead, i'll check it here in afew once it charges a lil
<imbrandon> should only take a few min to charge enought to come on
<m_3> np.. no hurry... just wondering if I was the only one
 * m_3 can also just rtfm :)
<imbrandon> heh yea, i actually leave that and mail clien up on my ipad alot
<imbrandon> just sitting off to the side
<negronjl> 'morning all
<imbrandon> kinda nice to just thumb through but not have a tab
<imbrandon> heya negronjl
<m_3> negronjl: yo
<negronjl> 'morning imbrandon
<negronjl> m_3: hi
<m_3> wish they had a native ipad version... that'd be really cool with trello
<m_3> visual layouts n stuff
<imbrandon> yea, all the effect with the movement would be nice
<imbrandon> like iphoto, well the new one
 * m_3 nod
<imbrandon> i think this is thr first time i;ve charged this one since i got it
<imbrandon> about a month ago ( the new 3, traded off my 2 )
<imbrandon> but honestly i can tell no diffrence at all, even with the retna display
<imbrandon> seems ok here m_3
<m_3> imbrandon: ok, thanks
<jcastro> SpamapS: hey clint, may I brainstorm an idea with you?
<jcastro> have we thought about the equivalent of like a debian watch file?
<jcastro> so we can test/know when a charm is out of date vs. upstream?
<imbrandon> thats what i was tracking with the x-vcs , we could approve a metadata field easy enough for a upstream url and then have charm tools check it
<imbrandon> like watch does, via the charm store too
<imbrandon> like bts
<SpamapS> jcastro: No we haven't thought about it, but I'd love to have something like it
<SpamapS> And there is a discussion in Debian right now to add Vcs to watch files
<hazmat> imbrandon, i'm starting to wonder if this sphinx bootstrap layout wouldn't be better (retains navigation tree) .. demo -> http://code.scotchmedia.com/engineauth/docs/index.html code ->
<imbrandon> hazmat: yup, already on it
<imbrandon> ;)
<hazmat> imbrandon, +1 on upstream url
<hazmat> i was thinking how to get that into the browser with google search api ;-)
<hazmat> but ofc a metadata field for it would be much nicer
<imbrandon> heh yea :)
<imbrandon> the feeds api is great for that btw, if you ever dig more
<jcastro> SpamapS: ok, and one other thing, where'd you put in the docs the info for other distros?
<imbrandon> but yea a dedicated field/fields for tarball/vcs
<imbrandon> would rock
<imbrandon> jcastro: http://local.assets-online.com/docs/operating-systems.html
<imbrandon> err . correct domain
<imbrandon> thats my local copy heh
<jcastro> got it
<imbrandon> hazmat: yea i ve got an update 3.4 done with better navigation
<imbrandon> closer to the old way
<imbrandon> but still ubuntu like
<imbrandon> probably will need more tweaks but much better than what i put up yesterday
<imbrandon> btw i am splitting out the theme so other sphinx users in canonical/ubuntu can use it too + i have a "base" plain .html version i've been mirroring too so that i can convert it to drupal/wordpress/django easily as well
<imbrandon> for the little one-off canonical non-webteam or pre-web-team sites
<imbrandon> heh
<imbrandon> but yea my .plan hazmat was to utilize the responsive css, and the 1170 and 980 screens use the tree on the side
<imbrandon> and the smaller ones and tablets use a dropdown
<imbrandon> its what i've been working on between nginx/spdy stuff this morning
<hazmat> imbrandon, sounds good to me
<hazmat> imbrandon, nginx spdy is pretty experimental was my understanding
<imbrandon> also noticed i hard coded a few http:// and ssl errors are throwing, got that fixed up too
<hazmat> imbrandon, awesome, thanks
<imbrandon> SpamapS: so can we add upstream: and vcs: pweease ! *said in his best kid on Christmas day voice*
<jcastro> SpamapS: are you planning on adding it more or should I post this to the list?
<SpamapS> jcastro: I have no plans. ;)
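As a sketch, the watch-style fields imbrandon is asking SpamapS to approve might look like this in a charm's metadata.yaml; `upstream:` and `vcs:` are the proposed (hypothetical) fields from the conversation above, not options juju actually recognised, and the charm name and URLs are placeholders.

```yaml
# Hypothetical metadata.yaml additions -- the upstream:/vcs: fields
# discussed above are a proposal, not an implemented juju feature.
name: myapp
summary: Example charm metadata with proposed watch-style fields
upstream: http://example.com/myapp/releases   # tarball/release page to watch
vcs: https://github.com/example/myapp         # upstream source repository
```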
<m_3> SpamapS: yo
<m_3> SpamapS: I'd like to kick off a jjj build
<m_3> SpamapS: is there anything that should happen before `make dist`?
<SpamapS> m_3: autoreconf :)
<m_3> SpamapS: yeah... got all that
<SpamapS> actually make dist probably does that
<SpamapS> well no
<SpamapS> make dist is after .. anyway
<m_3> SpamapS: just can't quite tell if I need to bump the vers
<SpamapS> m_3: no, it should be self contained
<m_3> awesome
<SpamapS> m_3: only bump the version if you're doing an actual release :)
<SpamapS> should be like, 0.11 or something right now
<m_3> so `make dist` now will create 0.10.1?
<m_3> ah
<SpamapS> no make dist does not create a version.
<m_3> is there also build recipe to kick off manually from the web or is that handled in make dist?
<m_3> I see "create a release" button on the web
<SpamapS> m_3: there's a build recipe attached to the trunk
<SpamapS> m_3: it points at the stable branch..
<SpamapS> m_3: unfortunately we can't use {latest-tag} in bzr build recipes.. or it would be fully automatic
<m_3> SpamapS: you have any problems with me creating a new release version with jim's changes (they're approved and merged into trunk now)
<m_3> (watch subcommand)
<SpamapS> m_3: the release procedure is to bump the version in configure.ac, commit+tag that, push that to trunk+stable, then bump again and push that to trunk only
<SpamapS> m_3: then make dist, create release, upload tarball, etc. etc.
<SpamapS> m_3: would love to automate this so its 'make release'
<m_3> hmmm... wow, didn't think I'd need to upload tarballs... ok
<SpamapS> m_3: the final step is to edit the recipe and bump the revision to the next stable version, and request a build
<SpamapS> that last step is the most annoying
<SpamapS> because its just waiting on an update to launchpad to be automatic :-P
<SpamapS> m_3: you don't *have* to push tarballs
<m_3> before I get started on that... bigger question: should this process sync up with bigger cycle milestones?  or is it ok to be when we feel like it?
<SpamapS> m_3: I just wanted to make it easy for anybody who wants to run it
<SpamapS> m_3: jjj is experimental. We release whenever.
<SpamapS> often :)
<SpamapS> 1 commit is enough
<SpamapS> :)
<SpamapS> anyway, I have to run and do some errands
<m_3> SpamapS: cool
<SpamapS> bbl
<m_3> SpamapS: I'll work on the bump... I'll try to ping you only as last resort :)
<m_3> thanks!
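The release dance SpamapS walks through above (bump the version in configure.ac, commit+tag, push to trunk and stable, bump again on trunk only, then make dist) could be scripted roughly like this. This is a dry-run sketch: `run` only prints each step, and the branch URLs, sed pattern, and version numbers are assumptions rather than juju-jitsu's actual layout. The even/odd split follows SpamapS's later correction that even versions are releases and odd ones are dev.

```shell
#!/bin/sh
# Dry-run sketch of the manual jjj release procedure described above.
# Assumption: the version lives in configure.ac as AC_INIT([juju-jitsu], [X.Y]).
VERSION="0.12"   # the release version (even = release)
NEXT="0.13"      # bump trunk back to an odd dev version afterwards

run() { echo "+ $*"; }   # print each step instead of executing it

run sed -i "s/\[0\.11\]/[$VERSION]/" configure.ac   # bump to release version
run bzr commit -m "release $VERSION"
run bzr tag "$VERSION"
run bzr push lp:juju-jitsu          # trunk
run bzr push lp:juju-jitsu/stable   # stable
run sed -i "s/\[$VERSION\]/[$NEXT]/" configure.ac   # bump again, trunk only
run bzr commit -m "open $NEXT development"
run bzr push lp:juju-jitsu
run autoreconf -i
run make dist                       # produces juju-jitsu-$VERSION.tar.gz
```

Dropping the `run` wrapper would execute the steps for real; the remaining manual steps (create release, upload tarball, edit the build recipe) happen on Launchpad's web UI.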
<MarkDude> imbrandon, pingy
<imbrandon> sup MarkDude
<MarkDude> Good news on Fedora JUJU?
<imbrandon> yea it's in the wild, did not see the 1000 tweets from everyone? heh
 * MarkDude has some that may help with the process, now the questions start
 * MarkDude has had time taken by ladyfriend
<imbrandon> seriously tho it's on github, all ready to start and take a beating, i noticed there is a zookeeper issue i need to fixup
<imbrandon> but it works :)
<MarkDude> also we are STILL arguing over names in Fedora
<imbrandon> hahaha
<imbrandon> nice
<MarkDude> Another board meeting
<MarkDude> some polls
<MarkDude> changing the voting system
 * MarkDude called a few people humorless
<MarkDude> due to their lack of humor
<imbrandon> http://github.com/jujutools/rpm-juju should be where it is atm
<imbrandon> btw
<imbrandon> heh make it beefy
<imbrandon> just beefy
<imbrandon> :)
<MarkDude> They are pushing for a theming way to do it
 * MarkDude has suggested BBQ
<imbrandon> btw you should have seen my gimp job i posted with the rpm's
<imbrandon> check out brandonholtsclaw.com its still on the front page
<imbrandon> MarkDude: but yea i'm sure that they are not perfect, i'm more than happy to take bugreports or help or whatever from the testers
<MarkDude> Im putting it out now on IRC
<MarkDude> Then will go to MLs
<imbrandon> if they do file a bug report , i'll give ya the tracker url
<imbrandon> one sec
<imbrandon> https://github.com/jujutools/rpm-juju/issues
<imbrandon> and patches more welcome too as i'm not a native rpm'er :)
<imbrandon> i was gonna try that suse build service soonish to try and build centos / suse / fedora packages all from the git repo automatically
<imbrandon> just hadn't got that far :)
<MarkDude> So Jeff Spatula is wrong about charms needing to be hosted on LP- am I correct?
<imbrandon> they dont NEED to be, it would be easy for yall if they were
<imbrandon> just means you have to setup your own store if you want to not use LP ( there are talks about the store moving to github , not sure where that's gonna go tho )
<jcastro> negronjl: mira mira
<jcastro> when do you want to do the graphs for the charm store?
<jcastro> hazmat: what's the refresh interval on the review queue, I believe you said 15 minutes?
<hazmat> jcastro, yup
<hazmat> jcastro, max delay with the http cache is 18m
<_mup_> Bug #1013457 was filed: twistd still autostarted for juju after juju is removed from system <juju> <storage-server.log> <twistd> <juju:Confirmed> <juju (Ubuntu):Triaged> < https://launchpad.net/bugs/1013457 >
<SpamapS> m_3: how did it go?
<m_3> SpamapS: almost there!!!
<m_3> on the last step... can't seem to edit the recipe
<m_3> is it owned by you?
<m_3> https://code.launchpad.net/~clint-fewbar/+recipe/juju-jitsu-stable
<SpamapS> m_3: let me transfer it to juju-jitsu
<m_3> SpamapS: also, I see a deb version in the recipe (0.10-0stable1) but no updates in the changelog in lp:juju-jitsu/packaging... did you just make up 0.10-0stable1?
<SpamapS> m_3: https://code.launchpad.net/~juju/+recipe/juju-jitsu-stable
<SpamapS> m_3: yes I just made that up
<m_3> cool... trying again now
<SpamapS> m_3: Have to run. TTYL
<m_3> ok, thanks... that seemed to work
<m_3> SpamapS: awesome... it built
<hazmat> SpamapS, m_3, jamespage around?
<hazmat> trying to figure out python-dbg stuff to trace down a deadlock
<hazmat> but it can't seem to import any extensions on precise
<hazmat> it always fails with ImportError: /usr/lib/python2.7/dist-packages/apt_pkg.so: undefined symbol: Py_InitModule
<hazmat> which is troubling
<hazmat> nm
<hazmat> just removing sys.excepthook gets rid of apport which was the problem
<hazmat> failure in the failure system ;-)
<hazmat> hm... didn't work as well as i had hoped
#juju 2012-06-16
<imbrandon> is the go port done yet  ...
 * imbrandon wishes it would hurry so I can make a provisioning agent to act as a source of truth for juju, since juju can't be used for provisioning servers easily
<imbrandon> hrm maybe instead of puppet inside of juju maybe juju inside of puppet ... /me digs a little
<kees> hi! how do I deploy multiple services onto the same machine? the docs are eluding me at the moment.
<imbrandon> only subordinate charms can be deployed to the same machine as another service
<lifeless> kees: short answer is you don't, in general.
<kees> yeah, looks like it only works if the charm itself was designed for it.
<kees> ah well. was hoping to start etherpad without needing 2 machines :)
<tshauck> hi, is it possible to get access to charms that aren't in the store? (or is it possible to get a gunicorn / tornado?)
<imbrandon> sure just use cs:~<lpname>/precise/<charmname>
<imbrandon> or --repository ~/charms local:<charmname>
<tshauck> thanks - I'll take a look
<tshauck> I'm just feeling my way through juju, but it's pretty cool so far
<tshauck> what does lp stand for in lpname?
<imbrandon> launchpad
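The two deploy forms imbrandon gives above can be written out like this. `someuser` and `mycharm` are placeholders, and the small helper only assembles the charm-store URL so the syntax is easy to check; the actual `juju deploy` invocations are shown in the comments.

```shell
# The two deploy forms from the conversation above, cleaned up.
# "someuser" and "mycharm" are placeholders.
charm_url() {  # usage: charm_url <launchpad-user> <series> <charm>
    echo "cs:~$1/$2/$3"
}

# From a personal branch in the charm store:
#   juju deploy "$(charm_url someuser precise mycharm)"
# From a local repository laid out as ~/charms/precise/mycharm:
#   juju deploy --repository ~/charms local:mycharm
charm_url someuser precise mycharm   # prints cs:~someuser/precise/mycharm
```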
#juju 2012-06-17
<hazmat> charmworld getting rebooted, due to ec2 scheduled downtime, back in a few
<imbrandon> oh noes!! heh j/k
 * m_3 primus on the brain... hoodlyhoodly bass
<hazmat> and its back
<m_3> yup... totally screwed my system yesterday grabbing something out of a really old backup... as root :(
 * m_3 palms forehead
<imbrandon> ouch
<negronjl> jcastro: ping
<negronjl> hazmat: ping
<negronjl> m_3: do you know the maas password for the maas/openstack/juju setup ?
<hazmat> negronjl, pong
<negronjl> hazmat: I'm working with the hardware setup for MaaS and was wondering if you know the password for the admin user
<negronjl> hazmat: I have tried a few but I'm stuck
<hazmat> negronjl, let me grep the src
<negronjl> hazmat: thx
<hazmat> negronjl, have you done a createadmin script?
<hazmat> negronjl, it looks like that sets up the password
<negronjl> hazmat: no ... the setup should already have one and I don't know what will happen if I make another one
<hazmat> negronjl, if you're using the sample data.. its admin/test
<negronjl> hazmat ... I'll check ... brb
<negronjl> hazmat: aha !!!  thx.
<hazmat> negronjl, np
<negronjl> hazmat: do you happen to know the password for the ubuntu user on the laptop that is serving as the maas server ?
<hazmat> negronjl, oh.. you mean the 8box demo cluster?
<negronjl> hazmat: I have 9 HP box cluster here ( I think 8 live and 1 spare ) and a Dell laptop that is acting as the server.
<hazmat> negronjl, no.. robbiew, daviey, adamg, francis would know better
<negronjl> hazmat: Cool ... I am talking to Daviey as well.
<negronjl> hazmat: thx for your help though :)
<_mup_> txzookeeper/errors-with-path r46 committed by kapil.foss@gmail.com
<_mup_> merge trunk
<SpamapS> m_3: thanks for trying on the jjj release. You only messed one thing up. odd is "dev", even is "release"
<SpamapS> m_3: so you should have released 0.12, and bumped to 0.13
<m_3> SpamapS: gotcha
<lifeless> so, is there any way to run juju locally without getting new network services (apt-cacher-ng + zookeeper) installed ?
<lifeless> or do I need to run a kvm instance, and run juju within that ?
<m_3> lifeless: be careful with the latter option... you can do it, but juju's lxc uses(needs) libvirt's default 192.168.122.0/24 network
<m_3> lifeless: that works though (we run local provider juju environments _within_ ec2 instances all the time for testing)
<m_3> don't know about passing new zk/apt-cache addresses into the local provider though... never tried.  I'd imagine it's pretty hard-coded to localhost though
<lifeless> so, what I'd really like is to be able to shove juju into an lxc and give it a single callback to a 127.0.0.1 only service to fire up more lxc's
<lifeless> that would let me run multiple juju environments
<lifeless> as it is, I'm looking at jorge's howto for juju with lxc and wondering why-bother
<lifeless> (not why bother with juju, why bother with that setup; it's very constrained and inflexible, and has overhead (via zk and apt-cacher-ng) whether I'm using it or not)
<m_3> lifeless: like juju local provider with a bootstrapped bootstrap node that houses zk
<lifeless> right
<m_3> lifeless: kvm on a non-default libvirt network would do that right now
<m_3> lifeless: but yeah, I like your idea better... a safer sandbox for lxc.  that wouldn't add extra deps to the base machine.  maybe file a bug?
<m_3> lifeless: it's closer to how we use the bootstrap node in other providers (ec2) too
<lifeless> what lp project?
<lifeless> juju? juju core? juju-ng ?
<lifeless>  :P
<lifeless> m_3: ^
<m_3> lifeless: also beware of juju local provider's startup scripts too... https://bugs.launchpad.net/bugs/1006553
<_mup_> Bug #1006553: Juju uses 100% CPU after host reboot <juju:Triaged> < https://launchpad.net/bugs/1006553 >
<m_3> lp:juju itself
<m_3> your fix would help box some of those issues too
<lifeless> hah, so thats exactly the sort of thing I was worried about happening back when the lxc provider was first discussed.
 * lifeless buffs his fingers on his chest
<m_3> yup
<m_3> :)
<m_3> there's lots we can do to clean up the local provider
<lifeless> something to be aware of
<lifeless> some libvirt configs bridge onto the LAN
<m_3> right... I've got one that does
<lifeless> I do that with my non-laptop libvirts, because that lets me access the resulting instances directly and trivially.
<m_3> grabbing the default is... um... not ideal
<m_3> I think the answer may be to create a dedicated one on install... there's a bug for that (also using lxcbr not virbr, but that's another bug)
<m_3> it's incompatible enough with most of my libvirt setups that I created lp:charms/juju
<lifeless> m_3: https://bugs.launchpad.net/juju/+bug/1014435
<_mup_> Bug #1014435: lxc local provider sandboxing could be more complete <juju:New> < https://launchpad.net/bugs/1014435 >
<lifeless> for your editing pleasure
<_mup_> Bug #1014435 was filed: lxc local provider sandboxing could be more complete <juju:New> < https://launchpad.net/bugs/1014435 >
#juju 2013-06-10
<noodles775> Hi wedgwood! When you've got some time, I had some problems using the execd_preinstall() helper from charm-helpers - https://code.launchpad.net/~michael.nelson/charm-helpers/mock_call_unimportable_on_precise/+merge/168015
<noodles775> Let me know if I've missed something there, or if you'd prefer something done differently etc.
<vds> hi all I'm trying to run the postgres charm using non ephemeral storage but I keep getting the following error http://paste.ubuntu.com/5751820/
<mthaddon> stub, gnuoy: do you guys have any ideas here? ^
<mthaddon> (pg persistent storage issues - not sure if something else needs configuring)
<stub> No, I haven't looked into the storage stuff yet.
<wedgwood> noodles775: yep, that looks saner. I think I was trying to avoid side-effects a little to aggressively. I'll merge that now
<wedgwood> vds: oh hi. Is that ^^ what you messaged me about?
<noodles775> sweet, thanks wedgwood
<vds> wedgwood, yep
<wedgwood> vds: mthaddon: It's been a long time since I looked at that storage code. I suspect that thedac may have looked at it more recently, but I could be wrong about that.
 * wedgwood looks at the actual error
<thedac> sorry not specific to postgres which uses the regex for device name
<vds> wedgwood, you recall how the configs should look like?
<gnuoy> vds, at what point are you attaching the storage ?
<wedgwood> vds: yeah. which charm are you using? what's the bzr revno?
<wedgwood> vds: or even better, the repo path and revno
<gnuoy> vds, I can get you a config example
<vds> gnuoy, I'm not attaching the storage myself, how/when should I do it. lp:~charmers/charms/precise/postgresql/trunk/ revno 50
<vds> gnuoy, that would be great! :)
<gnuoy> vds, is this openstack you're using ?
<vds> gnuoy, yep
<gnuoy> vds, config example http://paste.ubuntu.com/5751886/
<gnuoy> vds, you need to create the volume and then gets its euca name
<gnuoy> once you have that change setup the juju config with the unit name and euca volume name ( i.e. volume-map: '{"postgresql/0": "vol-00000007"}'  )
<vds> gnuoy, thanks, the volume must exist, right? Should it also have a file system already?
<gnuoy> vds, you just need to create the volume and attach it, you don't need to format it
<wedgwood> vds: looking at the code, it looks like that error message might actually mean "the volume is already mounted"
<gnuoy> I seem to remember that error message covers a multitude of sins
<wedgwood> bah, actually I can't tell. it's either that or it's complaining that the volume is NOT mounted
<gnuoy> vds, if this helps this is an example of attaching storage after the unit is up: http://paste.ubuntu.com/5751998/
<vds> gnuoy, so the persistent storage is created after the postgres unit is fired up?
<gnuoy> vds, you can create the storage whenever and attach it as the juju env comes up, or afterwards as I've done in the pastebin
<gnuoy> vds, fwiw I always attach the volume as the env is being initialised, not sure what the charm does with the existing data when you attach after it's started up and established its relations
<vds> gnuoy, this means you deploy the postgres charm first and then you attach a new volume to the juju unit, right?
<gnuoy> vds, yes, I usually attach as soon as nova list shows the unit as active
<gnuoy> ( well actually a python script does it )
<vds> gnuoy, thanks
<gnuoy> np
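Since the pastebins in the exchange above have long since expired, here is a guess at what the config gnuoy describes looked like. Only the `volume-map` line is quoted from the log; the surrounding option name and layout are assumptions about the postgresql charm of that era, not a verified config.

```yaml
# Hypothetical reconstruction of the postgresql charm config discussed above.
# Only the volume-map value is taken from the log itself.
postgresql:
  volume-map: '{"postgresql/0": "vol-00000007"}'
```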
<jcastro> evilnickveitch: I want to add 2 pages to the Charm Authors section for the newdocs
<evilnickveitch> ok
<jcastro> I would name these "authors-blah.jade" right?
<jcastro> from looking at the naming convention
<evilnickveitch> jcastro, that would be best
<jcastro> is there a way to generate a blank jade from a template? or should I just copy and paste from an existing one?
<evilnickveitch> jcastro, just take the beginning bit from one of the existing pages...
<evilnickveitch> i.e.
<evilnickveitch> extends inc/layout
<evilnickveitch> block vars
<evilnickveitch>   - var title='Relations'
<evilnickveitch>   - var page='charms-relations'
<evilnickveitch> block content
<evilnickveitch>   article.
<jcastro> nod
<evilnickveitch> then slap in your html after that
<evilnickveitch> jade requires an indent
<evilnickveitch> four spaces for the <section>
<evilnickveitch> six for <h1>
<evilnickveitch> if you run a make when you are done, it will helpfully spit out loads of error messages at you :)
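Collected into one place, the skeleton evilnickveitch dictates above looks roughly like this. The title/page values are from his example; the `<section>`/`<h1>` body is a placeholder following his indentation rules (four spaces for `<section>`, six for `<h1>`, spaces only since jade rejects tabs).

```jade
//- Skeleton from the lines above; the section body is placeholder html.
extends inc/layout

block vars
  - var title='Relations'
  - var page='charms-relations'

block content
  article.
    <section>
      <h1>Relations</h1>
      ...
    </section>
```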
<jcastro> ack
<evilnickveitch> but I am working on ditching jade in the long term
<evilnickveitch> jcastro, if you run into difficulty getting it to build properly, just push it anyhow and let me know and I will sort it out.
<jcastro> yep
<jcastro> almost got policy done
<jcastro> then I'll do best practices
<jcastro> evilnickveitch: proposing crashes bzr!
<jcastro> bzr lp-propose lp:juju-core/docs
<jcastro> is that correct?
<jcastro> bzr: ERROR: exceptions.Exception: lp:~jorge/juju-core/policy-draft is not mergeable into lp:~evilnick/juju/go-juju-docs
<evilnickveitch> jcastro, hurrah! No, i have no idea of the arcane spells needed to fix that
<evilnickveitch> jcastro, don't you just push it back?
<evilnickveitch> hang on, i will try and merge it from my end
<evilnickveitch> jcastro, done!
<jcastro> also I get some weird jade error about -
<jcastro> so I wasn't able to test
<jcastro> but hopefully I contributed enough for you to fix my niggles. :)
<evilnickveitch> jcastro, that's fine, I'll sort out your mess :)
<evilnickveitch> jcastro, oh yeah, i forgot to mention, jade hates tabs
<evilnickveitch> which is one of the reasons it has to die
<evilnickveitch> but i will fix it
<jcastro> evilnickveitch: when does the evilnick.org stuff regen?
<evilnickveitch> jcastro, i operate an on demand service at the moment... I will do it when i have fixed your indents!
<jcastro> hmmm, I was adding spaces
<jcastro> are they all over?
<jcastro> the indents I mean
<evilnickveitch> jcastro, there are tabs in there as well, it likes one or the other, not both.
<evilnickveitch> I know... it's too fussy
<jcastro> ok, I'll pay more attention next time
<jcastro> I could have sworn I was all spaces
<evilnickveitch> it mostly was, but there were a few. It's a real pain! That's why I am going to ditch it once the nav stuff is finalised
<evilnickveitch> okay, it is up on evilnick now
<evilnickveitch> there are no links from the nav though, so you will need to go to
<evilnickveitch> www.evilnick.org/juju/authors-charm-policy.html
<evilnickveitch> jcastro:, i will fix the english tomorrow :)
<jcastro> no worries!
<jcastro> evilnickveitch: hmm, the 2nd section is missing
<jcastro> unless I messed that up
<evilnickveitch> jcastro, it's there in the source, but i guess there must be a missing </> somwhere, i'll hunt it down
<marcoceppi> jcastro: evilnickveitch: good standards are not being fussy :P
#juju 2013-06-11
<pavel_> hi guys
<pavel_> quick question, is there any way to configure ec2 root disk size?
<pavel_> I mean when I launch it with juju
<ehg> hmm, is there a way to manually remove a service that refuses to die?
<ehg> with goju
<ehg> e.g. do i have to manually edit a DB to remove zombie services? and how do i do that? :)
<jcastro> marcoceppi: hey, the mysql charm, once you flavor=percona, there's no going back is there? I get a config-changed error
<marcoceppi> jcastro: I guess so, haven't written tests/checked that yet
<marcoceppi> what do the logs say?
<jcastro> I didn't look, it was a few days ago
<jcastro> I just remembered
<marcoceppi> jcastro: open a bug and I'll make sure to check it when I finish writing tests for mysql
<jcastro> yeah I am going to check it out
<jcastro> but we should test that
<jcastro> someone from Maria finally wrote me back!
 * marcoceppi nods
<arosales> question on juju status in 1.11
<arosales> say I didn't know which port mongodb (or any given charm) was exposing
<arosales> juju status in 1.0 > doesn't show the exposed port
<arosales> http://paste.ubuntu.com/5755016/
<arosales> I am pretty sure the port output was there before, is there a suggested way to find the exposed port?
<mgz_> arosales: bug 1173093
<_mup_> Bug #1173093: status should report ports <juju-core:Confirmed> <https://launchpad.net/bugs/1173093>
<arosales> ah, thank you mgz_
<arosales> mgz_, I also added a comment.
<arosales> mgz_, any gut feeling on when this bug may get an owner to address?
<arosales> I _think_ the open port status is still shown in the juju gui, but I think they are communicating over the web socket.  But the data should be there . .  .
<mgz_> arosales: not really, perhaps along with fixing other addressing issues (which will also be a compat change in the json output of some kind)
<arosales> mgz_, ok. thanks for the reply on the bug too
<jcastro> there's another thing expose needs
<jcastro> I put it in a bug
<jcastro> it needs to return "service exposed on: http://blah blah" when you first expose it
<jcastro> so I don't have to do another status command
<mgz_> yup.
<marcoceppi> jcastro: that brings some oddities though. What if you expose prior to open-port being called?
<jcastro> then wait I think
<jcastro> it sucks to expose, then status
<marcoceppi> I wouldn't want to block the expose, so many people's scripts will cry
<jcastro> evilnickveitch: confused, do I base my new page off my existing branch or do I wait for you to commit to trunk?
<evilnickveitch> jcastro, pull the latest version, because I fixed a lot of your bits...
<jcastro> evilnickveitch: it's dated from june 6th
<evilnickveitch> hmmm
<jcastro> lp:juju-core/docs right?
<evilnickveitch> that's not right - it is revision 39 now
<jcastro> https://code.launchpad.net/juju-core says 31
<evilnickveitch> ah... sorry. grab my branch
<evilnickveitch> we are changing the juju/core one so it is owned by ~charmers
<jcastro> ah ok
<evilnickveitch> so this will be a lot easier after tomorrow
<evilnickveitch> grab ~evilnick/juju/go-juju-docs if you want to work on anything
<jcastro> ack
<jcastro> evilnickveitch: hey, do we have like a sidebar class or something for the docs?
<jcastro> something for like, a side comment or something I want to tell the audience, but not be in the text block
<evilnickveitch> jcastro, you can use an <aside>. i haven't created a style for that yet, but I will need to soon anyhow
<jcastro> ah, perfect
<evilnickveitch> or you can use the "Note" or "Warning" styles for like footnotes
<jcastro> evilnickveitch: ok, best practices done, but when I propose to merge it says "This branch is not mergeable into lp:~evilnick/juju/go-juju-docs."
<evilnickveitch> jcastro, it's okay, i don't know why it says that, but i can merge it myself
<jcastro> ok
<evilnickveitch> where is the branch? same place?
<jcastro> branch is pushed to ~jorge/juju-core/best-practice
<evilnickveitch> jcastro, yay!
<evilnickveitch> okay, i merged that. as you see there isn't a style as such for the aside, but it's cool to have one to work with. I will sort it out
<evilnickveitch> AND you didn't mix your spaces and tabs this time :)
<jcastro> \o/
<jcastro> evilnickveitch: ok, after that I have some updates to both, but that will post meeting and post-lunch
<jcastro> m_3: http://codeascraft.com/2013/06/11/introducing-loupe/
<m_3> jcastro: sweet
#juju 2013-06-12
<AskUbuntu> MAAS & juju trubles from KAZAKHSTAN | http://askubuntu.com/q/307205
<jamespage> adam_g_, I proposed a couple of merges for nova-cloud-controller and nova-compute off the back of the serverstack quantum work I've been doing
<jamespage> wedgwood_away, can you give me a ping re consistency of return values in hookenv when you are around
<ehw> hey, everyone, o/ , are there current docs for setting up juju-core + maas?  getting errors following the steps on the wiki; namely, 'no public ssh keys' are defined in the environments.yaml, but it's not documented anywhere how to set this up
<wedgwood> jamespage: ping
<jamespage> hey wedgwood
<jamespage> hows things?
<wedgwood> not much to complain about
<wedgwood> you?
<jamespage> yeah - good thanks
<jamespage> anyways; I was having a dig through charm-helpers this morning
<jamespage> and started to review the stuff in core/hookenv against what we have in contrib/hahelpers/utils
<jamespage> as there is a lot of overlap
<jamespage> specifically with regards to relation/unit/config accessors we have a bit of inconsistency
<jamespage> in contrib/hahelpers/utils if the return value from a juju cli call is empty, the function returns 'None'
<jamespage> makes it nice and easy to detect an unset or missing value in python
<jamespage> its a little different in core/hookenv; do you think its going to be possible to agree on a single approach?
<jamespage> there are also some inconsistencies - relation_get does the None thing; unit_get does not
<jamespage> wedgwood, any opinion on which is the right* way
<jamespage> *tm
<jamespage> adam_g_, you will be interested in this as well ^^
<wedgwood> sorry, got pulled away a bit
<wedgwood> It's hard to say. There is such a thing as a null value in yaml.
<wedgwood> I think None is right, but I disagree that it indicates that a value is not set
<wedgwood> jamespage: I agree that we should return None more consistently in the case of either a) an unset value or b) a value that is null
<jamespage> wedgwood, coolio
<jamespage> the other nice thing I added to contrib/hahelpers/utils was some transparent function caching
<wedgwood> the alternative is to throw an exception for unset values and nobody wants that. nor is there any way to know really since the relation-* tools just return nothing
<jamespage> wedgwood, indeed
<wedgwood> tbh, I haven't looked closely at hahelpers. I did see a lot of overlap so I turned my attention to what seemed like more pressing things.
<wedgwood> I'll have a look at the function caching if you think that's the most ripe for promoting
<jamespage> wedgwood, its only a couple of functions - I'm happy to refactor it into hookenv
<wedgwood> that'd be much appreciated!
<jamespage> wedgwood, but I wanted to checkin with you on the return type thing first
<jamespage> wedgwood, OK - lemme take a run at the hookenv None thing first
<jamespage> and then I'll work in the caching
<jamespage> it hugely improves performance of some of our charms
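The two conventions jamespage is asking for above (empty CLI output mapped to `None`, plus hahelpers-style transparent function caching) could be sketched like this. `cached`, `get_setting`, and `_run_hook_tool` are hypothetical names, not the real charm-helpers API, and the hook-tool call is stubbed out so the sketch runs anywhere.

```python
# Sketch of the two ideas discussed above: treat empty output from a juju
# hook tool as None, and memoize repeated lookups within one hook run.
# These are illustrative names, not charm-helpers' actual API.
import functools

def cached(func):
    """Memoize a hook-tool accessor for the lifetime of one hook run."""
    memo = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in memo:
            memo[args] = func(*args)
        return memo[args]
    return wrapper

def _run_hook_tool(cmd):
    # Stand-in for subprocess.check_output([...]); a real implementation
    # would shell out to relation-get / unit-get / config-get here.
    fake_outputs = {("relation-get", "host"): "db.example.com\n",
                    ("relation-get", "missing"): ""}
    return fake_outputs.get(tuple(cmd), "")

@cached
def get_setting(name):
    """Return a relation setting, or None when the tool prints nothing."""
    out = _run_hook_tool(("relation-get", name)).strip()
    return out or None

print(get_setting("host"))     # db.example.com
print(get_setting("missing"))  # None
```

The caching matters because every `relation-get` is a subprocess spawn; a charm that reads the same setting ten times per hook pays that cost once instead of ten times, which is presumably the performance win jamespage mentions.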
<stub> marcoceppi: How are your test tools going?
<marcoceppi> stub: good, I should have a simple first cut out this week
<stub> marcoceppi: ok. Let me know if you want eyeballs on it :)
<marcoceppi> stub: I'll give you a ping for sure!
<stub> I'm just updating my test harness for gojuju. Found a few gotchas.
<marcoceppi> stub: anything I should keep an eye out for?
<stub> marcoceppi: Having to wait for services with life: dying to actually die was the main one I found.
<stub> marcoceppi: Trying to diagnose if/when openstack nodes get reused atm... gojuju seems to like spawning new nodes rather than reusing ones that are idle or soon will be.
<marcoceppi> stub: I'll make sure to add "dying" checks in. Didn't think of that
<noodles785> wedgwood: hi! No rush, but if you've time over the next few days, could you pls look at https://code.launchpad.net/~michael.nelson/charm-helpers/add-declarative-support/+merge/168961
<noodles785> jcastro: ^^ That's the declarative helper I mentioned on G+ using saltstack's states.
<wedgwood> noodles785: that looks an awful lot like a saltstack module, rather than a generic helper.
<wedgwood> do we want to choose saltstack over puppet or chef?
<wedgwood> noodles785: at the spring we discussed the idea of having some package-specific helpers, and I think this would be a good one.
<wedgwood> but that implies to me that the naming should be a bit more package-specific.
<noodles785> wedgwood: I've tried to write it in a way that we could use other packages too (just a thin wrapper). I have in the past done similar charms using puppet to set the state. I don't care which we go with personally, but some people don't like puppet.
<noodles785> wedgwood: Happy to re-work it as needed - I just want to be able to declare machine states rather than write (and test them) procedurally.
<wedgwood> +1 on that
<noodles785> salt seems like a good choice (it's python, and they're getting involved with ubuntu stuff too: https://plus.google.com/u/0/116015965439782966698/posts/FhL9ahHqdbR )
<jcastro> wedgwood | do we want to choose saltstack over puppet or chef?
<mgz_> what's the oldest version of ubuntu that juju actually supports?
<jcastro> hey so I'm the ecosystem guy, I of course want to see as many integration points with as many tools as possible
<jcastro> wedgwood: that probably doesn't jive with your real-world needs though, heh
<wedgwood> noodles785: The structure isn't there, but I envision this going somewhere like charmhelpers.package.saltstack. please feel free to suggest a better namespace
<wedgwood> (there=in the package, yet)
<noodles785> wedgwood: that sounds fine - but why not charmhelpers.contrib.saltstack ?
<wedgwood> noodles785: that works too. right now, contrib implies temporary and unstable, but that can develop over time to be the home for things like this
<noodles785> wedgwood: cool, I'll update it to that, and rename the two methods so that they're obviously salt and not generic (?)
<wedgwood> noodles785: and btw, this looks great and I think it'll be immediately useful to others too
<wedgwood> noodles785: that would be perfect
<noodles785> Excellent, will do. Thanks!
<wedgwood> mgz_: I seem to remember that we shipped some version with oneiric, but I could be wrong
<jcastro> mgz_: we have oneiric stuff in the store
<jcastro> but it's not very supported, it's there but I don't think we officially say it's supported
<jcastro> "we only care about precise" is a good reflection of the current reality
<mgz_> thanks guys, responding to mailing list post
<rick_h__> jcastro: I think that's on the way out, well the browser is removing it
<arosales> Charm Meeting starting at the top of the hour. Pad URL = http://pad.ubuntu.com/7mf2jvKXNa
<jcastro> Charm hangout in a few minutes, you can follow along on http://ubuntuonair.com
<jcastro> or just ask us to jump in if you want to participate
<jcastro> https://plus.google.com/hangouts/_/41d46ad46b450f9e8a5d57259121f6ed436e6af9?authuser=0&hl=en
<_mup_> Bug #1190293 was filed: preinstall packages for charms <juju:New> <https://launchpad.net/bugs/1190293>
<jcastro> marcoceppi: I need the youtube link for the video when you have it
<marcoceppi> jcastro: waiting for it to upload
<marcoceppi> jcastro: https://www.youtube.com/watch?feature=player_embedded&v=0qaxWHnqxNQ
<jcastro> negronjl: https://code.launchpad.net/~patrick-hetu/charms/precise/gunicorn/python-rewrite/+merge/167088
<jcastro> this is ready!
<FunnyLookinHat> Do the images at http://cloud-images.ubuntu.com/precise/current/ come ready with a specific key-pair or password?
<jcastro> smoser: ^^
<jcastro> I think it's ubuntu/ubuntu but not sure
<arosales> also utlemming  may know.
<arosales> I _think_ the default user was made optional in the latest images
<arosales> default user = ubuntu
<smoser> :)
<smoser> of course they do not
<smoser> in 12.10 there is no user at all in the image
<FunnyLookinHat> smoser, just a keypair ?
<jcastro> http://ubuntu-smoser.blogspot.com/2013/02/using-ubuntu-cloud-images-without-cloud.html
<smoser> in 12.04 and prior, there was a 'ubuntu' user, but its password login was locked.
<FunnyLookinHat> jcastro, perfect, thx
<smoser> jcastro's link will get you what you want. (assuming what you want is to be able to login)
<cmars232> is anyone working on a charm for the horde PHP applications, esp webmail?
<jcastro> I've not seen anyone working on that
<jcastro> it's up for grabs if you want to snag it!
<jcastro> we have a mailstack and MySQL already, I think you'd just need the frontend bits
<jcastro> marcoceppi: m_3: is there any other nuclear option remaining for removing drupal6?
<lifeless> jcastro: you want more nukes? or non-nuke options ? :)
<jcastro> I think nuking it is the only way to be sure
<cmars232> jcastro: mailstack.. excellent. will consider it, though it might be a non-trivial first charm. thx
<cmars232> jcastro: where is mailstack, i don't see it in the charm store? does that include postfix, etc.?
<jcastro> http://jujucharms.com/~ivoks/precise/mail-stack-delivery
<cmars232> ah
<jcastro> I'll annoy ivoks to get back to working on it
<m_3> jcsackett: hmmm... dunno
<m_3> jcsackett: sorry, that was meant for jcastro
<m_3> but he quit the channel
<m_3> win 19
<jhf> hey all - trying to understand charm upgrades. is it expected that charms can do an "in-place" upgrade?  for my charm (which installs Liferay, a java-based portal server with an embedded Tomcat servlet container), I really need to a)stop the server, b) extract the new bits to a parallel directory to the old bits, c) copy and change config files, d) start up the new bits, and e) delete the old bits. this seems *really* heavyweight.
<jhf> not to mention, the new bits would now be in a different directory than the old bits.
<jhf> so I can't just re-install over top the old bits, with the server still running. that would not  be good for the system.
<sarnold> jhf: can you not re-organize those a little? something like (a) extract new bits into temp directory (b) copy and change config files (c) stop the server (d) rename running directory to old, rename new directory to running (e) start the server (f) delete the old bits
<sarnold> directory renames are near enough to instant (especially compared to jvm startup)
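A minimal sketch of that rename-based swap in Python; the paths, the `config` file name, and the `stop`/`start` callables are all illustrative, not taken from any real charm:

```python
import os
import shutil

def upgrade_in_place(base, new_bits_src, stop, start):
    """Swap a service's directory using near-instant renames.

    base         -- hypothetical service root, e.g. /opt/liferay
    new_bits_src -- directory holding the freshly extracted new version
    stop, start  -- callables that stop and start the service
    """
    current = os.path.join(base, "current")
    staging = os.path.join(base, "staging")
    old = os.path.join(base, "old")
    # (a) extract/copy the new bits into a staging directory
    shutil.copytree(new_bits_src, staging)
    # (b) copy and adjust config files (simplified: carry the old config over)
    cfg = os.path.join(current, "config")
    if os.path.exists(cfg):
        shutil.copy(cfg, os.path.join(staging, "config"))
    # (c) stop the server
    stop()
    # (d) rename running -> old, staging -> running; downtime is just these
    #     two renames plus the service restart
    os.rename(current, old)
    os.rename(staging, current)
    # (e) start the new bits, then (f) delete the old ones
    start()
    shutil.rmtree(old)
```

The point of the ordering is that everything slow (extraction, config generation) happens while the old version is still serving.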
<jhf> yep I sure could… that sounds like a good approach, but what about the relations established on the "old" bits?  does juju need to know anything, can I just do what you suggest without juju knowing?
<jhf> it won't call any other hooks besides upgrade-charm, during an upgrade?
<sarnold> I know the order of hook calls is well-defined, but I'm less certain about timing
<jhf> like the db relation in particular. are any of the db-relation-* hooks called during an upgrade ?
<jhf> https://juju.ubuntu.com/docs/charm-upgrades.html implies that this is the only hook called during an upgrade.  "After the upgrade-charm hook is executed, new hooks of the charm will be utilized to respond to any system changes."
<jhf> I presume that means that other hooks are only called in response to operator actions like adding/removing relations
<jhf> but not automatically, as part of the upgrade process
<sarnold> oh, nice :)
<marcoceppi> jhf: no other hooks, other than hooks/upgrade-charm will be run by juju. If you want to execute a hook you'll need to execute it inside the hooks/upgrade-hook hook
<marcoceppi> err hooks/upgrade-charm* hook
<m_3> jhf: or... just use config for service "upgrades" :)
<marcoceppi> +1, use a "version" configuration option to allow users to move between and "pin" a version of the software
<m_3> upgrade-charm just for upgrading the charm bits themselves... doesn't _necessarily_ even need to stop the service
<m_3> best practice is on the fence between those approaches... depends on what's more natural for the service
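A sketch of that config-driven pattern, with the `config-get` call made injectable so the logic is testable; the `version` option name and the state-file path are assumptions for illustration, not an established convention:

```python
import os
import subprocess

# where we remember the last installed version (illustrative path)
STATE = os.path.join(os.environ.get("CHARM_DIR", "."), ".installed-version")

def config_get(key):
    # inside a real hook this shells out to juju's config-get tool
    return subprocess.check_output(["config-get", key]).decode().strip()

def maybe_upgrade(get_config=config_get, install=lambda version: None):
    """config-changed body: upgrade only when the pinned 'version' changes."""
    wanted = get_config("version")
    current = None
    if os.path.exists(STATE):
        with open(STATE) as f:
            current = f.read().strip()
    if wanted and wanted != current:
        install(wanted)  # fetch/unpack the requested release
        with open(STATE, "w") as f:
            f.write(wanted)
        return True
    return False
```

Because the hook compares against recorded state, users can "pin" a version and later move it forward, and re-running config-changed is a no-op.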
<sarnold> I feel like this is an area for juju to provide more opinion :)
<m_3> sarnold: ack
<m_3> ok, so do it my way :)
<m_3> jk
<m_3> I really do like the upgrade in config
<m_3> but there's really a lot of variation between services
<m_3> I guess the framework charms are the worst
<sarnold> I'll grant it is difficult because every service is slightly different.. and some vastly different.. but i think someone administering a few thousand nodes would much prefer them to all act identically across all services for all service upgrades.
<m_3> charm bits with versions, framework bits with versions, other deps with versions, and then an actual application with versions
<m_3> sarnold: oh, yeah... good point
<sarnold> knowing which services upgrade themselves on charm upgrade and which ones have a config option to set to push an upgrade would be a bit detailed.
<m_3> yeah
<m_3> sarnold: but often your infra is only a dozen or so actual services
<m_3> even if it's 15,000 instances
<sarnold> m_3: true.
<m_3> but yeah, good point... I'd be in favor of restricting upgrade-charm to just do the charm bits
<m_3> push all service upgrades into config
<m_3> makes charms more complicated in general though
<m_3> so steeper learning
<m_3> but...
<m_3> that'd at least be consistent between framework charms and other charms
<m_3> upgrade-charm only upgrades charms
<sarnold> I presume it's a bit late to suggest upgrade-service hooks? :)
<m_3> sarnold: ha
<m_3> nope, suggest away!
<m_3> well, I can't say about late or not
<m_3> but... suggest away!
<m_3> :)
<sarnold> hey! I suggest an upgrade-service hook to explicitly upgrade a service, so upgrade-charm can perform exactly charm upgrades!
<sarnold> proposed. :)
<m_3> the bug'll go smoother if you're asking rather than me
<marcoceppi> sarnold: You could always just write a hooks/upgrade-service and have config-changed run it when someone changes the version config option
 * marcoceppi runs away
<m_3> :)
<sarnold> marcoceppi: hahaha
<sarnold> marcoceppi: actually..
<m_3> makes sense
<m_3> future-proof?
<sarnold> that feels like a very reasonable Best Practice to suggest.
<marcoceppi> sarnold: well, it'd be better to create an upgrade_service function and just stub it in the config-changed hook
<sarnold> I don't really care about the details; it just seems like the current "whatever the charm author wants" is a bit vague and more opinion could help.
<m_3> one of the biggest best practices I'd like to start pushing is to put your hook impls in a library...
<marcoceppi> I wouldn't want to pollute the hooks/ space with non-hook files
<marcoceppi> m_3: +1
<m_3> marcoceppi: yeah, exactly!!!
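One way to read that best practice: each file in hooks/ stays a thin stub, and the real hook implementations live in an importable (and unit-testable) library. A minimal dispatcher sketch, with all module and function names illustrative:

```python
import os
import sys

def dispatch(hook_name, handlers):
    """Route a hook invocation to the matching handler from the library.

    handlers maps hook names to functions, e.g.
        {"install": service.install, "config-changed": service.configure}
    where `service` is the charm's own library module (hypothetical).
    """
    handler = handlers.get(hook_name)
    if handler is None:
        return None  # hooks with no registered handler are a no-op
    return handler()

# each stub in hooks/ would then be roughly:
#   dispatch(os.path.basename(sys.argv[0]), HANDLERS)
# since juju invokes the hook file named after the event.
```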
<m_3> sarnold: yes, accepted... wild wild west only works for so long
<m_3> we did actually get some great ideas from organic growth though
<m_3> we just gotta now clean things up
<sarnold> m_3: that's cool to hear :)
<AskUbuntu> how to turn on and start openstack and openvswitch in a 32 bit machine running ubuntu 13.04? | http://askubuntu.com/q/307514
<marcoceppi> Does someone have a list of all the agent-states that pyjuju and gojuju show?
#juju 2013-06-13
<pavel> Is there any way I can increase size of root partition in EC2 instance? Or at least attach extra ebs volume?
<pavel> my problem is that by default lucid images are 8gb and it's totally not enough for my purposes
<mgz> gah, why are we recreating the error of having 'docs' as a series of the main project?
<mgz> juju-core/docs makes no sense
<stub> oh noes! Where has debug-hooks gone
<mgz> stub: bug 1027876
<_mup_> Bug #1027876: cmdline: Support debug-hooks <cmdline> <juju-core:Confirmed> <https://launchpad.net/bugs/1027876>
<stub> In the -broken hook, the relation is already gone... so I need to use my -changed hooks to remember state somewhere else so my -broken hook can actually clean up?
<mgz> that sounds possible to me, hopefully someone with experience of charming something like that can give suggestions
<mthaddon> hi folks, I have three instances in an environment that I want to get rid of (no longer needed). I've removed them via juju destroy-service (for 2) and a juju remove-unit (for 1). I've then done a nova delete, but juju has reprovisioned the instances. Is this expected behaviour? is there some way to work around it?
<mgz> yes, you want `juju terminate-machine`
<mgz> mthaddon: ^
<mthaddon> mgz: awesome, thanks
<ehg> hi - does anyone have any idea of how to get rid of zombie services/machines in the go juju DB? i.e. services that won't die, machines that are pending
<RAZORQ> Hi guys
<RAZORQ> i have question about Ubuntu for phones. I Have ZTE GXI with Intel atom processor, and i want to know that, will be there ubuntu for my phone official, or i must port it on my own?
* RAZORQ changed the topic of #juju to: k
* marcoceppi changed the topic of #juju to: Reviewer: ~charmers || Review Calendar: http://goo.gl/uK9HD || Review Queue: http://jujucharms.com/review-queue  || Charms at http://jujucharms.com || Want to write a charm? https://juju.ubuntu.com/docs/charm-store.html || OSX client: http://jujutools.github.com/
<vennenno> (?)
<jamespage> wedgwood, adam_g_: cached decoration and some hookenv normalization - https://code.launchpad.net/~james-page/charm-helpers/caching_hookenv/+merge/169160
<wedgwood> jamespage: we can't cache relation-get. we may want to see our own relation variables
<wedgwood> same for the other relation_* commands
<wedgwood> er, some of them anyway. the ones that call relation-get
<jamespage> wedgwood: not sure I understand why that is?
<wedgwood> jamespage: if, in one part of my hook I call relation-get -r <some relation id> - <my unit name>, then relation-set -r <same relation id>, I won't see the changes if they're cached
<jamespage> wedgwood: you can't see data in that way within a relation
<jamespage> hmm - or maybe...
<wedgwood> jamespage: I'm pretty sure you can
<wedgwood> jamespage: I did a lot of experimenting when I wrote http://senselessranting.org/post/52242864437/juju-relations-how-do-they-work
<wedgwood> jamespage: you can see your own changes immediately
<wedgwood> they're not committed and sent to other units until you exit the hook, but you can see your own
<wedgwood> none of the rest of charm-helpers operates with that assumption, but I wouldn't want to surprise someone
<wedgwood> jamespage: I also have another branch that may give you similar results.
<wedgwood> jamespage: it's not completely finished: https://code.launchpad.net/~mew/charm-helpers/persistence
<wedgwood> jamespage: you could always call cached(relation_get(...)) in your own charm.
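The hazard wedgwood describes can be shown with a toy model: a unit's own relation-set is visible to its own relation-get immediately, so a memoizing decorator has to be flushed on writes. Everything here is a simulation; in a real charm the reads and writes shell out to the relation-get/relation-set tools:

```python
import functools

_cache = {}
_relation_data = {}  # stand-in for juju's per-relation settings store

def cached(func):
    """Memoize a read-only tool call for the lifetime of one hook run."""
    @functools.wraps(func)
    def wrapper(*args):
        key = (func.__name__, args)
        if key not in _cache:
            _cache[key] = func(*args)
        return _cache[key]
    return wrapper

@cached
def relation_get(key):
    # real code would shell out to `relation-get`; stubbed for illustration
    return _relation_data.get(key)

def relation_set(key, value):
    _relation_data[key] = value
    # a unit sees its *own* relation-set immediately, so any cache over
    # relation-get must be invalidated here (a blanket clear is simplest)
    _cache.clear()
```

Without the `_cache.clear()` in `relation_set`, a hook that writes and then re-reads its own settings would get the stale cached value.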
<jcastro> wedgwood: mind if I syndicate that blog on juju.u.c?
<wedgwood> jcastro: by all means
<jamespage> wedgwood: well I never even knew you could do that
<wedgwood> neither did I :)
<jamespage> wedgwood: I can probably figure out how to selectively flush the cache if that happens
<Slaytorson> How can I destroy a charm?
<jcastro> can you specify what you mean by destroy?
<jcastro> as in, remove the machine it's running on or destroy the service or ... ?
<Slaytorson> Perhaps I meant service. Completely delete and terminate the service and all machines running it
<jcastro> look at destroy-service and destroy-machine
<Slaytorson> Thanks. I did "juju --help" and nothing about destroying came about.
<jcastro> actually, I wonder what happens when you destroy-machine before destroying a service
<jcastro> `juju help commands` is what you want
<jcastro> oh, which version of juju are you on?
<Slaytorson> Not sure. juju --version doesn't give me the version
<jcastro> if you do "juju" and then hit enter do you get a nice list of basic commands and a link to the website?
<Slaytorson> Sort of. I don't get anything about destroying. https://gist.github.com/bslayton/197c168321381135556e
<jcastro> ah yeah
<jcastro> you're on the right version then
<jcastro> but still crappy that we don't support --version, etc.
<jcastro> I think I filed a bug about that
<jcastro> https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1182898
<_mup_> Bug #1182898: Please support `juju --version` <amd64> <apport-bug> <raring> <juju-core:New> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1182898>
<jhf> hey all - I think I'm in some weird state.. after a reboot (with all my juju running and happy), when I run 'juju status' I get 'ERROR could not connect before timeout' yet 'juju bootstrap' reports 'ERROR Environment already bootstrapped' - this is using LXC containers.
<jhf> any ideas?
<jhf> 'juju -v status' errors with Socket [10.0.3.1:60460] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
<sarnold> jhf: I vaguely think I recall someone complaining about their network bridge not coming back up after a reboot recently...
<jhf> yeah… I ended up doing a destroy-environment and starting over for now..
<jhf> but I imagine if I reboot again it'll happen again.
<jcastro> http://andrewsomething.wordpress.com/2013/06/13/introducing-bug-2-trello/
<jcastro> m_3: marcoceppi: arosales ^^^^^
<marcoceppi> jcastro: wat. awesome
<marcoceppi> Oh, I thought it did sync stuff, it's a chrome plugin
<marcoceppi> still, very cool
<thumper> FunnyLookinHat: ping
<FunnyLookinHat> thumper, howdy
<thumper> FunnyLookinHat: hey hey
<thumper> so
<thumper> containers...
<FunnyLookinHat> Ha
<thumper> good progress is being made
<thumper> but nothing ready to test yet
<thumper> we have managed to get some things working with manual poking
<thumper> but not fully automated yet
<FunnyLookinHat> Ah interesting
<FunnyLookinHat> Any idea on a timeline ?
<thumper> :)
<thumper> that's the question isn't it?
<thumper> as fast as possible
 * thumper considers carefully
<thumper> I *think* we should have something very beta by the end of next week (hopefully)
<thumper> this depends on some network refactoring
<thumper> to make sure the containers are properly addressable from the rest of the machines
<thumper> sorry it isn't more concrete
<thumper> but since this is the first iteration
<thumper> it is bound to be wrong
 * thumper quotes kiko - the first two times will be wrong
<thumper> FunnyLookinHat: are you on the juju-dev email list?
<thumper> FunnyLookinHat: I'm trying to post updates there
<thumper> FunnyLookinHat: so, completely different question - the new ultrabook - does it have a metal outer?
<FunnyLookinHat> Sorry was AFK - had to take care of something
<thumper> FunnyLookinHat: np
<FunnyLookinHat> Ok so - Q#1 - at some point could you give me a high level idea of how the charms would work to deploy applications within containers?  i.e. will much have to change from a standard charm?  On top of that - I'm still not sure if this means I'd put the entire LAMP stack in the container or just LAP and then do MySQL elsewhere.
<FunnyLookinHat> RE: mailing list - I'm not - I asked for approval and never got it I believe :
<FunnyLookinHat> RE: Ultrabook - not metal, but a VERY strong polycarbonate.  I can't bend the one sitting next to me at all if I try to twist one way or another
<FunnyLookinHat> The metal chassis inside is quite awesome :)
<thumper> ?! you need approval for juju-dev list?
<FunnyLookinHat> I guess so?  I threw in my email address and it said "pending approval" - no emails yet  :)
<thumper> A#1 - charms shouldn't need any modification *at all* - should be entirely transparent
<FunnyLookinHat> I used my personal, so that might be it - funnylookinhat@gmail.com - if you could approve
<thumper> I'll take a look
<FunnyLookinHat> thumper, ok - so charm theory follow-up.  Given that I don't care about actually "scaling" any of our deployed applications, would it make sense to have a charm wrap an entire LAMP stack?  I would _really_ like to be able to do that for security and backup purposes
<thumper> FunnyLookinHat: I think that idea may well suit your needs very well
<FunnyLookinHat> Ok - good deal.
<FunnyLookinHat> Gotta run - but thanks for the update thumper
#juju 2013-06-14
<AskUbuntu> Juju on AWS t1.micro (or other) reserved instances | http://askubuntu.com/q/307954
<virusuy> hi guys !
<BaribalWrk> Hi. I've gotten access to a few racks with rather impressive machines that run KVM/libvirtd. Is there a way to integrate juju in this environment?
<Baribal> Also, so far they're only deploying CentOS VMs here, but AFAIK that shouldn't be an issue as juju downloads images itself anyways.
<stub> Is the charm cache on the bootstrap node, or in the control bucket?
<marcoceppi> stub: I think it's in the control bucket
<jhf> hey all - question on best practices. my charm installs a service whose filesystem layout is different depending on version. so for example, if I install to /opt/liferay, then in one version, its "home dir" is /opt/liferay/v1 but in the next version it'll be /opt/liferay/v2.  So I am saving the value of /opt/liferay/v1 into a dot file so that in a future upgrade I can discover the filesystem layout of the "old" version during the upgrade. I considered making it
<jhf> part of the charm config but that didn't feel right because this is not a user "tunable" and if one were to change it, it would break badly. so is it cool to save variables in this way during the install so that they can be available during an upgrade?
<marcoceppi> jhf: absolutely. A lot of charms that need to keep track of low level data like that typically put them in a dot file in either the services home (not really a good idea in this case) or the $CHARM_DIR
<jhf> hmm.. good idea re: charm dir, but when I upgrade the charm, will those files remain during the upgrade of the actual charm unit files?
<marcoceppi> jhf: yes, upgrade-charm only overwrites files that are extracted from the new charm. So any files created by the charm are not touched
<stub> marcoceppi: Thought so. I discovered the weirdness I was seeing was update-charm deploying revisions I was working on yesterday, despite a fresh bootstrapped environment.
<jhf> nice.. I like that better than putting it in the service dir. I'll go with that, thanks!
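A sketch of that dot-file pattern (the file name and keys are made up for illustration). Since upgrade-charm only overwrites files shipped in the new charm, anything the charm itself writes under CHARM_DIR survives the upgrade:

```python
import json
import os

def state_path():
    # CHARM_DIR is set by juju when running hooks; "." is a test fallback
    return os.path.join(os.environ.get("CHARM_DIR", "."), ".service-state")

def save_state(**values):
    """Record details like the versioned home dir for a later upgrade hook."""
    with open(state_path(), "w") as f:
        json.dump(values, f)

def load_state():
    """Return the recorded state, or {} on a fresh install."""
    try:
        with open(state_path()) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```

The install hook would call something like `save_state(home="/opt/liferay/v1")`, and upgrade-charm would call `load_state()` to find the old layout.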
<jhf> btw - I tried to use the "juju debug-hooks" command, but it tries to do fancy things via the terminal, and gets really messed up on a mac: I am running ubuntu in a virtual box vm, so I ssh to the virtual ubuntu instance, and then run juju debug-hooks and my terminal gets all wonky, making juju debug-hooks unusable :(
<jhf> I don't think this is juju's fault but just thought I'd mention it. haven't played with terminal settings yet to try and fix it.
<jhf> hey all - I am using a local repository for charm testing - I am editing the files and then running juju upgrade-charm but it keeps using old, cached copies of the files so the resulting charm that is copied to the container doesn't have the updates.. is there a way for it not to use the old, cached charm files?
<_mup_> Bug #1191030 was filed: Provisioning agent stops working <juju:New> <https://launchpad.net/bugs/1191030>
<marcoceppi> jhf: You need to either increment the revision number or use the -f flag to force an uprade
<jhf> woo! it works! now time to re-learn bzr to push the fixes :)
<marcoceppi> Charm school in 20 mins!
<marcoceppi> Charm School is now! http://ubuntuonair.com/
 * FunnyLookinHat is watching :D
<FunnyLookinHat> and taking notes
<jhf> Is there a way to stop a service (not destroy-service)?  That is, something that triggers the stop hook, but otherwise leaves things as they were?
<FunnyLookinHat> marcoceppi, select the gray box and double click the color next to "Fill" in the bottom left :)
<FunnyLookinHat> But use the arrow tool to select it.
<FunnyLookinHat> off-topic question for the charm school today, but I can't find documentation on how to fire a custom hook on a charm that's already deployed
<jhf> https://juju.ubuntu.com/docs/policy.html#license refers to the list of OSI-approved licenses at http://opensource.org/licenses/
<jhf> question!! Is there a way to stop a service (not destroy-service)?  That is, something that triggers the stop hook, but otherwise leaves things as they were?
<FunnyLookinHat> jhf, That's what I'm aiming for...  :)
<FunnyLookinHat> with a custom hook
<FunnyLookinHat> Ah ok - so you'd write it into config-changed
<jhf> Is there a reason there's no charms for any release other than precise?  I assume I'd start with the latest Ubuntu release, but didn't see any raring charms
<sarnold> jhf: I'd expect saucy charms to pick up, as people prepare for 14.04 and assume it'll be the next LTS release
<jhf> ah ok, thanks
<sarnold> jhf: for my part, I did my (very few) charms on precise, because it seemed the most well-tested, and intended to port them to quantal; but I never got around to it, and next thing I know, raring is out...
<FunnyLookinHat> Ok - so to do it in juju-core, you would have to fire a config change and handle the flag within config-changed
<sarnold> .. and it's easier to ignore two broken things than one broken thing :)
<jhf> :)
<sarnold> jhf: but I'm in a different position, I don't have hundreds of machines to support, perhaps those people that do are just targeting the newest release that they actually have deployed in their environments
<fred> hi
<Guest89610> #nick fred
<Guest89610> i forgot how to use irc :(
<sarnold> Guest89610: /nick
<sarnold> Guest89610: but, pick a different nickname, fred must be protected, it'll just change you away again :)
<Guest89610> thamks man
<Guest89610> *tnx
<Guest89610> ah i see. okay
<fred1234> fred1234 will do :P
<fred1234> how ya all :P
<fred1234> how's ubuntu?
<fred1234> hi
<fred1234> where can i get started?
<fred1234> i mean, the flirst v
<fred1234> i mean, the first vid of juju?
<fred1234> i had not updated or been active with ubuntu lately. i want to catch-up.
<FunnyLookinHat> So - if I were to deploy on AWS - is there a way to deploy to two separate regions at once?
<fred1234> :-(
<FunnyLookinHat> fred1234, I'd probably just read through this stuff: https://juju.ubuntu.com/docs/
<FunnyLookinHat> Just go through the table of contents - it paints a picture pretty well and gets you into stuff quickly
<fred1234> ah okay, fun :) this coyld help. been using ubuntu since 6.xx, but im not a c or c++ expert :(
<fred1234> but had manage through terminal somehow lols :P
<fred1234> *could
<fred1234> tnx again. i'll carry-on
<fred1234> #juju: i'm just finish with the home page. also ventured reading en.wiki -> http://en.wikipedia.org/wiki/Juju_%28software%29
<fred1234> nice, like service interfacing and concurrency, and cloud... still wondering what services and how does it provide those....
<fred1234> ... i'll read further lols
<marcoceppi> FunnyLookinHat: no, not currently. Cloud (and region) federation is on the roadmap
<marcoceppi> fred1234: check out jujucharms.com for a list of charms (services) we have currently!
<FunnyLookinHat> marcoceppi, Ok - my worry is that AWS has had outages in the past, and without rackspace support I feel as though we're having to make a bad decision  :-/
<marcoceppi> FunnyLookinHat: yeah, it's been on the roadmap for a while. I think with containerization and local provider being worked on that cloud federation will be ready by 14.04. It's kind of a requirement to make the tool truly "production" ready
<FunnyLookinHat> yeah - I'm keeping a close eye on the containerization stuff  :)
<jake> tes tes
<Mage_Dude> How can you delete a node/service if juju status doesn't work?
<rnathuji> Hi folks, I have a quick question on bootstrapping juju
<rnathuji> I've deployed MaaS and enlisted 3 ARM nodes (shown as ready state).  juju bootstrap fails with "No matching node is available". I did try doing a set-constraint arch=armhf, but then it complains that it's not bootstrapped. Does the bootstrap node have to be x86?
<rnathuji> nm, figured it out
<Mage_Dude> You wouldn't happen to know how to delete a node from maas would you?
<rnathuji> I haven't tried it, but it looks like the UI has a "Delete node" button on if you click on the node name from the list of all nodes. Or is that itself not working for you?
<Mage_Dude> No, you can't do it that way because the node has been allocated. However, juju status doesn't work so you can't unallocate the node! The *only* way I've found to do it is to wipe the MAAS db and recommission all the nodes again.
<rnathuji> Can you use maas-cli to release the node and then delete it?
<rnathuji> (I'm not even sure what exactly release does, just saw it on the maas-cli options list)
<Mage_Dude> I'll have to look at it
<Mage_Dude> That's bizarre. The cli only lists one node as allocated, but the gui shows two.
<Mage_Dude> sudo maas-cli nodes release <some-id-value> ... why is that incorrect syntax?
<Mage_Dude> Grrr
<paraglade> marcoceppi: Just got done watching the vid for the charm school today and wanted to know if I heard you correctly say that we could submit our charms to GitHub.
<Mage_Dude> Is maas+juju actually functional or is it a late April Fools joke?
<Mage_Dude> So, maas doesn't do anything other than provision machines. Then you have to ssh into one of the provisioned machines and actually bootstrap juju there?
<sarnold> Mage_Dude: that doesn't sound right..
<sarnold> I don't have enough free hardware laying around to try out maas, but the maas provider for juju should just let you juju bootstrap ; juju deploy ...
<Mage_Dude> sarnold: I wish. juju bootstrap (Yay kittens!) juju status (I will throw ambiguous errors!)
<sarnold> heh :(
<Mage_Dude> I can ssh directly into a 'provisioned' node with the ssh key generated. However, I try juju status and get an ssh key error. That makes no sense.
<sarnold> Mage_Dude: it's true that juju status often passes up an opportunity to show something more meaningful from a log file. status is hard to capture in the few states available. :(
<Mage_Dude> And it's so great when maas is trying to provision two nodes to do the same thing because it can't actually bootstrap/setup the node. Any thoughts on why my perfectly valid ssh key is being rejected?
<m_3> paraglade: yeah, we can handle PRs, they're just a little more manual of a process until we get some things changed around
<m_3> paraglade: I'd recommend pinging me or marcoceppi directly when you've submitted one... at least for the next few weeks
<bkerensa> jcastro: will local environment work by OSCON? :)
<m_3> bkerensa: shit, don't remember... jorge'd probably know
<m_3> he's out today
<bkerensa> m_3: ok, just wondering; we're planning a Juju demo at the booth and it's either local environment or rackspace cloud, neither of which worked last time I checked :P
<bkerensa> m_3: unless jorge is expensing AWS :)
<m_3> :)
<m_3> which conference?
<bkerensa> m_3: the one you will be at in July
<m_3> bkerensa: awesome
<bkerensa> :)
#juju 2013-06-15
<bkerensa> m_3: you know you spend a lot of time in Portland :)
<m_3> bkerensa: yeah, we'll figure something out... I think it's safe to plan for a demo
<m_3> worst case we can put a gui up there on the staging site where it refreshes
<m_3> yeah, totally
<m_3> I'm trying to remember... seemed like there was a third
<m_3> ODS, Railsconf
<m_3> Oscon
<m_3> but thought there was another
<_mup_> Bug #1191284 was filed: Latest ppa release is not working on Ubuntu Lucid <juju:New> <https://launchpad.net/bugs/1191284>
<Vido> Hey
<Vido> guys
#juju 2013-06-16
<hallyn_> hey, i'm not sure if this is a cloudinit or just a transient problem, but last 3 days when i juju bootstrap on amazon ec2 with default-series precise, i have to log into the bootstrap node and manually reboot after installation is done
<hallyn_> is that a known thing right now?
<hallyn_> (i never had to do that before, but haven't used it with amazon ec2 in a few months)
<hallyn_> whelp, i guess i ought to try precise <clickety tap>
<hallyn_> hm, yeah, that works.  so it's only raring.
<AskUbuntu> juju status timeout error 111 connection refused please help | http://askubuntu.com/q/308862
#juju 2014-06-09
<thumper> stub: do the postgresql charms for precise work on trusty?
<stub> thumper: they are supposed to. I haven't run the tests recently on trusty.
<axw> mgz: standup?
<mgz> axw: blast, did standup happen?
<axw> mgz: yes, wallworld is going to email you
<mgz> gah, really wanted to talk to you guys as last week we didn't get any
<tech2> Hi all, I'm writing a juju charm and when a relation triggers, how do I determine the IP address of the machine the other service is on? I have a multi-master database setup, with replication. When a link is formed between the two via a relation in juju, how does the juju hook script know what each end's IP addresses are (or do I need to carry that around as local configuration info?)
<tech2> Also, is there a way I can configure my charm to prevent multiple installations on the same machine (use of --to <number>)? The software I'm writing it for sadly only supports a single instance per machine.
<mthaddon> hi folks, I'm wondering if the swift charms could grow support for swift-dispersion-report relatively easily? http://docs.openstack.org/trunk/config-reference/content/object-storage-dispersion.html
<gnuoy> mthaddon, I'll take a look
<mthaddon> gnuoy: thx - I'm happy to file a bug if that's appropriate
<mthaddon> looks like it should "just" be creating a new user, generating /etc/swift/disperson.conf and then running swift-dispersion-populate - users can then run swift-dispersion-report whenever needed
<gnuoy> mthaddon, a bug would be great, thanks.
<mthaddon> k
<gnuoy> But I agree, at first glance it looks straight forward
<mthaddon> gnuoy: https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1328064
<gnuoy> ta
<_mup_> Bug #1328064: Add support for swift-dispersion-report to charms <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1328064>
<tech2> Does juju have a concept of a "collection" of services, such that when a relation is made between two machines it's really made to the group, or is this something to be managed externally? I'm trying to manage multi-master database replication without having to set-relation between every instance in a full-mesh (but having the resulting configuration represent that).
<mthaddon> tech2: for which service are you trying to configure multi-master replication?
<tech2> mthaddon: our own database system which I'm trying to write a charm for.
<mthaddon> tech2: ah okay - so relations are between services, not between machines, so basically the model would be that you'd have a foo-multimaster-db service, and the relation would be to that
<mthaddon> hi folks, I've been looking at the swift charms and there are two settings I'm wondering about - partition-power has a default of 8 in swift-proxy which means essentially the default expects a ring with up to 2 drives (you're supposed to have 100x the number of total drives you expect in the cluster and it's raised to the power of 2)
<mthaddon> the second setting is "workers" in the swift storage components, which seems to be hard-coded to 2, which doesn't quite match what the swift docs recommend
<tech2> mthaddon: sorry, bad choice of words, and yes. So how does one manage the relations to create a full mesh?  <dbname>-relation-joined would be run, but how would that derive information about all participants without someone having run set-relation on all the services?
<mthaddon> tech2: that'd be up to the charm itself. Basically when you run a set-relation on (say) the appserver side, that would trigger a response from the foo-multimaster-db service that would return information about all participants, but this may be DB specific.
<kentb> do subordinate charms work with constraints, or, I guess I should ask, is there any point to declaring constraints when deploying a subordinate charm since they basically just 'bolt on' to an existing service?
<lazyPower> kentb: they bolt onto an existing service with scope: container
<lazyPower> tbh i haven't tried to deploy a subordinate with constraints to see if it blocks the deployment on a machine that doesn't meet the constraints.
<lazyPower> so it may have some functionality, but needs citation.
<kentb> lazyPower, ok. thanks.
<tech2> mthaddon: so how would you propose structuring this (assuming I wanted N instances of the DB, all cross-replicating)? I'd have a second multimaster charm that just acts as a mechanism to connect things to?
<mthaddon> tech2: I don't think that's necessary, but depends on the application. Do the nodes already chatter amongst each other and keep an up to date list of what nodes are in the replication set? If so, could you use that?
<tech2> mthaddon: I'm starting afresh. This used to be a package that was deployed to a box and then configured to replicate either by hand or via chef script by adding the IPs of the other instances it would replicate with. Instead of managing that by hand I'd just like the option of telling it "you're a part of this group, find your neighbours" or something.
<tech2> mthaddon: I was just hoping this was a known pattern for services (and as such that there'd be a standard solution) rather than me making something up myself.
<mthaddon> tech2: I think the peer relations is what you're looking for - trying to find an example for you
<tech2> Thanks
<mthaddon> tech2: the haproxy charm's peer-relation-joined hook (it's a symlink to hooks.py), but notify_peer (and therefore notify_relation) is probably what you're looking for
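The pattern being pointed at here is a peer relation declared in the charm's metadata.yaml: juju then relates every unit of the service to every other unit automatically, so no manual full mesh of add-relation calls is needed. A sketch (the charm and interface names are invented for illustration):

```yaml
# metadata.yaml fragment for a hypothetical multi-master DB charm
name: foo-db
summary: multi-master database with peer replication
peers:
  replicas:
    interface: foo-replication
```

In the resulting replicas-relation-joined hook, `relation-list` returns all current peer units and `relation-get private-address <unit>` yields each peer's address, which is how the haproxy charm's notify_peer code discovers its siblings.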
<tech2> I'll take a look, thanks.
<jcastro> lazyPower, you're on reviews this week?
<jcastro> I believe cory_fu is your deputy if you guys wanna pair up or whatever
<lazyPower> i am?
 * lazyPower looks
<lazyPower> Welp, there goes the neighborhood
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: lazypower / cory_fu || News and stuff: http://reddit.com/r/juju
<cory_fu> Does anyone have experience with destroy-relation?
<cory_fu> Specifically, I issue the command, get no errors, but the relation never goes away
<lazyPower> cory_fu: is a dependent service in error?
<lazyPower> cory_fu: or is there an open debug-hooks session trapping your commands?
<cory_fu> No services in error state whatsoever
<cory_fu> No debug-hooks open, either
<lazyPower> strange. I've not seen that behavior before. What version of juju are you running?
<cory_fu> 1.19.3
<cory_fu> I can work around it, but was wondering if it was a known bug
<lazyPower> it warrants filing
<lazyPower> I'm not seeing that behavior and i'm on 1.19.3
<lazyPower> make sure you attach your all-machines log with the bug report, + how to reproduce, and scrub any api keys if you're using a public cloud from the log output.
<jcastro> hey mbruzek
<jcastro> do you think we can get elasticsearch working on power by say ... thursday?
<mbruzek> jcastro, ElasticSearch works, but logstash does not
<mbruzek> Logstash has an architecture dependency.
<mbruzek> jcastro, IDK how to resolve the architecture problem in logstash,
<jcastro> oh sweet, reviewboard incoming!
<thumper> jcastro: charm?
<thumper> avoine: IIRC you did the python-django charm yes?
<thumper> avoine: I have some questions on the juju list about how to hook up the subordinate charm for supplying the actual django site
<thumper> would love some comments if you have some time
<jcastro> thumper, yep
<lazyPower> mbruzek: did you see? log4j, [2014-06-09T21:01:44.664]  WARN: org.elasticsearch.transport.netty: [Kwannon] Message not fully read (response) for [859] handler org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$4@45114dfc, error [true], resetting
<lazyPower>  - Logstash itself has an issue all around
<lazyPower> I'm filing the bug now
<arosales> alexisb: rick_h_ jcastro: perhaps we make the "What's coming in Juju and the Juju GUI" UOS session just "What's coming in the GUI"
<arosales> as we have a Juju Core Roadmap
 * arosales looking at http://summit.ubuntu.com/uos-1406/all/
<arosales> for Cloud DevOps
<mbruzek> lazyPower, I did not see that
<alexisb> though jcastro I am happy to get some help with the juju core roadmap stuff as this will be my first UDS :)
<lazyPower> mbruzek: check in /opt/logstash/*.log - this is scrolling non stop after adding the cluster relationship
<arosales> Seems Road Map and whats coming (juju core wise) may be the same content
<jcastro> I am of the opinion that one session can cover juju and the gui
<mbruzek> lazyPower, I could not get logstash running on Power remember?
<jcastro> I mean, 1 hour is a long time to read off changelogs and roadmaps
<rick_h_> jcastro: but it's so exciting!
<arosales> jcastro: agreed thus my suggestion
<rick_h_> oh hmm, that's tomorrow
<lazyPower> mbruzek: was it not starting up at all?
<usdudeink> The only thing it can't do is PDFs from Firefox with HPLIP, any ideas out there??
<arosales> jcastro: so can you make "whats-coming-in-juju-and-the-juju-gui" just "whats-coming-in-the-juju-gui" --drop Core and just cover core in the "Juju-core-roadmap" session
<arosales> jcastro: actually I think I can update
<mbruzek> lazyPower, no, it had an unknown-Linux error
<jcastro> arosales, sure, but that's 2 sessions, is that what we want?
<jcastro> I can't imagine the gui has 1 hour of things coming soon
<lazyPower> mbruzek: ack. I'm refining this bug stepping through an interactive debug session - if you want to follow the progress - https://bugs.launchpad.net/charms/+source/logstash-indexer/+bug/1328272
<_mup_> Bug #1328272: Failed Unicast Discovery <audit> <logstash-indexer (Juju Charms Collection):New> <https://launchpad.net/bugs/1328272>
<arosales> jcastro: that was just what was on the schedule
<arosales> there were 2 sessions, but "whats coming" had both core and gui
<arosales> jcastro: I guess we could cover both in one and then drop the roadmap session
<jcastro> I can go either way
 * arosales just going off what is on the schedule right now
<jcastro> it seems the gui session has already been renamed
 * arosales renamed it
<jcastro> oh
<jcastro> rick_h_, hey, can you fill a slot with 55 minutes of content?
<rick_h_> jcastro: arosales ok, will check back later to figure out what I need to prepare for tomorrow. EOD
<rick_h_> jcastro: not really, I can do 30min or so perhaps
<arosales> jcastro: feel free to combine "whats coming" to be both core and gui, _but_ then drop the "Juju Core Roadmap session"
<jcastro> I'll ping you tomorrow, worst case we can do it on thursday too
<rick_h_> jcastro: ok
<arosales> jcastro: thanks for getting the CloudDevOps track all set
<jcastro> our big data session will be epic
<alexisb> jcastro, if you want to combine the core and gui stuff into one session that is fine by me
<alexisb> I can make sure we have core present
<jcastro> yeah let's do that
<jcastro> It's supposed to be a summary, surely we can do both in an hour
<alexisb> ok, when is the session?
<alexisb> jcastro, ^^^
<jcastro> wed, 1500 UTC
<jcastro> rick_h_, that gives you an extra day to prepare
<alexisb> jcastro, perfect, thank you
<jcastro> that gives us an extra slot to go over with "Getting Started with Juju"
<jcastro> just in case
<jose> negronjl: hey! do you have a minute? I'm stuck with some seafile stuff
#juju 2014-06-10
<negronjl> jose: I'm in the middle of something ... give me a few minutes ( about 30 I'm afraid )
<jose> I have time :)
<jose> ping me when you're available :)
<negronjl> jose: I'm back ... what's up ?
<jose> negronjl: I'm having some troubles with the charm
<jose> because of the err_and_quit you defined instead of the set -x
<jose> now I'm getting some weird django errors, too
<negronjl> jose: let me deploy it so I can see as well
<jose> second
<jose> oh, go ahead
<negronjl> jose: What are you deploying on ?  precise? trusty?
<jose> precise
<negronjl> jose: ok ... hold on ... bootstrapping
<jose> np :)
<negronjl> jose: Where are you deploying this ?  ec2? something else ?
<negronjl> jose: ^^
<jose> ec2
<negronjl> can you ssh into the instance ?
<jose> but the error I get is an unbound variable one
<jose> I'm in right now
<jose> oh, no
<jose> lemme redeploy
<negronjl> jose: changes pushed
<jose> negronjl: ack, thank you!
<mhshams> ERROR bootstrap failed: waited for 10m0s without being able to connect: Permission denied (publickey,password).
<mhshams> hi, I'm trying to bootstrap a private cloud (openstack) with juju, but getting the above error. Where should I set the user/pass for the image that I'm using for bootstrap?
<mhshams> hi, after successfully bootstrapping juju with my private openstack, when I try to deploy a charm (wordpress for example), it's complaining that the network is unreachable, because it's trying to use the private network IP, not the public (floating) one.
<marcoceppi> mhshams: did you set "use-floating-ip" to true in the environments.yaml?
<mhshams> yes, i did. in fact bootstrapping is done with public ip, but only deploy is trying to use private one.
<mhshams> marcoceppi: even when i try to run "juju status", the result is the same:   ERROR state/api: websocket.Dial wss://192.168.0.2:17070/: dial tcp 192.168.0.2:17070: network is unreachable
<marcoceppi> mhshams: bootstrap wasn't done with the public IP address if deploy is trying to use the private one
<mhshams> marcoceppi: but the private network is not visible from my machine (juju installed). how can juju bootstrap with ip that is not visible in the network?
<marcoceppi> mhshams: when juju bootstrap runs, it connects to the OpenStack install and uses the API to spin up an instance. If use-floating-ip isn't set to true before you bootstrap for the first time, then when it requests metadata about the instance to cache where the bootstrap node is, it'll just get the first address it responds with which will likely not be the floating-ip
<mhshams> marcoceppi: in my configuration, use-floating-ip is set to true, and I can see the juju-openstack-machine has the public ip assigned. I can connect to the machine with its public ip.
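To make the point about ordering concrete: the flag lives in environments.yaml and has to be in place before the very first bootstrap, because the client caches the state server's address at bootstrap time. A fragment (the environment name and surrounding settings are illustrative):

```yaml
# ~/.juju/environments.yaml (fragment)
environments:
  my-openstack:
    type: openstack
    # must be true *before* the first bootstrap, otherwise the cached
    # state-server address will be the private one
    use-floating-ip: true
```

If it was bootstrapped without the flag, destroying and re-bootstrapping the environment (as suggested below) is the way to refresh that cached address.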
<BradCrittenden> hi rbasak
<jose> mhshams: if it's not production, try destroying the environment and re-bootstrapping. easiest way.
<rbasak> bac: hi!
<rbasak> bac: let me just read your email again
<bac> rbasak: cool.  hey could you join us in #juju-gui in case someone else there is interested?
<mhshams> @jose: I did try that, and the result was the same. One thing to clarify: I'm creating the image metadata locally and bootstrapping with --metadata-source ( juju bootstrap --metadata-source=/home/<user>/.juju/metadata)
<jose> I'm no openstack expert
<jose> someone else should be able to help
<jcastro> Hey guys, I'm going to move the LXC/juju session from thursday to wednesday so didrocks and I can be there
<lazyPower> jamespage: ping
<jamespage> lazyPower, OMG it must be ping jamespage hour
<jamespage> lazyPower, hey!
<lazyPower> oh, i'm sorry :(
<lazyPower> I was curious if you had a chance to review the vxlan-gateway that we sync'd on a while back, and what your notes were if any. It's come back up in the rev queue
<jamespage> lazyPower, I've not looked yet
<jamespage> lazyPower, (and no problem about the ping :-))
 * lazyPower takes notes to not ping jamespage before 10:45 EDT
<lazyPower> can i interest you in a coffee, or a crepe while I have your attention?
<tech2> if you have a charm that has peers, how would you instantiate an additional instance of the peer on a particular host/vm/container? Is it just juju add-unit --to <name>  ?
<marcoceppi> tech2: yes
<tech2> More to the point I suppose. If I had a setup akin to US East, Europe (in terms of data-centres), how would I add units to one of those collections of VMs rather than to a particular machine?
<marcoceppi> though, you will probably want to use add-unit --to lxc:<machine#> (or kvm:<machine#>)
<tech2> thanks Marco.
<marcoceppi> tech2: that's something you would do with constraints, there is a "zone" constraint landing in juju which will allow you to dictate which zone to create a new machine
<tech2> marcoceppi: zones aren't here yet (assuming that's what you mean with the future-tense word "landing")?
<marcoceppi> tech2: correct, code just landed in trunk for zones
<marcoceppi> so it'll likely be in the next release of juju
<tech2> marcoceppi: thanks
<tech2> marcoceppi: is there a writeup about this anywhere (something I can show to others). I'm trying to upsell the virtues of juju at the moment and these are the kinds of issues being raised.
<marcoceppi> tech2: there's no real document on zones, it just follows the normal constraint pattern, https://juju.ubuntu.com/docs/charms-constraints.html
<tech2> thanks
<marcoceppi> Once it lands in a release the docs will be updated
<marcoceppi> tech2: for the exact format, here's the bug against the doc: https://github.com/juju/docs/issues/86#issuecomment-45469055 let us know if you and your team have any other questions about juju! Happy to help
<tech2> marcoceppi: no doubt I'll be in and out over the coming months, thanks again for the help.
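A hedged sketch of what zone-aware placement might look like once the feature lands, following the normal constraints pattern described above. The exact constraint name and value format are what the linked doc issue tracks, so the `zone` key here is an assumption, not the documented syntax:

```shell
# Assumed syntax, pending the release and the doc update.
# Set a (hypothetical) zone constraint on an existing service...
juju set-constraints --service foo-db "zone=us-east-1a"

# ...so that new units of it land in that zone:
juju add-unit foo-db
```

`set-constraints --service` is the existing per-service constraint mechanism; only the `zone` key is speculative.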
<tech2> Is there a tool for web-managing multiple environments (much like juju-gui is for a single env), or is that Landscape?
<marcoceppi> tech2: Landscape does a bunch of stuff, but I don't think it does management for multiple juju environments, it can and does know about multiple but won't give you the same experience as the GUI
<tech2> marcoceppi: so the only way of really using a web-ui for multiple environments at the moment would be to have multiple instances of juju-gui (one per environment)?
<marcoceppi> tech2: correct, that's the way at the moment
<tech2> marcoceppi: My issue is that we have an obligation to keep different clients' data on different instances such that we could do things like provide them with ssh access for an audit etc. I figured environments might be the cleanest way to do that but that sadly increases the devops overhead a bit.
<marcoceppi> tech2: I can see your point in increasing the overhead a bit. However, juju is fully drivable via a websocket per environment
<marcoceppi> so you could add functionality to ease the devops work by building tools to help drive deployments
<tech2> Right, there's always a cost, it just depends who has to pay it :)
<avladu> hello there :). can someone help me on how to connect to the mongodb server, on the juju bootstrap node?
<marcoceppi> core is also pretty responsive to users needs, so if there are shortcomings for your team feature requests can be opened,e tc
<marcoceppi> avladu: any reason why in particular? it's strongly discouraged given the schema changes and it's not guaranteed to work between releases
<avladu> I just want to see if some relation ids are correct
<avladu> debug reasons only
<marcoceppi> avladu: there's a Python tool called juju-dbinspect which will allow you to connect to the mongodb and pull data out with Python
<marcoceppi> avladu: https://github.com/kapilt/juju-dbinspect
<avladu> marcoceppi: thanks
<marcoceppi> alternatively, you can just `juju ssh 0` then connect to the mongo instance running there
<avladu> marcoceppi: yes, I have done: juju ssh 0
<avladu> marcoceppi: I do not know the credentials to connect to the jujudb
<marcoceppi> avladu: that information is stored somewhere... not entirely sure, but the juju-dbinspect gets you around that
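For completeness, the `juju ssh 0` route looks roughly like this on the juju 1.x series. The port number, file path, and user are from memory and may differ between releases, so treat this as a sketch and prefer juju-dbinspect:

```shell
juju ssh 0

# On machine 0: the state server's mongodb listens on a non-default port
# (37017 on 1.x), and the machine agent's credentials are kept in its
# agent.conf (path and key name may vary by release):
sudo grep -i password /var/lib/juju/agents/machine-0/agent.conf

# Then connect with those credentials. The schema is unstable between
# releases, so read-only poking only:
mongo --ssl -u admin -p '<password-from-agent.conf>' localhost:37017/admin
```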
<lazyPower> nice session jcastro
<jcastro> ugh local provider
<lazyPower> yeah, minor snafu
<jose> and when lazyPower wanted it to fail, it didn't
 * lazyPower flips tables
<avladu> marcoceppi: thanks, with a few hacks, the juju-dbinspect worked
<jcastro_> lazyPower, remind me, did you solve your logstash problem or just figure out what was wrong?
<jcastro_> iirc a version mismatch?
<alexisb> jcastro_, rick_h_, do you guys have some slides for the roadmap discussion?
<alexisb> for UDS
<alexisb> I am trying to figure out how I should prep
<jcastro_> some people do slides, other just do a status report
<jcastro_> up to you
<rick_h_> alexisb: I was just going to walk through and try to demo a little bit.
<bloodearnest> I assume you guys know that upstream lxc has go bindings to the lxc api?
<lazyPower> jcastro: i figured out what was wrong, i have WIP on fixing it
<jcastro> marcoceppi, I'll need you at the top of the hour for "State of the Charm Store"
<sparkiegeek> lazyPower: cory_fu: hi there, do you think you'll get a chance to look over https://bugs.launchpad.net/charms/+bug/942032 this week?
<_mup_> Bug #942032: Charm Needed: Review Board <Juju Charms Collection:Fix Committed by adam-collard> <https://launchpad.net/bugs/942032>
<lazyPower> sparkiegeek: you bet!
<lazyPower> I'll promote it to first thing when i switch off from what i'm working on and move back into the rev queue
<sparkiegeek> lazyPower: excellent :) looking forward to seeing the feedback
<jose> sparkiegeek: I'll go ahead and do a community review at the moment (I'm not a charmer), taking a look at it
<sparkiegeek> jose: ah great, any and all reviews gratefully received
<jose> :)
<alexisb> jcastro, rick_h_, I now have a conflict for the 15:00 UTC time
<alexisb> would it be possible to move the UDS gui/core roadmap session to 16:00 UTC?
<jcastro> otp, one sec
<alexisb> or up to 13:00 UTC
<jcastro> arosales, can you move it?
<alexisb> nws
<arosales> I can take a look
<sparkiegeek> I had some trouble using Amulet with the PostgreSQL charm. The latter has a concept of "allowed_units" which charms that relate to it should poll to see if $UNIT_NAME is in.
<sparkiegeek> the way Amulet inserts sentries between the two relating services means it'll never see it's own UNIT_NAME in allowed_units
<lazyPower> sparkiegeek: it's not a perfect solution, and that's one we are aware of
<lazyPower> there's a project called juju-db-inspect that we are looking at dropping in to aid in relationship inspection vs using sentries
<arosales> alexisb: are you free at 16:00 UTC on Wed?
<sparkiegeek> lazyPower: ok, sounds interesting
<lazyPower> sparkiegeek: because it is a bit troublesome that the sentry can interfere with the relationships being defined.
<sparkiegeek> lazyPower: indeed
<arosales> alexisb: looking at switching "Juju with LXC containers for local dev" with "Juju Core and GUI Roadmap"
<alexisb> arosales, thanks
<arosales> alexisb: so to confirm that time slot works for you, correct?
<alexisb> yes it does
<arosales> ok
<jose> what's the next charm store session going to cover?
<marcoceppi> jose: ack
<marcoceppi> jcastro: ack
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdk_oApTaoYDWAv5eeR9zK3n8oUHTI8sE1WL4RptTN0lOaLpQ?authuser=0&hl=en
<jcastro> marcoceppi, ^^^
<marcoceppi> jcastro: ack
<jcastro> and whoever wants to join
<jose> jcastro: what's the topic on this one?
<alexisb> arosales, let me know if you were able to make the switch, otherwise I will have to try and delegate the core roadmap discussion
<jose> sparkiegeek: mind a PM?
<arosales> alexisb: the schedule is locked so I am pinging mhall119
<sparkiegeek> jose: WFM
<mhall119> arosales: it's not locked, track leads can edit it
<arosales> mhall119: ah looks like jcastro or marcoceppi will have to make the edit
<arosales> mhall119: thanks. I forgot I wasn't a lead this go around
<mhall119> arosales: yeah, you're only a lead on Ubuntu Development, if it's a devops session they can do it
<mhall119> arosales: also,sorry for the delay in replying, was hosting Mark's keynote
<arosales> mhall119: no worries
<arosales> mhall119: could i bug you to switch or add me as a lead to the Cloud DevOps?
<mhall119> arosales: sure
<arosales> mhall119: thanks
<mhall119> give me one minute to do that
<arosales> mhall119: ok thanks
<mhall119> arosales: done
<arosales> mhall119: thanks
<arosales> :-)
<arosales> alexisb: all set for wed
<arosales> thanks again mhall119 :-)
<alexisb> arosales, you're the man, thank you!
<arosales> alexisb: no worries, easy fix
<jose> mbruzek: you know how to run a test in an already bootstrapped env, with constraints?
<mbruzek> jose you might be able to put the constraints in the Deployer class
<mbruzek> method
<jose> oh, thanks
<mbruzek> I don't think I have done constraints within Amulet yet.
<mbruzek> if you can't find it please let me know.
<mbruzek> jose It might be possible to set constraints in bootstrap and then all future deploys will have same constraints
<mbruzek> https://juju.ubuntu.com/docs/charms-constraints.html#working-with-constraints
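The bootstrap-level trick mentioned above works because constraints set at bootstrap become the environment default, which later deploys (including ones driven by Amulet's Deployer) inherit unless overridden. Roughly:

```shell
# Environment-wide default, inherited by subsequent deploys:
juju bootstrap --constraints "mem=2G cpu-cores=2"

# Picks up the environment-level constraints:
juju deploy mysql

# Per-service override if one service needs more:
juju set-constraints --service mysql mem=4G
```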
<jose> I'll check then :)
<jcastro> marcoceppi, let's talk local template lock, LXC hates me
<marcoceppi> okay
<marcoceppi> hangout or in here?
<jcastro> either
<jcastro> actually, hangout
<jcastro> go to the daily standup hangout
<jose> jcastro: session set up, youtube set up, will set up the page tomorrow morning
<sebas5384> lazyPower: ping
<lazyPower> sebas5384: pong
<sebas5384> lazyPower: hey!
<lazyPower> Whats up?
<sebas5384> we sell the first juju project! :)
<sebas5384> to a client hehe
<lazyPower> Thats awesome news!
<lazyPower> congrats
<sebas5384> so now i can be actually working in something thats goes to production
<sebas5384> yeah! thanks! :)
<sebas5384> so thats why i was lost these days
<lazyPower> np :)
<lazyPower> I've been occupied i assure you
<sebas5384> yeah i imagine!
<sebas5384> but hey! we must have our meeting to talk about vagrant and juju
<sebas5384> and other devops stuff hehe
<lazyPower> can we schedule it for end of week again? UDS has set me back in terms of time
<sebas5384> yeah absolutely :)
<sebas5384> let's talk through the week then
<lazyPower> sebas5384: surely
<sebas5384> cmars: ping
<cmars> sebas5384, pong
<sebas5384> hey! cmars o/
<cmars> hi
<sebas5384> i'm doing a drupal charm in ansible
<cmars> oh, neat
<cmars> i really like the ansible helpers
<sebas5384> and i wonder if you can help me with a doubt
<sebas5384> yeah!! thank you for your work on ansible+charms
<sebas5384> :)
<sebas5384> well
<sebas5384> the thing is i was reading the helper code
<cmars> oh, that's not mine..
<sebas5384> to whats all about the tags
<cmars> i'm just a fan
<sebas5384> ooh hehe
<sebas5384> yeah thats right duh!!
<sebas5384> hehe
<sebas5384> sorry about that
<cmars> no, i just don't want to take the credit. you should thank noodles775
<cmars> :)
<sebas5384> well the doubt i have is, there's some way i can declare some default tags for the playbook
<sebas5384> thanks noodles775 !! :)
<sebas5384> hehe
<sebas5384> btw here is the repo https://github.com/sebas5384/charm-drupal
<sebas5384> :)
<sebas5384> maybe noodles775 can help me with these then :P
<cmars> you might be able to find help in #ansible as well. could be a way to set tags in the ansible env
<sebas5384> yeah i was looking some info about that
<sebas5384> thanks cmars :)
<sebas5384> oh! and i haven't forgotten to do the changes in the pull request for the juju-nat
<sebas5384> i'm still going to do that, just need time hehe
#juju 2014-06-11
<mhshams> hi all, appreciate it if you have a look at this issue and help me to fix it: http://ubuntuforums.org/showthread.php?t=2228930&p=13046316#post13046316
<sparkiegeek> mhshams: that sounds like https://bugs.launchpad.net/juju-core/+bug/1308767
<_mup_> Bug #1308767: juju client is not using the floating ip to connect to the state server <addressability> <landscape> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1308767>
<sparkiegeek> mhshams: if you're feeling brave you can try using Juju 1.19.3 which contains the fix
<mhshams> @sparkiegeek: thanks for the information. i'll try 1.19.3 for sure.
<avoine> noodles775: I've push my latest code for juju-ansible-role if you want to check: https://github.com/avoine/charm-ansible-roles
<avoine> I still need to migrate it to use your wsgi-app
<noodles775> avoine: Excellent, thanks. I'll probably not get a chance to look through the diff today, but will do after the session (hopefully tomorrow).
<avoine> great
<noodles775> avoine: slides for the session here if you want to browse through - http://goo.gl/iewo1m
<avoine> noodles775: ok, I'll check that out. I've got tons of questions and comments for the session :-)
<noodles775> avoine: great, as someone who's been using ansible for charms, it'd be great if you can join the hangout too.
<avoine> noodles775: sure
<jamespage> gnuoy, a +1 on https://code.launchpad.net/~james-page/charms/trusty/openstack-dashboard/login-url-stable/+merge/221700 would be good if you have 5 mins
<gnuoy> looking
<gnuoy> jamespage, +1'd
<noodles775> jcastro, jose or marcoceppi: do one of you guys setup the hangout for me, or is it something I do myself? http://summit.ubuntu.com/uos-1406/meeting/22244/simpler-charms-with-ansible/
<jcastro> if you set it up all I need is the hangout URL
<jcastro> and the youtube link it generates for you
<jcastro> or I can do it for you, up to you
<noodles775> jcastro: if it doesn't need to be linked to any special account, I can do it. If it requires me knowing a special url from which to start so it's linked to canonical, then I don't know that url.
<jcastro> nope, just as normal hangout on air
<jcastro> note, on air, not a normal hangout
<jcastro> want me to just do it? :)
<noodles775> jcastro: https://plus.google.com/events/cof2tovla6vsv84cq9d34u47b6c
<jcastro> noodles775, ok, I just the need the URL to the hangout, right from your browser location and I'll take care of the rest
<avoine> noodles775: should I use this link too?
<noodles775> avoine: Join #ubuntu-uds-dev
<noodles775> avoine: hangout url is https://plus.google.com/hangouts/_/hoaevent/AP36tYeQwxrxdZGveiZVKpetKkySfiZswgNHf5t0qJUtsowh4qjIIg?authuser=0&hl=en-GB
<tech2> How can I tell what juju is doing? Say I am working using the local environment and I have a slow connection to the internet, if I do a juju deploy whatever it needs to fetch an image for the instance (I can see the wget in htop), but all status tells me is the machine id is pending.
<marcoceppi> tech2: right, that fetch is only done once
<marcoceppi> then it creates a template
<marcoceppi> and subsequent deploys will clone that template
<tech2> marcoceppi: okay, so that's a one-off time cost, that's cool, but is there a way of telling what it's up to in general?
<marcoceppi> tech2: not really, you can tail logs, and agent-state/status should show you what has just happened, but there's no real way to expose the event queue for a machine
<tech2> marcoceppi: thanks again.
<bloodearnest> rick_h_: just caught up with the roadmap uds session - machine view looks really nice!
<rick_h_> bloodearnest: :) woot
<rbasak> jamespage: so this juju-core SRU. If not going through every bug for SRU information, what other approach do you suggest?
<rbasak> An MRE applicatioN?
<pindonga> lazyPower, jcastro MAGIC! I just got juju-gui deployed locally with quickstart! finally :)
<pindonga> thanks a lot for the help
<pindonga> one thing worth mentioning is.. I see a bunch of WARNING messages when running juju status
<pindonga> like WARNING unknown config field "container"
<pindonga> not sure what that is, but at least I got the local provider running
 * pindonga is happ(ier) :)
<lazyPower> pindonga: awesome!
<rick_h_> pindonga: what issue were you having? /me missed what was up.
<cjohnston> marcoceppi: any chance you could review https://code.launchpad.net/~ubuntu-ci-engineering/charms/precise/apache2/apache-apt-update/+merge/222846 for me please
<pindonga> rick_h_, hi.. the issue was that I couldn't get the local provider to bootstrap and deploy things properly
<rick_h_> pindonga: ah ok, so the GUI is behaving itself then?
<pindonga> yep, that was never an issue, it was juju breaking and not cleaning up itself properly
<jcastro> marcoceppi, omg juju kill
<pindonga> but with the cleanup plugin and some persistence I got around it :)
<rick_h_> pindonga: awesome, glad you got through it
 * pindonga too
 * rick_h_ ducks back into the shadows until the next time someone mentions quickstart/gui
<jcastro> rick_h_, you're all set for the rest of the week
<rick_h_> jcastro: now to get those releases out
<rick_h_> jcastro: thanks for leashing me through the process there. :)
<jcastro> cory_fu, lazyPower: nice work churning through the queue fellas
<lazyPower> jcastro: what are you talking about? we're not working the rev queue
<lazyPower> we're playing broforce on steam right now
<jcastro> heh
<lazyPower> oh i mean, yes, we're working feverishly on the rev q
<dpb1> lazyPower: any change you could pause broforce and look at https://code.launchpad.net/~davidpbritton/charms/precise/apache2/vhost-config-relation  :)
<lazyPower> dpb1: you must have ESP because i'm already looking at that one now
<lazyPower> i'm kind of spooked about this really
<dpb1> lazyPower: look behind you
<lazyPower> ahhh!
<lazyPower> i see my cat
<dpb1> haha
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdCySsBAuEzAEONYU_JQO2lE6GNgDJyfQ2__c1KzGUX4Jc6jA?authuser=2&hl=en
<jcastro> marcoceppi, charm school will be there ^^^
<jcastro> and anyone else who wants to be in the actual hangout
<jcastro> marcoceppi, 5 min warning
<mark06> why did juju move to github?
<marcoceppi> mark06: increased exposure to the go-lang community? I imagine there's a number of reasons
<lazyPower> dpb1: so we are still base64 encoding the templates with this vhost-relation extension?
<dpb1> yes, right.  I think that is a requirement, unless there is something else preferred?
<lazyPower> dpb1: is there a reason we are base64 encoding them vs just passing a string-blob on the wire for apache2 to parse/load/populate/deploy?
<khuss> i am trying to configure the maas server so that two interfaces are configured on the server. I setup two interfaces on the cluster but the server comes up with only one. any ideas?
<dpb1> lazyPower: ^
<lazyPower> i dont think its necessarily required to be base64 encoded... i may be wrong. let me follow up with the ~charmers on this
<dpb1> hm
<lazyPower> i think it's added complexity that may not need to be there. because how cool would it be to just pass a heredoc or a file handle and let juju do its thing?
<dpb1> ok sure.  I guess it would need to be encoded somehow or else the shell command (relation-set) would barf on it
<lazyPower> thats probably why the base64 encoding came up as a requirement now that i think about it
<dpb1> but I haven't tried explicitly with a real template
<lazyPower> yeah, it'll need to be escaped and all kinds of voodoo
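The "escaping voodoo" above is exactly what the base64 step sidesteps: once encoded, the template is a single shell-safe token, so relation-set never sees embedded quotes, newlines, or jinja2 braces. A minimal round-trip sketch (assuming GNU coreutils `base64`; the vhost template is an invented example):

```shell
# An Apache vhost template full of characters that would trip up a shell command line.
template='<VirtualHost *:{{ port }}>
  DocumentRoot {{ docroot }}
</VirtualHost>'

# Encode to one shell-safe token; in a hook this would be passed as
# relation-set vhosts="- {port: '80', template: $encoded}"
encoded=$(printf '%s' "$template" | base64 | tr -d '\n')

# The receiving charm decodes it back to the original text.
decoded=$(printf '%s' "$encoded" | base64 --decode)

[ "$decoded" = "$template" ] && echo "round-trip ok"
```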
<lazyPower> do you have an example charm to use for this or do i need to codify one real quick to validate?
<lazyPower> dpb1: ^
<dpb1> lazyPower: the only one I have is in a landscape branch that is a bit hard to set up (since the code is proprietary -- as you know).  It wouldn't be a good test, but you are free to look at how it implements it.
<dpb1> lazyPower: we are waiting to submit until this one is up, since it will break without it.
<dpb1> lazyPower: the landscape-charm branch: lp:~davidpbritton/landscape-charm/vhost-config-relation/
<lazyPower> stands to reason... i'm not clear on what i'm giving the relationship. you call a method that makes me think its using jinja2 - template = get_template()
<lazyPower> a great thing to have in here would be the actual contents of an example template. even if its just plain ol' yaml, it'll make it clearer and more concise than calling a fictitious method.
<lazyPower> ah yeah, even in this example you're pulling it from unit data
<dpb1> ah, got it.  This is basically a way to to specify 'vhost_http_template' and 'vhost_https_template' (current apache2 config settings) in a relation.  But I agree, those are not doced well to begin with I guess (because they are very application specific).
<lazyPower> so, it being in YAML format, whats the spec? vhost: {name} \n  port: {port} \n access_log: {logpath} \n directory_root: {dir} ?
<dpb1> I think I explicitly documented that, let me check
<dpb1>     relation-set vhosts="- {port: '443', template: dGVtcGxhdGU=}\n- {port: '80', template: dGVtcGxhdGU=}\n"
<lazyPower> ah, i get that. i was under the impression the template itself was yaml spec
<dpb1> oh, no, the template is a jinja2 apache config file
<dpb1> so you can do {{ blah }} in the middle of it
<lazyPower> yeah, it supports all the niceties of jinja2 syntax
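For what it's worth, the `dGVtcGxhdGU=` in dpb1's relation-set example above is a stand-in payload, not a real vhost file - it decodes to the literal word "template":

```shell
# Decode the placeholder blob from the example relation-set invocation.
payload=$(printf 'dGVtcGxhdGU=' | base64 --decode)
echo "$payload"   # prints "template"
```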
<lazyPower> dpb1: when you get some time, lets sync on this and clean up the apache readme a bit for new users so its more approachable.
<lazyPower> you seem to have a lot of answers when it comes to the charm :)
<dpb1> lazyPower: ok.  sure
<dpb1> lazyPower: I'm going to eat lunch, want to set up a time and we can chat?
<lazyPower> I'm pretty booked today. Lets earmark this for post UOS?
<dpb1> lazyPower: ok, sounds reasonable
<lazyPower> thanks for the preemptive agreement. I'll try to get it on the calendar for next week
<dpb1> thx
<cjohnston> thanks marcoceppi
<sparkiegeek> cory_fu: hi, I've addressed your comments on my reviewboard charm, ready for another round of review!
<sparkiegeek> cory_fu: https://bugs.launchpad.net/charms/+bug/942032
<_mup_> Bug #942032: Charm Needed: Review Board <Juju Charms Collection:Fix Committed by adam-collard> <https://launchpad.net/bugs/942032>
<cory_fu> Ok, I'll take a look.  :)
<sparkiegeek> cory_fu: cheers!
<cory_fu> sparkiegeek: That was quick.  :)  I still need to test it, but it looks good.  You do need to update the config option descriptions, though.
<sparkiegeek> cory_fu: ah yes, the dreaded documentation ;)
<cory_fu> I've got a meeting right now, but I'll test it shortly and add my +1
<cory_fu> ha
<sparkiegeek> cory_fu: ok. Doc changes are made and pushed
<cory_fu> Thanks
<sparkiegeek> cory_fu: as for speed, I didn't want to end up on the wrong side of that big long review queue. Got to make the most of your attention ;)
<cory_fu> sparkiegeek: I'm getting "no revisions to pull"
<sparkiegeek> cory_fu: latest is r17...
<dpb1> lazyPower: hey there -- so what do I do to get that same thing merged into trusty?  separate MP?  how are you guys handling that?
<sparkiegeek> cory_fu: my bad, r18 just pushed
<cory_fu> Ok.  Do you think it's worth documenting the semi-magic "localhost" -> public-address behavior?
<lazyPower> dpb1: target trusty, make sure its got integration tests re: amulet or integration test method of your choosing.
<sparkiegeek> cory_fu: it's in the README
<cory_fu> So ti is
<cory_fu> *So it is
<cory_fu> Great.  It looks perfect.  I'll add my +1 and ping an official charmer (*ahem* lazyPower *ahem*) to get it promulgated shortly
<sparkiegeek> promulgate: a word I only ever here from Juju people ;)
<sparkiegeek> *hear
 * lazyPower blinks
<lazyPower> cory_fu: is it *good*
<lazyPower> like *real good* ?
<sparkiegeek> lazyPower: did you miss the "It looks perfect"? ;)
<lazyPower> sparkiegeek: well... yes. i did.
<sparkiegeek> I want stars, banners.... maybe even a pony? Can we juju deploy pony yet?
<sparkiegeek> :)
<dpb1> lazyPower: ok, unit tests don't suffice?  there are 0 integration tests (juju test) for apache2 right now.
<dpb1> afaict
<dpb1> unless I'm missing something (which is likely)
<lazyPower> dpb1: wait is apache2 in trusty? *looks*
<lazyPower> dpb1: ah it is!
<lazyPower> yep, same process. Let me run a quick deploy test, if its g2g i'll ack it and merge
<lazyPower> no need to fill out another MP. i may change my mind on the required MP in the future, but today - nahhhh thats just extra busy work.
<dpb1> lazyPower: ok, let me know.  I'll facilitate however need be
<lazyPower> dpb1: always a pleasure working with you my man. +1 on a personal level
<dpb1> lazyPower: thanks!  you have been great to work with as well. :)
<lazyPower> dpb1: ack'd and pushed
<dpb1> thx lazyPower, much appreciated
<sparkiegeek> lazyPower: does reviewboard get official blessing? Want to get in whilst the going is good :)
<lazyPower> sparkiegeek: its up on the list, i need a few to do a review as well
<sparkiegeek> lazyPower: ok. third time's the charm... or so they say :)
<lazyPower> sparkiegeek: unless you include a pizza with attempt #2, then attempt #2 + bribery is the charm
<sparkiegeek> lazyPower: fair! :)
<lazyPower> sparkiegeek: just to be sure, i am reviewing lp:~adam-collard/charms/precise/reviewboard/trunk  correct?
<sparkiegeek> lazyPower: correct
<lazyPower> ack, starting review now
<lazyPower> sparkiegeek: nice charm!
<sparkiegeek> lazyPower: why thank you :) I'd like to thank my family...
<lazyPower> sparkiegeek: ackd, promulgated. it will be in the store shortly
<lazyPower> Welcome to the charm store :)
<sparkiegeek> lazyPower: great! thanks a lot. Between you and cory_fu (not forgetting jose) the quality of the charm greatly increased through the review process.
<jose> woohoo, congratulations sparkiegeek!
<lazyPower> Thats our goal, with enough eyes we can squash papercuts
<sparkiegeek> was a little scary to see the review queue so long, but thanks to your efforts we turned it around in only 2 days
<lazyPower> i look forward to seeing your unit tests :) With those you'll be one step closer to a high quality mark on your charm. I've got a follow up todo after its ingested to fill out teh quality data.
<sparkiegeek> lazyPower: right. Amulet tests are unfortunately, a non-starter. But unit tests are plausible (and desirable)
<lazyPower> sparkiegeek: the review queue is kind of a fibber. it doesn't calculate based on last interaction, it reads from beginning of item - so, its quick to scatter the result.
<sparkiegeek> lazyPower: top of my todo list :)
<sparkiegeek> lazyPower: is there a way of subscribing to a particular charm's bugs in LP?
<sparkiegeek> ah n/m found i
<sparkiegeek> *it
<lazyPower> sparkiegeek: ty for updating the bug to fix released, i just noticed i missed that
<sparkiegeek> lazyPower: np
<AskUbuntu> How to Access HBase after JUJU | http://askubuntu.com/q/482036
#juju 2014-06-12
<jose> guys, is it possible to use Juju with Rackspace?
<davecheney> jose: yes
<jose> davecheney: where should I put the info? on the openstack part of the config
<jose> ?
<davecheney> depends, if rackspace looks close enough to openstack then you could try the openstack provider
<davecheney> or you could use the ssh provider and manually provision some machines for your environment
<jose> oh, I meant to use Rackspace automagically like it's done with AWS or HPCloud, for example
<thumper> jose: if rackspace offers the same api endpoints as the openstack provider wants, then yes
<thumper> jose: if it doesn't, then no
<thumper> for using the openstack provider
<jose> I'm not exactly sure, that's why I was asking :)
<jose> I'll check, though
<thumper> I'm not sure if any of the core devs have tried
<thumper> I certainly haven't
<lazyPower> jose: you need to use the manual provider. Rackspace doesn't have an object store.
<jose> ack, thanks
<thumper> lazyPower: ah.. if that is all, hopefully soonish... we will be able to
<lazyPower> thumper: thats the only thing off the top of my head - i dont know about their API endpoints
<rick_h_> thumper: lazyPower for the longest time they didn't have cloud-init enabled on the OS images and such.
<rick_h_> thumper: lazyPower that's changed in the last few months
<rick_h_> jose: but I use the manual provider and it works well, if a bit more manual than hoped
<jose> cool, thanks rick_h_
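For reference, the manual provider rick_h_ mentions is configured in ~/.juju/environments.yaml. A rough sketch of a juju 1.x entry (the environment name and bootstrap-host address are invented; bootstrap-user is optional):

```shell
# Write a sample manual-provider stanza; point bootstrap-host at a reachable
# Ubuntu server you have SSH access to.
cat > manual-env.yaml <<'EOF'
environments:
  rackspace-manual:
    type: manual
    bootstrap-host: 203.0.113.10
    bootstrap-user: ubuntu
EOF
# then: juju switch rackspace-manual && juju bootstrap
grep 'type: manual' manual-env.yaml
```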
<AskUbuntu> MAAS and Openstack Network | http://askubuntu.com/q/482090
<AskUbuntu> Add Nodes to MAAS for JUJU Bundle | http://askubuntu.com/q/482094
<AskUbuntu> Where to see the complete list of juju charms available? | http://askubuntu.com/q/482103
<sparkiegeek> is there a "preferred" way of including trademark/copyright information in a charm? specifically with regards the charm logo.
<rbasak> jamespage: ping. Can we talk about this Trusty SRU when you have some time please?
<jamespage> rbasak, sure - let me just eat lunch and I'll be back
<rbasak> OK
<pindonga> hi, while I thought I managed to fix this... I'm still having issues with the local provider: when I deploy some instances (e.g. wordpress/mysql) they are kept in the pending state for a long time, and then they fail due to the template taking too long, with the second machine failing because it couldn't clone a running template
<pindonga> I managed to work around this by destroying the environment and starting over (now that the template is finally complete)
<pindonga> but I thought it worth mentioning
<jamespage> rbasak, back
<rbasak> o/
<rbasak> jamespage: so I need to land this 1.18.4, which has ~20 bugs fixed. Trusty is on 1.18.0 at the moment.
<rbasak> This is something that I'd want an MRE for. Preparing SRU paperwork for 20 bugs is going to be a pain otherwise.
<rbasak> Is that what I need to do? I'm not sure all of them are individually SRU-worthy either.
<rbasak> Or, any other guidance on how to proceed?
<rbasak> Oh Trusty is on 1.18.1 I think, sorry.
<rbasak> Still, it's a fairly big bump in terms of commits.
<jamespage> rbasak, I agree that we need a MRE but this was pretty much nacked by pitti when I applied for it
<jamespage> rbasak, I'd email off that thread about what we need to do now and ask again
<jamespage> we can have a list of things that have been fixed
 * rbasak looks
<rbasak> Ah, I see it
<rbasak> https://lists.ubuntu.com/archives/technical-board/2014-April/001877.html
<rbasak> jamespage: OK, thanks. I'll prepare the tracker bug and reply to https://lists.ubuntu.com/archives/technical-board/2014-April/001924.html I think.
<lazyPower> sparkiegeek: wrt your copyright/trademark question - yes. You can place that info in the Copyright file
<lazyPower> there's a specific directive that marcoceppi showcased yesterday in his charm development track that shows how he denotes multiple levels of copyrighted material being included in a charm.
<sparkiegeek> lazyPower: ok, great. Thanks
<marcoceppi> arosales: yeah
<Egoist> hey
<Egoist> is there any way to pass data between instances but not by relation-set?
<marcoceppi> Egoist: no
<Egoist> marcoceppi: ok. But data is taken from config.yaml, is ther any way to change this file while charm is running?
<marcoceppi> Egoist: no, you can only change configuration values from the user
<marcoceppi> charms can't change their own or others
<ali1234> I tried to follow http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage and i got "error bootstrapping a local environment must not be done as root"
<lazyPower> ali1234: which version of juju are you using?
<ali1234> the version in 14.04
<lazyPower> ali1234: that looks like your Ubuntu series. can you show me the output of juju --version
<ali1234> yes, it is my ubuntu series. i am using the version which ships with ubuntu 14.04
<ali1234> 1.18.1-trusty-amd64
<lazyPower> do you have the juju-local package installed? it should have prompted you for sudo rights during the bootstrap
<ali1234> yes
<ali1234> i followed the instructions on askubuntu exactly
<ali1234> it did prompt for sudo password
<ali1234> http://paste.ubuntu.com/7634650/
<ali1234> when i try to run it without sudo like it tells me to, it says this: http://paste.ubuntu.com/7634653/
<sarnold> yikes
<sarnold> check dmesg output for anything 'strange'. that's mighty odd looking...
<ali1234> http://paste.ubuntu.com/7634668/
<sarnold> all looks normal there
<leftyfb> I'm trying to setup a juju local provider on my laptop here but can't seem to get it working. juju init works fine, I edit the config and change default to local, then juju bootstrap.....
<leftyfb> juju status gives me this:
<leftyfb> ERROR state/api: websocket.Dial wss://10.0.2.1:17070/: dial tcp 10.0.2.1:17070: connection refused
<leftyfb> mongodb is in fact running on 37017
<leftyfb> there's no iptables rules in play
<leftyfb> it doesn't seem like juju is creating the lxc container when bootstrapping
<leftyfb> also seeing this in debug:
<leftyfb> 2014-06-12 15:56:04 DEBUG juju.environs.simplestreams simplestreams.go:388 fetchData failed for "file:///home/leftyfb/.juju/local/storage/tools/streams/v1/index.sjson": stat /home/leftyfb/.juju/local/storage/tools/streams/v1/index.sjson: no such file or directory
<Egoist> marcoceppi: change configuration values from the user -> What does it mean?
<leftyfb> also seeing this in /var/log/machine-0.log
<leftyfb> 2014-06-12 16:54:39 DEBUG juju.agent agent.go:384 read agent config, format "1.18"
<leftyfb> 2014-06-12 16:54:39 ERROR juju.cmd supercommand.go:305 symlink /home/leftyfb/.juju/local/tools/machine-0/jujud /usr/local/bin/juju-run: no such file or directory
<lazyPower> Egoist: meaning that only users can change the values defined by config.yaml
<Egoist> lazyPower: Ok i get it. Thanks
<lazyPower> leftyfb: looks like you're missing the simplestreams stuff that bootstrap should be fetching for you. is there a cached cloud image in /var/cache/lxc for cloud-trusty / cloud-precise?
<lazyPower> ali1234: i'm sorry i dont know why its throwing a stack trace at you. thats a great candidate for a bug http://launchpad.net/juju-local
<leftyfb> lazyPower: /var/cache/lxc/trusty/rootfs-amd64 exists with what looks like a complete filesystem
<lazyPower> leftyfb: it should be creating cloud-trusty and cloud-precise iirc
<lazyPower> the cloud-* are the cloud images that juju uses to create your containers. It may be prudent to 'nuke from orbit' the in-progress lxc container builds and start over. There's a plugin at http://github.com/juju/juju-plugins that warehouses this utility and has installation instructions - its 'juju clean'
<lazyPower> I've got to get ready for my UOS track however, good luck. I'll stop in to check on you once my session is over
<ali1234> launchpad is kaput :(
<leftyfb> lazyPower: that script/plugin only does 3 steps(and not sufficiently) that i've been doing over and over again trying to debug this issue.
<sarnold> ali1234: sorry, lazyPower had the wrong url, use this one: https://bugs.launchpad.net/ubuntu/+source/juju-core/+filebug
<lazyPower> leftyfb: https://github.com/marcoceppi/plugins/commit/1c315f40fcc6e3102b4be5a4aa11544218159fd0
<ali1234> sarnold: i did not use the url, i used ubuntu-bug
<ali1234> launchpad timed out while i tried to submit the report
<sarnold> ali1234: eep.
<marcoceppi> lazyPower: can you review/merge that?
<ali1234> i will just do it again
<lazyPower> marcoceppi: i'd love to but i cant merge, i can only comment.
<marcoceppi> what, why not? Aren't you in the juju group?
<lazyPower> WAIT
<lazyPower> i was looking at your repo, whoops
<marcoceppi> w/e cowboy'd it
<lazyPower> neat
<ali1234> cool, now i crashed apport
<leftyfb> lazyPower: is this a typo somewhere? :
<leftyfb> 2014-06-12 17:55:49 DEBUG juju.environs.simplestreams simplestreams.go:362 cannot load index "file:///home/leftyfb/.juju/local/storage/tools/streams/v1/index.sjson": invalid URL "file:///home/leftyfb/.juju/local/storage/tools/streams/v1/index.sjson" not found
<leftyfb> should it be index.json , not index.sjson?
<marcoceppi> leftyfb: it checks for both
<leftyfb> so it does
<leftyfb> 2014-06-12 17:55:49 DEBUG juju.environs.simplestreams simplestreams.go:388 fetchData failed for "file:///home/leftyfb/.juju/local/storage/tools/streams/v1/index.json": stat /home/leftyfb/.juju/local/storage/tools/streams/v1/index.json: no such file or directory
<leftyfb> #leftyfb@blanchard[0]:~$ ls -l .juju/local/storage/tools/streams/v1/index.json
<leftyfb> -rw------- 1 leftyfb leftyfb 1152 Jun 12 13:55 .juju/local/storage/tools/streams/v1/index.json
<leftyfb> nm, looks like it writes it further down
<leftyfb> https://pastebin.canonical.com/111613/
<sebas5384> there's a way to select a node(machine) from maas, when deploying a service?
<sebas5384> i'm planning to have different constraints in each node, so I would like to choose which one is going to be used
<leftyfb> sebas5384: juju deploy mysql --to 3
<sebas5384> leftyfb: yeah, but thats when the machine is already mapped by juju
<leftyfb> you can also use constraints
<leftyfb> https://juju.ubuntu.com/docs/reference-constraints.html
<sebas5384> leftyfb: thanks!!
<sebas5384> that was exactly what i was looking for
<sebas5384> maas-name and maas-tags will be the solution
<sarnold> sebas5384: why do you want to select one specific node rather than just using e.g. cpu and memory constraints?
<sebas5384> sarnold: because my client is going to provide the server VMs
<sebas5384> the first battle was to implement juju and maybe maas
<sebas5384> sarnold: IaaS is still going to be another battle for me
<sebas5384> hehe
<sarnold> sebas5384: ahh, okay. :) I've just been happy to say "I want a big machine" or "I want a small machine" when deploying services in our cloud
<sebas5384> juju > maas > node > X vmware vm
<sarnold> sebas5384: not knowing or caring how it happens is part of what I like :)
<jcastro> marcoceppi, how long is `juju clean local` supposed to take?
<marcoceppi> jcastro: 2-3 mins
<jcastro> ack
<marcoceppi> at most
<leftyfb> jcastro: I've found if it's taking long, it's due to permissions on ~/juju
<sebas5384> yeah sarnold! thats going to happen some day
<leftyfb> er, ~/.juju
<jcastro> does it log anywhere?
<sebas5384> but first i have to prove some concepts, it's a big and political company
<leftyfb> jcastro: nope, hit CTRL+C and you'll see the errors
<jcastro> yep, correct!
<leftyfb> marcoceppi: the script should use sudo or chown before trying to delete ~/.juju
<sebas5384> juju clean ?
<sebas5384> thats a new command?
<marcoceppi> leftyfb: it uses sudo when it needs to
<jcastro> is sudo juju clean a good idea?
<marcoceppi> jcastro: no
<leftyfb> marcoceppi: when deleting ~/.juju, it needs to
<jcastro> I never got prompted to put in a sudo pw
<marcoceppi> leftyfb: bugs and pull requests welcome
<jcastro> but with the local provider .juju/local is owned by root
<marcoceppi> jcastro: it shouldn't be anymore
<jcastro> k, your example command is wrong too, I'll submit a PR for that tonight
<leftyfb> i'm having bad luck with juju all around
<leftyfb> my laptop won't bootstrap, and I just installed juju-local on the maas server in our lab here and it's stuck pending for juju-gui on machine 1 which says it's precise, even though the server is trusty.
<leftyfb> also, according to jcastro's session the other day, using juju quickstart should put juju-gui onto machine 0 along with the bootstrap. That's not happening on the maas server
<sebas5384> ooh!! https://github.com/juju/plugins/blob/master/juju-clean
<lazyPower> sebas5384: you got an honorable mention in my UOS track
<sebas5384> ooh yeah?? \o/
<lazyPower> leftyfb: have you specified a default-series in your ~/.juju/environments.yaml?
<lazyPower> sebas5384: yeah - wrt the vagrant plugin we'll be working on
<sebas5384> lazyPower: \o/
<sebas5384> great!
<lazyPower> I feel like my track was meh, but in retrospect i'm feeling pretty beat up this week. Looking forward to recharging over the weekend.
<lazyPower> get some Vitamin D, it wont be all rainy and blah outside
<leftyfb> lazyPower: I didn't do so on the maas server ... to be honest I only installed juju on there to confirm that something was wrong with the setup on my laptop(confirmed)
<sebas5384> lazyPower: i know that feeling ¬¬
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1329425
<_mup_> Bug #1329425: Fatal out of memory error when bootstrapping on Azure <juju-core:New> <https://launchpad.net/bugs/1329425>
<jcastro> anyone see this before?
<lazyPower> leftyfb: ok. That should correct the container series you see - but also during bootstrap its downloading a 200mb cloud image file - so it takes a couple of minutes to complete the first time. but i'm pretty sure you're aware of this since we checked /var/cache/lxc
<lazyPower> leftyfb: where are you in terms of debug/nuking/retrying?
<leftyfb> lazyPower: i've nuked several times ... give me a few to move out of the lab and go nuke again and start over
<lazyPower> ack. ping when ready
<lazyPower> sebas5384: so, now that its been faux-announced, we need to hammer out a time to do that sync we've been pending
<sebas5384> yeah lazyPower absolutely!
<leftyfb> lazyPower: my nuke involves much more hulk :)
<lazyPower> leftyfb: oh man, sounds serious
<sebas5384> lazyPower: what about 5pm EDT ?
<sebas5384> friday
<lazyPower> sebas5384: sure. I'm going to send a calendar invite.
<sebas5384> great!!
<sebas5384> lazyPower: when is your track?
<lazyPower> sebas5384: already wrapped about 10 minutes ago
<ali1234> bug 1329429
<_mup_> Bug #1329429: local bootstrap fails <amd64> <apport-bug> <trusty> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1329429>
<sebas5384> oohhh :(
<jcastro> is it possible to bootstrap multiple environments at once?
<sebas5384> lazyPower: invite acepted :)
<lazyPower> jcastro: trying now to find out for you
<lazyPower> jcastro: i have 2 running atm - one for maas another for amazon
<lazyPower> juju switch maas && juju bootstrap -- in another byobu terminal - juju switch amazon && juju bootstrap
<lazyPower> we'll see if things blow up, it'll take ~ 5 minutes to complete for maas
<jcastro> I might try a plugin, I want to do like "juju demo" and have it fire off a GUI in every environment in my yaml file or something
<lazyPower> jcastro: that would be stupid awesome. do it
<lazyPower> with some clever wrapping of quickstart you could even get it all fired up in your browser ready to rock n roll
<jcastro> yeah
<leftyfb> lazyPower: this is pretty much my hulking of juju: https://pastebin.canonical.com/111617/
<jcastro> sinzui, have you ever seen anything like this perchance? https://bugs.launchpad.net/juju-core/+bug/1329425
<_mup_> Bug #1329425: Fatal out of memory error when bootstrapping on Azure <juju-core:New> <https://launchpad.net/bugs/1329425>
<lazyPower> leftyfb: even going so far as to remove the packages. pretty thorough.
<sinzui> jcastro, no, Neither CI or I have ever experienced that
<sinzui> but that bug is old
<leftyfb> lazyPower: obviously i'm missing something because it's not working from a "fresh" install
<jcastro> sinzui, old? I just filed it
<sinzui> You should have searched
<sinzui> it is the first bug in the azure-provider listing, bug 1250007
<_mup_> Bug #1250007: Bootstrapping azure causes memory to fill <amd64> <apport-bug> <azure-provider> <saucy> <juju-core:Triaged> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1250007>
<sinzui> jcastro, Did you use the cer instead of the pem?
<sinzui> This confusion also affects users of the python-sdk. MS is not very clear about what they expect
<jcastro> oh, I was using the .cer, fixing and trying with the pem
<jcastro> sinzui, that was it, I'll update the bug, it's bootstrapping fine, thanks!
<jcastro> sinzui, I will update the docs as frankban recommends in the master bug
<sinzui> jcastro, I learned that today, BTW. I just wrote a script to clean up the 10 terabytes of CI tests data juju left behind. I read a lot of support pages to learn that secret
<sebas5384> hey lazyPower i watched your session! thanks for the mention! hehe
 * lazyPower hides 
<sebas5384> was a great session lazyPower o/
<sebas5384> sorry that i missed hehe
<lazyPower> I did however, do something productive - i fixed a bug in the docs - https://github.com/juju/docs/pull/120
<sebas5384> good one! that one has been wrong since it was added
<sebas5384> xD
<jcastro> sinzui, mirrors on azure sick? http://pastebin.ubuntu.com/7635108/
<lazyPower> jcastro: oh! also - all is well with running bootstraps in paralell
<d_`> is there a way to install juju server on a server of my choosing instead of a random server of juju's choosing?
<jose> d_`: is this OpenStack, AWS, HPCloud, what?
<sebas5384> juju upgrade-charm of a local charm is giving me an error, saying that it can't find the charm
<sebas5384> what can be?
<lazyPower> sebas5384: did you specify --repository path on the juju upgrade-charm command?
<sebas5384> hmmm
<sebas5384> lazyPower: no ¬¬
<lazyPower> eg: juju upgrade-charm --repository=$HOME/charms myawesomecharm
<lazyPower> sebas5384: i have a set of shell aliases when working with local repositories, because thats cumbersome to type
<sebas5384> ahhh ok, i thought that was automatic
<lazyPower> https://github.com/chuckbutler/dotfiles/blob/master/bashaliases  <- sebas5384
<sebas5384> giving it a look, lazyPower
<lazyPower> oh hey look at that relic jrecycle in there...
 * lazyPower updates
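A minimal version of the shortcut lazyPower's dotfiles provide: a wrapper that implies the --repository flag. The ~/charms path is an assumption, and it's written as a shell function rather than an alias so it also works from scripts:

```shell
# Assumed layout: local charms live under ~/charms/<series>/<charm>.
# Usage: juju_up myawesomecharm
juju_up() {
    juju upgrade-charm --repository="$HOME/charms" "$@"
}

# Defining the function costs nothing; juju is only invoked when it's called.
type juju_up >/dev/null && echo "helper defined"
```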
<jcastro> lazyPower, mine was a config issue, filed some bugs, all set now
<jcastro> other than getting an error with the mirrors
<jcastro> going to wait a few minutes and try again
<lazyPower> ahhhh hokay
<lazyPower> boooo on mirrors being wonky
<d_`> jose: I'm using MAAS and it's just choosing a random physical server to install on
<d_`> we have vmware here as well
<sebas5384> lazyPower: nice, something like the juju clean plugin
<jcastro> he wants to specify where to deploy the bootstrap node it sounds like
<sebas5384> i'm using Makefile for that hehe
<sebas5384> could it be nice to have something like declaring repositories
<sebas5384> or adding
<sebas5384> for example in the environments.yaml file
<sebas5384> repositories:
<lazyPower> sebas5384: file a feature request
<sebas5384> lazyPower: will do that :)
<leftyfb> lazyPower: https://pastebin.canonical.com/111624/
<sebas5384> Declaring repositories in the environments.yaml -> https://github.com/juju/juju/issues/87
<lazyPower> leftyfb: thats really strange
<leftyfb> lazyPower: you mean it's not broken by design? :)
<lazyPower> leftyfb: i'm not sure what to mention here, as that issue is consistent with what the juju-clean script has thus far corrected.
<lazyPower> but its progress now that its not complaining about simple-streams data
<leftyfb> 2014-06-12 19:41:57 DEBUG juju.environs.simplestreams simplestreams.go:490 fetchData failed for "file:///home/leftyfb/.juju/local/storage/tools/streams/v1/mirrors.json": stat /home/leftyfb/.juju/local/storage/tools/streams/v1/mirrors.json: no such file or directory
<leftyfb> 2014-06-12 19:41:57 DEBUG juju.environs.simplestreams simplestreams.go:567 no mirror index file found
<leftyfb> 2014-06-12 19:41:57 DEBUG juju.environs.simplestreams simplestreams.go:548 no mirror information available for { }: mirror data for "com.ubuntu.juju:released:tools" not found
<lazyPower> i stand corrected
<lazyPower> i really dont know :(
<lazyPower> leftyfb: have you pinged in #juju-dev to see if they know of any current issues similar to what you're seeing?
<leftyfb> nope
<lazyPower> That would be my next step, is to open a bug against this and ping with that in #juju-dev
<mattrae_> i'm in an environment with limited internet access. when i deploy --to lxc:1, i see there is is a wget process trying to access cloud-images.ubuntu.com.. is there a right way to get around this problem in a closed environment?
<mattrae_> here's the wget i see running http://pastebin.com/72q2BWsC
<mattrae_> can i specify an internal location for the images perhaps?
<marcoceppi> hazmat: have you tried the docean plugin on Mac OSX?
<leftyfb> lazyPower: fixed it
<thumper> do we have a collective noun for "juju environments" ?
 * thumper was thinking "a collective of environments"
<thumper> but perhaps that is too borg
<marcoceppi> thumper: environments?
<thumper> marcoceppi: that isn't a collective noun
<thumper> think "a murder of crows"
<thumper> "a horde of hamsters"
<marcoceppi> a bushel of environments
<thumper> heh
<marcoceppi> a gaggle of environments
 * thumper briefly thought "group"
<thumper> but group has other meanings
<thumper> a plethora of environments
<thumper> a plague of environments...
<marcoceppi> thumper: cull?
<thumper> "a host of environments"?
<thumper> hmm...
<thumper> I'm trying to avoid overloaded terms...
<thumper> so trying not to have a collective noun that relates to other well known terms
<marcoceppi> you could call it a charm of environments
<thumper> so I guess avoiding: host, group, insert-other-techy-term
<thumper> that overloads the term "charm"
<thumper> which I'm trying to avoid
<marcoceppi> I know <3
<thumper> :)
<lazyPower> leftyfb: nice! what was the fix?
<thumper> what about "an illusion of environments"
<thumper> apparently it is "an illusion of magicians"
<leftyfb> https://bugs.launchpad.net/juju-core/+bug/1329480
<_mup_> Bug #1329480: Cannot create juju-run symlink if /usr/local/bin doesn't exist <juju-core:New> <https://launchpad.net/bugs/1329480>
<marcoceppi> thumper: how about "a lot of environments"
<marcoceppi> or, better yet, alot
<marcoceppi> just make up our own
<lazyPower> leftyfb: interesting... i would have assumed /usr/local/bin would just exist.
<leftyfb> devs should never assume :)
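Bug #1329480 reproduces easily in a scratch directory: symlink creation fails when the target's parent directory is missing, and an mkdir -p beforehand (the step juju-core needed for /usr/local/bin) fixes it:

```shell
# Reproduce the failure mode in a throwaway directory.
scratch=$(mktemp -d)

# Fails: the parent directory doesn't exist yet.
ln -s /bin/true "$scratch/usr/local/bin/juju-run" 2>/dev/null \
  && echo "unexpected success" \
  || echo "symlink failed: parent directory missing"

# The fix: create the directory first, then symlink.
mkdir -p "$scratch/usr/local/bin"
ln -s /bin/true "$scratch/usr/local/bin/juju-run"
echo "symlink created"
```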
<thumper> marcoceppi: how about "an ensemble of environments"
<lazyPower> A Quafffel of Environments
<thumper> leftyfb: I assumed /usr/local/bin existed
 * thumper thinks ensemble is cute
<lazyPower> thumper: i see whatchya did there
<thumper> and a nod to pre-juju
<thumper> done
<thumper> ensemble of environments it is
<thumper> add it to the docs!
<thumper> it's official
<leftyfb> is it possible using juju-gui to install to a specific container?
<cory_fu> Is there a way to ask juju for the public IP address instead of the public-address name?
<lazyPower> cory_fu: which provider?
<cory_fu> I mean a general way for a charm to get the public IP as opposed to the public address
<lazyPower> cory_fu: depending on the substrate, some respond with just the ip. others i've had to run through dig +short to get the proper IP
<cory_fu> Ok, that's what the charm is doing currently; I was just hoping there was a better way
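A sketch of the hostname-to-IP step being described. In a real hook the name would come from `unit-get public-address`; here `getent` stands in for `dig +short` so the example needs no network access, and `resolve_ip` is a made-up helper name:

```shell
# Resolve a hostname (e.g. the unit's public-address) to an IP address.
# In a charm hook: addr=$(unit-get public-address); ip=$(resolve_ip "$addr")
resolve_ip() {
    getent hosts "$1" | awk '{ print $1; exit }'
}

ip=$(resolve_ip localhost)
echo "$ip"
```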
<lazyPower> there's an open bug on this somewhere - i'm pretty sure matt williams opened one during our sync on the DNS charm
<leftyfb> it seems when you deploy services and then destroy them, juju-gui will leave those containers running and spin up new containers when deploying new services
<lazyPower> leftyfb: thats correct. by design juju will not destroy the machine after the service has been destroyed.
<leftyfb> lazyPower: and no way to re-utilize that machine for future deployments using juju-gui?
<lazyPower> leftyfb: not using the gui, no
<leftyfb> bug worthy?
<lazyPower> leftyfb: there's an effort to bring machine-view to the juju-gui, which will alleviate some of this in the future, along with easing deployment of KVM/LXC units to a server for density
<leftyfb> cool
<lazyPower> cory_fu: i'm unable to locate the bug if one was opened. so it may not have been raised.
<cory_fu> No biggie.  I don't think it actually mattered if I had an IP in this current case, actually, and, while it does in the other case and it would be nice to clean it up, it works currently, so...
<lazyPower> cory_fu: wait i found it - https://bugs.launchpad.net/juju-core/+bug/1308374
<_mup_> Bug #1308374: unit-get public-address on EC2 returns split horizon DNS <addressability> <networking> <strategy> <juju-core:Triaged> <https://launchpad.net/bugs/1308374>
<leftyfb> lazyPower: how about specifying distro version (precise/trusty) for deployments? Everything seems to be precise even though I have default-series: trusty in my ~/.juju/environments.yaml
<lazyPower> leftyfb: our trusty charmstore isn't very big
<lazyPower> http://manage.jujucharms.com/charms/trusty
<lazyPower> leftyfb: we have a rather large queue to get our existing precise charms tested, updated to meet current charm store best practices, and unit/integration tested for inclusion into trusty - if you'd like to volunteer to help we could certainly use a helping hand - current work items are being managed in this spreadsheet - https://docs.google.com/a/canonical.com/spreadsheet/ccc?key=0Aia4W3c4fbL-dGs4SVBJMGRIdnlSMWhzSmo3WE1mZ1E&usp=drive_web#gid=0
<leftyfb> lazyPower: once I nail down juju, I might just offer some assistance in that effort
<leftyfb> I can't even remove a service at this point
<lazyPower> leftyfb: juju destroy-service <servicename>
<lazyPower> juju destroy-machine #
<leftyfb> lazyPower: yeah, it'd be nice if that worked
<lazyPower> is the charm in an error state?
<leftyfb>   mysql:
<leftyfb>     charm: cs:precise/mysql-45
<leftyfb>     exposed: false
<leftyfb>     life: dying
<leftyfb>     units:
<leftyfb>       mysql/0:
<leftyfb>         agent-state: error
<leftyfb>         agent-state-info: 'hook failed: "start"'
<leftyfb>         agent-version: 1.18.4.1
<leftyfb>         life: dying
<leftyfb>         machine: "4"
<leftyfb>         public-address: 10.0.2.31
<leftyfb> it failed to start
<leftyfb> clicking on "resolve" or "retry" did nothing
<lazyPower> you'll need to resolve the hook error before it will allow any of the other events to run. it's going to hang in that error state until you resolve it
<leftyfb> with no output on the juju-gui as to what the issue was
<leftyfb> how do I find out what's wrong?
<lazyPower> you can inspect juju debug-log -n 500 to find the issue if the logs caught it - otherwise you'll want to investigate in running juju debug-hooks to run the hook interactively to see why it failed
<lazyPower> leftyfb: if debug-log doesn't tell you anything - https://juju.ubuntu.com/docs/authors-hook-debug.html
<lazyPower> leftyfb: also - note - since its in state dying, even if you resolve it successfully - your service is queued for destruction, and will promptly be destroyed once the issue has been resolved.
<leftyfb> lazyPower: you know a customer is jumping through hoops at this point right? All I did was drag mysql to my canvas and clicked deploy. Now the customer is looking through log files and searching for what a "hook-name" means
<lazyPower> leftyfb: unfortunately - I dont have a better answer at this time.
<leftyfb> what is the unit name?
<leftyfb> I tried both mysql and cs:precise/mysql-45
<lazyPower> mysql/0 - which is shown directly under units
<lazyPower> in juju-status
<lazyPower> for each unit in a deployment, units increment by 1, so your second mysql unit, if the service is scaled, would be mysql/1
<lazyPower> leftyfb: if you're deploying MySQL on the local provider - there is a known bug with the charm that will prevent it from deploying successfully
<leftyfb> :/
<leftyfb> it's not on the bootstrap machine if that's what you mean by local provider
<leftyfb> if you mean a separate lxc container, then that's how it's setup at the moment .... what's the resolution?
<lazyPower> leftyfb: LXC is a best attempt at emulating clouds on your local machine - so there are some subtle differences between it and a true cloud provider that sometimes cause unintended side effects with the charm. I'm looking for the solution now, its been a while since i've used MySQL on the local provider.
<leftyfb> i'm seeing possible memory exhaustion, that ring a bell?
<lazyPower>  juju set mysql dataset-size='512M'
<lazyPower> juju resolved -r mysql/0
<leftyfb> that did it
<leftyfb> i'll mess with this more tomorrow/later
<leftyfb> thanks for the help
<lazyPower> No problem. Sorry your first day wasn't as magical as we aim for it to be. We're here to help though leftyfb
<leftyfb> lazyPower: it's not as bad as you think, it's not my first day with juju :)
<leftyfb> i've messed with it working with maas, but was told local provider was the better way to play with it
<lazyPower> leftyfb: i use both fairly regularly. I have consistently better luck with MAAS than i do the local provider - each has their own suite of perks and negatives.
<leftyfb> 1 tip I have is for juju quickstart to be suggested more heavily
<lazyPower> mbruzek: https://code.launchpad.net/~lazypower/charms/precise/mysql/local_provider_bug_note/+merge/223003
<mbruzek> just a second lazyPower I am in meeting
<lazyPower> ack, thanks for taking a look
<mbruzek> lazyPower, +1 LGTM, I merged it
#juju 2014-06-13
<vila> jam, mgz: Hi there ! We are severely affected by bug #1200267 to the point where people feel discouraged to write integration tests. The bug description perfectly capture our use case, yet, the importance is 'Low'. What's your pov on that ?
<_mup_> Bug #1200267: Expose when stable state is reached <canonical-is> <canonical-webops> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
<vila> jam, mgz: Don't get me wrong, I realize you may have more urgent issues but I'd like to understand what can be expected from juju or amulet in the short term
<jam1> vila: https://docs.google.com/a/canonical.com/document/d/1XZN2Wnqlag9je73mGqk-Qs9tx1gvOH0-wxMwrlDrQU4/edit#heading=h.mqzqdofq7hgv is a spec that we worked on for having charms be able to report when they've finish what they're working on to report "ready".
<jam1> also some discussion on it here: https://docs.google.com/a/canonical.com/document/d/1XZN2Wnqlag9je73mGqk-Qs9tx1gvOH0-wxMwrlDrQU4/edit#heading=h.9euay17obra7
<jam1> I believe it falls under "Observability" which is tasked to Tanzanite (wallyworld's team)
<jam1> but later in this cycle
<jam1> say 3 months out or so
 * wallyworld_ takes note
<vila> wallyworld_: you're on my radar now ;-D
<wallyworld_> \o/
<vila> jam1: thanks ! /me reads
<jam1> wallyworld_: is it accurate to say that "charm status" is on your work items?
<jam1> I was trying to sort through all the stuff that we discussed and if it actually got scheduled
<wallyworld_> it's all a bit open ended, but it can be if it's considered important
<jam1> wallyworld_: I certainly know people who want it, the main question is relative importance
<jam1> "observability" got scheduled to you, but I think it covers a lot of items
<wallyworld_> yep. i've reached out to stakeholders (dan, antonio, kapil etc) to ask for input
<jam1> wallyworld_: there is also https://bugs.launchpad.net/juju-core/+bug/1254766 sounds dupe-ish
<_mup_> Bug #1254766: unit "started" status is unhelpful <cloud-installer> <hooks> <landscape> <relations> <status> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1254766>
<vila> jam1: so that would be the 'ready/unready' hooks in 'State, status, charm reporting' ?... Hmm, right, may be more even in that section
<jam1> there was also a separate discussion about an "idle" indicator, separate from charm health
<wallyworld_> yes, we did discuss that
<wallyworld_> our todo lists is not yet fully formed
<vila> jam1: anyway, you've perfectly answered, thanks
<vila> wallyworld_: ETA please !
<vila> :-D
<vila> wallyworld_: kidding ;)
<wallyworld_> lol
<gnuoy> I'm trying to debug an issue with juju not bootstrapping on openstack. Running in debug mode I see it retrieving the json with the image list but then "index has no matching records". What I don't see in the debug is a statement of what key is being used to do the lookup. I'm asking for a trusty amd64 image, I see trusty amd64 images listed so I'm not sure where to go from here
<wallyworld_> vila: we're finishing up on the github migration and some other feature work eg availability zones. next we'll be looking at what customer facing issues / bugs we should target
<wallyworld_> gnuoy: what version of juju?
<gnuoy> wallyworld_, 1.18.4-trusty-amd64
<wallyworld_> private cloud? or do you have access to http://streams.canonical.com/?
<gnuoy> wallyworld_, private cloud and I  believe the list in our cloud is updated daily so something could have gone wrong in the simplestream update process. But from what I can see it all looks good
<vila> wallyworld_: I'm subscribed to bug #1254766 and bug #1200267 and officially nominate them to be my 3 wishes ;-D
<_mup_> Bug #1254766: unit "started" status is unhelpful <cloud-installer> <hooks> <landscape> <relations> <status> <usability> <juju-core:Triaged> <https://launchpad.net/bugs/1254766>
<_mup_> Bug #1200267: Expose when stable state is reached <canonical-is> <canonical-webops> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1200267>
<wallyworld_> sure, np, thanks for the input
<gnuoy> wallyworld_, http://paste.ubuntu.com/7638201/
<wallyworld_> gnuoy: what does "juju metadata validate-images" say?
<gnuoy> wallyworld_, http://paste.ubuntu.com/7638212/
<gnuoy> line 54 of http://paste.ubuntu.com/7638201/ shows a "trusty 14.04 amd64" entry
<wallyworld_> gnuoy: if you look inside your index.json file, does it have data for region "serverstack" and endpoint " http://10.98.191.25:5000/v2.0" ?
<gnuoy> wallyworld_, yes, http://paste.ubuntu.com/7638231/
<wallyworld_> can you paste the whole file?
<gnuoy> wallyworld_, sure
<gnuoy> wallyworld_, http://paste.ubuntu.com/7638235/
<wallyworld_> gnuoy: and you have daily streams configured?
<gnuoy> wallyworld_, sorry, I don't understand. The data is updated daily, is that what you're asking ?
<jam1> gnuoy: there are often 2 different streams
<jam1> "released"
<jam1> and "daily"
<wallyworld_> gnuoy: simplestreams uses a stream type eg daily or released to distinguish images
<wallyworld_> com.ubuntu.cloud.daily:server:14.04:amd64
<jam1> you have to configure "image-stream: daily" in order to get daily streams
<wallyworld_> is the key in the index file
<wallyworld_> what jam1 said
<wallyworld_> your index file contains daily images
<wallyworld_> so juju needs to be told that
<wallyworld_> or else it looks for released images by default
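The stream naming jam1 and wallyworld_ describe can be sketched as follows (a hypothetical helper, not juju or simplestreams code; the key shapes are taken from the conversation): the stream name is baked into the product id, so a mirror that only carries `com.ubuntu.cloud.daily:*` keys matches nothing for a client that defaults to released images.

```python
def product_key(series_version, arch, stream="released"):
    """Build a simplestreams product id like the ones in index.json.

    The default "released" stream omits the stream name from the key;
    any other stream (e.g. "daily") is embedded in it - which is why a
    daily-only mirror matches nothing unless the client sets
    image-stream: daily in environments.yaml.
    """
    if stream == "released":
        base = "com.ubuntu.cloud"
    else:
        base = "com.ubuntu.cloud.%s" % stream
    return "%s:server:%s:%s" % (base, series_version, arch)

print(product_key("14.04", "amd64"))                  # released key
print(product_key("14.04", "amd64", stream="daily"))  # daily key
```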
<gnuoy> ah, so in environments.yaml  ?
<gnuoy> Yes, I have that set
<wallyworld_> yup
<wallyworld_> hmmm
<wallyworld_> gnuoy: looks like it can only find 1.18.3 tools
<wallyworld_> 2014-06-13 10:05:40 DEBUG juju.environs.simplestreams simplestreams.go:883 metadata: &{map[com.ubuntu.juju:12.04:i386:{ 1.15.0 i386   map[20140603:0xc2100f1d20]} com.ubuntu.juju:13.04:amd64:{ 1.14.1 amd64   map[20140603:0xc210106120]} com.ubuntu.juju:13.10:amd64:{ 1.14.1 amd64   map[20140603:0xc210106360]} com.ubuntu.juju:14.04:amd64:{ 1.16.2 amd64   map[20140603:0xc2101065a0]} com.ubuntu.juju:14.10:amd64:{ 1.18.2 amd64
<wallyworld_> map[20140603:0xc210106a80]} com.ubuntu.juju:14.10:arm64:{ 1.19.2 arm64   map[20140603:0xc210106ba0]} com.ubuntu.juju:14.10:i386:{ 1.18.2 i386   map[20140603:0xc210106cc0]} com.ubuntu.juju:14.10:ppc64:{ 1.19.2 ppc64   map[20140603:0xc210106de0]} com.ubuntu.juju:12.04:amd64:{ 1.14.1 amd64   map[20140603:0xc2100f1c00]} com.ubuntu.juju:13.10:i386:{ 1.15.0 i386   map[20140603:0xc210106480]} com.ubuntu.juju:14.04:i386:{ 1.16.2 i386
<wallyworld_> map[20140603:0xc210106840]} com.ubuntu.juju:12.10:amd64:{ 1.14.1 amd64   map[20140603:0xc2100f1ea0]} com.ubuntu.juju:12.10:i386:{ 1.15.0 i386   map[20140603:0xc210106000]} com.ubuntu.juju:13.04:i386:{ 1.15.0 i386   map[20140603:0xc210106240]} com.ubuntu.juju:14.04:arm64:{ 1.17.2 arm64   map[20140603:0xc210106720]} com.ubuntu.juju:14.04:ppc64:{ 1.19.2 ppc64   map[20140603:0xc210106960]}] map[] Tue, 03 Jun 2014 08:02:36 +0000 products:1.0
<wallyworld_> com.ubuntu.juju:released:tools}
<wallyworld_> 2014-06-13 10:05:40 INFO juju.environs.bootstrap bootstrap.go:58 picked newest version: 1.18.3
<gnuoy> wallyworld_, and that would affect its ability to find the right disk image ?
<wallyworld_> yes, if you are bootstrapping with 1.18.4
<wallyworld_> i think it needs an exact match
<wallyworld_> 1.18.4 client i mean
<gnuoy> wallyworld_, I took " picked newest version: 1.18.3" to mean, "shrug, I'd have liked 1.18.4 but 1.18.3 will do"
<gnuoy> wallyworld_, ok, let me have a go at fixing that and I'll report back on my result
<wallyworld_> i think it is saying that it looked in the metadata and found all these versions and it picked the latest
<gnuoy> wallyworld_, I've added the latest tools and I can see bootstrap picking them up. The image lookup is still failing though, http://paste.ubuntu.com/7638333/
<wallyworld_> gnuoy: can you paste the products file at streams/v1/com.canonical.serverstack.serverstack:ubuntu:daily.json
<gnuoy> wallyworld_, that was the one I pasted before, http://paste.ubuntu.com/7638235/
<wallyworld_> gnuoy: oh sorry, the other one then, the index file?
<gnuoy> sure, np
<gnuoy> wallyworld_, http://paste.ubuntu.com/7638362/
<gnuoy> wallyworld_, I need to nip out for about an hour
<wallyworld_> gnuoy: ok, i'm looking at the data and see no reason for it not to match
<wallyworld_> :-(
<gnuoy> wallyworld_, it'd be really useful if juju said what key it was looking for, or maybe the logic is more complicated than that.
<wallyworld_> gnuoy: i can't recall ottomh exactly what it logs, but yeah
<wallyworld_> i'll have to use your metadata to set up something locally and try and see why it's failing
<wallyworld_> but it's quite late here and i'm falling asleep sadly so i can only do it tomorrow or something
<gnuoy> wallyworld_, np, I shall keep digging.
<wallyworld_> ok, email me if you want and i'll see what i can do over the weekend
<gnuoy> wallyworld_, I can pick it up with you on monday
<wallyworld_> ok
<hazmat> marcoceppi, i haven't tried on osx.. i've resurrected my osx machine though, i can give it a whirl
<hazmat> marcoceppi, part of the issue on osx is that the brew setup is out of date, 1.16 style.. and that's not supported
<hazmat> marcoceppi, i've got an extant bug that we should be distributing/compiling osx binaries
<nooky> Hello, one question: is there any chance that a hook can do a callback to the machine where juju set was executed, for example?
<james_w> hi lazyPower, does your name in the channel title mean you are on the hook for charm reviews?
<james_w> if so I'd appreciate a look at https://code.launchpad.net/~james-w/charms/precise/haproxy/metrics/+merge/221408
<lazyPower> james_w: ack, i'll take a look shortly. i'm in a meeting
<james_w> thanks
<lazyPower> james_w: looking now
<gnuoy> wallyworld_, I added some debug to my client http://paste.ubuntu.com/7639281/ which showed that the items were lacking the endpoint and region fields and hence the match was failing. I've spoken to smoser and he was doing some work on the format of the simple streams data published. It looks like this has exposed a bug in juju in that higher level config options should apply to lower level ones but juju when looking for an image match is not using the higher level options
<marcoceppi> hazmat: brew has 1.18.3
<hazmat> marcoceppi, cool good to know it got updated.
<smoser> hey
<hazmat> marcoceppi, doesn't really change my perspective we should be distributing osx binaries the same way we do for windows
<smoser> can someone read juju core code for me (as i'm too lazy)
<marcoceppi> hazmat: fwiw, juju-docean didn't quite work on osx
<hazmat> smoser, what do you need?
<smoser> and tell me if
<smoser>  https://bugs.launchpad.net/simplestreams/+bug/1329805
<_mup_> Bug #1329805: juju search for image does not find item if endpoint and region are inherited <juju-core:Confirmed> <simplestreams:Fix Released> <https://launchpad.net/bugs/1329805>
<smoser> if juju will look "up" at all for endpoint/region, or if the tags explicitly have to be on the item level.
<hazmat> marcoceppi, hmm.. i'll take a look, there's some extant patches for client that might resolve
<smoser> i know it will not look up to sibling-of-products.
<marcoceppi> hazmat: cool, thanks
<lazyPower> james_w: excellent tests! sorry its taken me a moment i got sidetracked, back on the review now
<lazyPower> james_w: merged. Thanks for the contribution!
<james_w> thanks lazyPower
<nuclearbob> is it possible to run juju in openstack from an instance in that same openstack region?
<marcoceppi> nuclearbob: I'm not sure I understand your question
<marcoceppi> but I think the answer is yet
<marcoceppi> yes*
<nuclearbob> marcoceppi, I've got an instance set up in canonistack lcy 02, and I want to run juju in that instance, but whenever I run canonistack-sshuttle, I lose my ability to connect to the instance.  I'm trying to set it up now without using sshuttle, but now I'm getting bootstrap failures, and I'm not sure how to reset things
<marcoceppi> nuclearbob: are you running canonistack-sshuttle from the instance?
<nuclearbob> marcoceppi, yes
<marcoceppi> nuclearbob: don't; you're already in canonistack, you don't need to run sshuttle
<marcoceppi> it should just work without any shuttling
<nuclearbob> marcoceppi, okay, that makes sense
<nuclearbob> marcoceppi, now when I try to bootstrap, I get this: http://pastebin.ubuntu.com/7640009/
<marcoceppi> nuclearbob: run with --debug see if it gives you more information
<nuclearbob> marcoceppi, it does: http://pastebin.ubuntu.com/7640012/
<marcoceppi> nuclearbob: run juju destroy-environment --force
<marcoceppi> then try to bootstrap again
<mattrae_> hi, i have units with failed hooks.. resolved --retry tells me "cannot set resolved mode for unit".. "already resolved" http://pastebin.com/msfTZSA4
<mattrae_> any way to get past that?
<nuclearbob> marcoceppi, http://pastebin.ubuntu.com/7640024/
<marcoceppi> mattrae_: how long has it been since you first ran resolved? and did it first work or has it been stuck always? are you in debug hooks?
<mattrae_> marcoceppi: i tried to do debug-hooks to debug this after trying --retry.. now i'm thinking maybe the hook is still running.. just very slowly
<mattrae_> marcoceppi: i didn't see any output for awhile from the unit log.. but looks like it moved forward
<mattrae_> yeah the hook finally finished running
<jose> marcoceppi: will you need me to host that call?
<lazyPower> jose: for TSII?
<jose> mhm
<jose> lazyPower: ^
<lazyPower> jose: Wouldn't hurt. Castro is out and marco's in a meeting.
<jose> ok
<lazyPower> he's going to be up to the wire
<jose> I'll go and have lunch, then, bbiab
<marcoceppi> jose: ?
<marcoceppi> I can run it, if I find the password in time
<jose> marcoceppi: troubleshoothing one
<jose> I already set it up
<jose> s/one/two/
<marcoceppi> yay
<lazyPower> Troubleshooting II is starting everyone!
<lazyPower> If you have any questions, feel free to target me and i'll make sure we get them covered
<ali1234> what is Troubleshooting II?
<mbruzek> The second edition of Juju Troubleshooting
<lazyPower> ali1234: http://www.ubuntuonair.com
<mbruzek> http://ubuntuonair.com/
<lazyPower> wait, i dont see it. i see the plenary on UOA
<lazyPower> @jose did we get UOA updated?
<ali1234> yeah me too
<lazyPower> ali1234: refresh :)
<ali1234> so..... any idea why i can't do a local bootstrap?
<mbruzek> ali1234, We need the error, put the error on pastebin and we could help
<ali1234> mbruzek: https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1329429
<_mup_> Bug #1329429: local bootstrap fails <amd64> <apport-bug> <trusty> <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1329429>
<lazyPower> ali1234: i've pinged in #juju-dev with your issue pending response from someone taking a look
<lazyPower> they may have additional questions - so you might want to preemptively join #juju-dev to field those as they come in
<lazyPower> with it being a null pointer dereference, its obv. a bug in the code somewhere and requires more intervention than I can provide as a non-developer.
<jose> any questions?
<mattrae_> if i'm doing debug-hooks and I want to pause execution leaving a hook in an error state, by using 'exit 1'.. if i see juju move on to execute the next hook, is this normal? i thought exit 1 should pause execution of hooks so I can exit from tmux and run juju resolved --retry to restart execution
<lazyPower> mattrae_: not sure if you're on the troubleshooting hangout following along
<lazyPower> however, its a bug and known behavior - debug-hooks doesn't seem to respect the exit code you provide the shell.
<mattrae_> lazyPower: nope i'm not on the hangout. is there info about this troubleshooting hangout?
<lazyPower> mattrae_: ubuntuonair.com
<mattrae_> lazyPower: oh cool, thanks.. so its a bug
<nuclearbob> lazyPower, or whoever, I've got a different problem with juju local than I did last time.  it seems to just hang when it gets to "Bootstraping Juju machine agent"  debug output is here: http://pastebin.ubuntu.com/7640395/
<mattrae_> nuclearbob: not sure if it will help.. but maybe try adding --upload-tools?
<nuclearbob> mattrae_: I'll give it a try
<mattrae_> nuclearbob: also if it was working previously.. maybe something is corrupt in .juju.. after destroy-environment --force it should be pretty empty
<nuclearbob> mattrae_, okay, I'll give it a chance to run again and then try moving that out if I need to
<nuclearbob> mattrae_, if I move .juju somewhere else, do juju init, and then restore my .juju/environments.yaml, is that a good way to restart things?
<mattrae_> nuclearbob: yeah that should give you a fresh start.. i've seen sometimes where containers get left around as well.. i'm not sure if you're running into that in this case
<nuclearbob> mattrae_: what container names should I look for to delete?  ones starting with my username?
<mattrae_> nuclearbob: they'll be something like 'juju-machine-1-lxc-0' when you run sudo lxc-ls
<nuclearbob> mattrae_: okay, I don't see any containers with juju in the name
<mattrae_> nuclearbob: cool, just something to watch out for if one ever gets left around after destroy-environment
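The cleanup check mattrae_ suggests can be sketched as a small filter over `sudo lxc-ls` output (hypothetical helper; the container name pattern is taken from the conversation and may not cover every name the local provider generates):

```python
import re

# Containers the local provider creates look like 'juju-machine-1-lxc-0';
# this matches only that pattern, so unrelated containers are left alone.
JUJU_CONTAINER = re.compile(r"^juju-machine-\d+-lxc-\d+$")

def leftover_juju_containers(lxc_ls_output):
    """Pick juju-created container names out of `sudo lxc-ls` output."""
    return [name for name in lxc_ls_output.split()
            if JUJU_CONTAINER.match(name)]

sample = "ubuntu-base juju-machine-1-lxc-0 juju-machine-2-lxc-1 test"
print(leftover_juju_containers(sample))
```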
<lazyPower> Thanks everyone for watching. Juju Troubleshooting II charmschool is officially closed
<lazyPower> make sure you join us next week for working with / troubleshooting the local provider
<nuclearbob> lazyPower, when is the session on troubleshooting the local provider?
<lazyPower> let me check the calendar, i'm pretty sure its same time next friday but it may be in 2 weeks
<lazyPower> Interestingly enough I dont have it on the calendar. I'll track this down for you and ping you back with the info nuclearbob
<lazyPower> nuclearbob: We're going to schedule it for 3-4PM EDT next friday.
<nuclearbob> lazyPower, cool, thanks
<lazyPower> NP. We'll send a reminder email to the list next week
<cjohnston> is a units public IP address something that I can do a relation-get for or does it have to have a relation-set first?
<ali1234> if i reboot my computer, do all my local environment get started up automatically?
<lazyPower> ali1234: yep. there are links left behind to autostart the containers
<sebas5384> hey lazyPower o/
<lazyPower> hey sebas i'm nearly home. I'm going to be about 5 minutes late
<sebas5384> what do you think if we start a hangout live
<sebas5384> of course, no rush :)
<lazyPower> sebas5384: you want to hangouts on air?
<sebas5384> just for recording
<sebas5384> there are some people here that can't join us right now
<lazyPower> sure thats fine
<sebas5384> but want to watch it later
<lazyPower> i need a minute or two to get settled and i'm good 2 go
<sebas5384> awesome :)
<lazyPower> https://plus.google.com/hangouts/_/hoaevent/AP36tYfF6VnfsnVX5hl_jMiONm-ybp7bfiniyN2j9tR3c48Z2Vsjhg?authuser=0&hl=en   sebas5384
<sebas5384> lazyPower: https://github.com/sebas5384/juju-vagrant-plugin
<JoshStrobl> Hey, I noticed that there is documentation on juju debug-log (https://juju.ubuntu.com/docs/config-LXC.html) regarding a bug that prevents it from working as expected. However the bug (https://bugs.launchpad.net/juju-core/+bug/1202682) shows that the fix has been released, so the bug at this point no longer "exists" and I think the docs should be updated to reflect that.
<_mup_> Bug #1202682: debug-log doesn't work with lxc provider <cts-cloud-review> <debug-log> <local-provider> <papercut> <ssh> <ui> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1202682>
<alexisb> JoshStrobl, thanks for the catch, can you send a note to dev list?
<JoshStrobl> alexisb: #juju-dev ? Gotta add myself back on, but yea I'll send off the mail.
<sebas5384> lazyPower: here's the link of the session I told you https://amsterdam2014.drupal.org/session/devops-where-start
<wallyworld_> gnuoy: that's great detective work. when i looked at your data i saw that the region/endpoint were defined higher up and assumed they were inherited. i'll investigate and may need to fix juju if there's been a regression
<SIGILL> When I start a new charm, juju will choose the *first* matching/free machine and deploy the charm without trying to be clever about chosing "the best" machine, right?
#juju 2014-06-14
<ali1234> it's broken again :(
<ali1234> http://paste.ubuntu.com/7643749/
<JoshStrobl> I'm having an issue deploying a local charm for testing using the Vagrant box. My charm is located in ~/charms (a.k.a /home/vagrant/charms) yet when I do juju deploy --repository ~/charms local:metis it isn't working (trying on the trusty box) and when I tried with it in a trusty folder within it, changing to local:trusty/metis, it claims there isn't an install folder (reviewing unit-metis-0.log) despite the fact that there is. Any tips?
<JoshStrobl> *install file
<threebadwheels> does anyone know of a doc or tutorial for how i can implement juju on my single node install of openstack so i can use it to deploy charms?
<threebadwheels> i have it at the physical host level now.
<mattrae_> hi, i'm using juju 1.19.3 the latest devel version where juju debug-log was fixed for the local provider. I'm able to run juju debug-log, but seems like i'm only seeing machine agent output, no units
<mattrae_> is there something new in debug-log to be able to show unit log output as well?
<mattrae_> i mean is there something new i need to do to show unit output?
<mattrae_> looks like all the unit output is going to /var/log/syslog at least
#juju 2014-06-15
<threebadwheels> hi
<threebadwheels> is anyone talking in here?
<Marek1211> Hi guys! I am working on a project juju/maas/openstack. I would like to know if it is possible to provide High Availability of openstack services with Juju. In case of failure of any node. how can I make sure that my services can stay functional if one fails? Thanks!
#juju 2015-06-08
<nwingfield> How can I specify in the `requires` section of metadata.yaml that a database is required, but could be one of several choices (e.g., mongodb OR postgresql OR mysql)?
<blr> nwingfield: you would add an interface for both mysql and postgresql and mongodb but set them as `optional: true`
<blr> nwingfield: there may be a way to require that at least one is provided, but I'm not certain
<nwingfield> yeah, i can't find any examples of mutually exclusive interfaces
<nwingfield> i'll give that a shot though
<thumper> nwingfield: hey there, yes have all three as optional
<blr> thanks thumper
<thumper> but with new status work, the charm could be put into a status that says "awaiting database"
<thumper> so anyone that goes "juju status" would see that it isn't yet running
<thumper> this is the new 'workload status' work
<nwingfield> that will be nice
<nwingfield> thanks for the help blr, thumper
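The approach blr and thumper describe would look roughly like this in metadata.yaml (a hypothetical excerpt; the charm name and interface names are illustrative, and it's the charm's own hooks that must decide at runtime whether at least one database relation is joined):

```yaml
# Hypothetical metadata.yaml excerpt: all three database relations are
# declared, none is mandatory on its own.
name: my-webapp
requires:
  mongodb:
    interface: mongodb
    optional: true
  postgresql:
    interface: pgsql
    optional: true
  mysql:
    interface: mysql
    optional: true
```

With the workload-status work thumper mentions, the charm could report "awaiting database" until one of these relations is made.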
<chrome0> I've got a spinning jujud here - it's dumping tons of lines with "machine X has new addresses" over and over, cf. http://paste.ubuntu.com/11644054/
<chrome0> Anyone seen something like this?
<schkovich> there are two manual environments running in rackspace. i have a task to move n machines from environment A to environment B. is something like that possible at all?
<gnuoy> jamespage, if you have a moment: https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/use-dc-stable/+merge/261372
<Odd_Bloke> Is there a difference between 'juju-run ...' and 'sudo juju-run ...' on an instance?
<aisrael> Odd_Bloke: I wouldn't think so, as `juju-run` runs as root already
<Odd_Bloke> aisrael: I'm running it from a shell as not-root; does it escalate itself?
<lazyPower> Odd_Bloke: there is one context in which juju run will run as the ubuntu user
<lazyPower> and i believe that's when you target --machine, vs using --service or --unit
<lazyPower> Odd_Bloke: otherwise the context of a juju run session is executed as root, no elevation required.
<Bialogs> Is there a way to tell Juju not to overwrite configs? I'm editing Glance config and when I reboot Glance the configs switch back to the Juju settings.
<rick_h_> Bialogs: it's in the charm and what it does. The charm has to support non-charm related edits to the configs.
<Bialogs> rick_h_: Thanks
<lazyPower> hey marcoceppi, have you ever seen the juju unit commands just flat out hang?
<lazyPower> eg: unit-get, juju-log, etc.
<lazyPower> doesn't seem to matter which command i run, its just hanging around like its attempting to connect to the event socket and never timing out.
<Zetas> does it initiate an ssh connection?
<lazyPower> I'm not sure about that Zetas, but i want to say no.
<rick_h_> lazyPower: is there a gui running? can it connect to the wss?
<lazyPower> rick_h_: i terminated the env and stood it back up. its worth noting that this also happened after dhx'ing into the unit, and that may have made an environmental change
<rick_h_> oh hmm, ok
<lazyPower> so i dont necessarily think this is core related, and more something wonky happened
<rick_h_> lazyPower: have the url for charm testing handy? It's not linked in the bug and I can't find it in the docs under charm testing or anything.
<rick_h_> lazyPower: is that not "live" atm?
<lazyPower> rick_h_: thats a deceptively large topic, are you looking for the amulet docs? as in "how do i test my charm"?
<rick_h_> lazyPower: the "charm tests failed on my charm, I think I fixed them, where's that test rnuner that would prove this to be true on a reporting page" testing
<lazyPower> ah
<lazyPower> http://reports.vapour.ws/latest-bundle-and-charm-results
<lazyPower> that's not linked in the docs - the individual test runs should be embedded in the bug filed for the MP/Review request
<rick_h_> lazyPower: right, just following up on https://bugs.launchpad.net/charms/+bug/1459345 and curious if the update ran/passed tests
<mup> Bug #1459345: Review/promulgation request for the Redis charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1459345>
<lazyPower> rick_h_: that was probably run out of band by hand by a charmer.
<lazyPower> i do know, that the current impl of the revq is linked to an older testing project in jenkins, and we have a newer setup that's using container isolation to kick off the tests that seems more reliable.
<rick_h_> lazyPower: ah ok. Well not much we can do then. ty
<lazyPower> we still have an open ticket to integrate the new method, and make it a tad more self-serving to end users, eg: open MP, get automated results like osci
<lazyPower> thats upcoming this cycle by devx aiui, but no promises. I dont manage the workload on that team :D
<marcoceppi> lazyPower: we have more than an open ticket, there's a spec being drafted that makes CI way more robust
<lazyPower> even. better.
<rick_h_> hmm, juju-gui results going back to aug of last year?
<lazyPower> rick_h_: i may have spoken too soon i was able to reproduce. Deploying the gui now
<lazyPower> rick_h_: is there something specific to look for other than wss connectivity?
<rick_h_> lazyPower: well if it can talk to the API but you can't from the cli it might be network issues or something
<rick_h_> lazyPower: basically can you load up the gui, service details, unit details
<lazyPower> ack, makes sense
<lazyPower> yeah gui is working as expected
<lazyPower> this is weird, if i detach and re-run the hook it works as expected.
<lazyPower> yet when attached via debug-hooks, dhx, etc. it hangs on every unit-agent command
<lazyPower> config-get, juju-log, unit-get
 * lazyPower takes an env dump
<bilal> Hi, I was looking to deploy Openstack HA with juju and MAAS. I could not find any good step by step guide and architecture guide for it. Can someone help me point to it? I will be very thankful :)
<lazyPower> beisner: ping :) ^
<lazyPower> o/ bilal
<beisner> hi bilal -  the openstack charms do generally support HA use cases, however only non-HA openstack bundles (solutions) have been published thus far in the juju charm store.
<bilal> Thanks beisner. So in order to make a charm e.g. keystone charm to work in HA usecase, what different steps do I need to do? Also do you know if maybe outside juju charm store, someone has posted HA reference architecture?
<thumper> lazyPower: so I got the django 1.7 support tests all passing with bundletester over the weekend
<lazyPower> GREAT SUCCESS \O/
<thumper> lazyPower: which included one with django 1.8, which also found a bug, which is also fixed in the branch
<lazyPower> thats awesome news thumper
<thumper> the autocommit attribute in the postgres config was removed in 1.8
<thumper> lazyPower: was using juju 1.24 to run the tests though, not 1.23
<thumper> lazyPower: I have celery worker support working locally too
<lazyPower> thumper: ok, lets land the updates to 1.7 first
<thumper> there is a wip merge proposal up so I can track changes
<lazyPower> then we'll get celery in a feature branch
<thumper> lazyPower: yes, definitely
<lazyPower> sounds awesome though, thanks for the dedication there
<thumper> lazyPower: it needs some tests though
<lazyPower> so does what's in the charm store :)
<thumper> also coming once this has landed will be moving all the feature tests to amulet
<lazyPower> oh thank the developer, that will be great
<lazyPower> instead of continuing to carry the pre-amulet tests forward
 * thumper nods
<thumper> as I mentioned before, I'd like this charm to become an example of a well written charm
<thumper> an exemplar if you will
<thumper> as we get workload status too, I'd like to hook this up
<thumper> seeing the great work landing in juju master right now has me eager to use it
<thumper> kinda hard to work out if django is fully "up" though since it is a framework charm
 * thumper thinks...
<thumper> lazyPower: how about this as a crazy arse idea...
<thumper> lazyPower: we encourage a "special" url in the app
<thumper> lazyPower: something configurable, but defaulting to something like '/.juju-status'
<thumper> lazyPower: and use that for workload status
<thumper> if it is configured
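thumper's "/.juju-status" idea above, sketched as a framework-neutral WSGI endpoint. This is hypothetical: the path, payload, and `juju_status_app` name are illustration only, not anything the django charm ships.

```python
# Hypothetical sketch of thumper's idea: a configurable status URL the
# charm could poll to set workload status. Written as plain WSGI so it
# stays framework-neutral; a real Django app would use a view + urlpattern.
import json

STATUS_PATH = "/.juju-status"  # would come from charm config

def juju_status_app(environ, start_response):
    if environ.get("PATH_INFO") == STATUS_PATH:
        body = json.dumps({"status": "active", "message": "app is up"}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The charm-side check would then be a simple HTTP GET of the configured path, answering thumper's "is django fully up" question without guessing about the app.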
<lazyPower> That doesn't seem very practical and very scoped to juju.
<thumper> socialize juju into django apps :)
<lazyPower> I think something along the lines of checking for the CSRF token you get out of a django site would be good
<thumper> that doesn't tell me it is working
<thumper> for example
<lazyPower> it tells you the framework is responding, otherwise you'd get a 502 bad gateway error and no CSRF token in the <head> tag
<thumper> as far as I'm concerned, django isn't working if my app isn't there
<thumper> my app provides https endpoints
<thumper> lazyPower: this assumes that you are using the csrf middleware (which, you should be, but don't have to)
<thumper> lazyPower: how about this...
<thumper> lazyPower: as config
<thumper> lazyPower: you specify which relations are needed
<thumper> lazyPower: if all relations are made, service is up
<thumper> lazyPower: that way, as the charm user, I could say "up means pgsql, amqp, django-settings"
<thumper> although ideally we'd want to be able to specify the remote side name
<lazyPower> that sounds strikingly similar to what the services framework is using as a basis for charming
<thumper> I'd like to be able to say "you need a django-setting relation with foo"
<thumper> lazyPower: I know nothing about the services framework
<lazyPower> but what you're telling me you want to do sounds more like a health check of the service to a) ensure its online and b) has the associated components online and in a healthy state
<thumper> exactly, this is what workload status is
<thumper> and it shows up in juju status
<thumper> (or it will soon...)
<thumper> 1.25 I think
<lazyPower> i think this needs more brainstorming than I have capacity for at the moment, because i see your point of "my service is an amalgamation of these components, just checking 200ok is not enough"
<lazyPower> but theres a fair amount of overlap here that may or may not be present in other apps, and other healthchecks.
 * thumper nods
<lazyPower> s/apps/services/
<thumper> lazyPower: how about we schedule a chat some time this week
<thumper> with perhaps another eco member or two
<thumper> and just spit-ball some ideas
<lazyPower> im booked this week in prep for DockerCon, can we do this after the conference if you need me there?
<thumper> look, this isn't urgent, and unlikely to happen in the next two weeks
<thumper> so we can do it after
<thumper> I'm really just thinking out loud
<thumper> so to speak
<lazyPower> perfect, i dont want you to lose the enthusiasm/momentum you've got going though
<lazyPower> so i'm willing to be flexible when its not stacking :)
<thumper> lazyPower: unlikely, I'm using this charm in a production environment
<thumper> and I want it to be good
<lazyPower> sounds good, we have open office hours
<lazyPower> and this would be brilliant to bring up during the office hours - it might be a bit early for you though
<thumper> lazyPower: when is it?
<lazyPower> 6  AM in Canberra ACT
<thumper> lazyPower: good thing I'm in NZ and it is 8am then
<lazyPower> oh perfect
<lazyPower> i had no idea so i guessed at your locale
<lazyPower> well i knew you were a kiwi, but thats as far as that goes
<thumper> with my wife leaving for work early, it isn't hard for me to start at 8am
<lazyPower> ok cool, on the 18'th of this month at 8am
<thumper> I'll pencil it in
<thumper> lazyPower: I have to say, with all this charm dev work, it makes me want to add more to the charmhelpers and making juju better...
<thumper> also, I have a patch for bundletester
<lazyPower> orly?
<thumper> but I haven't worked out how to add a test...
<thumper> lazyPower: just upper cases the debug flag
<lazyPower> thumper: does it change -l DEBUG to -l d ebug?
<thumper> so it can handle 'debug'
<lazyPower> haha
<lazyPower> i love it
<lazyPower> called it
<thumper> :)
<thumper> lazyPower: https://github.com/juju-solutions/bundletester/pull/19
<lazyPower> thumper: commented, one of the devx peeps will give that a thorough review / merge
<thumper> sure, np
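The bundletester change thumper describes (upper-casing the debug flag so 'debug' works) presumably boils down to something like this sketch; `parse_log_level` is a hypothetical name, not the actual code in PR #19.

```python
# Sketch of the fix described above: accept a lower-case 'debug' log
# level by upper-casing it before resolving it against the logging module.
import logging

def parse_log_level(value):
    """Map 'debug'/'DEBUG'/'Info'... to the logging module's level constants."""
    return getattr(logging, value.upper())
```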
<beisner> hi bilal, generally, the HA needs and topologies vary quite a bit from one environment to another.   also as you note, the approach is different per service (percona-cluster vs. keystone vs. glance and such).
<lazyPower> aisrael: 'preciate you
<aisrael> lazyPower: Right back atcha, dude.
#juju 2015-06-09
<jose> Does Juju have an environment API I can play with? for example, a command that will list all the deployed services, but just service names (not status)
<stub> jose: Yes. I don't know where it is documented, but I do know there are python bindings (jujuclient on pypi)
<jose> stub: oh, awesome. I'll check later - need to rest a bit now. thanks a bunch!
<jamespage> gnuoy, pxc seems unhappy - non-leaders are trying to migrate stuff from local storage
<jamespage> debugging that now
<bbaqar> Hey Liam
<lazyPower> hmm, marcoceppi, tvansteenburgh, aisrael - when yinz get in I have a question about deployer and some bundles, i've managed to build a bundle that only works against deployer in pip - not whats in the ppa. I'm curious about the skew here and what i can do to  resolve this problem.
<tvansteenburgh> lazyPower: the ppa is well ahead of pypi right now. probably means there's a regression in a recent commit
<lazyPower> ok, is the head of development for deployer still in launchpad?
<tvansteenburgh> lazyPower: wait, the juju/stable ppa, or other?
<tvansteenburgh> lazyPower: yes
<lazyPower> tvansteenburgh: i have both enabled on my system, aiui deployer only lives in the juju/stable ppa
<tvansteenburgh> lazyPower: i think there are daily builds too. if you're on stable i'd expect that to match what's on pypi but it sounds like that may not be the case right now
<lazyPower> ok, I'll dig a bit deeper and see if i can find whats going on. Thanks for the info tvansteenburgh
<tvansteenburgh> lazyPower: np, good luck
<lazyPower> aisrael: 1463117 hit fix-committed lastnight \o/
<aisrael> lazyPower: excellent!
<beisner> tvansteenburgh, do juju-deployer packages follow the juju/devel juju/proposed and juju/stable ppa flow?
<tvansteenburgh> beisner: not sure. dpb, you know?
<tvansteenburgh> dpb1 ^
<beisner> tvansteenburgh, i ask because uosci uses deployer from those ppa sources, and we can exercise proposed packages ahead of landing in stable if so.
<dpb1> beisner: no, nothing that complex
<dpb1> beisner: we build nightlys from trunk in two ppas
<beisner> i'd have to manually kick a job, but would be happy to do that in the interest of validating juju-deployer changes against existing implementations
<dpb1> beisner: then every once in a while, marco or someone puts them into juju/stable, then every once in a while they are released into the distro in universe.
<beisner> ah i see.
<dpb1> beisner: two ppas:
<dpb1> https://code.launchpad.net/~ahasenack/+recipe/python-jujuclient-daily
<dpb1> https://launchpad.net/~ahasenack/+archive/ubuntu/juju-deployer-daily
<dpb1> beisner: personally, I run from those two ppas
<beisner> dpb1, ok thanks for the info.  and thanks for being the perpetual deployer gate  ;-)
<dpb1> beisner: lol :)
<jose> marcoceppi: hey, is there a way to use amulet to test actions? not documented afaics
<lazyPower> jose: you can, but its not using amulet syntax sugar
<lazyPower> jose: you use subprocess to call the juju action do stanza
<jose> lazyPower: huh, good idea. I'm gonna check that and see what I can do on my end. thanks!
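lazyPower's suggestion of shelling out to the 1.x `juju action do` CLI from an amulet test could look like this sketch. The unit and action names in the usage are made up, and the runner is injectable so nothing here needs a live environment.

```python
# Sketch of driving a juju action from a test by shelling out to the
# 1.x CLI ("juju action do"), since amulet has no syntax sugar for it.
# The runner is injectable so command construction is testable offline.
import subprocess

def run_action(unit, action, params=None, runner=subprocess.check_output):
    cmd = ["juju", "action", "do", "--format=json", unit, action]
    for key, value in (params or {}).items():
        cmd.append("{}={}".format(key, value))
    return runner(cmd)
```

In a real amulet test the default `subprocess.check_output` runner would be used and the JSON output inspected for the action id and results.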
<ttt_> hi
<rick_h_> jcsackett: marcoceppi arosales lazyPower anyone else, we're bringing up new jujucharms.com env and data is loading. Data might be loading (like stats) for some time so if you see any funkiness around that (we've not switched dns yet) that's why and should settle out by tomorrow
<lazyPower> ack, thanks rick_h_
<jcsackett> rick_h_: think you probably meant to ping jcastro there, not me? ^
<rick_h_> jcsackett: ah yea, sorry jcastro ^
<arosales> rick_h_, thanks for the fyi
<jcastro> noted
<bilal> Hi is there any official solution for High availablity OpenStack with multi-controller on juju charm store? Can someone point me towards it. Thanks
<Destreyf> Has anyone here used juju to deploy a full open stack system on a single node (for test purposes)
<lazyPower> Destreyf: closest i've seen is illustrated here: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
<lazyPower> Destreyf: but note: the article is a year old, and uses 2 machines
<Destreyf> lazyPower: thanks for the link, i hadn't seen this one yet
<Destreyf> basically i want to setup a cluster with 3 nodes
<Destreyf> i know that's terrible to want
<Destreyf> :P
<Destreyf> but its only going to serve about 3-4VM's
<Destreyf> so its very little usage
<lazyPower> do you need the overhead of openstack for that?
<lazyPower> juju supports deploy --to kvm:#
<Destreyf> well
<Destreyf> we need HA
<Destreyf> currently using proxmox
<lazyPower> 3 nodes will not give you a HA setup
<lazyPower> 12 nodes minimum for  a proper openstack, even more for HA
<Destreyf> even when stacking services on a single machine?
<lazyPower> if you unplug the machine, your stacked services all tank
<lazyPower> i'm assuming you're talking having 2 noiva-computes as "HA" and everything else (cinder, glance, keystone, et-al) running off your bootstrap node
<Destreyf> The goal was to actually setup 3 servers, with server 1 being the bootstrap
<Destreyf> but also having the rest on it
<Destreyf> and possibly telling juju to be multi-node bootstrap
<Destreyf> although i haven't actually researched that
<lazyPower> i dont think thats going to work, but i haven't tried it
<Destreyf> :P
<Destreyf> thanks for your input
<Destreyf> i appreciate it
<Destreyf> just trying to find the most effective solution for workable ha
<Destreyf> on a 3-4 node setup
<mbruzek> ping hazmat
<marcoceppi> Destreyf: it's possible, I've been meaning to write a new blog post on it
<Destreyf> marcoceppi: that's awesome to hear
<Destreyf> i'm actually following your guide
<Destreyf> using juju and bootstrap node
<Destreyf> to test with single machine
<Destreyf> any caveats to note?
<Destreyf> marcoceppi: i bound juju-gui to port 8081 by doing juju set juju-gui port="8081"
<Destreyf> so that's the only hiccup i could see so-far
<marcoceppi> Destreyf: you'll want to use LXC for almost everything
<marcoceppi> and you can't really co-locate ceph and nova-compute
<Destreyf> oh?
<marcoceppi> basically, deploy nova-compute to the one machine, then everything else as LXC containers
<Destreyf> ceph and compute can't co-locate
<Destreyf> so ceph inside of lxc would work though?
<marcoceppi> Destreyf: last I checked, that may have changed. Ceph can't run in a LXC container
<Destreyf> oh
<Destreyf> okay
<Destreyf> thanks
<marcoceppi> Destreyf: if i have time this weekend I'll give it a go again and blog about it
<marcoceppi> neutron is also a bit sticky
<marcoceppi> if you skip neutron, it's a lot easier
<marcoceppi> otherwise you'll have to create a custom KVM container with two nics
<Destreyf> Yeah i figured neutron would be a pain, but without neutron can't make it HA really can you?
<Destreyf> also i had neutron mostly working on my last test run, but juju kept getting stuck installing OpenStack-Dashboard
<Destreyf> once everything else was running outside of containers
<Destreyf> just used --to 0
<Destreyf> on everything
<marcoceppi> Destreyf: nova-compute is --to 0
<marcoceppi> everything else is basically --to lxc:0
<marcoceppi> that will give you containerisation of resources and prevent conflicts
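marcoceppi's placement advice (nova-compute on the metal with `--to 0`, everything else in LXC containers on the same machine with `--to lxc:0`) as a small sketch that emits the deploy commands. The service list is illustrative only, not a complete single-node OpenStack bundle.

```python
# Illustrative placement plan for the single-machine layout described
# above: nova-compute on the host itself, everything else containerised
# on that host so the services don't conflict.
ON_METAL = ["nova-compute"]
IN_LXC = ["mysql", "rabbitmq-server", "keystone", "glance", "openstack-dashboard"]

def deploy_commands(machine=0):
    cmds = ["juju deploy {} --to {}".format(s, machine) for s in ON_METAL]
    cmds += ["juju deploy {} --to lxc:{}".format(s, machine) for s in IN_LXC]
    return cmds
```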
<Destreyf> nice, well letting LXC run and install the rest of the mess right now :P
<marcoceppi> Destreyf: you could also use VMAAS
<Destreyf> once i have this much done, i'll image the machine and play around with getting ceph into the mix
<marcoceppi> where you create a bunch of KVM containers on that one machine and use MAAS to manage the virtualization
<Destreyf> marcoceppi: the end goal is to have a 3-4 node openstack setup, we're using a proxmox system right now, but we have too many performance issues to keep using it.
<Destreyf> we might also just go with 3 proxmox and 3 ceph nodes instead
<Destreyf> if we can't get this working
<Destreyf> just would like to reduce the number of nodes to a more power manageable number
<Destreyf> :P
<Destreyf> backup battery only lasts 5 minutes currently
<Destreyf> with our 7 nodes
<Destreyf> mysql came up
<Destreyf> :D
<Destreyf> marcoceppi: So my openstack-dashboard doesn't show a public ip other than the ip assigned by lxc
#juju 2015-06-10
<Odd_Bloke> Hello all; I'm trying to add servicegroup support to the ubuntu-repository-charm, using the latest charm-helpers.
<Odd_Bloke> I've added the config option and set it (for a new deployment), but none of the services show up in Nagios as having a servicegroup.
<Odd_Bloke> Is there some configuration I need to do in Nagios?
<Odd_Bloke> Here are some (potentially) relevant bits of config on the hosts: http://paste.ubuntu.com/11689441/
<jamespage> lukasa, around? I have a query re use of etcd on the neutron-api charm charms for Calico
<lukasa> jamespage: Howdy
<lukasa> Shoot
<jamespage> lukasa, just working through the current set of proposed changes
<jamespage> lukasa, and I note the introduction of etcd into the neutron-api charm
<lukasa> Yup
<jamespage> lukasa, is that used by Calico to maintain state?
<lukasa> It is indeed
<jamespage> I'm trying to understand how it needs to scale - clouds may need to scale the neutron-api service quite big
<jamespage> does that make sense for the etcd bits?
<lukasa> Yes. =)
<lukasa> So, etcd will scale separately from the neutron-api side
<jamespage> lukasa, right now it will match the number of neutron-api units
<lukasa> Ohhh, I see
<lukasa> No, it won't, not quite. =)
<lukasa> neutron-api installs etcd on each node, but configures that etcd instance as a proxy
<jamespage> lukasa, we also have quite a strong etcd charm coming out of the cloudfoundry work
<jamespage> https://jujucharms.com/u/kubernetes/etcd/trusty/2
<lukasa> jamespage: Yeah, I'm aware, I have a roadmap item to migrate to that. =D
<lukasa> But the etcd proxy instances don't count towards the cluster size
<lukasa> They're basically just fancy HTTP proxies
<jamespage> lukasa, what are they proxying to?
<lukasa> The core etcd install, which is covered by the etcd relation I added
<jamespage> lukasa, ah - sorry the use of 'peer' in the interface name confused me for a moment there
<lukasa> Yeah, sorry
<lukasa> From an etcd configuration perspective they count as peers
<lukasa> But they don't participate in raft
<jamespage> lukasa, so these don't fit into the relations provided by https://api.jujucharms.com/charmstore/v4/~kubernetes/trusty/etcd-2/archive/metadata.yaml ?
<jamespage> i.e. its not etcd (client interface)
<lukasa> I'm not sure yet
<lukasa> I've had a bit of a chat with Whit and Charles, but I haven't sat down and examined that charm in depth yet
<lukasa> And that charm may need an enhancement to work for us (though it may not, they're playing with it for calico-docker)
<jamespage> lukasa, ack - OK - I understand enough to review for now
<lukasa> Awesome. =) Feel free to ask more questions whenever if you need
<jamespage> lukasa, I'm making the assumption I'm going to see etcd proxy use in the neutron-calico charm as well right?
<lukasa> jamespage: Correct =)
<jamespage> lukasa, do you have a charm-helpers mp pending as well?
<lukasa> Already merged, I think
<jamespage> lukasa, checking now
<jamespage> lukasa, this one - https://code.launchpad.net/~cory-benfield/charm-helpers/trunk/+merge/257260
<jamespage> lukasa, fyi gnuoy and I were talking about plugins/neutron-api yesterday a bit; we're going to come up with a subordinate approach which makes the code separation in the charms between core neutron-api and plugin a bit clearer
<lukasa> jamespage: That's the one
<jamespage> lukasa, you don't need to rework anything, but we'll probably do a tech-debt run through at some point in the future to move all plugins to that model
<lukasa> jamespage: Sure thing
<lukasa> I'll be around, drop me an email when you do and I'll do my best to help out
<jamespage> lukasa, thanks
<jamespage> lukasa, we'll come up with a base template and stuff as well
<jamespage> make the whole process easier for all to consume
<lukasa> \o/
<jamespage> lukasa, do you have a nice bundle to deploy this?
<jamespage> lukasa, looking at the peer relation on the neutron-calico charm - if related to bird, is the data passed on that relation superseded
<jamespage> lukasa, peer relations get noisy at scale
<jamespage> lukasa, read the code and answered my own question
<jamespage> lukasa, sent you a MP for a slight nicer way of parsing calico-origin in the install hook
<jamespage> lukasa, I think you could also answer the feedback on the NEW charm bug by making install and config-changed do exactly the same thing
<jamespage> or maybe no-opping install altogether
<jamespage> the service bringup will do install -> config-changed anyway
<jamespage> lukasa, remind me again about that bird bug in trusty?
<jamespage> lukasa, shutting up - funny how you raised it and everything :-)
<jamespage> https://bugs.launchpad.net/ubuntu/precise/+source/bird/+bug/1342173
<mup> Bug #1342173: BIRD 1.4.0-1 fails to start on Ubuntu 14.04 <trusty> <bird (Ubuntu):Fix Released> <bird (Ubuntu Precise):Confirmed> <bird (Ubuntu Trusty):Confirmed> <bird (Ubuntu Utopic):Fix Released> <https://launchpad.net/bugs/1342173>
<jamespage> lukasa, I'll get that into the SRU queue today - however its probably OK to use the upstream PPA as well, as the maintainer is the same person as in debian
<jamespage> lukasa, fixes in the queue - that bug report will be updated once accepted
<lukasa> jamespage: Sorry, was at lunch =)
<lazyPower> jamespage: we have an etcd charm that supports large scale already :)
<lazyPower> https://jujucharms.com/u/kubernetes/etcd/trusty/2 - i'd be interested in any feedback you have here.
<lazyPower> looks like we need to cut a new release however - https://github.com/whitmo/etcd-charm  v0.0.3 is the latest, and supports bintar deployments + private network bootstrap (vs using etcd to bootstrap etcd)
<jamespage> lazyPower, indeed we do - can we adapt it for lukasa's requirements for etcd proxies on local units, related to a clustered, core etcd deployment.
<lazyPower> jamespage: im thinking so. I have already started talking to lukasa and crew about it :)
<lukasa> \o/
<lazyPower> we have pending work to address first, but that was next
<jamespage> lazyPower, so i gather
<jamespage> lazyPower, awesome - I was having a general run through the charmset inc the openstack charms today
<lazyPower> jamespage: i'll try to refrain from jumpin in a convo before i've had my coffee moving forward, i realized about 10 minutes later, there was a whole heap more to that conversation.
<jamespage> lazyPower, its all looking pretty close - if we could close on the etcd stuff, then there are only niggles to land
<lazyPower> lukasa: what were we pending? just adding the client relation bits and running a test right?
<lukasa> lazyPower: Yeah, and me finding time to do that
<lukasa> Currently swamped coordinating about 5 or 6 different things at once. ;)
<lazyPower> no pressure :D
<lukasa> Sometime this week or early next I hope
<lazyPower> ack, if you want some support from our side feel free to tap me in lukasa
<lazyPower> i'm pretty familiar with what our etcd charm is doing since i ran with the upgrades to 0.2.x
<lukasa> Yeah, will do. =) I just blew up an attempted deployment to Kilo, but I think I just got the config wrong ;)
<lukasa> So I'll see how this goes
<Odd_Bloke> Can anyone help me with a Nagios servicegroups question: http://paste.ubuntu.com/11690101/ ?
<Odd_Bloke> (Trying again now East Coast people might be around :)
<jamespage> beisner, https://code.launchpad.net/~james-page/ubuntu-openstack-ci/neutron-gateway-rename/+merge/261630
<mbruzek> marcoceppi: did you have an image generator for bundles?
<marcoceppi> mbruzek: svg.juju.solutions
<Odd_Bloke> Anyone around to give me pointers on a nagios_servicegroups problem I'm seeing?
<Odd_Bloke> See http://paste.ubuntu.com/11690101/ for details.
<beisner> jamespage, thanks, merged.  i'll follow up on that by updating amulet tests as i think they may call the old name as well.
<beisner> jamespage, gnuoy - also, i'll work up an MP to update neutron name in the mojo specs.
<cholcombe> lazyPower, so leader election in juju.  I get that by calling is-leader right?
<lazyPower> cholcombe: correct, that's a predicate to determine who is the leader of the service group
<cholcombe> are there docs on that?  i'm having trouble finding them
<lazyPower> iirc they only exist in the devel doc release notes
<cholcombe> ok
<lazyPower> it hasn't been officially documented/included - and it's subject to being reworked.
<lazyPower> last i heard they were going to rework the leader-election bits sometime this or next cycle.
<cholcombe> i'm on 1.23.3 juju
<cholcombe> i'm going to give it a try and see what it returns
<Bialogs> Does anyone here have experience with the Cinder charm? I'm getting the following error: losetup: Could not find any loop device. Maybe this kernel does not know about the loop device? (If so, recompile or `modprobe loop'.)
<lazyPower> cholcombe: https://jujucharms.com/docs/stable/reference-release-notes
<lazyPower> search "is-leader"
<cholcombe> sweet
<cholcombe> yeah i saw it
<cholcombe> i see we have new hooks also. i need to take this into account
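Calling the `is-leader` hook tool from a charm hook, in the spirit of what charm-helpers' hookenv does: the tool prints true/false, and `--format=json` makes that trivially parseable. Sketch only, with an injectable runner so it can be exercised outside a hook context.

```python
# Sketch of the is-leader predicate discussed above: shell out to the
# hook tool and parse its JSON boolean. Inside a real hook the default
# runner works; tests can inject a fake.
import json
import subprocess

def is_leader(runner=subprocess.check_output):
    return json.loads(runner(["is-leader", "--format=json"]).decode())
```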
<lazyPower> Bialogs: that sounds somewhat familiar... did you provide the cinder charm a block device?
<lazyPower> Bialogs: i believe that's also covered in the charm readme if you haven't taken a look yet - https://jujucharms.com/cinder/trusty/23
<Bialogs> Yes I did but I'm not certain that I did that correctly, in my config file: block-device: /var/lib/cinder-sdb.img|10G
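The `block-device` value Bialogs quotes uses the cinder charm's `<path>|<size>` form to request a loopback file of that size rather than a real device; reading such a value amounts to something like this sketch (`parse_block_device` is a hypothetical helper, not the charm's code).

```python
# Sketch of interpreting a cinder-style "block-device" config value:
# either a plain device node, or "<file>|<size>" requesting a loopback
# file of the given size.
def parse_block_device(value):
    if "|" in value:
        path, size = value.split("|", 1)
        return {"path": path, "loopback": True, "size": size}
    return {"path": value, "loopback": False, "size": None}
```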
<lazyPower> beisner: tap tap :)
<beisner> lazyPower, who's there?
<lazyPower> beisner: any known issues with cinder and loopback storage? the config provided by Bialogs looks correct to me according to the config option.
<Bialogs> lazyPower: I should also mention that I'm trying to do this in LXC
<lazyPower> oooooooooooooo
<lazyPower> i bet its an apparmor issue
<beisner> lazyPower, loopback storage is a testing/experimental feature;  i'd add that i've not tried to use it inside lxc.
<lazyPower> but we'll see what the wizard has to say
<lazyPower> beisner is my lifeline into the openstack wizards circle :)
<Bialogs> lazyPower: On the host I can't see the kernel module either though
<lazyPower> i'm willing to bet thats the issue
<lazyPower> apparmor is preventing you from loading the loopback kernel module
<lazyPower> i've run into so many weird edge cases with apparmor being very strict about what you're allowed to do in lxc
<cholcombe> lazyPower: is-leader rocks :)
<Bialogs> modprobe loop doesn't return anything either
<Bialogs> apparmor will prevent that on the host machine too?
<lazyPower> not on the host, no
<lazyPower> only in the lxc client iirc
<Bialogs> Ya so that's strange
<lazyPower> you have to whitelist modules in apparmor to get them to load in lxc
<lazyPower> if thats the issue that is
<lazyPower> mind you i'm making a guess at this, without having evidence thats the problem
<Bialogs> I'm going to try and get it loaded on the host first
<beisner> lazyPower, sounds reasonable to at least look at apparmor.
<ennoble> hi, I'm trying to manually provision a node and I get an error, "ERROR error checking if provisioned: subprocess encountered error code 1" is there any debugging I can turn on to further diagnose the problem?
<Bialogs> Assuming I don't get this working what's the alternative to loopback Cinder
<beisner> Bialogs, please do let us know if you see issues with that outside a container.
<lazyPower> ennoble: can you do 'juju retry-provisioning' with --verbose?
<lazyPower> er -v
<ennoble> lazyPower: when I try that with the machine name I get "error invalid machine", without it "error: no machine specified." The machine isn't listed in juju status in any state after the failure
<ennoble> lazyPower: is there a way to get verbose output from add-machine? -v doesn't seem to do anything
<lazyPower> ennoble: you can tail the machine log on your bootstrap node, but that wont give you much output during the provisioning process unfortunately
<lazyPower> i'm trying to think if there's a debug method on that to spit out more data like bootstrap --debug -v
<mgz_> lazyPower: `juju help logging`
<mgz_> turn up the verbosity, then look at the machine log
<lazyPower> oh yeah i sent a Pr to the docs about this
<lazyPower> mgz_: *hattip*
<Destreyf> marcoceppi: Did you have any issues with your Hyper Visor not showing up in the stack?
<ennoble> mgz_: I tried that; the help isn't quite right in 1.23.3 (looks like you need juju set-environment logging-config="juju=TRACE;unit=TRACE")
<ennoble> lazyPower: funny enough it worked after setting logging even though I've tried it many times previously... not sure how to debug that more
<lazyPower> ennoble: that is indeed odd
<lazyPower> perhaps a transient networking error?
<ennoble> lazyPower: Possibly, although i've had an ssh session open to the machine the whole time as well.
<ennoble> lazyPower: thanks for the input. If it happens again maybe I'll see a pattern
<lazyPower> np ennoble, cheers :)
<cholcombe> is-leader does some strange things :). when my leader-elected hook was called, the normal juju context stuff didn't work as far as I can gather
<lazyPower> cholcombe: might be worth following up in #juju-dev to see why certain things arent available in leader context
<cholcombe> ok
<cholcombe> is that on freenode also?
<lazyPower> yep
<ennoble> lazyPower: Actually my bad, I was wrong, I spelled the node name incorrectly and the machine created didn't exist when I retried. I'm back to the issue.
<ennoble> lazyPower: I set logging-config to juju=TRACE;unit=TRACE where is the logging done? I don't see anything on the orchestration node
<lazyPower> with the logging set to trace, it will show up on your workstation when you run 'juju debug-log'
<lazyPower> but its going to be a firehose of info, so be prepared
<ennoble> lazyPower: while I see pings and such from other instances, nothing about the machine I'm trying to add seems to show up
<lazyPower> thats weird
<lazyPower> you should at the very least be getting something from the action, and anything post cloud-init
<Bialogs> lazyPower: Hey, I have some more information. The hook that Juju is running executes a losetup -f command which fails because it can't find the loop module. When I modprobe from the container I get the following error: modprobe: ERROR: ../libkmod/libkmod.c:556 kmod_search_moddep() could not open moddep file '/lib/modules/3.13.0-53-generic/modules.dep.bin
<Bialogs> lazyPower: does that point more to app armor?
<lazyPower> Bialogs: does the command work after you sudo depmod?
<bbaqar> Has anyone done a complete kilo openstack HA deployment?
<Bialogs> lazyPower: I did a depmod on the host before I ran this command -- any difference in LXC?
<lazyPower> Bialogs: if it made no difference, i would point at apparmor, if it works its ounds like the module.bin was missing the module you needed
<lazyPower> you should be getting an error in syslog (i think) about the apparmor denial
<lazyPower> if its apparmor
<Bialogs> lazyPower: lol depmod fails in the container
<lazyPower> ok, yeah
<lazyPower> its more than likely the apparmor profile for lxc then, which is unfortunate.
<lazyPower> that gets fiddly
<Bialogs> The app armor profile is on the host machine, correct?
<lazyPower> correct
<cholcombe> lazyPower: i think my leader stuff works now :)
<Bialogs> lazyPower: don't really see anything that says denied, just a lot of status messages
<cholcombe> i'm not seeing a crap ton of duplicate unit messages now
<lazyPower> cholcombe: awesome!
<cholcombe> :)
<lazyPower> Bialogs: :(
<Bialogs> Every smile has a frown
<lazyPower> Bialogs: that's the pits, i'm not sure what to recommend if its not giving you a big red shiny signal
<lazyPower> Bialogs: what i can however suggest is emailing the juju list with the issue
<lazyPower> someone thats not around now (primarily the UK openstack team) should be able to lend a hand with whats going wrong if theyve encountered this before, otherwise they will point you to file a bug
<lazyPower> Bialogs: juju@lists.ubuntu.com - or alternatively you can open a bug first and ping the list with the bug, which may be the better route to go, to raise visibility and get some bug-heat on it.
<Bialogs> I'm going to see if I can poke around the app armor config first
<Bialogs> Then I'll email the list or post the bug
<Bialogs> Thanks!
<lazyPower> sorry i wasnt more help Bialogs :( good luck
<Bialogs> So this is strange
<Bialogs> Tried it again and It's not failing at that part, just can't communicate with RabbitMQ
<marcoceppi> mbruzek: sweet blog post!
<mbruzek> thanks marcoceppi
<mbruzek> http://bruzer.net/2015/06/10/deploy-a-kubernetes-development-cluster-with-juju/
<tvansteenburgh> great write-up mbruzek, just read it myself
<mbruzek> thanks tim
<tvansteenburgh> nice work on kubes lazyPower, mbruzek, and whit; cool stuff!
<kwmonroe> cory_fu: do you recall if unitdata is available in charmhelpers render_template?  and if so, does this feel right in a .j2 template? hostname = {{ unitdata.kv['relations.ready']['my-charm'].values()[0]['private-address'] }}
<cory_fu> kwmonroe: Do you mean in the bigdata branch, or in trunk?
<kwmonroe> either cory_fu, but i got it.. i'm using any_ready_unit instead of relations.ready now.  please disregard :)
<cory_fu> kwmonroe: It is available in the templates in the big data branch, yes, but I'd recommend you use {{ any_ready_unit('my-charm')['private-address'] }} instead
<kwmonroe> duh
<cory_fu> :)
<lazyPower> thanks tvansteenburgh :)
<Bialogs> lazyPower: I submitted that bug - ended up getting another error eventually...Still super confused. Thanks for your help
<Destreyf> guys, i'm running into a weird issue
<Destreyf> when i deploy a machine using the manual method
<Destreyf> my 2nd machine never starts up the virbr0
<Destreyf> and then i get connection timeouts
#juju 2015-06-11
<rick_h_> PSA: we're migrating jujucharms.com to the new environment and experiencing some glitches atm. Work in progress
<rick_h_> PSA: migration aborted for now, things should be back to 'normal'
<Destreyf> When adding a machine, should it set up a virbr0 with the ip 192.168.122.1 like it does on the bootstrap node?
<Destreyf> using manual provisioning
<marcoceppi> Destreyf: I'm not sure, I think that virbr0 is only for KVM stuff?
<Destreyf> my 2nd machine is trying to connect 192.168.122.1
<Destreyf> and it can't reach it
<lazyPower> Destreyf: juju uses hardcoded ip addressing with host only networking when deploying to containers/kvm
<Destreyf> lazyPower: so there's no way to mix lxc and physical machines?
<lazyPower> Destreyf: this is a known issue, and we have some solutions to work around it using flannel, calico, and a few other SDN providers that are being worked on
<Destreyf> ah okay
<Destreyf> lazyPower: that was the information i needed, thank you
<lazyPower> Destreyf: there is, but if you need reachability into that instance, you should look at flannel-docker charm which will spin up a flannel0 interface
<lazyPower> and that gives you a private networking space to interact with services on your network
<lazyPower> if you're using LXC containers
<lazyPower> there's additional work that needs to be done
<lazyPower> we dont have a magic bullet charm that will configure your containers properly atm
<lazyPower> well, kind of, but its a weird deployment pattern
<lazyPower> theres an older flannel charm in ~hazmat's namespace
<lazyPower> and if you deploy that charm to a host *first*
<lazyPower> as in before doing juju deploy --to lxc:#
<lazyPower> it will configure flannel based networking for any lxc containers created after the fact
<Destreyf> lazyPower: I'm trying to do something stupid, but its a test case scenario
<hazmat> fortunately it has the binaries in it, so old but still works ;-)
<lazyPower> hazmat: o/ :)
<Destreyf> lazyPower: any experience with using juju on debian?
<lazyPower> hazmat: its funny you say that, i just updated that entire ecosystem
<hazmat> lazyPower: i know
<Destreyf> because juju... is fantastic in its relationship management
<lazyPower> Destreyf: i haven't personally, no
<lazyPower> i've used juju on windows, and juju on ubuntu so far
<lazyPower> there's one or two centos based charms out there too that our partners cloudbase have been working on
<lazyPower> but i have yet to get my hands on a debian charm
<hazmat> Destreyf: there's some patches out there for it, but its not out of the box afaicr, the cloudbase folks might know more
<lazyPower> hazmat: its inc. actually
<lazyPower> the work has been done
<lazyPower> i dont think its landed yet tho
<hazmat> lazyPower: oh.. awesome
<lazyPower> hazmat: mbruzek was looking for dockercon hackathon participants btw, do you have any interest in hackathon'ing with us?
<Destreyf> hazmat, lazyPower thank you very much for your time, you've answered questions that i've been racking my brain on for 2 days now
<lazyPower> Destreyf: anytime :)
<lazyPower> and on that note i need to grab a quick lunch, if you need anything else dont hesitate to ping Destreyf
<lazyPower> i'll circle back when i get back from lunch
<hazmat> lazyPower: i'll be there, haven't decided on anything yet in terms of projects, i've got two ideas, one is really boring, but just a pull request idea i wanted to work on (ie. won't win, but accomplishes a personal goal), the other is more like a gee whiz thing, also not a winner.  preference is to hack in go at the event
<lazyPower> hazmat: i think they would be receptive to *any* ideas
<lazyPower> i came up with some good suggestions on how to get ideas but no clear idea winners. so far its just mbruzek and wwitzel3 for camp juju
<lazyPower> they just released that you can have up to 10 ppl in a hack team
<lazyPower> which is a big change from the eventbrite page we signed up on so many moons ago
<lazyPower> it was originally 3 per pod
<hazmat> so nutshell one was adding proxy support for building images, the other was setting up an ssh server that attached to a container namespace.
<lazyPower> so we're scrambling to either find ppl or come up with a game plan to show up and just pollinate other teams
<lazyPower> oh dude
<lazyPower> ssh server + container namespace = interesting
<lazyPower> whats the objective behind that?
<hazmat> lazyPower: darned if i know ;-).. solomon suggested it to me at the ams dockercon hackathon
<lazyPower> haha
<lazyPower> fair enough :)
<lazyPower> allright, brb o/
<Destreyf> hazmat: on flannel
<Destreyf> the 10.10.x
<Destreyf> is that x the machine number?
<Destreyf> so machine 0 is 10.10.0.0/24
<Destreyf> and machine 1 is 10.10.1.0/24
<hazmat> Destreyf: basically yes, the 10.10.0.0/16 is just the cidr for the cluster, each machine gets a /24 for containers on it
<Destreyf> hazmat: so the 3rd octet is different for each machine though?
<hazmat> Destreyf: ie. its more like a container on machine 0 has 10.10.0.1/28 and a container on machine 1 has 10.10.1.1/28 .. the machines themselves have subnets assigned to them, but they don't have ips from the cluster range
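The subnet arithmetic hazmat describes can be sketched with the stdlib. This only illustrates the numbering of the /16 cluster CIDR into per-machine /24s; the real allocation is leased through etcd by flannel, not computed like this.

```python
# Sketch of the subnet math above: flannel carves the cluster CIDR
# (10.10.0.0/16 in the charm's default config) into one /24 per machine,
# and containers get addresses out of their machine's /24. Illustrative
# only -- flannel's actual leases come from etcd.
import ipaddress

cluster = ipaddress.ip_network('10.10.0.0/16')

# each machine leases one /24 out of the /16
machine_subnets = list(cluster.subnets(new_prefix=24))

print(machine_subnets[0])  # 10.10.0.0/24 -> containers on machine 0
print(machine_subnets[1])  # 10.10.1.0/24 -> containers on machine 1

# the first container on machine 1 gets the first host address
first_container = next(machine_subnets[1].hosts())
print(first_container)     # 10.10.1.1
```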
<Destreyf> k
<hazmat> Destreyf: upstream has a few more modes added since my version of the charm was first published, lazyPower would know more about the latest version, most importantly they support configuring under iaas substrates for zero overhead, and using in kernel data plane paths (vxlan over udp) for faster performance.. the version in my charm was just doing udp
<hazmat> overlay afaicr.
<Destreyf> hazmat: do you recommend i use it from yours or the upstream?
<hazmat> Destreyf: i'd use lazyPower's newer one, i'm not actively working on charms atm
<hazmat> Destreyf: not sure if the newer one also works with lxc, there's also limited juju support in 1.24 for managing iaas routing for container addressability (lxc) with ec2 (only using default vpc if enabled on an account and maas)
<Destreyf> hazmat: his is listed as flannel-docker
<lazyPower> right, it will install the flannel0 bridge if related to a non-docker host
<lazyPower> otherwise it reconfigures docker for use with flannel
<lazyPower> there's more work that needs to be done w/ flannel to make it work with LXC containers. namely lxc introspection, reconfiguring of existing containers, and ensuring lxc is reconfigured to start new containers w/ the flannel networking enabled by default
<Destreyf> looks like i'm using hazmats :D
<lazyPower> ya know, small stuff :)
<lazyPower> i would for now, its older, but gets the job done.
<lazyPower> once we have the newer stack updated for lxc, migration should be a short hop jump skip and --replace away
<lazyPower> er --switch
 * mbruzek reads scrollback
<mbruzek> Thanks for the ideas hazmat
<mbruzek> hazmat: do you have any interest in the hackathon competition?
<Destreyf> hazmat: sorry to bug you so much, how hard would it be to change the ip address inside of flannel?
<lazyPower> Destreyf: the ip address you get back from flannel is determined by etcd
<Destreyf> hazmat: so under hoooks, i see flannel_network set to 10.10.0.0/16, this would be the spot right?
<Destreyf> hooks.py*
<Destreyf> oh
<Destreyf> lazyPower: replied not hazmat
<Destreyf> :D
<Destreyf> i saw a reply and assumed
<Destreyf> that's bad of me
<lazyPower> Destreyf: yeah if you change that it should change the A block
<Destreyf> awesome, what would be the best way to go about routing connections to these? so i can access from say my pc (currently) 10.10.81.254
<Destreyf> my ip is 10.5.1.36
<lazyPower> Destreyf: im not sure I understand what you're trying to do
<Destreyf> me either, let me think it over
<Destreyf> i appreciate your help
<lazyPower> are you trying to setup a vxlan between your desktop and those containers?
<lazyPower> if thats the case, you have 2 options. 1) setup a vpn style connection to a machine that has a flannel0 interface. Such as using sshuttle
<lazyPower> 2) Install flannel on your pc to gain the flannel0 interface and join in the vxlan.
<Destreyf> basically, say i install a service, say wordpress on a lxc container, using hazmat's flannel charm, how can i access the wordpress that'd be hosted there
<lazyPower> ah, why not deploy haproxy on the parent node
<lazyPower> and reverse proxy to the wordpress charm?
<Destreyf> and you are a genius
<lazyPower> you dont even need flannel for that :) its host-only networking
<Destreyf> lazyPower: reason for flannel is HA setups between machines, containers and the likes
<Destreyf> on the LXC side
<lazyPower> Destreyf: check this out
<lazyPower> http://blog.dasroot.net/container-networking-with-flannel.html
<lazyPower> its a little dated now
<lazyPower> i need to do an updated post with the new stuff
<Destreyf> also lazyPower i haven't looked in depth, but any chance you guys support (or would support) linode as a backend?
<lazyPower> we currently support linode with the manual provider
<Destreyf> lazyPower: that's what i figured
<lazyPower> but you do lose some of the magic, as you manually add machines to your env.
<Destreyf> lazyPower: i'm currently doing manual adding anyways for tests/local hardware without MAAS
<lazyPower> there's been talk of writing a linode manual provider plugin based on hazmat's work w/ the DO provider plugin but nobody's taken on the project yet.
<Destreyf> lazyPower: i might just do it one of these days
<Destreyf> awww the link's broken
<Destreyf> XD
<lazyPower> which link Destreyf?
<Destreyf> on that article you linked
<lazyPower> hmm.. shouldn't be
<Destreyf> it mentions his do provider
<Destreyf> http://blog.dasroot.net/juju-digital-ocean-awesome
<lazyPower> lame :|
<lazyPower> http://blog.dasroot.net/juju-digital-ocean-awesome.html
<Destreyf> yeah
<Destreyf> found it
<lazyPower> thats what i get for migrating from ghost to pelican without running a broken link checker against my content
 * hazmat yawns
<cholcombe> this might be a flawed finding but i noticed that juju log logs any stdout that your program writes regardless of whether you call log or not.  Is that weird?
<thumper> cholcombe: nah... expected
<thumper> but probably not well documented :)
<thumper> o/ hazmat
<hazmat> thumper: greetings
<thumper> hazmat: do you know much about tox?
<thumper> hazmat: I have a MP up for python-jujuclient
<thumper> but couldn't get the tests working
<hazmat> thumper: what was the error?
 * thumper finds the mp
<thumper> https://code.launchpad.net/~thumper/python-jujuclient/jes-cache-file/+merge/261586 it is on there
#juju 2015-06-12
<thumper> hazmat: the juju deployer uses python-jujuclient, yes?
<hazmat> yes
<thumper> good, so I'm fixing the right thing at least :)
<hazmat> thumper: yes
<hazmat> thumper: yeah i saw this over email
<hazmat> i thought the traceback into tox was weird..
<thumper> me too
<hazmat> thumper: the env variable its looking for is used by the unit tests; its trying to parse the jenv file to connect a client to the named test env
<hazmat> but its unclear if the test code needs to be updated to the newer location, or just an issue with the tox config on passing that env var
<hazmat> thumper: what version of juju is this?
<hazmat> with the format change, trunk?
<thumper> hazmat: nah, a feature branch right now, but trunk soon
<thumper> soon meaning by the end of next week
<thumper> hazmat: what happens when tox reads this?  {env:JUJU_TEST_ENV:"test"}
<hazmat> thumper: hopefully the above is enough to find next steps, else if you want to me look and the branch is otherwise solid, i can take a look
<thumper> as you can see, I did set that env var
<hazmat> thumper: i assume tox sets that env var for the test, but it appears its barfing there
<thumper> but the error says: unknown environment variable 'JUJU_TEST_ENV:"test"'
<thumper> it appears to think everything after env: is the name of the env variable
<hazmat> thumper: ask on openstack, much more tox experience floating around there
 * thumper grumbles
<thumper> ETOOMUCHELSEON
<hazmat> thumper: or the tox docs, its not a tool i use
<thumper> me neither
 * thumper blames tvansteenburgh
<thumper> the tox docs don't have anything like this
<thumper> they specify hard coded values in all the tox examples
 * thumper has already looked
<hazmat> thumper: what happens when you remove that config entirely?
<hazmat> re the env var
 * thumper shrugs, didn't try
 * thumper removes the default
<hazmat> if that runs, then export the env var, and try invoking tox
<thumper> ERROR: tox version is 1.6, required is at least 1.8
<hazmat> pip install -U tox
<thumper> hence it not knowing about that expression format I bet
<hazmat> probably
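The tox behaviour thumper hit can be modelled in a few lines. Support for a `:default` fallback in `{env:KEY:default}` substitution only arrived in tox 1.8; tox 1.6 takes the whole tail after `env:` as the variable name, which matches the error above. This toy parser mimics the documented semantics and is not tox's actual implementation.

```python
# Toy model of tox's {env:...} ini substitution, to show why tox 1.6
# chokes on {env:JUJU_TEST_ENV:"test"}. Mimics documented behaviour only.

def subst_tox18(expr, environ):
    """tox >= 1.8: '{env:KEY:default}' -> environ[KEY], else the default."""
    body = expr.strip('{}')[len('env:'):]
    key, sep, default = body.partition(':')
    if key in environ:
        return environ[key]
    if sep:  # a default was supplied after the second colon
        return default
    raise KeyError('unknown environment variable %r' % key)

def subst_tox16(expr, environ):
    """tox 1.6: no default syntax -- the whole tail is taken as the key."""
    key = expr.strip('{}')[len('env:'):]
    if key not in environ:
        raise KeyError('unknown environment variable %r' % key)
    return environ[key]

expr = '{env:JUJU_TEST_ENV:"test"}'
print(subst_tox18(expr, {}))                      # falls back to '"test"'
print(subst_tox18(expr, {'JUJU_TEST_ENV': 'x'}))  # x
# subst_tox16(expr, {'JUJU_TEST_ENV': 'x'}) raises
# KeyError: unknown environment variable 'JUJU_TEST_ENV:"test"'
```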
 * thumper hacks stuff...
<thumper> Ran 22 tests in 0.143s
<thumper> FAILED (errors=13)
<thumper> some things to fix I guess
<thumper> cheers
 * thumper adds to the list of shit to fix
<thumper> hmm...
<thumper> fixed that bit
<thumper> now it is just failing tests...
<thumper> for different reasons
<hazmat> thumper: got a pastebin?
<thumper> for?
<hazmat> thumper: errors in the tests
<hazmat> failing test stdout/err
<thumper> I now have four failures and three errors
<thumper> I need to go through them to work out why they are failing
<hazmat> thumper: k
<thumper> some are due to different bits in juju
<thumper> some seems to be just weird
<hazmat> thumper: good luck, ping me if you want a second pair of eyes
<thumper> ack
<thumper> thanks
<hazmat> Destreyf_: you should have a look at the source to my version to see how to configure lxc for flannel, you basically drop a config into /etc/default/lxc-net and /etc/lxc/default.conf
<hazmat>  after reading the values from flannel's on disk state file
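hazmat's two-file recipe can be sketched as a small translation step. Upstream flannel writes its state to `/run/flannel/subnet.env` (keys like `FLANNEL_SUBNET` per the flannel docs); the charm's exact plumbing may differ, so treat the paths and key names as assumptions.

```python
# Sketch of reading flannel's on-disk state file and producing the values
# you would drop into /etc/default/lxc-net, per hazmat's description.
# Key names follow upstream flannel's subnet.env; purely illustrative.
import ipaddress

def lxc_net_from_flannel(subnet_env_text):
    state = dict(line.split('=', 1)
                 for line in subnet_env_text.splitlines() if '=' in line)
    # FLANNEL_SUBNET is the bridge address with prefix, e.g. 10.10.1.1/24
    iface = ipaddress.ip_interface(state['FLANNEL_SUBNET'])
    return '\n'.join([
        'USE_LXC_BRIDGE="true"',
        'LXC_BRIDGE="lxcbr0"',
        'LXC_ADDR="%s"' % iface.ip,
        'LXC_NETWORK="%s"' % iface.network,
    ])

sample = ('FLANNEL_NETWORK=10.10.0.0/16\n'
          'FLANNEL_SUBNET=10.10.1.1/24\n'
          'FLANNEL_MTU=1472\n')
print(lxc_net_from_flannel(sample))
```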
<lukasa> jamespage: You about?
<Odd_Bloke> Anyone around to help me debug a Nagios servicegroups problem?
<Dong> hello all? need some light to understand what the bootstrapper is
<Dong> and when I do manual targeting cloudstack, do I need to have one VM up and running and configure it to be the bootstrap node?
<Syed_A> Hello folks, I was wondering how the lxc template "juju-trusty-lxc-template" is generated ?
<Syed_A> I want to change the "juju-trusty-lxc-template" before it is transferred to the target host for the lxc container. Is it possible to do it ?
<tvansteenburgh> Syed_A: you could manually modify that template
<Syed_A> tvansteenburgh: Before it get transferred to the target host ?
<tvansteenburgh> Syed_A: while not bootstrapped, start and attach to it, make your changes, exit, then bootstrap
<hazmat> Syed_A: its not transferred its built on the individual hosts
<hazmat> Syed_A: specifically its using the lxc ubuntu-cloud template
<hazmat> which is a script to create the actual lxc instance which is used as a template for other containers
<Syed_A> hazmat: oh!, intersting. Do you know how can i manually create it ?
<hazmat> Syed_A: log in to the host.. lxc-create -t ubuntu-cloud -n juju-trusty-lxc-template    and pass additional options to the ubuntu-cloud template per taste... perhaps a better question is starting with why?
<Syed_A> hazmat: I want all the containers to have 3 nics which gets ip from 3 bridges. And /etc/hosts file of the containers to have the host entry for juju api node.
<Syed_A> hazmat: And i don't want to manually edit on every machine which is hosting containers.
<hazmat> Syed_A: hmm.. you could also use manual provider, most of that config is going to be the conf for each container, you could set it up in the template container. another option is to modify the lxc ubuntu-cloud template, or modify the lxc default template on each host.... nutshell if that's what you want you're going to need to modify something on each
<hazmat> host, be it lxc conf or templates, or juju.
<Syed_A> hazmat: Allow me to explain my setup.
<Syed_A> hazmat: I have 4 VM's
<Syed_A> hazmat: deploy, alice, bob and charlie
<Syed_A> hazmat: On deploy i have the juju gui and juju charms
<Syed_A> hazmat: I want to start containers on alice, bob and charlie
<Syed_A> hazmat: So i will just use juju deploy sevice-x --to lxc:1(alice)
<Syed_A> hazmat: and it will deploy sevice-x in a container on alice.
<Syed_A> hazmat: But by default, the container will take the ip address from lxcbr0 bridge. Which i can change by changing the bridge configuration inside /var/lib/lxc/container-x/config
<Syed_A> hazmat: I don't want to manually change the configuration on alice. I want to automate it. Such that whenever i add a machine to deploy lxc containers on it; all the containers use 3 bridges specified in the template for the container.
<Syed_A> hazmat: So a modified template on alice goes in to /var/lib/lxc and all the new containers use this modified template.
<jrwren> Syed_A: can you set your 3 interface network defaults in /etc/lxc/default.conf ?
<Syed_A> jrwren: Let me give it a try.
<hazmat> jrwren: +1 thats the solution imo
<hazmat> alternatively on the juju-trusty-lxc-template
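jrwren's suggestion would look roughly like the fragment below in `/etc/lxc/default.conf` (LXC 1.x keys, as shipped on trusty). The bridge names are placeholders for whatever bridges actually exist on the host; per-NIC static addressing would still need `lxc.network.ipv4` lines or the template change Syed_A is after.

```
# /etc/lxc/default.conf -- sketch: give every new container three veth
# NICs, one per host bridge. Bridge names are placeholders.
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up

lxc.network.type = veth
lxc.network.link = br-data
lxc.network.flags = up

lxc.network.type = veth
lxc.network.link = br-mgmt
lxc.network.flags = up
```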
<Destreyf_> hazmat: your charm works fantastic, but i learned a harsh reality, setting up an OpenStack cluster HA on 3 nodes won't work because you need floating IPs for the services such as Keystone, Cinder and others.
<Destreyf_> the flannel charm
<pmatulis> does 'juju generate-config' only work on linux? or windows as well?
<Destreyf_> pmatulis: juju is mostly (if not all) python based, so it should work on windows as well
<pmatulis> Destreyf_: ok, i'm wondering about the generation of boilerplate file and directory
<Destreyf_> I haven't used juju on windows (yet) so i'm not certain on where it places files
<lazyPower> Destreyf_: actually juju-core (which includes the client application) is Go based.
<lazyPower> pmatulis: the generate-config works on windows as well. its part of our CI tests
<pmatulis> lazyPower: thank you
<lazyPower> pmatulis: no problem. are you going to be working primarily on windows?
<lazyPower> pmatulis: if so, there is a native windows client for juju, and if you find that you *need* that touch of a posix environment we have both vagrant boxes and docker containers with an isolated juju client environment for you to use.
<Destreyf_> lazyPower: You guys rock btw.
<lazyPower> Destreyf_: well thanks :)
<Destreyf_> lazyPower: I used juju a while back and had nothing but trouble, granted i knew it was an emerging concept and its gotten much better since then.
<lazyPower> Destreyf_: we've put a lot of focus over the last year on user experience, focus on the charms and making them really useful
<lazyPower> now that actions has landed you'll start seeing even better core concepts with charms that are well written. You'll no longer need to ssh into a server - just juju action do :)
<Destreyf_> lazyPower: i can tell, its been a great experience so far, sadly doing something as foolish as attempting 3 node HA OpenStack doesn't seem to want to work :P
<lazyPower> well, thats a tough cookie to begin with Destreyf_
<lazyPower> HA anything can be problematic to model, let alone cramming services on a single machine
<Destreyf_> lazyPower: so you can setup charms that don't actually deploy to servers right? kinda like when you setup hacluster to manage say keystone, it just does the local stuff for each instance
<lazyPower> and openstack is far from simple
<pmatulis> lazyPower: no, i am helping with the juju documentation
<lazyPower> pmatulis: oh excellent!
<lazyPower> pmatulis: we appreciate you :)
<Destreyf_> lazyPower: OpenStack is amazing, in all honesty, but i've never gotten to actually play with a deployment of it before.
<pmatulis> lazyPower: right now the generate-config just says that the linux dir/file is created - nothing on windows
<lazyPower> pmatulis: it might be worth poking your head in #juju-dev, or reaching out over the list to get the specifics. I know the cloudbase guys listen to the juju list, and they are the resident experts on juju+windows
<pmatulis> lazyPower: thanks for the tip
<hazmat> Destreyf_: bummer re ostack ha and float
<hazmat> Destreyf_: if you need something that can do more manual ip mgmt, there are other options (weave, and maybe socketplane), but not sure if that's going to play nice with ostack unless its plumbed into nova
<hazmat> Destreyf_: we've got some folks around here though that probably could point you to a dev setup... typically we run it virtualbox registered into maas, and then a maas env on juju with the ostack charms
<hazmat> pmatulis: on windows it goes to the user's HOMEDRIVE + HOMEPATH env var then .juju
<pmatulis> hazmat: roger
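hazmat's answer for the docs fix can be spelled out as path arithmetic. This uses `ntpath` so the joining is Windows-style even when run elsewhere; it is just an illustration of `HOMEDRIVE + HOMEPATH + .juju`, not juju's actual code.

```python
# Sketch of where juju's config lands on Windows per hazmat:
# HOMEDRIVE + HOMEPATH, then a .juju directory. Illustrative only.
import ntpath

def juju_home_windows(environ):
    home = environ['HOMEDRIVE'] + environ['HOMEPATH']
    return ntpath.join(home, '.juju')

print(juju_home_windows({'HOMEDRIVE': 'C:', 'HOMEPATH': r'\Users\alice'}))
# C:\Users\alice\.juju
```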
<Destreyf_> hazmat: that might be worth doing, i'm just already using VM's for my tests
<pmatulis> hazmat: so the online docs are wrong then. they currently show
<pmatulis> %LOCALAPPDATA%/Juju
<jrwren> docs folks: https://github.com/juju/docs/pull/475
<pmatulis> jrwren: is that the only docs file where outfile.name was used?
<Syed_A> Can anyone please tell me how can i generate "juju-trusty-lxc-template" manually ?
<jrwren> pmatulis: good question.
<jrwren> pmatulis: yes, that is the only docs file where outfile.name was used.
<pmatulis> jrwren: ok
<Syed_A> jrwren: I can see 3 interfaces inside the containers but the containers' /etc/network/interfaces has configuration only for eth0. eth1 and eth2 are not configured.
<jrwren> Syed_A: for that you will need to update the -template :(
<Syed_A> jrwren: I can set the other two nics by using "lxc.network.ipv4".
<Syed_A> jrwren: Yes, i figured that but that's the problem i am trying to solve precisely :(
<Syed_A> jrwren: By any chance do you know how can i generate "juju-trusty-lxc-template" manually ?
<jrwren> Syed_A: not offhand. hazmat gave you some background on that already. More than I ever knew.
<hazmat> Syed_A: lxc-create -t ubuntu-cloud -n juju-trusty-lxc-template
<hazmat> Syed_A: to get all the options lxc-create -h and /usr/share/lxc/templates/lxc-ubuntu-cloud -h  (last is from memory) there's one more script for common config.. don't have an ubuntu box handy to spot check
<jrwren> /usr/share/lxc/hooks/ubuntu-cloud-prep
<jrwren> I wrote a blog post on using -t ubuntu-cloud http://jrwren.wrenfam.com/blog/2015/05/26/ubuntu-cloud-image-based-containers-with-lxc/
<jrwren> Syed_A: you can steal the cloud-init script to pass with -u by copying it out of /var/lib/cloud/instance/user-data.txt
<jrwren> i don't think that varies by juju provider, but I could be wrong.
<Syed_A> hazmat: creating it ...
<Syed_A> jrwren: Cool, Reading it
<therealmarv> Hi. I have a question. I've submitted a charm to my personal namespace 5 hours ago on trunk and it is not updated yet on https://jujucharms.com (it still shows the old revision). Did I miss a step besides pushing it to my repo? How long does this ingestion process take?
#juju 2016-06-13
<blahdeblah> OK, I'm feeling very dumb today.  I'm working on a layered DNS charm (inspired by lazyPower's previous DNS work), but I'm still having the problem I described at http://irclogs.ubuntu.com/2016/05/23/%23juju.html
<blahdeblah> stub set me straight a while back about the need to call the basic layer from all hooks in order to get the config.set.* states, but they're still executing in a rather unexpected manner.
<blahdeblah> Code is at https://git.launchpad.net/~paulgear/charms/+source/layer-dynect/tree/reactive/dynect_provider.py - the methods on lines 70 & 96 both execute during the same hook, even though they're opposites. :-(
<blahdeblah> Any ideas on why?
<blahdeblah> Code it depends on (but is not terribly relevant here) is at https://git.launchpad.net/~paulgear/charms/+source/layer-dns/tree/reactive/dns_auto.py
<blahdeblah> And here's the log from the unit in question: http://pastebin.ubuntu.com/17283348/
<blahdeblah> Any pointers greatly appreciated
<manolos> hi!
<manolos> i get an error from some nodes "update-status" hook failed
<manolos> resolve or retry does not solve it
<manolos> do you know anything that I can do? or where to look whats going on?
<blahdeblah> manolos: that depends on the charm itself; check out the logs for it in /var/log/juju/unit-*
<manolos> ok I will  check there, love you bro
<manolos> <3
<stub> blahdeblah: I'd try breaking the when_all and when_not_all decorators into several @when and @when_not statements. You might have found a bug in the new @when_all and @when_not_all shortcuts (or maybe @when_not_all doesn't work that way - is it when none of the states are set, or is it when one of the states is not set?)
<blahdeblah> stub: according to doc, when_not_all == when not all of the states are set; when_none == when all of the states are not set
<stub> I can parse that first one two ways
<blahdeblah> stub: It's when any one (or more) of the states is not set.
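The distinction stub and blahdeblah are untangling can be written out as plain boolean logic (charms.reactive's documented semantics, modelled here with sets rather than the real framework): `when_all` fires when every listed state is set, `when_not_all` when at least one is missing, and `when_none` when none are set.

```python
# Plain-boolean model of the reactive decorator semantics discussed above.
# Not charms.reactive itself -- just the truth tables.
def when_all(states, active):
    return all(s in active for s in states)

def when_not_all(states, active):
    return not when_all(states, active)     # at least one state missing

def when_none(states, active):
    return not any(s in active for s in states)  # no state set at all

states = ['config.set.a', 'config.set.b']
active = {'config.set.a'}                   # only one of the two is set

print(when_all(states, active))      # False
print(when_not_all(states, active))  # True  (one state is missing)
print(when_none(states, active))     # False (one state IS set)
```

Under these semantics `when_all` and `when_not_all` over the same states are exact opposites, which is why both handlers firing in the same hook (as in blahdeblah's log) points at a framework bug rather than a misreading of the docs.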
<brunogirin> Hi all, I'm trying to write a simple charm in juju 2.0 beta 8 on Xenial and when I run `charm build`, it crashes the terminal window: any idea how I can find out what's going wrong?
<blahdeblah> stub: I did some more experimenting, and I've come to the conclusion that the config.set.* states are just not terribly useful.  I couldn't split up the conditions, because that would turn the semantics of @when_not_all into @when_none, which is not appropriate in this instance.
<blahdeblah> stub: I decided to just go back to the solution I used when lazyPower & I discussed this previously: checking config items @when('config.changed'), and setting a single reactive state to indicate that config is ready.
<tinwood> gnuoy, time for a quick question?
<gnuoy> tinwood, sure
<tinwood> gnuoy, shall we do it here or in #openstack-charmers?
<gnuoy> tinwood, I didn't realise that was a thing!
<tinwood> beisner set it up and there's a few there now.
<tinwood> gnuoy, I guess, probably here for now.
<gnuoy> tinwood, I seem to be alone in there
<tinwood> apologies - #openstack-charms
<tinwood> gnuoy, ^^^
<gnuoy> k
<xilet> *joins too*
<mhall119> marcoceppi: jcastro: hey, I was talking to someone from Linode yesterday about Juju, and he said they've got a new API coming out that might allow a Juju backend to bring up/down instances on their cloud, is anybody currently talking with them?
<lazyPower> mhall119 I was talking to @FelicianoTech ~ this time last year
<lazyPower> (twitter handle above ^)
<rick_h_> mhall119: there's sales folks that interact with linode, but not sure if they're looking at this new api/etc
<lazyPower> looks like they moved though :( no longer with linode
<lazyPower> so thats a dead end
<rick_h_> mhall119: worth an email to Udi or such to put it on their radar, especially with a contact/etc.
<mhall119> rick_h_: will send that out, thanks
<kjackal> kwmonroe, cory_fu, admcleod, petevg: I will have to update the hadoop-client we have on bigdata-dev because mahout adds an interface to that
<cory_fu> kjackal: Why does Mahout add an interface to hadoop-client?
<kjackal> mahout is a library that sits on your client. When you submit a job that uses mahout the mahout jars get packaged
<kjackal> cory_fu: ^
<kjackal> cory_fu: as a consequence anyone that uses the hadoop-client layer can relate to mahout
<cholcombe> just realized that a bad file in the apt/sources.list.d can break bootstrapping
<rick_h_> cholcombe: ? how did you do that?
<rick_h_> cholcombe: this is juju bootstrap? off an image?
<cholcombe> rick_h_, i'm on my desktop and i had a bad entry in the sources.list.d that wouldn't download
<cholcombe> and it sent bootstrap into an endless loop where it try to apt-get update
<cholcombe> with the manual provider
<cholcombe> seems juju can't bootstrap raspberry pi 3's.  The lxc containers fail to start
<cholcombe> it can add them as a machine but i can't deploy anything
<cholcombe> looks like it fails to load the seccomp policy and then blows up
<cholcombe> anyone else tried using raspberry pi's as machines?
<admcleod> can a reactive state have punctuation (specifically underscores) in it?
<lazyPower> cholcombe - i haven't in a while, i know that mattyw has done a lot of work there. he released a poc juju snap
<lazyPower> admcleod - yep, leadership.is_leader as an example
<admcleod> lazyPower: oh yeah - thanks :)
 * lazyPower hattips
<cholcombe> lazyPower, yeah i saw his blog
<kwmonroe> cory_fu: when i push a bigdata charm that's going to be promulgated, i push to u/bigdata-charmers/trusty/X, and promulgate that.. so stuff like charm show always shows bigdata-charmers in the id.
<kwmonroe> cory_fu: but i see these are different:
<kwmonroe> charm show cs:~bigdata-charmers/trusty/apache-hadoop-namenode id revision-info
<kwmonroe> charm show cs:trusty/apache-hadoop-namenode id revision-info
<kwmonroe> the difference being the ~bigdata-charmers in the charm id
<cory_fu> kwmonroe: Yeah.  For reference, here's the output: http://pastebin.ubuntu.com/17298727/
<kwmonroe> so why does the un-namespaced charm have extra revisions?
<cory_fu> So, I'm not entirely certain, but I think that promulgation is not a plain reference like channels are.  It seems to be its own entity, with its own revision history, which contains a channel-like pointer to another entity
<cory_fu> Consider the case where a charm foo is promulgated from the namespace ~user-a
<cory_fu> User-a no longer wants to maintain it and now User-b does.  So, we unpromulgate ~user-a/foo and promulgate ~user-b/foo
<cory_fu> But we don't want the revisions for cs:foo to reset
<kjackal> kwmonroe: regarding the flume-syslog, the change is in jujubigdata library, the 6.4.1 does not have the patch
<kwmonroe> ah, ok, that makes sense
<kjackal> let me verify that the two revisions i gave you are correct, just a sec
<cory_fu> So instead cs:foo just gets another rev on its own list of revs that says "now point to cs:~user-b/foo"
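cory_fu's theory of promulgation can be restated as a toy model: the promulgated name (`cs:foo`) is its own entity whose revisions are pointers into namespaced charms, so its revision numbers never reset when maintainership moves. This is entirely speculative; it just encodes the conversation, not the charmstore's actual schema.

```python
# Toy model of promulgation as described above: cs:foo keeps its own
# revision list, each rev pointing at some ~user namespaced charm rev.
promulgated = {'foo': []}   # cs:foo's own revision history

def promulgate(name, target):
    promulgated[name].append(target)   # a new rev is just a new pointer

promulgate('foo', 'cs:~user-a/foo-5')
promulgate('foo', 'cs:~user-a/foo-6')
promulgate('foo', 'cs:~user-b/foo-0')  # maintainer changed; revs keep counting

latest_rev = len(promulgated['foo']) - 1
print('cs:foo-%d' % latest_rev)     # cs:foo-2 ...
print(promulgated['foo'][-1])       # ... points at cs:~user-b/foo-0
```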
<cory_fu> I should have included revnos in that example
<cory_fu> Maybe we need a section for explaining promulgation in https://jujucharms.com/docs/devel/authors-charm-store
<kwmonroe> +1 kjackal, they're legit.  i just lapsed on the purpose of those updates -- for net restricted envs, you're right, everybody needs a later jujubigdata
<kjackal> Cool
<kjackal> Now regarding the spotlight in the blog
<kwmonroe> cory_fu: so is it safe to assume that the promulgated cs:trusty/apache-hadoop-namenode-2 is probably cs:~bigdata-charmers/trusty/apache-hadoop-namenode-1?
 * kwmonroe fetches to check
<kjackal> kwmonroe, cory_fu, admcleod regarding George (last line in the blog). George is the head of IT in the lab, mentioned inside the text
<kjackal> We are friends, yes
<cory_fu> kwmonroe: It's not safe to assume, but I think there's a "hash" field that we can use to verify
<kwmonroe> kjackal: both admcleod and i are +1 to drop that line.  it could start a war we're not ready to fight.
<kjackal> I thought it is good to poke the IT head(s) to start offering Juju
<kwmonroe> yup cory_fu, that's way better (hash256)
<kjackal> what kind of war?
<cory_fu> kwmonroe: http://pastebin.ubuntu.com/17299258/
<cory_fu> LGTM
<kwmonroe> see what happened there kjackal? i was joking about a war but you didn't catch it.  just drop the line so i can sleep better.
<kwmonroe> cool cory_fu
<kjackal> Ah, if it is for your sleep I will definitely remove the line
<cory_fu> I think I'm going to make an alias for that
<kwmonroe> that beats my current workflow of 'charm pull' x2 and 'diff -pur'
<kwmonroe> lol, thx kjackal ;)
<cory_fu> kwmonroe: Maybe we should open an issue somewhere to see what namespace a promulgated charm is pointing to, in the output of `charm show`
<kwmonroe> that's a nice idea cory_fu.. what do you call that entity?  the source?
<cory_fu> no idea.  They must have an internal name for it.
<cory_fu> I don't even know where to report that issue.  I guess https://github.com/juju/charmstore/issues maybe?
<cory_fu> kwmonroe: Looks like stub beat us to it: https://github.com/juju/charmstore/issues/631
<kwmonroe> nice - if only i could turn up the heat on that.  lp ftw.
<kwmonroe> thumb < flame
<kjackal> kwmonroe, cory_fu, admcleod, petevg: how do we publish blogs? is there a project/wiki we edit?
<kwmonroe> kjackal: checkout the gh-pages of this repo: https://github.com/juju-solutions/bigdata-community/tree/gh-pages
<kwmonroe> add your markdown to _posts
<kjackal> kwmonroe: great, thanks, I will have it ready by the end of the day
<kwmonroe> kjackal: you can add it to a local _drafts dir if you want to serve/verify locally before publishing it to _posts.
<kwmonroe> cory_fu: this is a pickle for apache-based bundles: https://github.com/juju-solutions/layer-hadoop-client/commit/995a21f5149a36182dfb6eec59dfd4b52abfef38  this came in between hadoop-client-3 and -4.  need to make sure apache bundles stay on 3 or include openjdk if we need > 3 in the future.
<cory_fu> Hrm.  :/
<cory_fu> Well, it wouldn't be difficult to add openjdk for the client, right?  I wonder if it's worth the effort to go back and add support for the java interface to the apache charms?
<kwmonroe> no, it wouldn't be difficult.  no, i don't think it's worth the effort until we need a newer hadoop-client.  currently, v4 only adds java, so no harm in v3 imho.
<cory_fu> kwmonroe: The only issue with that is that hadoop-client is also the base layer for some of the apache charms, like apache-spark
<cory_fu> So we may run in to it if any of those need to be updated
<kwmonroe> that sounds like some far off magical bridge.  i'm sure future big data team members will have no trouble crossing it.
<cory_fu> :)
<kjackal> kwmonroe, cory_fu: I have this PR for the blog post https://github.com/juju-solutions/bigdata-community/pull/7
<kjackal> should I merge it?
<kwmonroe> kjackal: is this the thing you deployed? http://svg.juju.solutions/?bundle=cs:bundle/apache-processing-mapreduce-2
<kwmonroe> kjackal: i think it would be nice to put an svg of whatever bundle they used somewhere around the "Deploying a mapreduce processing bundle" sentence towards the end.
<kwmonroe> kjackal: it's getting super late for you -- i don't think there's a rush to get this out this afternoon, and i think a pic would be nice to show the bundle they used.
<kjackal> we used one of the old bundles (hadoop-processing-core)
<magicaltrout> work, the greek economy depends on you!
<kwmonroe> heh
<kjackal> We had the same problem with the paper as well: they were referencing a bundle that is deprecated now, so we had to change the reference to point to our landing page
<kjackal> magicaltrout: you have no idea how true this is!
<cory_fu> Hey, can someone think of a good, non-big data example charm to point Roman to that demonstrates handling scaling the service up and down?
<cory_fu> Ideally one that's using reactive, but I'd be fine w/ a non-reactive charm too
<magicaltrout> Apache Drill :P
<cory_fu> :)
<kjackal> non-big data and easy? mediawiki?
<magicaltrout> or some mysql-esque charm must scale sanely?
<magicaltrout> he deals with databases enough
<cory_fu> Actually, I guess petevg you should just link him to the handler and lib function in your branch.  :)
<petevg> cory_fu: yeah. Just working on doing that. :-)
#juju 2016-06-14
<nobuto> hi, can somebody please review my PR for apache-php layer? https://github.com/juju-solutions/layer-apache-php/pull/4
<kjackal> admcleod: not enough slaves?
<admcleod> kjackal: it appears that trying to run the hbase smoke-test when i dont have a 3 slave cluster deployed (only 1 slave) it dies due to min/max replication factor issue
<kjackal> but the smoketest sets up only one slave, right?
<kjackal> I mean the amulet test that runs the smoketest sets up a single slave cluster
<admcleod> kjackal: im not running the amulet test yet, i deployed it manually
<kjackal> admcleod: I have a question
<kjackal> you know how we have this flag --no-local-layers
<kjackal> is there a way to say no-local-layers except layer XYZ ?
<admcleod> kjackal: im not sure. i would probably rename a local dir or something to work around that
<admcleod> kjackal: or move things around
<admcleod> kjackal: --include and --exclude might be nice though
<kjackal> Yes, for now i am moving things around
<kjackal> admcleod: that is strange.... I see that hadoop-client becomes ready before the plugin even installs java....
<admcleod> kjackal: which state?
<kjackal> admcleod: I must have something set up wrong...
<kjackal> http://pastebin.ubuntu.com/17320656/
<brunogirin> Hi all, I'm trying to configure Juju 2.0 to work with an OpenStack cloud and I'm getting very confused about how to do this. I've got a Juju 1 environments.yaml file for it but am struggling to migrate that to 2.0. Can anybody help?
<admcleod> kjackal: one moment
<admcleod> brunogirin: where are you getting stuck?
<kjackal> admcleod, brunogirin: I will try!
<brunogirin> admcleod: understanding what goes into the cloud config, the model and if anything goes in the model, how I define it
<tzounox> hello, I got hook failed "update-status" on juju and this error from the logs: runner.go:275 stopped "api", err: login for "machine-0-lxc-3" blocked because upgrade in progress
<tzounox> could you help me with this?
<kjackal> brunogirin: what instructions did you follow to configure juju 2.0 with openstack?
<admcleod> brunogirin: so there is a basic example here: https://jujucharms.com/docs/devel/clouds (if you search the page for 'openstack.example.com' you'll find it)
<admcleod> tzounox: have you actually recently tried to upgrade the charm on that unit?
<tzounox> yeah it says that it is on latest version
<tzounox> it says upgrade in progress but it is not
<brunogirin> admcleod, kjackal: yes, that's the doc I was following; 1st question: the endpoint specified under region-name, is it the identity endpoint, the compute one, another one?
<tzounox> dont know whats happening
<brunogirin> and where do attributes like "use-floating-ip" and "tenant-name" go?
<admcleod> tzounox: does the juju debug-log show anything more? its not doing something strange?
<tzounox> i got this one: ERROR juju.worker runner.go:223 exited "firewaller": machine 5 not provisioned
<admcleod> brunogirin: https://bugs.launchpad.net/juju-core/+bug/1576750
<mup> Bug #1576750: juju2 usability: many options have to be specified for every bootstrap <bootstrap> <juju-release-support> <jujuqa> <landscape> <usability> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576750>
<admcleod> brunogirin: see the 3rd from last comment
<admcleod> tzounox: i have seen this before but am struggling to recall why
<admcleod> tzounox: which version of juju?
<admcleod> brunogirin: does that help?
<brunogirin> admcleod: thanks, that clarifies some of it! Is the tenant-name specified when I run juju bootstrap then?
<tzounox> admcleod
<tzounox> 1.25.5-trusty-amd64
<admcleod> brunogirin: it looks like that should also go in the config
<gnuoy> brunogirin, I have OS_TENANT_NAME env variable set, among others, when bootstrapping to point juju  at the tenant
<admcleod> brunogirin: e.g https://jujucharms.com/docs/devel/temp-release-notes, search for 'tenant-name' there
<admcleod> brunogirin: but im guessing at this point so probably what gnuoy said :)
<brunogirin> admcleod: ah right, it makes sense in the credentials!
<brunogirin> admcleod: yes, I might do that for now but in the future I'll have multiple environments on that same cloud (UAT, prod, etc) so it's useful to be able to put it in a config file that can be used on bootstrap
<admcleod> brunogirin: ah right ok
<admcleod> tzounox: so.. yeah it was a bug that affected an older version. is this an environment that has been bootstrapped for a while, functional, and just stopped working?
<admcleod> tzounox: it seems like perhaps lxd is having an issue provisioning machines, so perhaps theres a local resource issue?
<tzounox> yeah this error appeared 2 days later and it is a local setup
<brunogirin> admcleod: thanks for the help, I'll try all this and tell you what happens
<tzounox> is there anything that I can do?
<admcleod> brunogirin: no worries, interested to know
<admcleod> tzounox: there is some 'lxd refresh' type command, im trying to find ..
<brunogirin> admcleod: last question before that, the endpoint in the cloud config, is it the identity endpoint?
<admcleod> brunogirin: i think its identity
<brunogirin> admcleod: thanks, I'll try that then
<admcleod> tzounox: i cant find what im looking for, but are you able to just restart the lxd service?
<tzounox> i have done this before: i juju ssh into the service and then sudo reboot, but it did not fix it
<tzounox> I have also restarted the node and the whole system
<admcleod> tzounox: does juju status say anything revealing about machine-0?
<kjackal> I remember having this error in the past... "ImportError: cannot import name 'layer'" does anyone know what I need to do to see the layers from an action?
<tzounox> i got this from juju status
<tzounox> cannot run instances: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No available node matches constraints: zone=default)
<admcleod> tzounox: have you tried to destroy and re-bootstrap the environment? (or is this not practical for you?)
<tzounox> i have done it in the past but again i got the error after 2 days and it is very time consuming
<tzounox> so its not actually an option
<kjackal> tzounox: anything interesting under /var/log/lxc or /var/log/lxd ?
<brunogirin> admcleod: result of my test: I get a beautiful go stack trace in openstack/provider.go
<admcleod> tzounox: right - i understand. is upgrading to juju 2.0 an option?
<admcleod> brunogirin: pastebin?
<tzounox> i got this from /var/log/lxc
<tzounox> lxc-start 1465824044.634 ERROR    lxc_cgmanager - cgmanager.c:cgm_destroy:575 - Error connecting to cgroup manager
<brunogirin> admcleod: http://pastebin.com/aQFfnnuj
<admcleod> tzounox: do you have any more errors after that?
<tzounox> no its only this
<admcleod> tzounox: one recommendation is to install 'cgroupmanager' but i doubt thats going to help
<tzounox> i should install it on the machine with the juju services?
<admcleod> tzounox: on the lxd host
<tzounox> and whats the other option that i have :)
<admcleod> tzounox: well... there should be more people waking up soon (US time zone) who might be able to help with this
<tzounox> ok thank you very much for your time
<dimitern> brunogirin: hey there
<admcleod> dimitern: so brunogirin is trying to bootstrap openstack and gets this: http://pastebin.com/aQFfnnuj
<dimitern> brunogirin: you were having some issues with juju 2.0?
<brunogirin> dimitern: hi, yes, I seem to struggle with bootstrapping OpenStack
<dimitern> nasty! :/
<admcleod> tzounox: no problem, ill see if i can find someone who might be able to help more
<dimitern> brunogirin: what version of openstack is that if you know?
<brunogirin> dimitern: no idea, it's those guys: http://www.datacentred.co.uk/
<brunogirin> dimitern: according to the compute endpoint, it might be v2
<dimitern> brunogirin: thanks, I'll have a look; and you're using the juju-2.0-beta8 release tarball ?
<dimitern> brunogirin: or you build from source?
<brunogirin> dimitern: yes 2.0-beta8 from xenial
<brunogirin> dimitern: do you want me to file a launchpad bug with the cloud config in it?
<dimitern> brunogirin: that will be great, thanks!
<dimitern> brunogirin: I'll have a look what might be causing this
<brunogirin> dimitern: thanks, I'm happy to create a user for you on that cloud if it helps find out what the problem is
<admcleod> it looks like datacentred is running juno
<brunogirin> dimitern: stupid question: how do you file a bug when ubuntu-bug tells you that juju-2.0 is not an official ubuntu package and the problem can't be reported?
<brunogirin> dimitern: bug filed here: https://bugs.launchpad.net/ubuntu/+source/juju-core/+bug/1592365
<mup> Bug #1592365: Juju 2.0 crashes on bootstrap with datacentred.co.uk OpenStack cloud <juju-core (Ubuntu):New> <https://launchpad.net/bugs/1592365>
<dimitern> brunogirin: awesome! We'll track the progress there, thanks!
<brunogirin> dimitern: thanks! If you need more info, please tell me and as I said, happy to give you access to the environment if it helps
<dimitern> brunogirin: hey, can you try something first please? remove the config section in the datacenterd section of your clouds.yaml and try bootstrapping again?
<dimitern> brunogirin: default config per cloud cannot be specified in the clouds.yaml as far as I know, and might be the reason of that panic (well, it shouldn't panic)
<brunogirin> dimitern: same error; I got the default config example from 3rd comment before last in this bug report: https://bugs.launchpad.net/juju-core/+bug/1576750
<mup> Bug #1576750: juju2 usability: many options have to be specified for every bootstrap <bootstrap> <juju-release-support> <jujuqa> <landscape> <usability> <juju-core:Fix Released by axwalk> <https://launchpad.net/bugs/1576750>
<dimitern> brunogirin: oh, ok (news to me); can you try bootstrap with --debug and attach the log to the bug please (scrub secrets / keys before that, if any)?
<brunogirin> dimitern: done
<dimitern> brunogirin: thanks! That revealed the issue, I think
<brunogirin> dimitern: excellent!
<dimitern> brunogirin: please try using the endpoint with the version - https://compute.datacentred.io:5000/v2.0/ - as endpoint in your config
<dimitern> brunogirin: that should let bootstrap go further (hopefully complete)
<brunogirin> dimitern: it did go further and is now finishing with an error that I'll paste in the bug report
<brunogirin> dimitern: pasted
<dimitern> brunogirin: thanks, looking..
<dimitern> brunogirin: ok, that's more like it - it fails now because it needs the metadata for juju tools and cloud images
<brunogirin> dimitern: OK so how do I provide it with that?
<dimitern> brunogirin: please try following the docs for setting up juju on private clouds: https://jujucharms.com/docs/devel/howto-privatecloud
<dimitern> brunogirin: instead of 'tools-metadata-url' though, use 'agent-metadata-url' (the doc needs updating)
<brunogirin> dimitern: I'm reading through it and I struggle to understand what I need to do. Isn't that part of the config dependent on my cloud provider and how do I get the right info?
<dimitern> brunogirin: basically, you need 2 things - agent metadata (the juju tools tarballs), which you can use 'sync-tools' to copy to a local dir from the official sources, and image-metadata-url (the cloud images, those need to match what you have as images on datacenterd)
<dimitern> brunogirin: steps: 1) cd $HOME; mkdir -p juju-metadata/tools; juju sync-tools --local-dir=$HOME/juju-metadata/tools; 2) juju metadata generate-tools -d $HOME/juju-metadata/tools; 3) juju metadata validate-tools -d $HOME/juju-metadata/tools;
<brunogirin> dimitern: thanks I'll try that; for the second part (images), do I need to provide a mapping from provider image IDs to series?
<dimitern> brunogirin: then for the images, 4) juju metadata generate-image -u https://compute.datacentred.io:5000/v2.0/ -d $HOME/juju-metadata/images -a amd64 -i <nova-image-id>; 5) juju metadata validate-image (same args) to verify
<brunogirin> dimitern: thanks! I'll try that and report back
<dimitern> brunogirin: yeah, the "nova-image-id" must match for arch and series (e.g. amd64 ubuntu xenial)
<dimitern> brunogirin: you should pass -s xenial (for example) to generate-image (and validate-image later), in addition to -i -d and -a
<brunogirin> dimitern: I get the following error when I do juju metadata validate-tools: http://pastebin.com/TUnYsV6F
<dimitern> brunogirin: I think you need to pass the -u (endpoint url) and -p openstack and -r (region name)
<brunogirin> dimitern: for tools as well?
<dimitern> brunogirin: yeah, see 'juju help metadata validate-tools'
<brunogirin> dimitern: adding the series to all of this works, thanks
<dimitern> brunogirin: great! so if validate-image also worked ok, you should be good to bootstrap
<dimitern> brunogirin: would you do me a favor please? if you add a comment to the bug describing what commands you needed to run to set up images and tools, that will be extremely useful both for other users and for updating the docs properly for 2.0
<brunogirin> dimitern: yes I'll do that
<dimitern> brunogirin: awesome! thank you very much, and sorry for the troubles :)
<brunogirin> dimitern: apparently, there's no metadata validate-image command so I'll skip that
<dimitern> brunogirin: sorry, it's validate-images
<dimitern> (but strangely 'generate-image' :()
<brunogirin> dimitern: I assume it's because you can create multiple images and validate them all
<dimitern> brunogirin: hmm, good point!
<brunogirin> dimitern: I get the same error when I do juju bootstrap and it looks like it doesn't check my local folder: http://pastebin.com/0VqPqYMX
<dimitern> brunogirin: sorry, otp, will get back to you soon
<brunogirin> dimitern: no worries :)
<dimitern> brunogirin: try using "file:///home/<yourusername>/juju-metadata/tools" as "agent-metadata-url" and the same but with "/images" for "image-metadata-url"
<dimitern> brunogirin: to pass these to bootstrap, use e.g. --config agent-metadata-url='file:///...' --config image-metadata-url='file:///..'
<brunogirin> dimitern: working so far, it's running apt-get upgrade! thanks! I'll add the full set of commands that work to the ticket.
<dimitern> brunogirin: great! I'm glad it finally worked ;)
<brunogirin> dimitern: so am I :) What's the process to contribute out of the box support for new clouds so that others don't have to go through this?
<rick_h_> jamespage: ping, invited you to a chat to get me up to speed on HEAT related stuff if you can do it tomorrow. Let me know if it's not a good time.
<dimitern> brunogirin: our documentation can be improved - there's a link in the left menu below about contributing
<brunogirin> dimitern: yes I saw that, once I've got it all working, I'll see if I can contribute some docs
<dimitern> brunogirin: even if it's a rough draft, that's OK we have technical writers that will take care of formatting, etc.
<brunogirin> dimitern: it was so close! But it eventually failed after starting Mongo still because it can't find images but this time on the controller it seems...
<dimitern> brunogirin: can you paste? maybe the series / arch do not match?
<brunogirin> I'll add the log to the ticket as it's quite long
<brunogirin> dimitern: added to the bug with attached log file
<rick_h_> lazyPower: got a sec?
<dimitern> brunogirin: thanks!
<lazyPower> rick_h_ certainly, whats up?
<rick_h_> lazyPower: chef questions for you if you have a few to chat?
<rick_h_> lazyPower: https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=0 if so
<lazyPower> neiljerram hey o/ are you around
<brunogirin> dimitern: I got it to work and can deploy charms now, however, it doesn't seem to expose them properly so I need to investigate that
<dimitern> brunogirin: sweet! for exposing you should set the 'use-floating-ip' setting to true (at bootstrap, or with `juju set-model-config use-floating-ip=true`, IIRC)
<brunogirin> dimitern: I did that but the floating IP points to the Juju controller, not to the charm instance so when I go to that URL I get nothing and `juju status` still shows the internal IP
<brunogirin> dimitern: do you need 2 floating IPs for it to work on OpenStack, one for Juju, one for the service you're trying to expose?
<dimitern> brunogirin: use-floating-ip=true means juju will add one FIP per instance
<dimitern> brunogirin: if you deploy a unit to the controller node and expose its service, you should be able to access it
<brunogirin> dimitern: ah right, so my provider needs to allow me to do that! They limit me to one floating IP at the moment so I'll ask them to increase that.
<dimitern> brunogirin: no, sorry - it's not that juju will add more than 1 FIP
<dimitern> brunogirin: I'd suggest this - try `juju add-machine --series xenial` and once that machine is up, `juju add-unit <yourapp> --to <#>` (the newly added machine id, likely 1)
<gennadiy> hi everybody, we are deploying a windows (windows server 2012) machine with juju to openstack. how do we get the password for this machine?
<gennadiy> seems there is no cert with user credentials
<gennadiy> so "nova get-password <instance-id> <private-key-path>" it does not work
<dimitern> brunogirin: assuming you also did `juju expose <yourapp>` and `juju status` shows the added unit on the new machine is running and has open ports (e.g. 80/tcp), you should be able to access it
<dimitern> gennadiy: can you ssh to a linux machine ok?
<gennadiy> it's not a linux
<gennadiy> it's windows
<dimitern> gennadiy: ok, but can you start a linux instance and ssh into it ok?
<gennadiy> sure
<dimitern> gennadiy: if your keypair is set correctly you should be able to
<rick_h_> gennadiy: so this is a hyperv on nova?
<gennadiy> i have 10 machines with linux and 1 with windows
<gennadiy> it's nova
<brunogirin> dimitern: mmm, juju add-machine says that my new machine is in error
<rick_h_> gennadiy: right so this is a nova compute instance on openstack you're trying to get into? Was it a juju deployed charm?
<gennadiy> i use empty windows charm, just for testing
<rick_h_> gennadiy: ah ok
<dimitern> brunogirin: what's the error?
<gennadiy> machine is created - 40         started 10.9.8.110 9b003da2-bc22-41e3-81ad-def9485df4d2 win2012r2 nova
<gennadiy> but i can't connect to it - juju ssh 40 doesn't work
<brunogirin> dimitern: juju status just shows the state to be error with no explanation, nor do I have any log in juju debug-log
<gennadiy> also i don't know password for rdp.
<dimitern> brunogirin: try juju status --format=yaml for more detail
<gennadiy> metadata: Key Name - None
<brunogirin> dimitern: ah right, it now says "IP allocation over quota." so I need to ask them for additional IPs
<gennadiy> when i create a windows machine in openstack manually i can provide a pem file and after that i can use "nova get-password <instance-id> <private-key-path>" to get the password. but i don't know how to do it in the juju case
<dimitern> brunogirin: yeah, that's it :)
<roadmr> hey folks, what's the story about creating new charms using ansible integration? e.g. charmhelpers.contrib.ansible.AnsibleHooks? is it still OK to do so, or should we move toward something else?
<dimitern> gennadiy: I'd suggest asking somebody from cloudbase (I've pinged them about your question in #juju-dev)
<gennadiy> dimitern, thank
<gennadiy> *thank you
<dimitern> gennadiy: np, if you get no response, I'd suggest sending a mail to juju@lists.ubuntu.com to ask
<gennadiy> dimitern: have you got any updates about the windows machine? i don't have access to #juju-dev
<lazyPower> gennadiy they haven't responded, but they are also far eastern europe
<lazyPower> romania i believe
<lazyPower> it's past EOD (6:55 pm) in that timezone, so you may want to go ahead and ping the list
<lazyPower> also, it won't let you join #juju-dev?
<rick_h_> balloons: do you have any idea for gennadiy in our tests as to if we deploy a windows application how we log into the server at all?
<rick_h_> alexisb: do you recall who might be idling from cloudbase that would know? ^
 * balloons looks at scrollback
<lazyPower> rick_h_ - from what i recall there was an openssl requirement in maas, you add the tls key to use rdp
<rick_h_> lazyPower: hmm, so this is openstack deployed sample windows charm.
<lazyPower> oh right
<rick_h_> lazyPower: so I'm wondering what we do to get into that then
<lazyPower> sorry, I missed that very important detail...
<lazyPower> i'm not seeing anything relevant in a cursory google search either
<gennadiy> another question related to juju. can we manage openstack networks from juju? also can we manage NICs? because for some machines we need to have 2-3 NICs from different networks
<balloons> sounds like a job for maas
<alexisb> rick_h_, I would start with gabriel he should be on juju-dev
<rick_h_> alexisb: k ty
<Prabakaran> My charm uses the MySQL database charm and I am developing my charm as a layered charm. To connect with the MySQL database my charm uses the MySQL interface. My requirement is that I have to create a database by running mysql commands (the MySQL database is in a different container). Is it possible for me to run mysql commands from the remote container (where my charm is installed)?
<lazyPower> Prabakaran - if you have the mysql client tools available to you, you certainly can. You'll get the data to build a connection string from the interface, which includes a database, username/password, address and port.
<Prabakaran> Can u please explain it to me in detail?
<Prabakaran> how to make use of it in my charm?
<lazyPower> Prabakaran - so in your layered consumer of mysql, you need to include the mysql interface layer in your layer.yaml, update your metadata.yaml with the requires: declaration, and use that relation name:  @when('mysql.available') - install the mysql client tools, and execute your database migration from there using the mysql tooling (or substitute with your language tooling)
<Prabakaran> k.. i will to explore this.. and do u have any example charm for me to understand?
<lazyPower> Prabakaran - there's a walkthrough by example of something similar here: https://jujucharms.com/docs/devel/developer-layer-example
<jose> bdx: lemme see if I can reproduce that bug
<Prabakaran> Hi Lazypower, i just looked at the link you sent; it just uses the database and does not run any mysql scripts. Adding to this question: how does ssh work between two containers? do we need to exchange ssh keys between the two containers?
<lazyPower> Prabakaran - you dont need to ssh with the mysql connection string. you can install mysql-client apt package and run mysql -u db.username() -p db.password() db.database()   << prabakarans_mysql_script.sql
<Prabakaran> can you help me with an example?
<lazyPower> Prabakaran - i dont have a ready made example off hand, no
<lazyPower> Prabakaran are you familiar with executing mysql scripts remotely with the mysql cli?
<Prabakaran> no i am not.. i will have to explore
<lazyPower> Prabakaran - http://stackoverflow.com/questions/10676432/how-to-execute-an-remote-sql-in-mysql-command-line
<lazyPower> Prabakaran - are you writing your layer in bash or python?
<Prabakaran> in python..
<lazyPower> ok, the path to get from a-z will be a little long winded, but bear with me and i'll guide you on irc. i'm multi-tasking so expect long pauses
<lazyPower> Prabakaran - add layer:apt and interface:mysql to your layer.yaml to start. confirm when ready for next step
<Prabakaran> ya sure
<Prabakaran> i am using the ibm-base layer which uses apt.... do i need to use apt here?
<Prabakaran> i have written layer.yaml
<lazyPower> Prabakaran - if you have layer:apt in a lower layer you do not need to re-include it, no
<Prabakaran> i have written it
<lazyPower> Prabakaran - define the mysql requires relationship in metadata.yaml:  http://paste.ubuntu.com/17331636/
<Prabakaran> ya i have written metadata.yaml
<Prabakaran> metadata.yaml is http://paste.ubuntu.com/17332041/
<Prabakaran> Lazypower i am ready for next steps..
<Prabakaran> meanwhile
<lazyPower> Prabakaran : add the reactive code to your layer - http://paste.ubuntu.com/17332169/
<lazyPower> Prabakaran - then build, deploy, and debug the charm interactively. That code is incomplete, and i left documentation around whats happening on each line. But it will attempt to run the sql, against the remote mysql unit. If you have multiple units in the conversation, or encounter the code path twice (there's no @when_not decorator as a tip) it will likely error. So i got you far enough to work with the databag and see one general workflow.
<lazyPower> sorry, s/databag/interface
<RAJITH> Hi, I have
<RAJITH> I have two charms, each on a different machine; if I need to delete a few files on unit 1 from unit 2, please let me know how I can do that
<Prabakaran> Since i am new to python, what do these two lines do? cmd = "mysql -h mysql://{0}:{1} -u {2} -p{3} {4} << {5}" and cmd.format(mysql.host(), mysql.port(), mysql.username(), mysql.password())
<Prabakaran> http://paste.ubuntu.com/17332169/
<lazyPower> Prabakaran - that is string manipulation. We're assigning position in the string to replace with variables. In this case, we are surfacing the data coming from the mysql interface, to replace those inline
<lazyPower> mysql -h mysql://192.168.1.2:3600 -u chumba -pwomba -d ibm_was_here << sql-template.sql  -- is what that string turns into.
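As a side note, the format string in that paste has six placeholders ({0}-{5}) but only four arguments; also, the mysql client's `-h` flag takes a bare hostname (no mysql:// scheme), the port goes in `-P`, and `<<` starts a shell heredoc rather than feeding a file (`<` does that). A corrected sketch -- the helper name and the example values are invented for illustration; in a real reactive handler the connection details would come from the mysql interface object:

```python
def build_mysql_cmd(host, port, username, password, database, script):
    """Assemble a mysql CLI invocation that feeds a local .sql file to a
    remote server.

    -h takes a bare hostname (no mysql:// scheme), -P is the port flag,
    there is no space after -p, and '<' redirects the script file into
    the client's stdin.
    """
    return "mysql -h {0} -P {1} -u {2} -p{3} {4} < {5}".format(
        host, port, username, password, database, script)
```

In a handler this string would typically be run with something like `subprocess.check_call(cmd, shell=True)`, or better, built as an argument list to avoid shell quoting issues with the password.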
<Prabakaran> same way, is it possible for me to write commands like this: mysql> use pac; mysql> source /var/lib/juju/agents/unit-pac-helper-0/charm/files/archives/dbschema/DBschema/MySQL/egodata.sql; mysql> source /var/lib/juju/agents/unit-pac-helper-0/charm/files/archives/dbschema/DBschema/MySQL/lsfdata.sql; mysql> source /var/lib/juju/agents/unit-pac-helper-0/charm/files/archives/dbschema/DBschema/MySQL/lsf_sql.sql; mysql> source /var/lib/juju
<Prabakaran> ya i got it
<lazyPower> Prabakaran - that assumes you are on the mysql host, in the mysql cli from that paste.   What i offered was a path to remotely execute the database scripts.
<lazyPower> http://paste.ubuntu.com/17332728/ - i also had some errors in there, and i think i missed the database flag, check the mysql cli on the host and adjust as necessary :)
<Prabakaran> i got it a little bit.. but i have some sql script files... like egodata.sql and lsfdata.sql
<Prabakaran> i want to execute those scripts
<Prabakaran> can u please tell me where i can include these in the code you sent  http://paste.ubuntu.com/17332728/
<Prabakaran> is it under cmd section in the line number 17 and 18 http://paste.ubuntu.com/17332728/?
<Prabakaran> manually i will be running in the mysql command prompt like "source <script file name>". I am not sure where and how i need to implement this in the code  http://paste.ubuntu.com/17332728
<Prabakaran> hi <lazyPower>
<lazyPower> Prabakaran - then generate additional commands to execute those files in the manner outlined in the reactive code i sent?
<lazyPower> Prabakaran i'm about to head out for lunch, give what i sent a go, and try to sort it with remote execution instead of assuming you'll be attached over ssh to the mysql cli.
<Prabakaran> yes sure
<Prabakaran> i will try this
<Prabakaran> and come back to you if i have any more doubts
<petevg> cory_fu: I've got a philosophical question about smoke tests for you (or for anyone else who wants to chime in). If a test requires more than one unit of a given service to be deployed, is it proper to code it up as a smoke test? Or is it better to just run it with the other amulet-based integration tests?
<petevg> My inclination is to say the latter, because a smoke test is an action that you run against a specific unit, and I wouldn't expect it to require me to do some separate action, like spinning up other units.
<cory_fu> petevg: The latter.  A smoke-test should be something that you can run on each unit to determine that it's functioning as expected
<petevg> That makes sense. The problem is that kind of blocks me from writing smoke tests for Zookeeper -- pretty much anything interesting that you can do to test Zookeeper requires that you deploy multiple Zookeeper nodes :-/
<cory_fu> The smoke-test could perhaps determine if it's part of a cluster and test that it can communicate with the cluster, for instance, but it should not depend on the cluster
<lazyPower> you could put the smoke test in a bundle...
<lazyPower> which makes sense in this case
<lazyPower> logstash is useless by itself, so all the comprehensive tests live in the bundle deploying the additional components
<petevg> lazyPower: the other piece of this is that I'm trying to make a smoke test that can live independently of the charm, in an upstream project.
<cory_fu> lazyPower: I don't think that's a smoke-test in that case, it's a bundle or integration test
<lazyPower> fair point cory_fu
<cory_fu> To me, a smoke-test is something akin to a health check.  Sort of a "am I up and able to perform my role" check
<petevg> Agreed. The tricky bit is that Zookeeper's role is mainly to talk to other Zookeepers.
<cory_fu> But it is perhaps more comprehensive than a health check, or maybe takes a bit longer (though it shouldn't take too long, either)
<petevg> So smoke testing it is hard to disentangle from deploying multiple instances of it.
<cory_fu> petevg: I would say make the smoke-test have some conditional logic in it.  If it's deployed by itself, just make sure that the service is up and running and can handle requests.  If it detects a cluster (ensemble), then do some additional checks to make sure it can work with its peers properly
<petevg> cory_fu: That sounds reasonable.
<petevg> lazyPower, cory_fu: thx for the thoughts. I think I have a better idea about what I need to do.
<kwmonroe> smokes could also be as simple as "is the zk init script present, is the zk java process running as the correct user, is the port open".
<kwmonroe> we've hit plenty of errors where just knowing the right process/port was what we thought it should be.
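A check like the one kwmonroe describes can be tiny. Here's a sketch; the ZooKeeper client port (2181), the process pattern, and the helper names are assumptions, not taken from any actual charm:

```python
# Minimal "is the process running, is the port open" smoke checks.
# Port number and process pattern are assumptions; adjust per service.
import socket
import subprocess

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def process_running(pattern):
    """Return True if pgrep finds a process matching pattern."""
    return subprocess.call(["pgrep", "-f", pattern],
                           stdout=subprocess.DEVNULL) == 0

def zookeeper_smoke_test():
    assert process_running("zookeeper"), "zk process not running"
    assert port_open("localhost", 2181), "zk client port closed"
```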
<lazyPower> hey cory_fu - wanna see something fun? :) http://paste.ubuntu.com/17336222/ <- thats output of charms.reactive get_states on a k8s leader
<cory_fu> lazyPower: That's a fair number of states.  :)
<lazyPower> Yeah, i was debugging yesterday and that struck me as a heap :)   But there's really a lot going on in here.
<cory_fu> lazyPower: https://github.com/juju/plugins/pull/67
<lazyPower> cory_fu lgtm - merged
<cory_fu> lazyPower: :)  https://github.com/juju/plugins/issues/63 should be resolved now
<lazyPower> nice
<magicaltrout> the nature of app containers makes me sad when it comes to networking. Working in a non-juju docker environment is like pulling teeth when you want stuff to communicate with each other
<lazyPower> magicaltrout - i too feel your pain
<magicaltrout> thanks lazyPower ! ;)
<lazyPower> magicaltrout - i just revised layer:flannel, and i'm still feeling like we can get more punch out of sdn than what i'm currently doing.
#juju 2016-06-15
<stub> Can anyone confirm that relation-list no longer includes the departing unit in a -departed hook, and since which version of Juju?
<stub> cory_fu: (^^ from your issue comment, which might help me in other ways)
<Makyo|away> wallyworld: would it be alright to land https://github.com/juju/bundlechanges/pull/24 ?  We've just about finished implementing the changes in the GUI
<wallyworld> Makyo: that would be great
<Makyo> wallyworld: ty!
<wallyworld> Makyo: juju master pulls in a specific rev off that repo so I'll update the deps once that lands
<Makyo> wallyworld: ah, thanks, that makes sense
<Makyo> wallyworld: landed, ty
<wallyworld> Makyo: awesome, thank you for doing the gui changes
<gennadiy> hi everybody,
<gennadiy> does windows support subordinate charms now?
<xilet> Really juju newbie question, if I need to make one change to a single config file in an already deployed charm (adding a new parameter to nova.conf), what is the syntax to do that? Or do I just ssh in and change it manually?
<jamespage> ChrisHolcombe, https://review.openstack.org/#/c/328374/3
<jamespage> some comments; I think we should bump the default as well if our reference deployments need a larger value than 300 seconds...
<gennadiy> @xilet seems you should make a new version of the charm and upgrade it: juju upgrade-charm
<jamespage> xilet, don't ssh and change it manually - the next time a config-changed runs (like on a reboot) your change will be overwritten
<jamespage> xilet, what are you trying to do?
<xilet> still working on the iscsi issue, so trying to add in volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver
<jamespage> gennadiy, I can't think why windows would not support subordinate charms
<gennadiy> it doesn't work for me
<jamespage> xilet, you might be able to poke that in using config-flags
<gennadiy> ok for linux machine
<jamespage> juju set nova-compute config-flags="volume_drivers=iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver"
<jamespage> but that only writes to the DEFAULT section so it might not do the trick
<jamespage> xilet, this is for the userspace iscsi support right?
<xilet> yeah
<xilet> ahh thanks, that is the syntax I was looking for
<jamespage> xilet, this might make a nice first contribution for you to make it an official feature
<jamespage> juju set nova-compute userspace-iscsi=true
<jamespage> is so much nicer!
<xilet> if I can get this working, I will happily do so
<jamespage> xilet, prove the theory and then if you'd like to work on a more formal config option let me know
<jamespage> they are normally quite easy to add for this sort of thing
<xilet> good to know, clearing out the old tests then will go through and see if it can access the disks from openstack
<ionutbalutoiu> Hello! Can I configure Juju to boot new machines with a keypair via the OpenStack provider? Currently machines added to OpenStack via Juju do not have any keypair.
<gennadiy> @ionutbalutoiu for linux machine or windows?
<gennadiy> for linux machine it will add juju key
<ionutbalutoiu> Neither one gets a nova keypair. But for Linux, I think the juju key is authorized via metadata.
<ionutbalutoiu> But for Windows, you don't have any option.
<gennadiy> juju uses keys from - ~/.local/share/juju/ssh/
<gennadiy> for windows - i have asked this question yesterday
<gennadiy> http://ask.cloudbase.it/question/1239/jujuopenstack-how-to-get-windows-machine-password/
<gennadiy> also i have asked this question on juju mail list.
<gennadiy> i have got this answer
<gennadiy> You should be able to do something like:     juju run --machine <machine-id> "net user JujuAdministrator <password>"
<gennadiy> but i haven't tested it
<gennadiy> now we use a workaround - our charm adds a local admin
<ionutbalutoiu> Yes. This is possible only with Juju 2.0
<ionutbalutoiu> for stable version 1.25.5 you cannot set/retrieve any password for a Windows machine unless you have access to the console.
<ionutbalutoiu> Juju run on Windows doesn't work on versions of Juju < 2.0
<ionutbalutoiu> After you boot a Windows machine with Nova, you can retrieve the Admin password via "nova get-password <instance_name> <private_key>"
<ionutbalutoiu> but that's possible only if you boot the machine with a keypair. It would be nice if Juju had a config option so that every machine it boots uses that configured keypair.
<gennadiy> yes, i know.
<gennadiy> so our workaround - create our own user on the machine from the charm
<gennadiy> it's very easy - https://github.com/cloudbase/juju-powershell-modules
<gennadiy> there is a New-LocalAdmin function
<ionutbalutoiu> I know. I'm working at Cloudbase. :)
<ionutbalutoiu> Still, that's sort of a hack. I still think that adding a config option for a keypair in the case of the OpenStack provider would make things cleaner.
<lazyPower> heyyyy powershell modules
<lazyPower> right on, i forgot those were published
<admcleod> kjackal: im interested in the 'slow sync' messages in the hbase log during the smoke-test - have you seen these?
<kjackal> admcleod: nope, I suspect what they mean but have never seen them
<kjackal> let me see in hbase docs/list
<kjackal> admcleod: something like this: https://issues.apache.org/jira/browse/HBASE-11240
<lazyPower> https://github.com/juju-solutions/interface-etcd-proxy/pull/2 - could use a quick CR on this if anyone has the bandwidth
<lazyPower> i'm down a team-mate for spot reviews
<admcleod> kjackal: right. >> "abnormal datanode" <<
<kjackal> yes, basically, HBase writes everything in a write ahead log and then keeps it in memory. There are phases when everything that is in memory has to be written down to the secondary storage (compaction phases). I guess this message means that the secondary storage takes too long
<kjackal> admcleod: ^
<admcleod> kjackal: right, which means...
<gennadiy> hi lazyPower, do you have some windows subordinate charms in the juju store or github?
<lazyPower> gennadiy - I dont believe we do, i think you're pioneering
<lazyPower> gennadiy - the only windows charms I have interfaced with were principal charms provided by cloudbase
<gennadiy> seems i was wrong about the windows subordinate issue. i have just deployed an empty subordinate charm - everything is ok
<lazyPower> gennadiy - when you say "I have deployed empty subordinate charm" - do you mean on a windows series?
<gennadiy> yes, i used this one - https://github.com/cloudbase/windows-charms-boilerplate
<lazyPower> i'm not surprised that would work, if it has no hooks. The agent would skip all hooks.
<lazyPower> and it would appear to be fine, when it really just no-op'd
<lazyPower> gennadiy - have you logged a bug with the charm that appears to be broken? I dont have the time right now to dig into it but I'm interested in following the conversation
<gennadiy> it's my charm. i'm creating 2 charms for windows. one is principal, another is subordinate
<lazyPower> ah, well that does present certain... hurdles
<cory_fu> stub: I confirmed it yesterday in 2.0-beta8
<cory_fu> admcleod, kjackal: I don't know if you noticed that I moved the card, but the plugin ready issue was, I think, best fixed in charms.reactive, if you want to take a look: https://github.com/juju-solutions/charms.reactive/pull/71
<admcleod> cory_fu: ah yeah ok
<cory_fu> Technically, the issue was a subtlety of Python, where `None and <anything>` returns None and not False like you would expect.  That, combined with using None as a default value to mean something significant, made for unexpected behavior
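A quick demonstration of the gotcha cory_fu describes (plain Python; check() is a toy function, not code from charms.reactive):

```python
# `and` returns the first falsy operand itself, not a bool, so
# `None and <anything>` evaluates to None rather than False.
print(None and True)        # None
print(None and "anything")  # None

# Combined with None-as-meaningful-default, a chain like this silently
# short-circuits to None instead of producing False.
def check(value):
    return value and value.upper()

print(check("ok"))   # OK
print(check(None))   # None, which is falsy but is not False
```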
<stub> cory_fu: Ta. I'll assume it is a 2.0 feature for now.
<cory_fu> stub: "Feature"
<cory_fu> Seems like a bug to me, but at least I was able to work around it
<cory_fu> Though I suppose it depends on how strictly you take the fact that the hook is -departed and not -departing
<stub> cory_fu: Oh, I was thinking it might be part of a fix. Before, in a departed hook neither end had an idea about which unit was departing.
<cory_fu> That seems like a round-about way of communicating it.
<cory_fu> I mean, not wrong, per se, but then you still have to jump through the hoop of comparing JUJU_REMOTE_UNIT to the relation-list to figure out if it's you or the other unit
<stub> cory_fu: So I guess I shouldn't rely on the behavior and wait for a real fix to https://bugs.launchpad.net/juju-core/+bug/1417874
<mup> Bug #1417874: Impossible to cleanly remove a unit from a relation <canonical-is> <charms> <feature> <hooks> <sts> <sts-needs-review> <juju-core:Triaged> <https://launchpad.net/bugs/1417874>
<lazyPower> cory_fu - the one thing i was curious about that pr is we're catching CalledProcessError
<lazyPower> does relation_get ever return > 0 except in that scenario?
<lazyPower> seems like we could be hiding other failures and skipping them due to that one little stanza. But it seems nitty and i dont have a solid use case for where thats bad
<cory_fu> lazyPower: Well, no charm that I've ever seen captures CalledProcessError around the call to relation-get and I've never seen it fail in another circumstance.  But you're right that it's perhaps a bit heavy-handed.  Ideally, hookenv.relation_get would capture the stderr output on failure and we could inspect that, but that would require changes in charmhelpers.
<lazyPower> I'm not about that life at the moment
<lazyPower> carry on as you were sir
<cory_fu> lazyPower: We could also assume that the units list cleanup is functional and remove the check entirely.  I'm not against that; I added the except before I was sure if I could reasonably do the cleanup
<lazyPower> Thats the one niggly thing, and its really a nit
<lazyPower> without empirical evidence that "hey this fails under x condition"
<cory_fu> I did have the same concern, so I'm not against changing it.
<lazyPower> :O Does this mean you're rubbing off on me cory_fu?
<cory_fu> :)
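The stderr-capturing relation_get cory_fu wishes for might look roughly like this. To be clear, this is not the real charmhelpers hookenv API: the wrapper, the error message, and the cmd parameter (a hook for testing) are all assumptions.

```python
# Sketch: run relation-get and, on failure, re-raise with stderr attached
# so callers can tell a departed unit apart from other failures.
# NOT the actual charmhelpers implementation; illustrative only.
import subprocess

def relation_get(unit, rid, cmd=("relation-get",)):
    proc = subprocess.run(
        list(cmd) + ["-r", rid, "-", unit],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if proc.returncode != 0:
        raise RuntimeError("relation-get failed: %s"
                           % proc.stderr.decode().strip())
    return proc.stdout.decode()
```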
<stub> cory_fu, lazyPower , marcoceppi : So I knocked up https://github.com/stub42/ReactiveCharmingWorkflow a while ago, to codify and improve the process I'd embedded in the PostgreSQL charm Makefile. And per the 'Future' section in that am thinking of making this process simpler with some git plugins.
<lazyPower> export JUJU_REPOSITORY=$CHARM_ROOT/repo
<lazyPower> i want that to go away
<lazyPower> its a legacy concept and we should kill it with fire
<stub> cory_fu, lazyPower , marcoceppi : Does this seem like I'm going in the right direction, or do people have reasons to prefer multiple repos for source layer and built charm?
<stub> lazyPower: Yes, but for now I need both Juju 1 and Juju 2. Plugins will be targeted for Juju 2 though.
<lazyPower> from what im reading so far you're on the same path i've been using
<lazyPower> --no-local-layers before building for a publish and testing that artifact pre-push, this is all like the workflow i wanted for CI too
<lazyPower> i like your push/publish hack too
<marcoceppi> stub: I'm traveling, but will review in a bit and leave comments, thanks for typing this up!
<stub> I've opened issues on github to hopefully avoid that hack ;)
<lazyPower> yeah LGTM stub - there's a few things in here i'll comment on later but initial scan was good
<lazyPower> there are some conventions in here we could probably put into layer-basic as a Makefile and make this useful to more people out of the box
<stub> I'm also considering if I should generate the branch name to contain the built version of a layer (so the master branch gets built to master-built)
<lazyPower> but thats up to cory_fu
<lazyPower> and other maintainers
<stub> Well, I think the git bits are ugly enough to put into a git plugin, which I could package or snap
<lazyPower> snap ftw
<stub> git plugins should at least make things readable and avoid the ugly underlying details.
<lazyPower> fair, i use a shell hack i got from gary bernhardts dotfiles for git pretty tree. its invaluable
<stub> (git clones into temp directories etc. to ensure clean publication, committing uncommitted changes to the built branch before building)
<kwmonroe> cory_fu: teach me some python.. are double underscored vars the convention for defining globals?  eg https://github.com/juju-solutions/charms.reactive/pull/71/files
<stub> It should really be "TOGGLE = object()". You only want magic strings if you need a readable string representation
<cory_fu> kwmonroe: What stub just said.  I went with strings for debugging, but object() would have worked just as well and perhaps been less misleading
<cory_fu> kwmonroe: Also, leading / trailing underscores are a convention in Python for representing internal things.  Double underscores are usually reserved for Python-internal things, so I probably should have used single underscores, but it's just a string value and not an identifier, so it's slightly less of an issue
<cory_fu> stub, kwmonroe: Feel free to nack the PR and I'll update it
<stub> I'm not going to be that pedantic
<kwmonroe> thx stub and cory_fu.. i didn't read that PR right from the get go.. i think i was thinking the *var* was __TOGGLE__ and didn't pick up that was just a string.. anyway, __SORRY__.
<cory_fu> kwmonroe: Yeah, I think the way I wrote that encourages that confusion.  I'm up for changing it to object() to avoid that
<cory_fu> kwmonroe, stub: Updated.  Thanks
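The sentinel trick stub suggests, in isolation. set_state here is a toy stand-in for illustration, not the charms.reactive function:

```python
# A unique object() default distinguishes "argument omitted" from
# "caller explicitly passed None" -- a plain `if value is None` cannot.
_TOGGLE = object()

def set_state(name, value=_TOGGLE):
    if value is _TOGGLE:
        return "toggle %s" % name       # nothing was passed at all
    return "set %s=%r" % (name, value)  # None is a legitimate value here
```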
<saudk> hey guys
<saudk> having an issue deploying openstack using juju charms and maas
<saudk> i am seeing the same IP assigned to juju-br0 and the interface that it is connected to
<saudk> in /etc/network/interfaces the interface e.g eth1 is set as inet manual
<saudk> and in the same file /etc/network/interfaces.d/*.cfg is sourced where a file eth1.cfg exists in which eth1 is set as inet dhcp
<marcoceppi> saudk: what version of maas?
<saudk> 1.9.3+bzr4577-0ubuntu1~trusty1
<kwmonroe> hey cory_fu, i'm reviewing https://github.com/juju-solutions/charms.reactive/pull/72, but don't understand why -departed has to be handled in __init__.py, or for that matter why there's no handling of -joined that does rel.conversation().join().
<cory_fu> kwmonroe: join happens implicitly in RelationBase.__init__: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/relations.py#L156
<cory_fu> The reason we need to call .depart() is that we're now depending on the internal list of units to be accurate, since Juju no longer includes the departing unit in relation-list
<kwmonroe> yup, got it
<cory_fu> It's actually better for that list to be accurate, anyway, for charm use
<cory_fu> kwmonroe: Give me a second to push up a minor edit to that.  Going to add logging to that try / except
<kwmonroe> ack
<cory_fu> kwmonroe: Updated
<cory_fu> kwmonroe: You'll notice that it was always my intention to have depart() called implicitly
<kwmonroe> yes i will notice that cory_fu, but no one else will since the todo is going away.
<cory_fu> :)
<kwmonroe> frankly, that change is probably the only thing i'm qualified to review here ;)
<lazyPower> kwmonroe - i dub you the pr poobah
<lazyPower> you're now qualified to review everything by virtue of being the poobah
<kwmonroe> we're doomed
<kwmonroe> i remember back in the day (like monday), when the curtain looked so pretty.  why oh why did i stick my head behind there?
<lazyPower> :) well your self confidence rating certainly shines through
<kwmonroe> i can be poobah now lazyPower!  i totally get it.  i was missing the link between get_remote and relation_get -- it's related_units (and relation-list therein) causing the problem.
<lazyPower> yep, that niggly little workflow we used to use back in the day
<lazyPower> cory_fu - correct me if i'm wrong, but there's a key in layer.yaml i can use to control omission of files right?
<lazyPower> eg: ignores: ['known-weird-module-that-doesnt-play-well-with-other-modules.py']
<cory_fu> Yes, ignores, but be aware of this issue: https://github.com/juju/charm-tools/issues/220
<cory_fu> lazyPower: The TL;DR of that issue is that ignores is currently not scoped to an individual layer
<cory_fu> It will prevent a file with that name from being included in the final charm, period.
<cory_fu> Also, that behavior is likely to change, and the calling convention of ignores might change, too
<lazyPower> ok
<lazyPower> i need to exclude a test from the final product that's fine to keep in the layer (for now)
<lazyPower> so this is a good enough impl, i'm opting for it
<lazyPower> except that it doesn't appear to work with pathed files
<lazyPower> http://paste.ubuntu.com/17376175/
<bdx> hey whats up guys? Is there a way to update a resource without pushing the charm too?
<lazyPower> bdx - see: charm attach --help
<bdx> lazyPower: http://paste.ubuntu.com/17379141/
<bdx> :-(
<lazyPower> bdx  are you trying to push this to the store or to your controller?
<lazyPower> the 502 bad gateway is troubling... but first things first :)
<bdx> lazyPower: to the store
<lazyPower> bdx ok thats odd.
<lazyPower> jrwren - Charm store status looks good to you yeah?
 * lazyPower remembers he needs to finish a reply to the etcd email now
<magicaltrout> i love a prime, thats primest
<cory_fu> lazyPower: The fix for the -relation-departed issue turned out to be a bit of a lame duck, and I had to go back and take a different approach
<cory_fu> https://github.com/juju-solutions/charms.reactive/pull/75/files if you want to review
<cory_fu> kwmonroe spotted my mistake and is also reviewing
<magicaltrout> you made a mistake cory_fu ?!
<magicaltrout> sad times
<cory_fu> As far as making mistakes goes, I'm on fire today
 * magicaltrout spots smoke on the horizon
<cory_fu> https://i.imgur.com/F0NtTsP.gif
<magicaltrout> its like me on a daily basis
<magicaltrout> of course as IT professionals, we never openly admit to the weird stuff that happens daily! ;)
<kwmonroe> cory_fu: the "if unit not in self.units, continue" is to keep us from calling relation_get on globally scoped relation ids?
<kwmonroe> (referring to https://github.com/juju-solutions/charms.reactive/pull/75/files#diff-3c5467f4229b8dd06bdf1c43813c03d8R623)
<cory_fu> kwmonroe: Not just GLOBAL.  Not every unit on a given relation will be in the state that the conversation is representing.  Some might still be waiting on the unit to finish setting up and providing its relation data
<kwmonroe> well, i guess not just global rel id, but more .... yeah, what you said.
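cory_fu's point in miniature: a conversation tracks only the units that have reached its state, so relation data is read only for those. The Conversation class below is a stand-in for illustration, not charms.reactive itself:

```python
# Toy model of scoping relation data to a conversation's units.
# Units on the relation but not in conversation.units (still setting up,
# or globally scoped) are skipped rather than queried.
class Conversation:
    def __init__(self, units):
        self.units = set(units)

    def data_for(self, related_units, relation_data):
        return {u: relation_data[u] for u in related_units
                if u in self.units}
```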
<cory_fu> I'm not sure I trust Travis at this point. :p
<kwmonroe> lol cory_fu
<kwmonroe> don't be mad
<kwmonroe> https://github.com/johnsca/charms.reactive/blob/812a06f657a0b6fb3f1488f91a1a0a0aa13fe761/charms/reactive/relations.py#L19
<cory_fu> Oh snap.  Linty linty
<kwmonroe> that's your old friend F401
<cory_fu> Stupid Travis
<kwmonroe> truth
<cory_fu> May the force-push be with me
<kwmonroe> looks good
<kwmonroe> cory_fu: cool for 0.4.4?
<cory_fu> Ok, we finally ready for a release, then?  I need to go on my damned vacation.  :)
<cory_fu> ha
<cory_fu> kwmonroe: You want me to release it, or you want to try your hand now that you're a maintainer on pypi?
<kwmonroe> the latter
<cholcombe> thedac, interesting MR here
<kwmonroe> ugh, my first objective as maintainer will be to remove the pesky 'make test' prereq of 'make release'.  this is taking too long.
<cholcombe> thedac, there's actually a 3rd option but it's a PITA to setup
<cory_fu> Only takes 6 seconds (times 2, I guess) for me.  How long does it take for you, kwmonroe?
<kwmonroe> like at least 10 seconds
<cory_fu> ha
<thedac> cholcombe: hey, which MR are we talking about?
<kwmonroe> anyway, she's away cory_fu.  go forth and vacation.
<cholcombe> thedac, the dns HA for rgw
<thedac> got it
<cholcombe> thedac, so the other possibility here is we could use bgp
<thedac> cholcombe: just keep in mind this is one of many https://review.openstack.org/#/q/topic:dnsha
<cory_fu> kwmonroe: Sweet, thanks.  Be sure to let admcleod and kjackal know they can rebuild hbase and test it again
<cholcombe> i see
<lazyPower> haha guyzzzz
<cory_fu> Actually, we'll need to rebuild hadoop-plugin to fix it
<kwmonroe> what lazyPower?  you see something?
<cory_fu> lazyPower: You talking to us?
<kwmonroe> if you see something, you have to say something.
<thedac> yes, one could use BGP, but these are charm handled HA options
<cholcombe> thedac, interesting.  so the charm handles the vip?
<cory_fu> lazyPower: Don't listen to kwmonroe.  You had your chance to review and you blew it.  ;)
<thedac> Yes, you have to add it to the config but yes hacluster/corosync handles that
<lazyPower> cory_fu kwmonroe - nahhh, i'm just looking over this thread of review comments ;)
<cory_fu> Ah, lol
<lazyPower> solid comedic gold considering i know how today has been :P
<cory_fu> Alright, I'm out.  Everyone have a good weekend!
<lazyPower> cheers cory_fu
<kwmonroe> adios
<cholcombe> thedac, i've had a lot of good experience with floating vip's using ctdb.  It's very solid
<lazyPower> (its only Wed.)
<kwmonroe> lol
<cholcombe> thedac, corosync i have no experience with so i'll do the best i can reviewing
<magicaltrout> youth of today
<magicaltrout> always on holiday
<cory_fu> lazyPower: I'm out for the rest of the week
<lazyPower> oo snap
<kwmonroe> pretty sure it's thanksgiving this week
<lazyPower> (jackie face)
<thedac> cholcombe: keep in mind the corosync VIP solution was already there (it is just indented). I added the DNS bit which is a call out to charmhelpers which has already been reviewed. You can also just do a +1 and wait for a second opinion.
<cholcombe> thedac, cool
<lazyPower> > You can also just do a +1 and wait for a second opinion.
<lazyPower> pretty much exactly what i do
<lazyPower> cholcombe - you're getting solid guidance here. i approve
<cholcombe> lazyPower, :)
<thedac> :)
<cholcombe> thedac, looks reasonable.  I just have a question on the hooks.py code
<thedac> shoot?
<cholcombe> thedac, i'm just wondering if the iface turns up as None on the vip if we should error or log there.  The code currently skips that case
<cholcombe> thedac, I know it's not your code
<thedac> looking
<kwmonroe> lazyPower: the only thing holding kubes-core from passing cwr is a "charm push . cs:~containers/bundle/kubernetes-core" from a recent https://github.com/juju-solutions/bundle-kubernetes-core dir.  feeling frisky?  (i am, but i'm not in ~containers)
<thedac> so, good point. What I would like to eventually do is move the VIP code to charmhelpers just like the update_dns_ha_resource_params.  Then we could fix it in one place. Right now I am trying to change as little as possible so that only the additional feature is added if it needs to be reverted for any reason.
<thedac> Was that weaselly enough :)
<cholcombe> thedac, fair enough :)
<lazyPower> kwmonroe - i cant fix it like that
<lazyPower> kwmonroe - not without landing the pile of CR's i linked elsewhere
<lazyPower> i'd love to publish though :)
<kwmonroe> ah, ack lazyPower.  i thought it was just a case of the bundle not using the latest rev of the kubes charm.
<lazyPower> that indeed is the case, but its more involved :)
<magicalt1out> :P
<lazyPower> primey wimey spacey wacey amirite?
<lazyPower> bdx - i circled back and it appears to work now?  http://paste.ubuntu.com/17383707/
<bdx> lazyPower: did it not work for you earlier either?
<lazyPower> bdx - i didnt try at that time, i was judging based on the output. can you give it another go?
<lazyPower> it may have been transient
<bdx> yea, omp
<bdx> lazyPower: yeah, its working now
<bdx> lazyPower: trickery
<lazyPower> ok cool. sorry that happened :( i got no response when i pinged so i circled back
<bdx> hey thanks!
<lazyPower> np mate
<lazyPower> glad we got it sorted
#juju 2016-06-16
<kjackal> admcleod: yes I saw your comments on Mahout
<admcleod> kjackal: cool! :)
<kjackal> do you think the execution time was acceptable?
<admcleod> hmmm it seemed ok yes
<kjackal> ok, cool
<admcleod> but it would still be preferable to have a non-hdfs non-yarn smoke test
<kjackal> agreed
<kjackal> I initially thought that the localhost target was deprecated for Mahout, but i was wrong
<admcleod> and there are several references to recommender algorithms that dont require hdfs
<kjackal> there are some algorithms that you can run locally
<admcleod> yep
<kjackal> really?
<kjackal> It could be an easy fix then
<kjackal> ok, so! My plan is to finish Kafka, and then either Mahout or HBase depending on the feedback i get from you
<admcleod> kjackal: this one for example: https://mahout.apache.org/users/classification/twenty-newsgroups.html
<kjackal> Nice! I will try to refactor this
<kjackal> this=action+amulet
<admcleod> kjackal: cool
<admcleod> kjackal: https://mahout.apache.org/users/misc/testing.html
<kjackal> admcleod (or anyone else): I was looking at flannel yesterday.  https://jujucharms.com/u/hazmat/flannel/trusty/1 I was trying to make a monster VM from smaller ones. Have you seen this before? Wouldn't it be great if there were an option on the manual provider to provision lxc containers inside the machines you give it?
<admcleod> kjackal: pretty cool although i dont understand your last sentence
<admcleod> kjackal: https://github.com/apache/mahout/blob/b25a70a1bc6b9f8cb6c89947e0eaba5588463652/mr/src/test/java/org/apache/mahout/driver/MahoutDriverTest.java
<kjackal> You know how you have the manual provider and you manually give it the machines you have access to
<kjackal> then if you deploy something it gets deployed to those machines, right?
<admcleod> kjackal: right
<kjackal> What if you could tell the manual provider to spawn lxc containers inside the manually provided machines
<kjackal> based on a round robin or whatever other policy
<kjackal> What happened? did we move to IPv6?
<admcleod> kjackal: i see, right
<kjackal> admcleod: http://pastebin.ubuntu.com/17392595/ look at zkb
<admcleod> kjackal: what cloud is that?
<kjackal> local with lxc on juju 2.0 deploying apache-zookeeper
<admcleod> oh, somethings up with your lxc then
<kjackal> next deployment got IPv4 !!!
<admcleod> logs please
<kjackal> I wouldn't dare!
<kjackal> Ahhh ok only because its you!
<kjackal> admcleod: http://pastebin.ubuntu.com/17392643/
<admcleod> kjackal: if you ssh into it does it actually have an ipv4 address as well?
<kjackal> admcleod: looks legit http://pastebin.ubuntu.com/17392669/
<admcleod> kjackal: so, if the others have ipv6 also, seems like juju is displaying the wrong address
<kjackal> yes, probably this
<admcleod> kjackal: https://bugs.launchpad.net/juju-core/+bug/1574844
<mup> Bug #1574844: juju2 gives ipv6 address for one lxd, rabbit doesn't appreciate it. <conjure> <juju-release-support> <landscape> <lxd-provider> <juju-core:Won't Fix> <rabbitmq-server (Juju Charms Collection):Fix Released by james-page> <https://launchpad.net/bugs/1574844>
<kjackal> hm.... I must be running an old juju version
<kjackal> beta7
<kjackal> there should be at least a beta8, right?
<admcleod> kjackal: yes
<kjackal> admcleod: WHERE?
<kjackal> admcleod: were you involved in the IP issues we were seeing because of java?
<kjackal> I think kafka could be affected by this: http://stackoverflow.com/questions/1881546/inetaddress-getlocalhost-throws-unknownhostexception
<kjackal> admcleod: did we fix this with upgrading to java8?
<admcleod> kjackal: i am aware of a few different issues re dns and hostnames and java 7
<admcleod> kjackal: is this on lxc also?
<kjackal> yes, lxc juju 2.0
<Yash> Hello
<Yash> I'm facing a problem.
<Yash> 2016-06-16 08:36:27 DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused
<Yash> How to solve this?
<Yash> I rebooted my machine many times with no luck
<Yash> Ubuntu 16.04 and juju 2.0 beta
<admcleod> kjackal: right well we came across the problem on joyent since the default joyent resolvers are google's public nameservers, so you can't use InetAddress.getLocalHost() to resolve the local hostname (for example)
<kjackal> http://pastebin.ubuntu.com/17392798/
<admcleod> kjackal: yeah, cos theres no DNS
<Yash> @kjackal are you suggesting anything to me? or other problem?
<kjackal> hi Yash I think we have separate problems :)
<Yash> ok
<admcleod> Yash: how did you bootstrap the environment?
<admcleod> Yash: and what cloud/substrate is it?
<Yash> juju bootstrap lxd-test localhost
<Yash> https://jujucharms.com/docs/devel/getting-started
<Yash> It was working fine
<Yash> I installed many openstack components
<Yash> then the pending machine problem.. so I removed those machines and services and rebooted the whole machine
<Yash> Now I'm trying on desktop
<Yash> Single machine with 24 GB RAM + 4 TB
<admcleod> can you actually telnet to fd4f:23ae:5d73:5c67:216:3eff:febc:8b38 port 17070? is that port open on that ip?
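That telnet check can also be scripted; a small sketch (the host and port in the comment are taken from the error above, purely as an example):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.
    socket.create_connection handles IPv4 and IPv6 literals alike."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. the controller endpoint from the error message:
# port_open("fd4f:23ae:5d73:5c67:216:3eff:febc:8b38", 17070)
```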
<Yash> I found out if we can restart machine 0 it would resolve problem
<Yash> but juju 2.0 changed like there is no /var/lib/juju dir
<Yash> instead lxd and lxcfs
<Yash> which contains all of it
<admcleod> babbageclunk: hello!
<babbageclunk> admcleod: hi!
<babbageclunk> admcleod: uh oh
<admcleod> Yash: unfortunately my experience with juju 2.0 and networking isnt great.. however..
<Yash> Any help?
<admcleod> ;)
<admcleod> Yash: maybe babbageclunk can help ;)
<Yash> @babbageclunk can you please suggest anything?
<babbageclunk> Yash: so, you're trying to reboot machine 0 but can't?
<Yash> Yea
<Yash> Trying as per stackoverflow to solve issue
<Yash> but since 2.0 changed this also
<Yash> so can't find proper way
<babbageclunk> Yash: can you ssh into the machine with "juju ssh 0"?
<Yash> juju is in a hung state so I can't
<babbageclunk> And it's all deployed on lxd?
<Yash> Yes
<babbageclunk> How about rebooting the container with lxc restart?
<Yash> How can I do?
<Yash> I've 2 weeks exp only
<babbageclunk> :)
<babbageclunk> I'm pretty new here too.
<Yash> ok :)
<babbageclunk> Try "sudo lxc restart <container name>"
<babbageclunk> you can get the container name from "sudo lxc list"
<babbageclunk> It should come back up pretty quickly.
<babbageclunk> (Assuming there's not some other problem.)
<Yash> ok Let me try..Thanks
<babbageclunk> Any luck?
<Yash> nope
<Yash> Its just restarted but same problem
<Yash> DEBUG juju.api apiclient.go:500 error dialing "wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api", will retry: websocket.Dial wss://[fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070/model/3923551f-dcf9-4ca8-8b32-dc010722721b/api: dial tcp [fd4f:23ae:5d73:5c67:216:3eff:febc:8b38]:17070: getsockopt: connection refused
<Yash> How to solve this?
<Yash> Worse..I can't use juju so don't know what is problem and how to solve it
<Yash> I'm not using MAAS as it's optional for single machine. Right?
<Yash> @babbageclunk anything you may suggest?
<babbageclunk> Yash: Sorry, on the phone at the moment, davecheney's comment in #juju-dev might be worth a look
<admcleod> i want to constrain a particular service deployed in an amulet test so that it has an SSD (i only care about running it on AWS at the moment) - how do i achieve that?
<magicalt1out> just told the guy who owns the other 50% of my company I'm leaving to work for NASA full time..... talk about dumping a spanner in the works.....
<rick_h_> magicaltrout: congrats?
<magicaltrout> thanks rick_h_ , one of those weird things, I don't really want to quit, or at least go very part time
<magicaltrout> but how often do you get a job offer from NASA to work on big data stuff?
<magicaltrout> especially when you live in the UK
<kjackal> nice! you will see really big data there!
<kjackal> true big data!
<lazyPower> no doubt right? all that historical sensor and probe data to churn through
<lazyPower> magicaltrout - im not a data scientist, but the prospects of that are making me jealous
<kjackal> What software stack do they have over there in NASA, magicaltrout? In house?
<magicaltrout> a lot of Hadoop, SciSpark and IPython/Zeppelin at the mo kjackal
<magicaltrout> of course depends what area you work in I guess
<kjackal> magicaltrout: nice!
<magicaltrout> i'm gonna charm up scispark at some point over the next few months
<magicaltrout> I did a big data demo in San Diego last week
<magicaltrout> the guys who were there loved it
<kjackal> :) thank you magicaltrout
<magicaltrout> sadly for the project I joined we're too late for Juju so I'm introducing them to docker
<magicaltrout> 1 step at a time I guess
<kjackal> magicaltrout: what about going to space? any progress there? :)
<magicaltrout> the mrs banned that career move a long time ago
<admcleod> magicaltrout: congrats
<admcleod> so...
<admcleod> i want to constrain a particular service deployed in an amulet test so that it has an SSD (i only care about running it on AWS at the moment) - how do i achieve that?
<aisrael> admcleod: Pretty sure you can specify that via constraint. Let me see if I can find an example
<admcleod> aisrael: thanks :}
<aisrael> admcleod: http://pastebin.ubuntu.com/17397999/
<aisrael> basically, you have to force it to one of the aws instance types that has ssd backing
<admcleod> aisrael: ah yeah cool thatll do, thanks
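A sketch of what that looks like in practice: map the SSD requirement to an AWS instance-type constraint and pass it when the service is added. The instance type and the amulet call in the comment are assumptions, not taken from aisrael's pastebin:

```python
def ssd_constraints(instance_type: str = "m3.medium") -> dict:
    """Juju constraints dict forcing an SSD-backed AWS instance type.
    m3.medium is a placeholder; pick any SSD-backed type you need."""
    return {"instance-type": instance_type}

# Hypothetical amulet usage (constraints support may vary by version):
#   d = amulet.Deployment(series='trusty')
#   d.add('myservice', constraints=ssd_constraints())
```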
<petevg> lazyPower: I'm review queuing, and have a question about https://bugs.launchpad.net/charms/+bug/1587641, which you +1ed a week ago. It's still in a "new" state. Can I move it to "fix released"?
<mup> Bug #1587641: Update for MariaDB charm <Juju Charms Collection:New> <https://launchpad.net/bugs/1587641>
<lazyPower> petevg - oh indeed! i missed closing the bug, sorry about that
<petevg> No worries.
<petevg> lazyPower: I closed out the bug. (Learning how to do so was useful :-) )
<lazyPower> \o/ glad my mistake was a learning experience :D
<lazyPower> thats the best kind of mistakes to make
<balalaika> What's the best way to expose a charms IP to the user?
<balalaika> I'm deploying gitlab and I want them to be able to point their domain to the address.
<balalaika> I'm aware of unit-get public-address juju helper method.
<balalaika> Should I just document in the README that they should inspect the deployed service via CLI?
<Prabakaran> For my other charm requirement I had to write this particular template http://paste.ubuntu.com/17332169/ in bash. <lazypower> could you please help me on this? And also I have to copy some JAR files to the mysql charm container where it is installed. Is it possible for me to do it?
<lazyPower> Prabakaran - ok so your charm layer is no longer in python? its in bash now?
<Prabakaran> I am asking this for another charm which I am developing, IBM Platform RTM
<lazyPower> ah, ok. I was confused on why the last minute language change
<Prabakaran> I have noted in my learning section whatever you had sent to me
<Prabakaran> it was helpful
<Prabakaran> but need your help in bash also
<lazyPower> Prabakaran http://paste.ubuntu.com/17404518/
<lazyPower> my bash reactive is questionable at best, fix syntax where applicable
<Prabakaran> ya sure.. Thanks for your immediate help on this <lazypower>.. i will implement this.... but adding to that I have to copy some JAR files to the mysql charm container where it is installed. Is it possible for me to do it?
<lazyPower> It is, but i'm unclear on what you'll do after you've sent them to the mysql unit. jar's aren't very helpful on their own right? you'll need to not only copy them but also action on them right? such as run the jar files in a JRE
<lazyPower> so right while you need to copy them, it seems that what you'll really want to do is add *another* relationship and interface to mysql specific to your use case, so you can take those actions once you've received the jar files. Otherwise you've put bits on disk and can't do anything with them
<lazyPower> kwmonroe - does that sound about right? ^
<lazyPower> Prabakaran - before you go off to implement that, i'd like to clarify with kwmonroe as he's got some familiarity with the goals of your charm(s)
<kwmonroe> Prabakaran: what jars do you need to copy to the mysql unit?
<kwmonroe> if they are something that others could use, i'd suggest opening a feature req against mysql to control the inclusion of those jars as a config opt in mysql.
<kwmonroe> or set up a shared filesystem that both mysql and your charm support (nfs, etc).. you could stick the jars out there, but then would probably need to tell mysql to use that shared location as a 'classpath' (if that's even a thing for mysql)
<Prabakaran> Sorry i was wrong.. there was a tar file which has some *.sql files which i need to copy to the container and run all those...but as per recent chat from <lazypower> we can do this with mysql-client wherein we don't need to copy those tar--->*.sql files
<Prabakaran> but i asked this for my understanding
<kwmonroe> right - no need to copy *.sql files over to the mysql host.  you can execute any *.sql against the mysql charm using 'mysql -h host -u user -p<password> <command>' like lazyPower had in that earlier pastebin
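kwmonroe's point can be sketched as a tiny helper: build the mysql client invocation and stream the local .sql file over the wire instead of copying it. Host, user, and file names below are hypothetical:

```python
def mysql_command(host: str, user: str, password: str, database: str) -> list:
    """Argument vector for the mysql CLI. Note that -p takes its value
    with no space after it; 'mysql -p password' would prompt instead."""
    return ["mysql", "-h", host, "-u", user, "-p" + password, database]

# Hypothetical usage, feeding a local .sql file on stdin:
#   import subprocess
#   with open("schema.sql") as f:
#       subprocess.run(mysql_command("10.0.0.5", "root", "secret", "mydb"),
#                      stdin=f, check=True)
```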
<beisner> lutostag, jcastro - re: the jenkins charm ... is the source of truth for dev @ https://github.com/jenkinsci/jenkins-charm ?    i ask because of this proposal in the queue:  https://code.launchpad.net/~lutostag/charms/trusty/jenkins/xenial/+merge/296222
<lazyPower> oooo
<lazyPower> beisner - good catch on that one
<lazyPower> i'm pretty sure we did move source of truth to upstream
<beisner> lutostag, ps tons of thx for fixing up jenkins on X.  that was on my oh-shoot list o' stuff to do.
<lutostag> beisner: ah, I had no idea about that
<lutostag> I can do a PR through github if that is preferred
<beisner> just looked back to confirm.  if ya blinked, you might have missed it.  https://lists.ubuntu.com/archives/juju/2016-February/006611.html
<lutostag> indeed, I'll PR there tomorrow/next week
<beisner> lutostag, thx again sir :)
<Prabakaran> Thanks kevin and lazypower for the explanation
<Prabakaran> I have small doubt in mysql charm and its interface.
<Prabakaran> Here the mysql charm is a non-layered charm, and it would have exposed hostname, port, username and password using the relation-set command in a relation hook. But how does the mysql charm set/pass those values to the mysql interface, the way we use relation calls from a layered charm to an interface? Can you please explain the flow and how it works?
<beisner> lazyPower, so i wonder:  what does the get-it-into-the-cs story look like for jenkins @ its new home, and what should become of the lp branch?
<lazyPower> beisner - we've been dropping lp branches like flies post ingestion kill-off
<lazyPower> beisner - get it into the cs right now depends on a manual review and push by a ~charmer until we launch the new rev q, which is still pending iirc
<lazyPower> beisner - so if you've got a hot item fix, lmk. otherwise, I'd like to wait for the revq to launch so we have a breadcrumb trail of whats happened.
<lazyPower> but thats just me :)
<lazyPower> i'm sure there are other opinions out there
<beisner> lazyPower, ok.  it may be worth going ahead and doing a charm push from the gh repo and setting the homepage and bugs url metadata so that the cs points to the right place, then nixing the lp branch.  whaddaya say?
<lazyPower> beisner - do you have good test run output for me so i get the warm fuzzies?
<lazyPower> i can update the store meta no problem, but i want test results before i push :)
<beisner> lazyPower, i don't have anything hammering on that charm's dev flow atm.  and... i think it needs tests to be added.
<lazyPower> beisner - tests became a mandatory requirement as of the trusty series :\
<lazyPower> i cannot in good faith push a charm without tests
<beisner> lazyPower, oh it does have these: https://github.com/jenkinsci/jenkins-charm/tree/master/tests
 * beisner is with ya lazyPower 
<lazyPower> LOL
<lazyPower> i wrote these tests
<lazyPower> forever ago
<beisner> welcome baaaaack
<lazyPower> https://github.com/jenkinsci/jenkins-charm/blame/master/tests/100-deploy-trusty#L21
<lazyPower> oh man, thats tricky
<lazyPower> its deploying a precise series charm to validate leader/follower
<lazyPower> beisner - i suggest we pin this for tomorrow, and fold in marco/jcastro if they're around. Otherwise lets vet the charm and make sure its ready for a release
<lazyPower> will you have 20/30 minutes tomorrow to do so? we can pair and knock it out quickly
<lazyPower> at least the boolean "yes we can push" portion
<beisner> lazyPower, yep, no pressure here.  just taking a spin through the queue to see what is in my familiarity zone.  thx for your help.
<lazyPower> np beisner, happy to help
<lazyPower> beisner - did you see my post to the list re: tags in github?
<lazyPower> beisner - as you do a lot of test/release planning, i'd love your feedback on that
<beisner> lazyPower, i think that's a good value for projects that have a github dev focus.
<beisner> lazyPower, for the openstack charms, github repos are just a sync from cgit @ openstack upstream, and i'm not sure what our tagging abilities are.
<lazyPower> tags are independent of github, they are a git native primitive
<beisner> we've begun injecting a repo-info file during our build/push/publish automation.  ie.  https://jujucharms.com/u/openstack-charmers-next/neutron-gateway/xenial
<lazyPower> so you can tag and push to remote if you have write access to the repository
 * lazyPower looks
<beisner> lazyPower, that solves one of our big challenges:  "Joe is in possession of a charm.  we don't really know where it came from."
<lazyPower> ah man i like that
<lazyPower> the repo-info file
<lazyPower> its exactly what i would have expected from the revision file
<lazyPower> https://api.jujucharms.com/charmstore/v5/~openstack-charmers-next/xenial/neutron-gateway/archive/repo-info  <- that thing
<beisner> that's the one
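A sketch of generating such a provenance file at build time. The field names here are assumptions based on the discussion; the file actually served at the charmstore URL above defines the real format:

```python
def make_repo_info(commit: str, branch: str, remote: str, date: str) -> str:
    """Render a simple key: value provenance file to drop into the charm
    directory before pushing, so consumers can trace where it came from."""
    fields = [("commit", commit), ("branch", branch),
              ("remote", remote), ("date", date)]
    return "".join("%s: %s\n" % (k, v) for k, v in fields)

# In CI the values would come from git, e.g.:
#   commit = git rev-parse HEAD
#   branch = git rev-parse --abbrev-ref HEAD
```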
<lazyPower> have you already built tooling around this?
<lazyPower> and if so, how much of it is specific to the openstack setup?
<beisner> we just launched openstack's rendition of layer ci.    it'll be specific in that it is centered around gerrit.
<beisner> ie. no git pull requests
 * lazyPower snaps
<lazyPower> thats such a tease beisner
<lazyPower> you show me what i'm looking for in another format, and then pull it away D:
<beisner> but yeah, now we can put up a review, tests fly in all sorts of directions, a reviewer can approve and land with a vote, then it pushes/published to CS.
<lazyPower> right, makes sense for your use case
<lazyPower> ok i'll noodle this s'more and wait for feedback on the ml post, i feel like there's a big gap i'm not seeing in that process though.
<lazyPower> and i'm going to regret doing it once i start
<beisner> lazyPower, anyway +1 if we can tag revs with cs refs, that will be sweet indeed
<beisner> i already had someone ask if we could do that, and owe a bit of research to the idea and whether our tags will survive the flows and syncs through systems outside our direct control.
<beisner> i do know this:  we don't have perms to directly tag @ the cgit repos.
<beisner> only the bots do
<beisner> and a few core/root infra team peeps
<lazyPower> ok, so you're mirroring that cgit right?
<lazyPower> which means you can carry meta information in your fork
<beisner> we aren't mirroring it though.  it happens for anything in https://github.com/openstack/*
<beisner> no fork
<lazyPower> ah
<lazyPower> what a funky setup
<lazyPower> i kind of like that its locked down though
<beisner> it's a bit weird.  if one were to raise a pr against the gh repo, it will get nack'd and squashed by a bot
<lazyPower> as it should if there's a gerrit review process
<beisner> yep, there is exactly one way to land a bit. :)
<beisner> so it'd be analogous to an enterprise operating their own private internal cgit systems, but also making those repos avail via github for one-way consumption.  a bit funky, yah.
<bdx> hey whats up everyone? Whats the reccomended best practice for including pip deps in a charm?
<bdx> recommended*
<bdx> i can spell
<bdx> or, in a layer
<bdx> my bad
<rick_h_> bdx: so with juju 2.0 I'd say to put together an offline cache of pip deps and zip that as a juju resource
<rick_h_> bdx: but maybe others will have other suggestions as well
<bdx> rick_h_: thanks
<bdx> rick_h_: for example -> https://github.com/DarkHorseComics/layer-whelp/blob/master/lib/charms/layer/whelp_utils.py#L5
<rick_h_> bdx: hmm, so that's for a charm. For a layer, that you want to reuse it's more interesting
<stub> bdx: If you are writing a reactive charm, embed them by creating a wheelhouse.txt file in your layer.
<rick_h_> heh that's what I was looking for. I knew they had some wheel setup for the layers
<stub> https://github.com/juju-solutions/layer-basic/blob/master/wheelhouse.txt
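For reference, a wheelhouse.txt is plain pip requirements syntax; a minimal hypothetical example for a layer (package names and pins are placeholders, not taken from layer-basic):

```
# wheelhouse.txt at the layer root; charm build fetches these as
# wheels and embeds them in the built charm.
Jinja2>=2.8
PyYAML
```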
<huckst> The command `juju destroy-controller` hangs.
<huckst> 2.0-beta8-xenial-amd64
<bdx> stub: nice, so any layer can define a wheelhouse.txt and those deps will be picked up?
<rick_h_> huckst: can you ssh to the controller still?
<rick_h_> huckst: anything in those logs that might prove helpful? Any output to judge where/why it's hanging?
<bdx> thats what I thought .... but couldn't find any examples of subsequent layers using this so I wasn't sure
<bdx> stub, rick_h_: thx
<lazyPower> bdx - don't listen to rick. use the wheelhouse.
<rick_h_> yea, I was thinking of a string of app deps for a charm
<rick_h_> as a resource, I missed the layer specific bit there
<huckst> No machine to ssh to.
<stub> bdx: I believe so, but haven't used it in anger (I'm still pulling debs from ppas)
<huckst> I had setup the gce google controller and the lxd localhost controller. Started working on the lxd controller, forgot about the gce controller until now.
<huckst> Then when I went to start using the gce controller, nothing. So I'm trying to drop and re-bootstrap.
<rick_h_> huckst: ok, so which one did you destroy-controller on?
<huckst> gce google
<lazyPower> bdx - i use the wheelhouse dependency chain quite a bit
<lazyPower> bdx https://github.com/juju-solutions/layer-docker/blob/master/wheelhouse.txt  - as an example
<rick_h_> huckst: ok, and the instances are all shut down?
<rick_h_> huckst: if you look in the gce panel? so it ran some output during destroy?
<rick_h_> can you pastebin the output?
<huckst> Nothing ran in the gce panel.
<rick_h_> I understand nothing ran there, but if you had a GCE controller then you had an instance running on the GCE cloud and it would show in the GCE control panel
<huckst> Correct. I can confirm it had started weeks ago and was successful at doing somethings. But nothing recently.
<jhobbs> To make a new team in the charmstore, do I need to do anything other than create the team in launchpad?
<magicaltrout> nope
<jhobbs> ok, guess it takes a while to sync or something
<magicaltrout> where jhobbs, in charm tools?
<bdx> lazyPower: niccceeeee
<magicaltrout> you have to log out of literally everything and log back in again
<magicaltrout> charm tools, jujucharms.com etc
<lazyPower> jhobbs - what magicaltrout said, its a known bug. it only syncs groups on login.
<jhobbs> magicaltrout: yeah i created a group on launchpad about 15 minutes ago, logged out of charm tools and logged back in and it won't let me push
<lazyPower> bdx - hope that gets you unblocked, lmk if you need any further help
<jhobbs> ok
<magicaltrout> i like to think of it more as an annoying feature ;)
<jhobbs> i will log out of more stuff
<jhobbs> thanks
<lazyPower> magicaltrout - six of  one, half dozen of the other ;)
<magicaltrout> jhobbs yeah website and everything
<rick_h_> yes, once SSO can do the username stuff we can look at actually disconnecting from LP there
<magicaltrout> even then it doesn't always work :)
<stub> jhobbs: I think you need to relogin to the web interface, not charm tools
<jhobbs> yay, working now, thanks everyone
<magicaltrout> \o/
<arosales> jhobbs: not intuitive at all, but the UI folks are working to make that better
<huskc> I was wrong about the gce google controller hung issue. (new to gce) After navigating to the correct dashboard I saw a juju instance running.
<huskc> The logs output had an ongoing 'unable to connect to API' error.
<huskc> I manually deleted the gce (juju bootstrap) instance, but juju still hangs on destroying the controller locally.
<huskc> It doesn't exist, so shouldn't the CLI `juju destroy-controller gce-devenv` just flush what's configured locally?
<huskc> I worked around the CLI hangup by manually removing the 'dead' gce-controller from all juju config in ~/.local/share/juju/*
<huskc> Now I can successfully re-bootstrap the google cloud.
<huskc> It's nice that the config is just YAML and easily modified (and not some SQLite db). ;)
<valeech> Is this a good place to get help with this error: 'ERROR cannot resolve URL "cs:maas-region": charm or bundle not found' I am trying to deploy MAAS HA with juju following this guide https://maas.ubuntu.com/docs2.0/ha.html. I have a fresh install of xenial and juju 2.0 beta 7.
<magicaltrout> valeech: it is, but i've never used MAAS so I'm not the person to ask :)
<valeech> magicaltrout: understood. I am pretty new to juju myself. I am wondering if the bundle name changed in some fashion.
<magicaltrout> thats easy enough to find out i would hope
<magicaltrout> https://jujucharms.com/q/?text=maas-region
<magicaltrout> i have no clue about the tutorial but there doesn't seem to be a charmers charm there
<magicaltrout> so
<magicaltrout> juju deploy cs:~maas-maintainers/trusty/maas-region-3
<valeech> excellent! Thanks so much!
<magicaltrout> that looks like a likely solution
<arosales> nice find magicaltrout, it looks like maas-region is not a recommended charm and thus one would need to prefix the username as suggested
<arosales> valeech: thanks for the feedback I'll file a bug on the maas docs
<valeech> arosales: where are you seeing that maas-region is not a recommended charm?
<arosales> valeech: a recommended charm would have a flat name space like https://jujucharms.com/maas-region/
<arosales> and 'juju deploy maas-region' would just work
<arosales> in this case it doesn't as it hasn't been through the review process and thus is not marked as recommended
<valeech> arosales: makes sense. thanks for the explanation
<arosales> it very well may be a good working charm that the MAAS folks would like users to use, judging by the name maas-maintainers, but that explains why "juju deploy maas-region" didn't work per the MAAS docs
<arosales> valeech: happy deploying
<valeech> thanks@!
<valeech> !
<arosales> valeech: and welcome to the juju community :-) If you don't find an answer here askubuntu.com and the juju mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) are also good resources
<valeech> Great! thanks.
<valeech> I was having some weird issues on 2.0beta7 so I blew it away and spun up 2.0beta9. Now it's getting even more strange
<valeech> I canât seem to deploy any apps to my machines. I have 2 machines added to a manual cloud and the juju status shows them as started but I keep getting this error on all charms I load:
<valeech> cannot assign unit "wordpress/0" to machine: cannot assign unit "wordpress/0" to new machine or container: cannot assign unit "wordpress/0" to new machine: use "juju add-machine ssh:[user@]<host>" to provision machines
<arosales> valeech: to confirm you are using manual provider, correct?
<arosales> if so can you pastbin the output of juju status
<valeech> juju bootstrap maas manual/10.131.107.128
<valeech> like that?
<arosales> I think you just need to specify which machine you want to deploy the charm to with --to
<arosales> valeech: what does `juju status` currently return?
<valeech> http://pastebin.com/93RGJfCw
<valeech> juju runs on one VM, and mass2 and maas3 are 2 other VMs
<lazyPower> valeech - i dont believe that wordpress is supported on xenial by the charm.
<valeech> I am batting a 1000 :)
<arosales> valeech: try 'juju deploy --to 0 wordpress wordpress2'
<lazyPower> valeech - i see your enlisted machines are series: xenial. Juju didn't give you a very helpful error message; it's due to the machine not having a series the charm can consume :\  but that's the issue here
<arosales> lazyPower: one can still deploy trusty images though
<lazyPower> arosales true, but valeech will need to manually provision one in maas and add it to his juju env, there's no magic here since it's the manual provider
<lazyPower> s/his/their/
<arosales> lazyPower: ah yes, but the --to would manually place it on the xenial machine
<lazyPower> arosales not if its series: trusty, you cant force smash a series difference
<arosales> lazyPower: well I think valeech is just trying to test his setup
<lazyPower> i do get what you're saying tho :)
<arosales> valeech: perhaps try, "juju deploy --to 4 ubuntu"
<arosales> and see if at least you can get that to deploy
<arosales> lazyPower: that should work, right?
<lazyPower> arosales - it should, yeah
<arosales> and earlier when I said "--to 0" juju would have balked at that because machine 0 is reserved, so apologies there
<valeech> arosales - it did balk. I will try --to 4
<bdx> while we are on wordpress ....
<valeech> results from "juju deploy --to 4 ubuntu": ubuntu/0     unknown   idle   4               maas3
<bdx> we need a wordpress charm reform effort initiated
<arosales> here are the xenial charms available: https://jujucharms.com/store?type=charm&series=xenial
<lazyPower> look at those beats!
<lazyPower> that just landed today :D
<arosales> lazyPower: :-)
<arosales> valeech: what is your current juju status post that deploy command?
<bdx> lazyPower: Nice!
<arosales> post that ubuntu deploy command
 * arosales waves to bdx
<valeech> so thereâs a juju-gui charm for xenial but isnât GUI in juju 2.0 included rendering the need for a charm useless?
<lazyPower> bdx - packetbeat landed too,  juju deploy cs:~containers/packetbeat  -- it needs review for promulgation but its done :)
<bdx> arosales: whats up!
<bdx> lazyPower: so sick! pumped
<lazyPower> valeech - correct, juju 2.0-beta6 started shipping with the juju-gui
<arosales> valeech: in 2.0 the juju gui comes baked into the controller :-)
<valeech>  juju deploy --to 4 ubuntu
<valeech> Added charm "cs:ubuntu-0" to the model.
<valeech> Deploying charm "cs:ubuntu-0" with the default charm metadata series "xenial".
<arosales> just issue "juju gui" and you get it out of the box
<arosales> valeech: looking good
<lazyPower> bdx - glad you're excited :) Gimme some bugs, i know they're in there
<arosales> bdx: finally a maas expert in here :-)
<valeech> from juju status:
<valeech> ubuntu     unknown  false    jujucharms  ubuntu     0    ubuntu
<bdx> lazyPower: nice work seriously
<bdx> lazyPower: did you ever figure out the geo_ip?
<lazyPower> bdx thanks man :)
<lazyPower> i did, but didnt ship with it, because the db is so old
<lazyPower> i'd like to engage with elastic to get something more recent in there
<bdx> ahh
<bdx> arosales: ha - I hope to be
<lazyPower> i've got a little video i'm working on to push on my social channels, once its done i'll ship it over to elastic as the intro to the work done, and see if they're interested in collaborating/upstreaming the charms
<arosales> valeech: could I get a full pastebin output of juju status, I lost the context of whether that was machine or application output
<valeech> sure..
<bdx> arosales: I only have 5 maas deploys
<arosales> bdx: only :-)
<valeech> http://pastebin.com/DkZ37RT7
<bdx> lazyPower: thats a great idea
<bdx> valeech: yea - 'juju deploy maas-region' just works if you have the needed configs set
#juju 2016-06-17
<bdx> but it needs the full path
<valeech> bdx: ok, so I just need to feed it a config.yaml, right?
<bdx> valeech: juju deploy cs:~blake-rouse/maas-region-4
<bdx> valeech: correct
<arosales> valeech: ubuntu deploys looks good
<arosales> bdx: is the latest blake-rouse or maas-maintainers?
<valeech> arosales: so the status unknown is ok/expected?
<bdx> valeech, arosales: the ha functionality isn't there yet as the docs portray, but if you just want 1 rack-controller + 1 region-controller it works great
<valeech> bdx: curses! I was hoping for the HA pieces :)
<bdx> arosales: as far as I know, yes - I have cs:~blake-rouse/maas-region-4 deployed
<arosales> valeech: workload state looks to be "unknown" which is ok
<valeech> arosales: ok
<valeech> how do I go about removing those stranded mariadb and wordpress apps?
<bdx> valeech: from what I hear the ha portion will be getting some cycles again soon
<arosales> valeech: in juju 2.0 beta9 the new syntax is "juju remove-application mariadb"
<arosales> and juju remove-application wordpress
<valeech> bdx: cool. How does one go about contributing to something like that? I am new to this, but would love to give back
<valeech> arosales: I have run both of those and the apps persist, they're like extended family, they just won't leave!
<bdx> valeech: start here -> https://jujucharms.com/docs/devel/developer-getting-started
<arosales> bdx, re ha not available, the docs @ https://maas.ubuntu.com/docs2.0/ha.html are very misleading then :-(
<valeech> arosales: Yes, the docs are misleading :)
 * arosales filing a bug on that
<bdx> valeech: then go here -> https://jujucharms.com/docs/devel/developer-layer-example
<arosales> valeech: huh remove-application doesn't want to work for you . . .
<arosales> let me see if there is a --force
<arosales> valeech: were you interested in maas ha contributions or charm contributions?
<arosales> or both :-)
<valeech> arosales: both! I plan on using these products heavily
<bdx> valeech: I'm with you there .... I've got the hardware already set aside for it :-)
<arosales> valeech: great to hear, the docs that bdx pointed at are excellent. I would also add https://jujucharms.com/docs/devel/developer-getting-started and suggest joining the mail list I pointed at earlier
<valeech> bdx: me too! trying to get this lab up and running so my product managers can push it into production before it's ready!
<valeech> arosales: I joined the mailing list! I'll spend some time on the docs as soon as I can get a functioning concept of all of this :)
<bdx> valeech: the docs will aid you in getting the concept down too though:-)
<valeech> bdx: good point
<arosales> valeech: for maas contributions see http://maas.ubuntu.com/docs/about.html#contributing
<arosales> valeech: but your manual setup is running
<arosales> just not a lot of xenial charms yet
<arosales> suggest to spin up some trusty machines
<arosales> or if you have maas configured just point juju at that :)
<valeech> arosales: true. I was trying to use juju to deploy maas ha :) Then I would point juju at maas. I guess I need to get maas ha up manually.
<valeech> would it make sense to deploy 2 maas region controllers with juju and then do the work externally to ha them together? or is that more work than its worth?
<arosales> valeech: try:  juju deploy --to 3 cs:~maas-maintainers/trusty/maas-region-3
<valeech> Added charm "cs:~maas-maintainers/trusty/maas-region-3" to the model.
<valeech> Deploying charm "cs:~maas-maintainers/trusty/maas-region-3" with the user specified series "trusty".
<valeech> ERROR cannot add application "maas-region": cannot deploy to machine 3: series does not match
<arosales> ugh
<arosales> xenial
<arosales> same issue as wordpress, no xenial charm for maas-region
<arosales> valeech: you can try appending --force to see if it deploys maas-region on xenial
<arosales> but that is uncharted territory
<valeech> arosales: sure
<arosales> as the charm maintainer may have to make special accommodations for xenial, especially given systemd
<arosales>   juju deploy --to 3 cs:~maas-maintainers/trusty/maas-region-3 --force
<arosales> may get you a deploy, but results may vary if it works on a xenial machine
<arosales> valeech: may be better to just spin up a trusty machine and go from there
<valeech> arosales: yeah, perhaps. I'll give that a shot and report back on results.
<arosales> valeech: good luck
<valeech> the âforce wonât take because between the time you asked to try the mass-region-3 and it failed and asking me to try the âforce I install maas-region-4 and it took. I didnât give it a yaml file though, I just wanted to see what would hapen.
 * arosales goes to file bug against maas docs
<valeech> but the remove-application function isn't working so I can't remove it and try the maas-region-3 --force install :)
<valeech> arosales bdx lazyPower thanks for all the help!
<lazyPower> valeech anytime :)
<arosales> valeech: I will also file a bug on remove-application
<arosales> that should remove the application from the model
<arosales> valeech: could you pastebin me "juju remove-application wordpress --debug"
<valeech> arosales: sounds good. Is there any logs or data I should gather?
<valeech> sure
<arosales> juju debug-log --replay may be interesting
<arosales> but the --debug with remove-application I think will have the interesting bits
<valeech> http://pastebin.com/xyjcMz9m
<arosales> valeech: thanks
<valeech> arosales: np!
<arosales> valeech: the heavy handed approach would be to destroy the model and create a new one once you are done experimenting given remove-application doesn't seem to be happy atm.
<valeech> arosales: I tried that too and it just runs forever waiting for the service to stop…
<arosales> valeech: nice :-/
<valeech> :)
<arosales> and no --force on destroy-model
<arosales> there is on the controller, but at that point its a new bootstrap
<valeech> This is neat: http://pastebin.com/iSnUi1ez
<arosales> valeech: ah you may have to remove the units first, given they are manually added
<arosales> valeech: try "juju remove-unit 4"
<arosales> note if you have ubuntu on that still you may want to at least try "juju remove-application ubuntu"
<thumper> 4 what?
<valeech> error: invalid unit name "4"
<arosales> valeech: sorry try remove-machine
<arosales> so "juju remove-machine  4"
<arosales> thumper: you just trying to cause trouble :-)
<valeech> that worked
 * thumper lobs hand grenades from the safety of another country
<arosales> thumper: remove-application not being helpful tonight
<thumper> well that sucks
<arosales> thumper: http://pastebin.com/xyjcMz9m
<arosales> valeech: try the same with 3
<valeech> that worked too for 3
<valeech> but the apps are still there :)
<arosales> valeech: what does "juju destroy-model default" return now
<valeech> ok, now destroy-model worked!
<thumper> arosales: ooo...
<thumper> 2016-06-17 00:26:42 DEBUG juju.api apiclient.go:520 health ping failed: connection is shut down
<thumper> that line right there
<arosales> thumper: timeout?
<thumper> uh
<thumper> maybe not
<arosales> valeech: ok suggest to start a new model, and spin up some trusty images
<thumper> debug on the client when things aren't working don't tell us much
<thumper> the debug log of the controller is much more useful
<arosales> thumper: well deploy ubuntu worked
<arosales> thumper: but couldn't remove wordpress
<thumper> if you have debug logging on
<thumper> grab the output by doing this:
<thumper> juju debug-log -m controller --replay --no-tail > foo.log
<arosales> valeech: also if you want some run time on AWS and are interested in contributing to juju charms please feel free to enroll in https://developer.juju.solutions/
<thumper> that will get the apiserver calls and internal errors or issues with removal
<arosales> valeech: that will give access to more "machines" if you want to focus on just the charm piece if you don't have a lot of spare hardware in your maas to test against.
<arosales> thumper: already destroyed the controller :-/
<arosales> thumper: I'll try to recreate on AWS, but this issue was on manual provider so I may try Digital ocean to reproduce
<thumper> ack
<arosales> thumper: but " juju debug-log -m controller --replay --no-tail > foo.log" noted if I am able to reproduce
<thumper> yeah, that is a handy command
<valeech> arosales and thumper I can spin it back up again
 * thumper is in the bowels of juju ripping at the bad code he left several years ago
<arosales> thumper: make sure you get _all_ the bad code then
<arosales> valeech: if you have time, sure. If not I can also try to produce
<valeech> arosales: sure. It doesnât take long to setup
<arosales> valeech: thanks
<thumper> arosales: no chance...
<thumper> but I'll get the cancer called fslock out
<arosales> thumper: well at least there is that :-)
 * arosales filed https://bugs.launchpad.net/maas/+bug/1593516 in reference to the incorrect syntax for deploys
<mup> Bug #1593516: docs2.0/ha juju deploy maas-region not correct syntax <MAAS:New> <https://launchpad.net/bugs/1593516>
<Yash> nova-compute/10         error           idle        2.0-beta7 2                      10.100.100.200 hook failed: "install"
<Yash> How to solve this?
<Yash> Please help.
<admcleod> Yash: juju debug-log -i nova-compute/10 -n 30
<Yash> admcleod: yash@ugocloud:~$ juju debug-log -i nova-compute/10 -n 30 ERROR invalid entity name or password yash@ugocloud:~$ juju debug-log -i nova-compute/9 -n 30 ERROR invalid entity name or password yash@ugocloud:~$ juju debug-log -i nova-compute/11 -n 30 ERROR invalid entity name or password
<Yash> ERROR invalid entity name or password
<admcleod> Yash: do you still have this 'upgrade in progress' message somewhere?
<admcleod> Yash: also i forget, which juju version are you on? (juju --version)
<admcleod> Yash: also can you pastebin the full results of 'juju status --format=yaml'
<Yash> juju 2.0 7 beta
<Yash> nova-compute/9           waiting         executing   2.0-beta7 2                      10.100.100.200 Incomplete relations: image, storage-backend             neutron-openvswitch/36 error           idle        2.0-beta7                        10.100.100.200 hook failed: "install"                                   ntp/73                 error           idle        2.0-beta7                        10.100.100.200 hook failed: "install"       
<admcleod> Yash: hmm
<admcleod> Yash: how did you resolve the issue you were having yesterday? with connecting to the ip / port?
<admcleod> Yash: also can you upgrade juju to the latest version?
<Yash> admcleod : I restarted machine after login using container exec command
<Yash> 2.0-beta7-xenial-amd64
<Yash> How to upgrade?
<Yash> using ppa?
<Yash> devel one?
<admcleod> yes
<admcleod> beta8 is available
<Yash> ppa:juju/devel
<Yash> right?
<Yash> And do I need to bootstarp again?
<Yash> babbageclunk: welcome :)
<Yash> *bootstrap
<Yash> amd64 2.0-beta9-0ubuntu1~16.04
<Yash>  2.0-beta9
<Yash> its downloading 9
<Yash> is that ok?
<Yash> admcleod: please confirm
<admcleod_> Yash: im sorry i seem to have disconnected and only saw you ask 'devel?' to which i responded 'yes', was there something else?
<Yash> ppa:juju/devel
<Yash>  amd64 2.0-beta9-0ubuntu1~16.04
<Yash> is that ok?
<Yash> np :)
<Yash> Also do I need to bootstrap again?
<admcleod_> beta9 -even better :)
<admcleod_> you should bootstrap again, i think there is something wrong with the current deployment you have there
<admcleod_> Yash: ^
<Yash> ok
<Yash>   Thanks let me try with beta 9
<Yash> Thank you
<Yash> Is there anytime when 2.0 stable will release?
<admcleod> Yash: i was told, but ive forgotten and dont want to guess so ive asked
<admcleod> yash: (the date)
<Yash> which date... I don't get it?
<admcleod> Yash: the release date for 2.0 stable
<Yash> ohh..my bad ok :)
<Yash> admcleod: Thanks
<admcleod> Yash: let us know if re-bootstrapping with the new version fixes things please?
<Yash> admcleod: ok..I'm doing it now. I will keep you posted.
<Yash> admcleod: is there any tester position? I can join. ;)
<admcleod> Yash: haha im not sure, maybe :)
<Yash> admcleod: please ask and let me join your team :) ..lol
<admcleod> Yash: :D
<Yash1> https://api.jujucharms.com/identity/v1/idp/usso/callback?waitid=a593011ca1d5b53b399df26515435a53&openid.assoc_handle=%7BHMAC-SHA1%7D%7B5763c2e4%7D%7BCAq6Vg%3D%3D%7D&openid.claimed_id=https%3A%2F%2Flogin.ubuntu.com%2F%2Bid%2FhpYfrJt&openid.identity=https%3A%2F%2Flogin.ubuntu.com%2F%2Bid%2FhpYfrJt&openid.mode=id_res&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.ns.sreg=http%3A%2F%2Fopenid.net%2Fextensions%2Fsreg%2F1.1&openid.op_endpoint=https%3A%2F%2F
<Yash1> {"message":"cannot get user details for \"https://login.ubuntu.com/+id/hpYfrJt\": not found: not found","code":"forbidden"}
<Yash1> juju authentication problem
<jamespage> cholcombe, hey - can you join #openstack-charms as well pls
<jamespage> cholcombe, I tidied https://review.openstack.org/#/c/328374/ in the interest of getting that cleared for the landscape folk
<codepython777> anyone here?
<jamespage> bdx, hey - could you respond to my re-licensing request on openstack-dev pls (ideally want all contributors to agree before we switch to Apache 2.0)
<valeech> is there a way with a manual cloud to prevent juju from removing the machine after all services are removed from it?
<lazyPower> valeech - there is a setting to tweak the machine reaping, but i dont recall what it is offhand
<gennadiy> hi everybody, we have got issue with juju-gui-130. it doesn't display changes. i have found error in dev console - "Unknown delta type: actionInfo"
<valeech> Thx lazyPower
<lazyPower> valeech - i'm still looking
<lazyPower> thats buried in one of our FAQ documents i think
<catbus1> Hi, JUJU 2.0 will create a linux bridge interface as "br-eth0" on the node with eth0 interface. Is there a way to stop juju from creating that bridge interface?
<dooferlad> catbus1: As it stands the bridge is always created.
<dooferlad> catbus1: is it actually causing problems, or is it just untidy?
<catbus1> dooferlad: I will find out if it's causing problems from this charm partner.
<bdx> lazyPower, valeech: https://www.jujucharms.com/docs/devel/howto-harvesting
<valeech> bdx: w00t! thanks. One more question, where is the environments.yaml in juju 2.0???
<magicalt1out> gone away
<magicalt1out> never to be seen again
<magicalt1out> banished, if you will
<valeech> haha I just read that
<valeech> ok so reading the 2.0 docs, I set the provisioner-harvest-mode to none with this command:     juju set-model-config provisioner-harvest-mode=none
<valeech> I then spun up some services on a machine, verified they worked and then destroyed them. After the services were removed juju stopped the machine and them removed it. Am I missing something?
<lazyPower> valeech - it may be prudent to send that to the list - juju@lists.ubuntu.com, that way the core devs can get a chance to see it and respond, as a lot of them are over in europe and way past EOD for them.
<valeech> lazyPower got it!
<lazyPower> bdx yo
<bdx> yo
<bdx> lazyPower: writing documentation like a madman :-)
<lazyPower> bdx - http://146.148.77.23/app/kibana#/dashboard/Dockerbeat-Dashboard
<bdx> lazyPower: no
<bdx> redic
<bdx> ulous
<lazyPower> awww
<lazyPower> he told me no ;_;
<bdx> :-)
<lazyPower> it just keeps on giving man.
<lazyPower> now if only it supported triggers and notifications like prometheus, and we'd have a killer ops app on our hands.
<bdx> no doubt ... did dockerbeat pre-exist, or did you build that?
<lazyPower> its a community beat :)
<lazyPower> https://github.com/Ingensi/dockerbeat
<bdx> I have a feeling the beats stack is going to gain traction real fast .... its sooo useful and lightweight. Once people start to understand how simple and powerful it is .... game over
<lazyPower> interesting prediction :)
<lazyPower> I hope it does happen though, its a great stack to work with. Super simple and straight forward
<bdx> thedac: nice mp for dnsha
<thedac> bdx: thanks. It is coming along. Should be in our 16.07 release
<thedac> bdx: follow along if you care to https://review.openstack.org/#/q/topic:dnsha
<bdx> thedac: thats great. thanks, I will
<bdx> I'm documenting like a monster as of late .... what do you guys think about these top level juju admin categories ->  http://imghub.org/image/HHh4
<valeech> I have made a lot of progress on my juju understanding last night and today. Thanks for all of the help!
<valeech> I am now stuck and can't seem to find an answer.
<valeech> I blew my whole setup away earlier and started with 3 fresh Trusty machines.
<valeech> I have installed juju 2.0 beta 9 on the first, created a new model and added a manual cloud with 2 machines. I need to set up a postgres cluster, so I ran juju deploy cs:trusty/postgresql-101 --to 0. After postgres came all the way up I then ran juju add-unit postgres --to 1. Postgres installed on the other node and everything looked great until I got "Failed to clone postgresql/0"
<valeech> That failure shows up in juju status
<valeech> It changes between that and "(leader-settings-changed) Failed to clone postgresql/0"
<valeech> How do I troubleshoot this one?
<valeech> Fixed it.
<valeech> I added my machines to maas using host names, not IPs. The two machines weren't able to resolve each other's names, so postgres couldn't replicate.
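The root cause valeech describes suggests a simple pre-flight check before setting up replication: confirm each machine can resolve its peers' hostnames. A minimal sketch (the function name is mine, not from the channel; run it on each machine against the other machines' names):

```python
# Hedged sketch: return the peer hostnames that do NOT resolve locally.
# An empty result means name resolution is not the problem.
import socket

def unresolvable(hostnames):
    """Return the subset of hostnames that fail local DNS/hosts lookup."""
    bad = []
    for name in hostnames:
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            bad.append(name)
    return bad
```

A non-empty result for the other cluster members would have flagged valeech's "Failed to clone postgresql/0" before the charm ever tried to replicate.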
<valeech> I love this channel! you all are so helpful :)
<arosales> valeech: good to see you're making progress
<valeech> arosales thx!
<lazyPower> valeech tell your friends about us ^_^
<valeech> lazyPower Thatâs why Iâm here. I need friends :)
<lazyPower> :) We welcome you with open arms my friend
<valeech> So I have juju 2.0 beta 9 working well on trusty. I would now like to deploy maas 2.0. If I look at the apt-cache I see the maas versions are old. Is there a way I can specify what version of maas I want loaded at deploy time? Do I have to add the maas dev repositories to the 2 trusty machines so that when the charm installs it gets the latest version?
<bdx> valeech: 'sudo add-apt-repository ppa:maas-maintainers/experimental3' will get you the daily
<bdx> valeech: then you must 'sudo apt-get update && sudo apt-get install --upgrade maas'
<valeech> bdx: I only do the upgrade if I have maas installed, right? I don't have maas installed yet. I am using a manual cloud to deploy maas and then I will connect juju to that cloud once installed
<arosales> have a good weekend folks
<valeech> arosales you too
<valeech> bdx: apparently the maas-maintainers don't have a trusty experimental release.
#juju 2016-06-18
<bdx> sup sup
<bdx> has anyone successfully bootstrapped an openstack provider with 2.0?
#juju 2016-06-19
<lazypower-travel> bdx - several times
<rjw> Hi guys, Have an issue with a juju local based install
<rjw> I am new to the docker and testing it now
<rjw> suddenly saw how awesome juju is
<rjw> Could not add the requested unit. Server responded with: unknown object type "Client"
<rjw> this happens every time I deploy something
<rjw> anyone please....
<aisrael> Is juju 2 beta supposed to be upgradable between releases (in my case, 6 -> 9)?
<jcastro> aisrael: it should, but they don't guarantee compatibility between 2.x betas
<jcastro> it's worked for me every time but I dunno, depends on how much stuff you have running on beta 6 I guess
<aisrael> jcastro: I have a weird case where my machines are still running, but juju status doesn't show state, dns, or what charms are deployed. Basically, just the machine id and series
<aisrael> and upgrade-juju errors on a missing agent-metadata-url
<jcastro> huh, maybe there's a way to feed it that to unblock the upgrade?
<aisrael> I suspect so. I'll poke at that later today and see what I can do.
#juju 2017-06-12
<jac_cplane> I'm seeing this error on Centos7  15:48:11 ERROR juju.worker.proxyupdater error writing apt proxy config file: open /etc/apt/apt.conf.d/95-juju-proxy-settings: no such file or directory
<jac_cplane> apt does not run on Centos7:  any thoughts
<jac_cplane> unit-rac-master-0: 15:48:12 ERROR juju.worker.proxyupdater error writing apt proxy config file: open /etc/apt/apt.conf.d/95-juju-proxy-settings: no such file or directory
<jac_cplane> can I ignore this
<tvansteenburgh> jac_cplane: i'm on the way out the door, but i'd ask that one on the mailing list.
<tvansteenburgh> jac_cplane: you can probably ignore it but i'd recommend you ask there anyway
<jac_cplane> ok - thanks
<kjackal> Good morning Juju world!
<vlad_> Hey everyone just wanted to say this is a great IRC channel and you have all been really helpful. I also want to say thanks for putting up with me. I know I ask a lot of questions, but I'm deeply interested in learning juju and maas and learning it right.
<vlad____> Anyway I was wondering if anyone in here is decently experience with maas?
<lazyPower> vlad____: i have a bit of experience with MAAS. i'm not certain i can answer any advanced questions but i have a fundamental understanding of the application
<zeestrat> vlad____: Anything in particular you're wondering about? If it's maas specific make sure to ask the guys in #maas too
<vlad____> zeestrat: I'm having issues with the underlying networking of my openstack bundle. It complains that there are no public or private IPs available. I'm assuming this means juju needs to rely on something running dhcp and I wanted to know what the best practice is/was. Is it to run dhcp from maas for everything or have something like infoblox doing that? Or can juju operate without dhcp and I'm a complete idiot?
<lazyPower> vlad____: maas really likes to have that DHCP control.
<lazyPower> however, it can outsource to a third party DHCP server if configured to do so. it just stops its dhcp server and will let the networks existing DHCP do its thing.
<lazyPower> vlad____: in any case, juju doesn't care. Whatever the machine enlists with network wise will be passed to juju. However, it will favor any interfaces that are set to "auto-assign" unless all interfaces are declared as DHCP, then it should hand over all the interfaces in one go, that's one gotchya i discovered last week
<vlad____> Thanks for the help guys! Sorry had to afk for a bit
<vlad____> lazyPower: Can you give me more details on that gotcha? I think this is what's kind of messing me up
<lazyPower> vlad____: so, in your unit detail under the 'network' tab, you'll notice your interfaces list their fabric, and configuration
<vlad____> Right now the cloud runs perfectly fine if all network configs in my bundle point to my PXE zone (that has its dhcp handled by maas) so I've assumed I was having a dhcp issue. However, I also have issues assigning all interfaces to dhcp in maas for some reason
<vlad____> lazyPower: Yeah
<lazyPower> if any of those interfaces are set to 'auto-assign', juju will favor that interface instead of any DHCP configured interfaces. in my experience, it yielded units with only a single configured interface. once i changed the 2 interfaces to DHCP i got the 2 interfaces as i expected.
<vlad____> lazyPower: Were you using network zones and bindings in that example?
<lazyPower> yeah i had that issue too, i would set it to DHCP and it would revert. what i found helps (in maas 2.2) is to set it, navigate to a different tab and then reload the interfaces tab to ensure it stuck.
<vlad____> lazypower: Ahhh ok cool that's going to be super helpful!
<lazyPower> I was not, i have a very vanilla flat network scheme. 1) management network that is host only, 2) proper top level lan network for the public-address.
<lazyPower> i should note, i run VMAAS (maas in a vm) and its only managing VM's that are statically enlisted, not a pod or anything cool that maas currently supports :)
<lazyPower> so some of this may not be useful to you, but anecdotally i hope it helps.
<vlad____> lazyPower: It helps a lot. I'm on a team by myself deploying to less than ideal infrastructure being managed by a maas that's hosted on a VM. So I've got lots of moving targets to hit right and each deploy takes quite a while to finish (not too long to fail though).
<vlad____> The VM is in vcenter as well which means it doesn't see the network the same way the physical boxes do... I've gotten the cloud deployed 90% complete but am having issues on the last leg with networking
<lazyPower> vlad____: yeah, i understand your plight, i'm using direct KVM myself, so there's a bit of a gap there
<lazyPower> but if maas is seeing the network config, and they are set to DHCP (either managed or unmanaged) juju should be getting the interfaces.
<lazyPower> now i'm not 100% positive about the spaces, thats on my list to dive into but i haven't quite gotten there yet.
<roaksoax> mpontillo: /win 3
<roaksoax> err
#juju 2017-06-13
<narindergupta> hi juju team, are we working on a kubernetes provider by any chance? Just wondering what's our take on instantiating a workload on top of kubernetes.
<tvansteenburgh> narindergupta: juju deploy canonical-kubernetes
<narindergupta> tvansteenburgh: i understand but anything on deploying on top of kubernetes like we do on lxd or openstack etc....
<narindergupta> tvansteenburgh: thanks
<kjackal> Good morning Juju world!
<armaan> jamespage: Hello, I am trying to find the official documentation for Mitaka to Newton upgrade? I am not sure if there is one such document which is publicly available. Could you please let me know where can i find the docs for the M->N upgrade?
<armaan> jamespage: If i just change the source to openstack-origin=cloud:xenial-newton, will that be enough?
<jamespage> armaan: yup
<armaan> jamespage: great, thanks :)
<erik_lonroth_> Hello again. I'm about to demo an installation of a hadoop/spark bundle with juju and wonder if someone has a working example I can use? Preferably some failsafe bundle....
<magicaltrout> https://jujucharms.com/hadoop-spark/ canonical are currently working on a jaas tutorial for that erik_lonroth_
<magicaltrout> it works pretty well
<erik_lonroth_> Great, I'll try give it a go and see
<erik_lonroth_> thanx!
<erik_lonroth_> I tried deploy it but it fails with errors on the namenode and spark...
<magicaltrout> weird
<erik_lonroth_> I think it might be some apt things...
<magicaltrout> we spun it up 30 minutes ago without any issues
<erik_lonroth_> I'll push up a debug-log soon
<erik_lonroth_> https://pastebin.com/WpxCbx7K
<erik_lonroth_> I will mention that we have a proxy..... Its been a source for extremely many problems so far.
<magicaltrout> does look nasty
<magicaltrout> you should have a chat with the lovely kwmonroe
<kjackal> erik_lonroth_: magicaltrout I am giving it a try now
<magicaltrout> goooooooo kjackal
<magicaltrout> he is also lovely
<magicaltrout> but a little bit crazy
<kjackal> did you get a car magicaltrout?
<magicaltrout> i have a car
<magicaltrout> its a 2nd hand vw golf
<magicaltrout> the same one i had in pasadena
<kjackal> disaster....
<magicaltrout> i'm thinking about a 2nd hand ford mustang import
<rick_h> my neighbor just got a new mustang, pretty nice
<rick_h> long live the stick!
<magicaltrout> maybe i'll get a huge rick_h style RV instead....
<rick_h> Fun at the campground https://goo.gl/photos/C9v1yhoAg2PhnF5u7
<rick_h> Actually https://goo.gl/photos/978vPSWqGgP3u6Ai6
<magicaltrout> do you have a stupendous Pickup to go with it rick_h ?
<magicaltrout> I believe thats the law in the us
<magicaltrout> ah
<magicaltrout> only moderate
<rick_h> magicaltrout: but of course
<rick_h> Tough to fuel up sometimes https://goo.gl/photos/VD1cZZ6uXk8dSdmr9 lol
<magicaltrout> not like the pickups we saw when crusing around florida though
<magicaltrout> some of those were bonkers
<kjackal> erik_lonroth_: spark bundle seems to deploy fine over here
<kjackal> erik_lonroth_: let me see the repo it is trying to get bigtop packages from
<jrwren> magicaltrout: i assure you, there is nothing moderate about that pickup and that it is stupendous.
<kjackal> erik_lonroth_: the repo for bigtop is this one: http://bigtop-repos.s3.amazonaws.com/releases/{version}/{dist}/{release}/{arch}
<magicaltrout> ha!
<kjackal> erik_lonroth_: is it possible your firewall/proxy is blocking this repo?
<erik_lonroth_> It is the prime suspect.
<erik_lonroth_> I'll investigate that first as that is normally our problem.
<erik_lonroth_> Could it be related to another error I found that looks like this from something called "Bigtop" https://pastebin.com/wzVYUtc5
<magicaltrout> bigtop is the apache hadoop distribution fyi
<erik_lonroth_> unit-namenode-0: 14:01:54 INFO unit.namenode/0.install FileNotFoundError: [Errno 2] No such file or directory: Path('/etc/default/bigtop-utils') -> Path('/etc/default/bigtop-utils.bak')
<magicaltrout> I think its likely to have failed prior to that
<magicaltrout> like the apt failures
<magicaltrout> that error is because there is nothing to copy
<magicaltrout> because bigtop doesn't seem to have been installed
<dakj> good evening everyone, someone can help to understand how to deploy landscape-client on a virtual server where is installed already a service? its README is few clear. thanks in advance
<dakj> I've also a post here https://askubuntu.com/questions/918493/deploy-of-landscape-client-on-a-node
<kwmonroe> hey erik_lonroth_ - the big data charms/bundles do require external apt access.  if you're behind a proxy, you can configure a model to use them:  https://jujucharms.com/docs/stable/models-config
<kwmonroe> erik_lonroth_: as an example:  juju add-model mymodel --config http-proxy=http://squid.internal:3128 --config https-proxy=http://squid.internal:3128 --config no-proxy=127.0.0.1
<kwmonroe> you may be able to set those vars on an existing model with "juju model-config -m mymodel foo=bar"
<vlad_> Hey guys what networks does the juju controller need to be on for an openstack deploy to work?
<vlad_> Does it need to share all networks that the nodes for the deploy themselves are on
<magicaltrout> see i told you kwmonroe was very nice
<magicaltrout> much more useful than that kjackal
<kwmonroe> lol
<kwmonroe> you're pretty lovely too magicaltrout
<magicaltrout> aww only cause rick_h files feature requests on my behalf, saves my fingers from RSI
<rick_h> magicaltrout: heh, glad to be of service
<rick_h> vlad_: are you deploying the openstack or on the openstack?
<rick_h> vlad_: generally it needs binds to all interfaces on the machine it's deployed to and it needs to share a common interface with the nodes
<vlad_> rick_h: Deploying the openstack
<rick_h> vlad_: there's an open bug about getting juju to support specifying the network to use for juju communication
<vlad_> rick_h: Makes more sense now that I was able to get the system to boot up using one network that the controller shared with the rest
<vlad_> rick_h: Think I could get around this issue as long as the openstack nodes and the controller share at least one subnet?
<rick_h> vlad_: I think so, but the question is going to be some juju quirks as to how it'll pick which subnet to talk across and making sure it's in the right place in the list on the nodes to be picked up as a common place to chat
<rick_h> vlad_: beisner and others from the openstack teams probably have more experience with those quirks than I can think off of the top of my head
<vlad_> rick_h: Good point. To be honest you guys have been the most helpful IRC to date for me anyway. Which I really really appreciate by the way. I think my biggest issue is that I'm trying to set this up in a system that's not physically setup right and that's why I keep hitting quirks
<rick_h> vlad_: yea, there's a lot of moving parts to work through for sure
<rick_h> vlad_: there's some charms that admcleod has that help test the network among nodes in maas (/me isn't sure what you're putting this OS on)
 * rick_h takes this opportunity to poke admcleod about the blog/demo stuff on them. 
<rick_h> https://jujucharms.com/u/admcleod/woodpecker and https://jujucharms.com/u/admcleod/magpie
<vlad_> rick_h: Awesome thanks I'm deploying everything onto xenial I believe
<vlad_> rick_h: Yeah this is strange I've taken out the bindings and all config references in juju of this network and it keeps using it for some reason
<bdx> hello!
<bdx> is there a xenial docker charm laying around somewhere?
<cholcombe> if i call relation_list() on a reactive charm when I only have 1 unit is it expected that it will fail?
<jrwren> fail as in traceback? no.
<cholcombe> jrwren: i get 2 error code back saying: error: no relation id specified
<cholcombe> it ran relation-list --format=json
<cholcombe> i'm running juju 2.1.3
<cholcombe> here's what i'm seeing: https://gist.githubusercontent.com/cholcombe973/2ffd92cc7afc50fa01299600c05b3c09/raw/4d84efb719a9299c2a0febbd7ef0459334250f63/gistfile1.txt .  I'm deploying a new gluster charm with 1 unit
<cholcombe> i'm calling relation_list() and it blows up
<cholcombe> i see the same thing in the debug-hooks for the install hook
<cholcombe> maybe that's the issue.  i'm in the install hook and it's running this
<jrwren> IMO definitely a reactive bug.
<jrwren> or charmhelpers bug.
<cholcombe> jrwren: yeah one of them is broken
<jrwren> part of the point of charmhelpers is to wrap those calls and make it easy for a charm writer to never have a traceback surface.
<cholcombe> jrwren: indeed
<lazyPower> bdx: got your metadata pr merged. should be g2g if you rebuild now
<lazyPower> bdx: anything you find in the store is going to be old at this point. I'd like to ideally setup a job to build and publish that charm to edge on repo-update. I haven't had the bandwidth to circle bakc to that effort though.
<lazyPower> cholcombe: i dont see relation_list in this. https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html
<lazyPower> so i suspect reactive, or you're invoking some wrapper thats calling that via subprocess?
<cholcombe> lazyPower: i'm calling related_units which calls relation-list on the cli
<lazyPower> cholcombe: did you capture that output? i would have suspected relation-list would return empty, but may be > 0 during the install hook
<cholcombe> lazyPower: yeah it's in the paste above
<lazyPower> afaik the charm has no notion of any relations that early in the invocation, as no other events have been processed other than *possibly* storage-attached in some charms.
<cholcombe> yeah i prob messed up somehow and it's stuck in the install hook
<lazyPower> returned non-zero exit status 2 -- ok, so have you attempted to invoke?
<lazyPower> sorry incomplete sentence
<lazyPower> Have you tried to invoke that command in a shell in that hook context?
<cholcombe> i did
<cholcombe> same error
<lazyPower> it gave you a python traceback?
<cholcombe> yeah
<cholcombe> i also have the cli output but it's identical
<lazyPower> i meant have you invoked relation-list --format=json on the cli
<lazyPower> relation-list is a golang app, if you're getting a python traceback something's really wrong.
<cholcombe> oh.  yeah i did but i threw that terminal away
<lazyPower> ok i want to do 2 things is why i'm asking
<cholcombe> ok
<cholcombe> i'll try again in a sec
<lazyPower> 1) make sure that traceback isn't masking a bug, 2) capture the behavior so we can patch or document charm-helpers so this doesn't bite other people
<lazyPower> i suspect you'll need to capture this in a try/except block and handle the CalledProcessError exception to keep going today.
<lazyPower> that or guard against invoking that during install
<lazyPower> but i dont like option 2, because it's not indicative as to why thats the case.
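The guard lazyPower suggests can be sketched like this (a minimal sketch, not charmhelpers' actual code; `fetch` stands in for hookenv's `related_units` so the pattern runs outside a hook context):

```python
# Hedged sketch of option 1: treat a failing relation-list during the
# install hook as "no peers yet" rather than letting CalledProcessError
# abort the hook.
from subprocess import CalledProcessError

def peers_or_empty(fetch, relid=None):
    """Return related units from `fetch`, or [] when there is no
    relation context (relation-list exits 2 with "no relation id
    specified" during install)."""
    try:
        return fetch(relid)
    except CalledProcessError:
        return []
```

This keeps the install hook moving, at the cost of masking why the context is missing, which is lazyPower's objection to guarding by hook name instead.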
<cholcombe> lazyPower: yeah.  i'll get the cli output in a sec
<lazyPower> ty <3
<cholcombe> i remember it giving the same output though. it says no relation id specified
<cholcombe> and then echo $? says 2
<bdx> lazyPower: thx
<lazyPower> cholcombe: relation-list --help?
<cholcombe> i've gotta wait for a vm to start up again to try
<cholcombe> 1 min
<lazyPower> i think you need to run relation-ids, to get the relationship id, and then plug that into relation-list to list the sessions of that relation type
<lazyPower> so either a param was missing, or something is really funky in ch
<cholcombe> the related_units code doesn't do a try except in the charmhelpers source
<cholcombe> it just expects it to work
<lazyPower> yeah, i'm not surprised. i think the thought was better to raise an error than silently fail
<cholcombe> i see
<cholcombe> relation id shows as None in the install hook
<lazyPower> ok give me 1 sec, let me trap a unit in a hook and get some more detail
<lazyPower> 1 sec
<lazyPower> cholcombe: you're forcing me to reactivate dead braincells in this domain :) I haven't had to think about this since the move to reactive
<cholcombe> hahaha
<cholcombe> lazyPower: sorry man
<lazyPower> you're all good bruv, we need to know this stuff too
<lazyPower> i've just been spoiled
<cholcombe> i'm building up the new gluster charm as reactive that's why i'm asking
<lazyPower> So, why are you probing for relation id's instead of using the conversation objects?
<lazyPower> this level of tracking is handled for you when you use the newer style interfaces. you can just invoke object.conversations() and count how many occurrences you have. if you need more detail, the conversation object has it in the dict.
<cholcombe> lazyPower: hmm alright.  yeah i'm prob going about this the wrong way
<cholcombe> it's because the old charm was classic and i'm trying to convert it
<lazyPower> cholcombe: however, lets teach you :)
<lazyPower> relation-ids kube-api-endpoint
<lazyPower> kube-api-endpoint:8
<cholcombe> ok
<lazyPower> you start with the relation-ids command to get your relationship id, you have to specify the interface
<lazyPower> or is it relation name?
<lazyPower> its relation name
<lazyPower> the id is the number at the end of that name, so in this example it's 8
<lazyPower> relation-list -r 8
<lazyPower> kubeapi-load-balancer/0
<lazyPower> when you invoke relation-list -r  it tells you what units are attached in the scope of that id
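The two-step flow above can be sketched in Python. The hard-coded string stands in for the real output of `relation-ids kube-api-endpoint`, since the hook tools only exist inside a running hook:

```python
# Stand-in for what `relation-ids kube-api-endpoint` prints; inside a
# hook you would capture this from the hook tool itself.
relation_ids_output = "kube-api-endpoint:8\n"

for line in relation_ids_output.splitlines():
    # The relation name and the numeric id are joined by a colon;
    # the id part is what relation-list wants after -r.
    name, _, rel_id = line.partition(":")
    print("relation-list -r {}".format(rel_id))  # -> relation-list -r 8
```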
<cholcombe> right
<cholcombe> i think i had that written down somewhere in a cheat sheet
<lazyPower> i thought there was a command that gave you -all- relations in a list plus id, but that doesn't appear to be the case.
<lazyPower> again, dead braincells, i might be conflating it with something else
<lazyPower> so, i'm going to presume that the related_units() invocation needs the relationship name
<cholcombe> ok
<lazyPower> and that should get you what you're looking for
<lazyPower> lemme pull up the source and verify
<cholcombe> ok
<cholcombe> lazyPower: +1 :)
<vlad_> Hey guys if one of my unit fails to deploy is there an easy way to have juju just rerun it? For example my openstack-dashboard failed to deploy but nothing else on that machine had errors
<lazyPower> vlad_: it should auto-retry with a backoff timer by default unless you disabled that behavior in model config
<lazyPower> cholcombe: where did you import relation_list() from?
<cholcombe> from charmhelpers core hookenv
<cholcombe> i believe
<lazyPower> that method doesn't exist
<lazyPower> what are you actually calling again? :)
<lazyPower> its lost in scrollback to me
<cholcombe> haha
<cholcombe> um
<cholcombe> related_units i think it's called
<lazyPower> def related_units(relid=None):
<lazyPower> yeah sure does, it wants the ID of that relationship
<lazyPower> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/core/hookenv.py#L428
<lazyPower> thats why its blowing up on you. the method incorrectly defaults to None and masks the error. it should have faulted at the parameter level and forced you to provide one
<cholcombe> right or it asks for relation_id which turns out to be None in the install hook
<cholcombe> yeah
<lazyPower> you're not in a relationship context, so that seems to jive with what we're seeing
<cholcombe> right
<cholcombe> i'm always breaking stuff :-/
<lazyPower> all of these were intended to be used in the scope of a relationship context, and overridden when you were outside of it to invoke relations during non-relationship events.
<lazyPower> nah, its just a confusing mess bruv. i feel your pain
<cholcombe> haha
<lazyPower> relationships are still one of the most advanced concepts we have
<lazyPower> the reactive model has made it easier, but its not perfect
<lazyPower> so, i would encourage you to write a proper interface using the reactive model, and use the interface object you get back to perform your checks you're trying to do
<lazyPower> i presume this is for peering and minimum node count?
<cholcombe> yeah
<cholcombe> that's correct
<lazyPower> yeah, you're gonna need to write a peer interface, which both provides and requires
<lazyPower> so you get to mash that logic up into a single interface
<lazyPower> lemme link you to etcd peer interface
<cholcombe> ok
<lazyPower> it may help
<lazyPower> if it doesnt, i take no credit/blame ;)
<cholcombe> wolsen wrote a peer interface and i'm trying to wrap my head around it still
<lazyPower> https://github.com/juju-solutions/interface-etcd/blob/master/peers.py
<lazyPower> yeah this is too light to actually be helpful
<lazyPower> i forgot i'm using leadership to coordinate the cluster
<lazyPower> i'm only using peering to do detail probing and control states.
<cholcombe> that's ok
<wolsen> I think cholcombe's code will use essentially the same level of details
<lazyPower> there's a whole heap of logic that's not represented in here because i used a different mechanism
<lazyPower> oh ok :) well, neat
<lazyPower> cholcombe: best of luck to you :)
<cholcombe> thanks :)
<lazyPower> vlad_: that being said, juju resolved openstack-dashboard/0 will retry the hook execution that last failed
<Guest73> Hi All, What is the difference between Juju and DCOS ?
#juju 2017-06-14
<kjackal> Good morinig Juju world!
<Naz> Good morning, I have 2 questions please :) , 1- How could I move a service (Application/unit/machine) from a hosting model A to a model B?          2- Could I deploy the same application twice? (I mean, using the SAME charm, I want to run two machines serving the same application but using different configuration]).
<Naz> for question 2, obviously, I am talking about deployment of same app twice to SAME model.
<zeestrat> Naz: Not a juju dev but, 1: I think Juju is currently limited to migrating whole models, see: https://jujucharms.com/docs/2.1/models-migrate, rick_h might know if migrating individual services is in the pipeline
<zeestrat> Naz: 2: To deploy the same application but with different names and configurations, use "juju deploy <charm> <name-of-application>", so "juju deploy mysql different-mysql" for example
<Naz> @Zeestrat, Big thank you, much appreciated. However, I probably didn't explain well: for 1, I am not talking about a model as a whole migrating from controller A to controller B, I need to migrate a machine and its service from model A to model B. For 2, good tip, thank you :)
<zeestrat> Naz: No problemo. I'd suggest reaching out to the juju mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) to ask about 1.
<boolman> hi, what does this mean, I cant find any information about this error: machine-3: 08:36:52 ERROR juju.provisioner cannot start instance for machine "3/lxd/0": unable to setup network: host machine "3" has no available device in space(s) "internal"
<boolman> what is it trying to do?
<zeestrat> boolman: I think juju is trying to set up the network for the LXD container on machine 3 connected to a network space called internal, but cannot find it on the machine. Are you using maas with juju? If so, what versions? What are you trying to deploy?
<boolman> zeestrat: the machine 3 has a network interface on space 'internal' and a configure IP from that network. the space/subnet shows in 'juju spaces'
<boolman> I'm trying to deploy openstack
<zeestrat> boolman: Gotcha. Are you using the openstack bundle from https://jujucharms.com/openstack-base/ by any chance?
<boolman> zeestrat: yes I believe so
<zeestrat> boolman: Alright. I'm not sure how that bundle works per today with spaces. You might want to ask jamespage and/or folks in #openstack-charms
<boolman> ok thx
<boolman> think I figured it out, lxd requires the space on a bridge port, not a physical port.
<erik_lonroth_> kwmonroe: https://pastebin.com/aC7pmiqh  --- from my debugging of the install hook, it seems the "bigtop" installation is failing. What happens here is still unclear to me.
<kwmonroe> yeah erik_lonroth_, the bigtop apt repository is not reachable.  did you set http/apt proxies on the model?
<erik_lonroth_> Yes I did. Let me run the "apt-get update" manually to see what happens. Just a sec.
<erik_lonroth_> That fails utterly
<kwmonroe> heh
<erik_lonroth_> https://pastebin.com/RjHZF2SQ
<kwmonroe> ok erik_lonroth_, you can get here, right?  http://bigtop-repos.s3.amazonaws.com/releases/1.2.0/ubuntu/16.04/x86_64/dists/bigtop/contrib/binary-amd64/Packages
<kwmonroe> i'm gonna assume you can get there, but you can't wget that url from a juju deployed unit.  that reeks of a proxy issue.  does your network require you to whitelist unknown domains?
<kwmonroe> also, run "juju model-config | grep proxy" just to make double sure your proxy settings look right
<erik_lonroth_> It is proxy issues and I've added them to our whitelist now and trying again with a clean model
<cnf> so in juju, with maas 2.2, how can you filter on subnets now that spaces are vlan based, instead of subnet based?
<jam> cnf: you would still have subnets in your space, wouldn't you? and "how can you filter" in what UI? MaaS UI? 'juju status' ?
<cnf> jam: constraints
<cnf> and i have vlans with multiple subnets, where i want to specify juju needs to use a specific subnet
<cnf> in 2.1 i can use maas spaces, but with 2.2 they moved spaces to vlans
<cnf> so that doesn't work anymore
<boolman> does juju/maas create a local user with password? I can't deploy on my nodes and I want to login from console
<erik_lonroth_> kwmonroe: It "almost" worked this time. The state I'm now in for a good 20 minutes is for spark/0 (juju status -> "spark/0*  maintenance  idle  4  10.54.83.216  configuring spark in yarn client mode")
<erik_lonroth_> spark/0 never leave this state for me....
<rick_h> boolman: it should setup the ubuntu user with ssh access from the ssh key in MAAS or your local machine it was bootstrapped from
<boolman> rick_h: no I dont think it's installed my ssh key
<boolman> certainly cannot login
<jam> boolman: ssh ubuntu@X.Y.Z ?
<cnf> boolman: maas adds your ssh keys
<cnf> boolman: did you add your ssh keys to maas?
<boolman> cnf: yes I've added my key to maas, I can ssh into other nodes. but not the 3 I'm trying to figure out why the deployment failed
<boolman> by the looks of it, the ubuntu install is successful, but on firstboot when running cloud-init it fails to find the maas datasource or something
<boolman> thats why I need to login
<admcleod> boolman: is it possible you have an MTU mismatch somewhere?
<boolman> not likely, but I can double check
<admcleod> boolman: might be worth it, can cause problems with cloud-init (not so much "cant find" though)
<boolman> admcleod: hmm ok I do see some strange stuff in the config on this crappy switch
<boolman> hm no that was 'system mtu'
<vlad_> Hey guys is there a 5th follow up post for this series of blog posts? (Here's the 4th and last one I could find): https://insights.ubuntu.com/2016/01/31/nodes-networking-deploying-openstack-on-maas-1-9-with-juju/
<rick_h> vlad_: check out http://blog.naydenov.net/
<vlad_> rick_h: And to everyone else on here. This is the best IRC ever!
<erik_lonroth_> =)
<boolman> admcleod: No the mtu settings should be fine. however until I can login to the node, I cant be sure
<kwmonroe> erik_lonroth_: any luck on spark/0 coming out of "maintenance"?  if not, can you paste a "juju status && juju show-status-log spark/0 -n 100"?
<kwmonroe> hello friends!  jujubox:devel and charmbox:devel docker images have been updated with juju rc3.  happy modelling!
<Budgie^Smore> o/ juju world :)
<wpk> ð
<lazyPower> ð budgie
<thumper> wpk: some people are more capable with emoji than others, my limits lie with :-) and o/
<cnf> ðð»ðð
<thumper> cnf: I can't tell if that is telling a story or just random stuff...
<cnf> monkey smelt but didn't see the banana!
<cnf> that, or money didn't want to see the penis, and wanted a banana
<cnf> you pick
#juju 2017-06-15
<Budgie^Smore> are we having fun yet?
<rick_h> Budgie^Smore: but of course
<kjackal> Good morning Juju world!
<erik_lonroth_> kwmonroe: No, its stuck in "configuring spark in yarn client mode" and some of the spark tests included in the actions of the bundle fails related to spark. I'll try post it again soon as this is really close to work for us.
<kjackal> hi erik_lonroth_ based on https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/spark/layer-spark/reactive/spark.py#L167 you should be seeing something like "Configuring spark with deployment matrix" in the juju logs. can you share this so we know what the charm is trying to do?
<boolman> how do juju choose which IP to use as dns/agent-address? on my deployment it uses the wrong one and I end up with nodes that are unreachable
<kjackal> boolman: I do not think you have control over the IP unless you use the manual provider (aka create the machine/unit yourself and then add it to juju). There is however the concept of network spaces: https://jujucharms.com/docs/1.25/wip-spaces
<kjackal> Havent used spaces myself but could help you specify your network requirements
<kjackal> boolman: sorry this is the right url for spaces: https://jujucharms.com/docs/2.1/network-spaces
<boolman> kjackal: the problem I'm having is that I have nodes with 4-5 subnets/vlans and I have no real control over what IP the juju agent gets. so you're saying that I need to configure the nodes myself? via curtin for example
<kjackal> boolman: I think you should try configuring network spaces
<boolman> I have configured network spaces, or what do you mean by that?
<boolman> I have 1 space per purpose
<kjackal> boolman: awesome, when deploying with the network space constraint you get an IP from a subnet you do not want?
<boolman> where do I specify constraint, on the machine or on the services/units?
<boolman> but does that actually force the subnet on which the juju agent uses?
<kjackal> boolman: you should be able to do something like "juju deploy mediawiki -n 2 --constraints spaces=cms"
<boolman> I'm not sure if I follow, on the services I'm configuring binding: space: myspace and I guess thats working. but since the nodes get the wrong DNS/IP the agents are unreachable
<boolman> so its not deploying shit right now
<boolman> if I delete that interface on the node, it chooses another random ip/subnet
<kjackal> boolman: can you show me how you created the network spaces?
<boolman> they are created from maas
<kjackal> I see... so juju list-spaces and juju list-subnets gets you anything?
<boolman> http://ix.io/xvt
<kjackal> ok so if you do a "juju deploy mediawiki --constraints spaces=internal" you get a machine with an IP on 10.1.3.0/24 but it is not reachable by the juju controller?
<boolman> i will get an IP in the 8.8.8.0/24 range
<kjackal> that is strange... so you get a machine on the front-end space although you requested the internal.
<kjackal> boolman: can you try this other syntax: "juju deploy mysql --bind db-space" ?
<boolman> this seems pretty dead on https://bugs.launchpad.net/juju/+bug/1473069
<mup> Bug #1473069: Inconsistency of determining IP addresses in MAAS environment between hosts and LXC containers <2.0> <addressability> <bug-squad> <cdo-qa-blocker> <landscape> <lxc> <maas-provider> <networking> <uosci> <juju:Fix Committed> <Landscape Server:Invalid> <https://launchpad.net/bugs/1473069>
<vds> I have a subordinate charm that failed on some units, https://paste.ubuntu.com/24864747/ , I know I can't remove the subordinate charm unit without removing the principal, but the system is working fine and the error doesn't really have any effect, what other option do I have to fix the deployment?
<SimonKLB> vds: that's actually not the case, if you remove the relation to the subordinate charm it should be stopped and removed from the machine
<kwmonroe> yeah vds, in the just released juju-2.2, removing sub relations should now remove the sub unit as well.  bug 1686696 for reference.
<mup> Bug #1686696: Subordinate units won't die until all principal relations are gone <juju:Fix Committed by 2-xtian> <https://launchpad.net/bugs/1686696>
<vds> SimonKLB kwmonroe thx!
<jay_> Hello all, would like to know if anyone has ever tried taking a running docker container, and then adding it as a machine with juju? I gave it a try, but came across some errors that don't really give me much insight into what might be wrong. The error just states something about an invalid entity name or password. Not much to work off of.
<rick_h> jay_: so I think the big thing is that Juju expects to be able to download, install, and modify things in the container in such a way that normally you'd build a new image to launch.
<rick_h> jay_: so I don't know that the juju way of "give me a machine and I'll get it all set up" works.
<rick_h> jay_: now there's definitely some ideas around a different kind of charm or something down the road that might play along nice with other Juju managed applications but that's idea stuff atm
<Budgie^Smore> O/
<lazyPower> Budgie^Smore: \o
#juju 2017-06-16
<kjackal_> Good morning Juju world!
<dakj> hello guys, i need support with juju and its models. can anyone help me? please
<dakj> how do i add a new node to a model already present and deploy Landscape-client via the gui? thanks in advance
<cnf> hmm
<cnf> ugh, juju / maas networking is a pain :/
* hatch changed the topic of #juju to: Juju as a Service Beta now available at https://jujucharms.com/jaas | https://review.jujucharms.com/ | https://jujucharms.com/docs/ | http://goo.gl/MsNu4I || https://www.youtube.com/c/jujucharms
<aisrael> Hey all. Just a reminder that we're holding our first virtual doc sprint in an hour. Link to the livestream will be posted beforehand, but we'll be carrying on discussion, coordinating and taking questions here.
<evilnick> aisrael, I can be here for about 15 minutes ;)
<evilnick> i believe pmatulis is joining in too
<aisrael> evilnick: pm
<aisrael> Everyone: Feel free to watch and join in: http://youtu.be/SKBVtXOXd40
<pmatulis> evilnick, i'm here (there?)
<bdx> aisrael: super echo
<bdx> nah
<aisrael> We're doing an hour-long sprint to work on docs, so if you've had problems finding docs or ran into other papercuts, help us out!
<aisrael> https://gist.github.com/AdamIsrael/73e2aadc41c9077712e296c8b7564e9e
<bdx> yeah better nwo, thanks
 * lazyPower perks up at the mention of NWO
<bdx> s/nwo/now/ - srry
<lazyPower> awe and you got me excited too
<bdx> lol, what does nwo mean to you?
<lazyPower> i was hoping we were going to start talking illuminati or something
<kwmonroe> Nubernetes World Order
 * lazyPower speaks in a hushed tone "or the containerati"
 * lazyPower winks
<aisrael> https://github.com/juju/docs/blob/master/README.md
<evilnick> o/ I'm sure pmatulis can help you with anything arising
<aisrael> Much appreciated Nick!
<Budgie^Smore> o/ juju world
<Budgie^Smore> lazyPower you really should be careful of who you said that too ;-)
<lazyPower> Budgie^Smore: i spoke in a hushed tone in accordance with the bylaws
<Budgie^Smore> yes lazyPower you did but you forgot to verify that you were talking to members ;-)
<lazyPower> Budgie^Smore: you didnt tell me you were a card carrying member
<Budgie^Smore> lazyPower I thought that was obvious from my interest in the project ;-)
<kwmonroe> cory_fu: is it dangerous to exit 1 in a bash reactive script?  i thought it was, but this doesn't seem to trigger any hook failure:  https://github.com/juju-solutions/layer-nvidia-cuda/blob/master/reactive/cuda.sh#L47
<cory_fu> kwmonroe: It won't fail the hook, but it will prevent any other handler from running.  That one is actually at the top level of the file (not in a handler function) so it'll actually prevent *any* handler from running.  Given the nature of the check, that doesn't seem  unreasonable
<cory_fu> kwmonroe: Really you just don't want to `exit` from within a handler function, because it aborts the entire file, not just handler
<kwmonroe> ack cory_fu - thx
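What cory_fu describes can be demonstrated with a tiny stand-alone script (a sketch, not the actual cuda.sh layer): `exit` inside a shell function aborts the whole file, so any handler defined or invoked later never runs:

```python
import subprocess

# Two "handlers" in a shell script; the first exits 1, so the
# second never executes and the script's own exit status is 1.
script = r"""
handler_one() {
    echo "handler one ran"
    exit 1          # aborts the entire script, not just this function
}
handler_two() {
    echo "handler two ran"   # never reached
}
handler_one
handler_two
"""
result = subprocess.run(["sh", "-c", script],
                        capture_output=True, text=True)
print(result.stdout.strip())   # -> handler one ran
print(result.returncode)       # -> 1
```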
#juju 2018-06-12
<TheAbsentOne> Anyone has a ready-to-use redis/mongo/cassandra example by any chance on how to use the charms/interfaces. It would save me some time. Also is it possible for 2 charms to have 2 relations? Let's say charm A and mysql can I use both mysql-root interface and ordinary mysql interface?
<TheAbsentOne> Root for root user and ordinary mysql for a db request
<stub> TheAbsentOne: https://git.launchpad.net/cqlsh-charm/tree/ is the only Cassandra example in existence as far as I'm aware. Its published at cs:~cassandra-charmers/cqlsh
<stub> TheAbsentOne: Yes, you should be able to use both mysql-root and mysql relations at the same time.
<stub> (I did trip over an Amulet bug the other day though that I need to follow through on, where it never actually adds the second relation)
<TheAbsentOne> I've never used cassandra before though but what does 'write_cqlshrc('root')' and ubuntu actually do stub? As far as I could understand I need a host of cassandra, a port and a keyspace name to properly connect to it right?
<TheAbsentOne> I see, well tbf I think it's not a good practice, but I fail to properly create a database with the mysql-root interface so kinda an issue
<stub> TheAbsentOne: The write_cqlshrc helper is specific to writing the .cqlshrc file, which is probably not used except by the cqlsh tool.
<TheAbsentOne> ahn I see
<stub> (I just put it in the interface as I expect people may want their clients to include cqlsh for reasons)
<TheAbsentOne> so in the example there isn't actually a request for a db
<stub> TheAbsentOne: You want ep.details to get the raw connection details.
<stub> Cassandra doesn't have databases, it has keyspaces
<TheAbsentOne> euh yeah i meant keyspaces my bad
<stub> And the interface doesn't support asking the Cassandra charm to create it. There are too many options
<stub> (choosing replication factors, compaction strategies.... Cassandra is... Cassandra)
<TheAbsentOne> so let's say I have a charm and I install some sort of third party python library for cassandra (will need to google for that), with that I would be able to create a keyspace right?
<stub> Yes. You might need to do it over the database-admin relation rather than the database one (I don't recall)
<TheAbsentOne> "        >>> ep = reactive.endpoint_from_flag('endpoint.mydb.available')         >>> first_details = ep.details[0] if ep.details else None" <-- what does the ep.details[0] return in this case?
<stub> You probably want a handler that waits until you have 3 Cassandra units ready, then issue a CREATE KEYSPACE command specifying the keyspace name and replication strategy
<TheAbsentOne> I see, well I start of with fixing my mysql then redis and mongo and maybe I'll try cassandra but I doubt I will succeed x)
<TheAbsentOne> You know what I'm gonna put some repo's online with minimal working examples so you guys can better help me and hotfix where needed x) I'll get to it asap and I think it might be a nice collection of examples for newcomers too
<TheAbsentOne> thanks for the info stub
<stub> TheAbsentOne: That returns the CassandraDetails instance for the first unit, or None if there are no units ready
<stub> Sorry, that is wrong
<TheAbsentOne> it's a list with all these attributes probably
<stub> That returns the CassandraDetails instance for the first relation found, or None if there are no units ready
<TheAbsentOne> ahn right
<stub> (you can relate a charm multiple times to different Cassandra deployments, if you are into really, really big data ;) )
<TheAbsentOne> right right that's good stuff but way over my head right now xD just a stupid student over here
<TheAbsentOne> but it's there that I would get my connection detail info ^^
<TheAbsentOne> great!
<stub> Yes
<stub> I should create a shortcut for that, because it is what almost all charms will need to do, but don't know what to call it.
<TheAbsentOne> connectiondetails is clear and pretty universal
<TheAbsentOne> I do think it would be useful if a requesting charm could simply get that, like how the standard mysql renders a config file. With ep.host(), ep.port() etc but that's probably me wanting it too easy x)
<stickupkid> hml: code review done
<kwmonroe> TheAbsentOne: regarding mysql:db vs mysql:db-admin, yes, a single charm can relate to both.  i've updated the readme at https://github.com/juju-solutions/interface-mysql-root showing what the -root interface does.  i do not believe you can specify an admin vs non-admin user for a single charm, so charm A will only have 1 set of user creds passed over the db and db-admin relations.
<kwmonroe> in hindsight, it feels like the mysql interface should be able to handle an admin as well as non-admin request.  maybe that can be refactored to make the mysql-root interface obsolete.
<kwmonroe> even if the interface doesn't do it, maybe it would fit better as an action on mysql/mariadb charms.  juju run-action mysql grant-admin.
<stub> kwmonroe: I just changed that in the Cassandra charm, so the database and database-admin relations both use the 'cassandra' interface. I don't think anything cares about the interface name apart from charms.reactive, so it seemed just legacy cruft from how it used to be.
<stub> Might be too late for mysql if the interfaces are already published
<TheAbsentOne> hi kwmonroe I wanted to talk to you about it actually. I really miss the feature to name a database on the mysql interface (mysql-shared has this) and I need a way to have a user that can access the db from another host/charm/whatever than the "requesting charm" and right now it seems only mysql-root interface can do this (kinda). Imo it should all be possible on mysql interface and it would make way more sense too but that's just
<TheAbsentOne> talking here
<TheAbsentOne> because in my use case I don't really need an administrator per se, I need the user as provided by the mysql-shared (or mysql) interface but he needs to be able to access the db from other (known) hosts as well
<TheAbsentOne> and I can't figure it out how to properly do that :P
<rick_h_> zeestrat: :P I was just typing that out. You beat me!
<zeestrat> rick_h_: Had to do a quick check myself if anyone had me beat before I hit send ;)
<zeestrat> rick_h_: As a note, is that documented anywhere on docs.jujucharms? My quick search doesn't turn up with anything.
<rick_h_> zeestrat: no, but it came up around more developer related docs and the teams' getting a discourse instance soon that we'll move stuff like that to
#juju 2018-06-13
<wallyworld> babbageclunk: here's that refactor PR https://github.com/juju/juju/pull/8815
<babbageclunk> wallyworld: yeah, looking
<babbageclunk> (Weird, I don't know why I didn't get a notification about that)
<wallyworld> babbageclunk: awesome. will be some work ahead but i think i have line of sight to removing IAASModel/CAASModel
<wallyworld> we'll see how it goes
<anastasiamac_> wallyworld: babbageclunk (or anyone else keen), PATL https://github.com/juju/juju/pull/8816 - invalidate credential call back when coming from bootstrap
<wallyworld> ok
<wallyworld> anastasiamac_: lgtm assuming it's been tested live
<babbageclunk> wallyworld: reviewed! I complained about naming a bit, ping if you want to discuss.
<wallyworld> ok, ty, looking
<wallyworld> babbageclunk: i specifically avoided the parent interface containing all 3 - that was the point of the split. i needed to be able to pass in something providing filesystem but not volume
<wallyworld> or did i not understand?
<babbageclunk> wallyworld: the parent interface would have 3 methods, one each returning the new interfaces.
<babbageclunk> wallyworld: (or nil, if this model doesn't support that interface)
<wallyworld> hmmm. that essentially is what we have now with model.IAASModel() and model.CAASModel() and is what i was trying to get away from
<babbageclunk> well, instead you'd call model.FilesystemAccess()
<wallyworld> the IAASModel() and CAASModel() methods return err, not just nil though
<wallyworld> i'll take a look to see if it works nicely
<babbageclunk> well, you could do that too. This just matched what you had already (checking for nil for that argument).
<wallyworld> i was trying to avoid the existing pattern
<wallyworld> it seems more idiomatic to pass in things satisfying smaller interfaces
<wallyworld> i guess it's just the constructor you want changed
<wallyworld> the api struct would still have 3 attributes representing the smaller interfaces
<wallyworld> assuming i understand your point correctly
<babbageclunk> Yeah, I think so
<wallyworld> ok, i'll dive in and see. i'll just do a bit more work on the followup PR first
<wallyworld> will probably push changes tomorrow, we'll see how i go
<babbageclunk> I mean, it matches what you're doing in functions like StorageAttachmentInfo or ClassifyDetachedStorage
<babbageclunk> ok
<wallyworld> yeah, it's messy, thanks for helping me untangle it
<wallyworld> it will still be a mess even when i'm done
<wallyworld> the state vs model stuff is still in so many places
<babbageclunk> wallyworld: yeah, it's definitely still not going to be super-elegant
<wallyworld> got to start somewhere right
<BlackDex> how do i add a local charm to the bundle, i can't find it any more, and i forgot howto ;)
<BlackDex> is it just "charm: /path/to/local/charm" ?
<zeestrat> BlackDex: Yeah, last I checked it's just a path. Not sure if it's only relative or not though.
<BlackDex> i will see that at that time
<BlackDex> thx :)
<rick_h_> BlackDex: starts with a .
<BlackDex> so relative then
<BlackDex> thx rick_h_
<KingJ> If I set a http-proxy and a https-proxy, do I also need to set an apt-http-proxy and apt-https-proxy or will Juju use the proxies defined by http-proxy and https-proxy?
<BlackDex> KingJ: You need to set all
<BlackDex> apt-* are set in the /etc/apt/* special files
<BlackDex> https-* are set as environment values
<BlackDex> if applications support that they will try to use it
<BlackDex> most python apps do
<KingJ> Ahhh right, that explains why I have a few units with apt issues then - I thought just setting the general proxy would be enough.
<rick_h_> KingJ: right, because many folks will run mirrors/etc for apt that won't follow the normal http rules for traffic
<KingJ> I've just set the apt-* proxies in my model config - will the machines/units pick that up automatically?
<BlackDex> keep in mind that it will then traffic everything over the proxy
<rick_h_> KingJ: so it's the cost of flexibility tbh
<rick_h_> BlackDex: you trying out 2.4 rc yet?
<BlackDex> if you have a local network which you do not want to be proxied
<BlackDex> you need to set that
<BlackDex> rick_h_: no, not yet
<BlackDex> but i like the new stuff in it :)
<KingJ> I'm looking to proxy all http(s) traffic, but not any other traffic.
<BlackDex> i know that if you set the proxy, and deploy something like openstack, even the openstack services will use the proxy, even when they are on the same subnet
<BlackDex> best is to exclude the local network or some special IP's if that is the case
<KingJ> Ah... that would be problematic. I'll set no-proxy to the local network then.
<BlackDex> you can add those exceptions to the "no-proxy" setting of the model
 * rick_h_ is looking for 2.4 rc feedback so starts bugging folks bdx magicaltrout zeestrat TheAbsentOne 
<BlackDex> i don't know if using a CIDR works these days
<BlackDex> else you need to add every IP in the subnet ;)
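BlackDex's doubt about CIDRs in no-proxy is easy to illustrate: many HTTP clients match no_proxy entries as plain string/suffix matches rather than parsing them as networks, which is why expanding a subnet into individual IPs works where a CIDR entry may not. A sketch of the two matching styles (assumed client behavior, not Juju's actual implementation):

```python
import ipaddress

no_proxy = ["10.1.3.0/24"]  # a CIDR entry in the no-proxy list
host = "10.1.3.7"

# Naive suffix matching, as many clients do: the CIDR entry never
# matches an individual host address.
naive_match = any(host == e or host.endswith(e) for e in no_proxy)
print(naive_match)  # -> False

# A client that understands CIDRs would check network membership:
cidr_match = any(
    ipaddress.ip_address(host) in ipaddress.ip_network(e)
    for e in no_proxy if "/" in e
)
print(cidr_match)  # -> True
```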
<BlackDex> rick_h_: if i have some spare time (and i do not have much) i will try maas 2.4 myself :)
<rick_h_> BlackDex: :) cool. we're going to have another rc2 for a bug in the oracle provider but I want to start panning folks for feedback before we go final.
<TheAbsentOne> rick_h_: unfortunately I won't be much of a help as I'm not even allowed to try it out, I don't even have access to my controllers
 * rick_h_ would rather catch any issues in rc and fix vs having to spin the .1 in a hurry
<rick_h_> TheAbsentOne: on JAAS?
<rick_h_> TheAbsentOne: or some other reason you don't have access?
<KingJ> BlackDex: When I set these values in the model config, will Juju push it out to the machines automatically? Or do I need to do something extra to ensure that the machine is reconfigured with the appropriate config?
<rick_h_> KingJ: Juju will handle it
<BlackDex> hmm
<BlackDex> ah great
<TheAbsentOne> I don't even know how they installed juju, all I know is my machines are managed by a vmware-cluster and I (my user) have 2 models as my playground :P
<rick_h_> TheAbsentOne: oh, I see
<BlackDex> it didn't do that when the models were first introduced if i'm correct
<TheAbsentOne> I'm busy with a repo that might interest you too though rick_h_
<BlackDex> at least i had some issues then
<rick_h_> BlackDex: hmmm, let's say that I'd expect Juju to and if it doesn't please file a bug :)
<BlackDex> haha
<BlackDex> i haven't checked it lately
<BlackDex> same goes for the cidr in the no-proxy
<BlackDex> i created a script which will create a bootstrap config which will add all the IPs of a subnet i enter
<rick_h_> BlackDex: so I do know there's updates around that in 2.4 specifically for the issue you mention
<BlackDex> else it was too much work
<KingJ> Right, http(s)-proxy set back to default, apt-http(s)-proxy set, and no-proxy set to a CIDR... let's see how this goes :)
<rick_h_> BlackDex: so there's new proxy values that the charms can selectively use or not use vs setting the main system ones for all traffic and having to manage no-proxy for large groups of IPs
<BlackDex> KingJ: when you `juju ssh X` and execute `env` you should see the proxy settings
<KingJ> /etc/apt/apt.conf.d/95-juju-proxy-settings seems to have picked up the settings
<KingJ> env isn't showing any proxy settings
<BlackDex> also check if the proxy is set in the /etc/apt/preferences.d if i'm correct
<BlackDex> oke cool
<KingJ> /etc/apt/preferences.d/ is empty
<BlackDex> conf.d it is yea
<BlackDex> did you log in after the changes
<BlackDex> or was it an old connection?
<KingJ> New connection... I think. It was an LXC container so I just exited and re-ran lxc exec
<BlackDex> hmm
<BlackDex> don't know if it works like that
<BlackDex> i think you need to `juju ssh application/0` to it
<KingJ> Ah right, let me try that then
<KingJ> Hrm, connected in that way, ran env and it's hanging. Hmm.
<BlackDex> hanging?
<BlackDex> on env :s
<KingJ> Yeah, not quite what i'd expect heh
<KingJ> Ah hold on, I think i've spotted something odd that could be causing issues at a lower level - MAAS DHCP assigned this LXC container a .254 address, that's probably not going to play too well with things.
<BlackDex> ;)
<KingJ> Right now that's corrected in MAAS, probably easiest to tear down the model and recreate :)
<BlackDex> KingJ: you can provide the model config during the add-model command
<BlackDex> which includes all the config you want like http-* apt-* and no-proxy stuff
<KingJ> Can I put the model config in a yaml file and run juju add-model model.yaml ?
<BlackDex> yea
<BlackDex> KingJ: https://docs.jujucharms.com/2.3/en/models-config
<BlackDex> it is stated over there
<KingJ> Perfect, let's try this...
<BlackDex> just simple "http-proxy: http://maas:8000"
<BlackDex> without the " of course
<KingJ> Silly question, what key would I use to set the model name?
<rick_h_> KingJ: you have to do that at add-model time
<KingJ> Ah right, so juju add-model name, then juju model-config file.yaml
<BlackDex> no key
<rick_h_> KingJ: right
<BlackDex> `juju add-model --config myconfig.yaml default`
<rick_h_> or I think add-model takes a --config
<rick_h_> yea
<BlackDex> where default is the model name
<BlackDex> no special stuff you need to do in the yaml regarding the model-name
<BlackDex> only the settings with each setting on a new line
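[Editor's note: a minimal sketch of the workflow BlackDex describes. The YAML file just lists settings, one per line, and the model name is given on the command line rather than in the file. The proxy address and CIDR below are placeholders, not values confirmed in this conversation.]

```yaml
# myconfig.yaml -- hypothetical example values
apt-http-proxy: http://maas:8000
apt-https-proxy: http://maas:8000
no-proxy: 127.0.0.1,localhost,10.0.0.0/24
```

This would be passed at model-creation time as `juju add-model --config myconfig.yaml default`, or applied to an existing model with `juju model-config`.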
<KingJ> Excellent, that all seemed to work
<KingJ> Ok, time to deploy the bundle again. Thanks for all your help so far :)
<BlackDex> yw :)
<BlackDex> goodluck
<BlackDex> if  you are using juju 2.3 you can use the --dry-run now :)
<BlackDex> Very nice
<BlackDex> it filters some basic errors out of it
<BlackDex> not all, like bad config options of the charms
<BlackDex> mistyped etc..
<rick_h_> BlackDex: cool, glad to hear you're using that and finding it useful
<KingJ> Ah yeah, that would have been a good idea, but I think the bundle should be OK now - i've made enough revisions to it over the past few days heh. I'm on 2.4-rc1 at the moment because of a bionic related issue.
<BlackDex> ah!
<BlackDex> that would explain some issues i had then with 2.3.x and bionic
<BlackDex> didn't look any further
<BlackDex> no time ;)
<BlackDex> used xenial again
<BlackDex> i normally want to wait for the .1 release anyway
<BlackDex> rick_h_: Yea, i really like it
<BlackDex> now it needs to be extended to check if all the config options are valid ;)
<rick_h_> KingJ: cool, let me know if you hit any 2.4 issues.
<KingJ> https://bugs.launchpad.net/juju/+bug/1764317 is the one I ran in to on 2.3.x and made me jump on to 2.4-beta. This is a new greenfield environment, albeit lab, so I wanted to jump on to the latest and greatest.
<mup> Bug #1764317: bionic LXD containers on bionic hosts get incorrect /etc/resolve.conf files <bionic> <cdo-qa> <cdo-qa-blocker> <foundations-engine> <kvm> <lxd> <network> <uosci> <juju:Fix Committed by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1764317>
<BlackDex> KingJ: yea netplan is a bit too new for my taste
<BlackDex> didn't expect it to be in an LTS release
<KingJ> I think it was in 17.10, but it's still quite a big change. I like it conceptually but there are still a few rough edges around.
<BlackDex> i had some issues with netplan and bonding
<KingJ> BlackDex: This bug? https://bugs.launchpad.net/maas/+bug/1774666
<mup> Bug #1774666: Bond interfaces stuck at 1500 MTU on Bionic <cdo-qa> <foundations-engine> <mtu> <netplan> <cloud-init:Fix Committed by chad.smith> <MAAS:Invalid> <cloud-init (Ubuntu):Confirmed> <netplan.io (Ubuntu):Confirmed> <cloud-init (Ubuntu Xenial):New> <netplan.io (Ubuntu Xenial):Invalid>
<mup> <cloud-init (Ubuntu Artful):New> <netplan.io (Ubuntu Artful):Invalid> <cloud-init (Ubuntu Bionic):New> <netplan.io (Ubuntu Bionic):Invalid> <cloud-init (Ubuntu Cosmic):Confirmed> <netplan.io (Ubuntu Cosmic):Confirmed> <https://launchpad.net/bugs/1774666>
<BlackDex> no, not that one, but that is nasty also
<BlackDex> it didn't connect
<BlackDex> or it didn't create the LACP bonding the right way
<KingJ> Huh, interesting - i've not had any problems with the bond formation itself (using 802.3ad), but the MTU issue is affecting me. It's not a blocker at least, just slightly less optimal.
<stickupkid> rick_h_: here are the QA steps for the PR https://github.com/juju/juju/pull/8818
<rick_h_> stickupkid: cool ty, I'm going to try a slightly different tack and see if it works and if so share how that might be made a little easier
<stickupkid> rick_h_: it assumes you don't already have a tmp folder in your $HOME dir
<rick_h_> stickupkid: k, lol at using the charm as the resource to itself :)
<stickupkid> rick_h_: easiest way without forking the world!
<rick_h_> stickupkid: made me smile
<MrOldest2> hello
<u0_a274> hi
<u0_a274> hello
<u0_a274> come on
<rick_h_> having fun?
<u0_a274> hi
<zeestrat> rick_h_: got a rough eta on when y'all want to cut a GA? I'd love to kick the tires but got some pto coming up.
<BlackDex> hmm, how do i upgrade a local charm?
<BlackDex> do i need to create a new folder, or can i overwrite the current and just tell juju to upgrade the charm?
<BlackDex> i probably need to update the revision then i think?
<zeestrat> BlackDex: You can overwrite, but it's probably good hygiene to do a clean build. Juju should bump the revision automatically when upgrading locally.
<BlackDex> clean build ?
 * BlackDex whistles ;)
<BlackDex> i'm currently just doing dirty hacks to get vmware working with a charm directly instead of manually hacking the configs afterwards with ansible or `juju run` scripts
<BlackDex> modified the nova-compute charm
<TheAbsentOne> is relate the new word for add-relation? :O
<rick_h_> BlackDex: just reuse the current space and use the path on the upgrade command.
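[Editor's note: a concrete sketch of what rick_h_ means; the charm directory name is assumed, matching BlackDex's modified charm mentioned above.]

```
# Assuming the modified charm source lives in ./nova-compute-vmware:
juju upgrade-charm nova-compute-vmware --path ./nova-compute-vmware
```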
<rick_h_> TheAbsentOne: a nicer alias heh
<TheAbsentOne> it's kinda romantic xD
<TheAbsentOne> rick_h_: if you would have time want to browse through these folders: https://github.com/Ciberth/gdb-use-case/tree/master/mininimalexamples
<TheAbsentOne> Your detective eye will immediately see if something is wrong, I haven't tested/deployed them yet I will in a bit normally
<rick_h_> TheAbsentOne: run charm proof on each?
<TheAbsentOne> I will as soon as I'm on a ubuntu machine x)
<BlackDex> rick_h_: it works
<BlackDex> i can now deploy/upgrade my nova-compute-vmware charm
<rick_h_> BlackDex: sweet
<TheAbsentOne> rick_h_: you know by any chance an example charm using mongo?
<TheAbsentOne> some sort of webapp or something
<BlackDex> i really need to get more into the charms
<rick_h_> TheAbsentOne: hmm...not really. There was the old mongodb cluster bundle
<BlackDex> like pushing them to the charm-store etc..
<rick_h_> So mongo with itself
<BlackDex> if i want
<TheAbsentOne> hmm
<rick_h_> BlackDex: yea handy even if you just use for yourself
<BlackDex> indeed :)
<BlackDex> local is nice, but git/launchpad/store is better
<TheAbsentOne> and I was surprised that the mongodb database interface layer wasn't on the layer-index, this one: https://github.com/tengu-team/interface-mongodb-database
<BlackDex> now i have to check if it all works of course and that openstack is able to use vmware, but that is the next step
<kwmonroe> TheAbsentOne: you can query the store for charms that use mongo -- https://jujucharms.com/q/?requires=mongodb.  here's how telegraf uses mongo: https://git.launchpad.net/telegraf-charm/tree/reactive/telegraf.py#n386 and here's something similar for graylog: https://git.launchpad.net/graylog-charm/tree/reactive/graylog.py#n465
<TheAbsentOne> gonna try this one out in a few hours kwmonroe: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mongo/mongo-proxy/reactive/mongo-proxy.py#L26
<TheAbsentOne> should work right? :/
<kwmonroe> yup TheAbsentOne, that'll work, but note that your request_mongodb function will fire every time a hook runs.  iow, you'll render that mongo template at least every 5 minutes (when update-status runs).
<kwmonroe> TheAbsentOne: to prevent that, consider adjusting the decorator to "when(mongodb.connected); when_not(template.rendered); blah blah blah; set_flag(template.rendered)"
<TheAbsentOne> hmm that's not good xD how would I solve that in a clean way setting up a flag and a when_not?
<TheAbsentOne> ow lol xD
<kwmonroe> you got it :)
<TheAbsentOne> awesome!
<kwmonroe> also, i said it will run on every hook invocation -- i meant it will run with every hook as long as mongodb.connected is set.  but you already knew that :)
<TheAbsentOne> thx for saying that though, I didn't think about that at all
<TheAbsentOne> I hope the collection might be of use to others too x)
<kwmonroe> fo sho
<kwmonroe> TheAbsentOne: you might also consider the case where mongodb is connected, but the connection string changes (perhaps a new mongo cluster member arrives and the address changes, or perhaps the port changes).  in that case, you may want to check for that in your request_mongodb function and only render if the mongodb relation data has changed... graylog does that here: https://git.launchpad.net/graylog-charm/tree/reactive/
<kwmonroe> graylog.py#n472 <-- see it returns if the data hasn't changed since the initial invocation.
<kwmonroe> TheAbsentOne: one other thing -- the tengu-team interface isn't in the layer index because there's already a mongodb interface that points to https://github.com/cloud-green/juju-relation-mongodb.  so when you include interface:mongodb in your layer.yaml, you'll get that one.
<TheAbsentOne> correct but as I understood the tengu team interface was meant as a proxy between mongodb interface and another one
<TheAbsentOne> however I'm not sure about the changing connection string
<TheAbsentOne> if it changes my function won't run, right? So what use is that check
<TheAbsentOne> if I add a when_not(template.rendered) flag that is
<kwmonroe> right TheAbsentOne -- i was just saying there's 2 ways of handling that render function.  either do it once and set a flag + a when_not so the function doesn't execute again, or leave it the way it is and return if ! data_changed.
<TheAbsentOne> ahn I understand my bad, I'll go with the flag I think it's more clear and it is more "reactive" programming
<kwmonroe> the latter is more robust because it'll handle the case of a changed connection string.  the former means you'd be making an assertion that you never want to re-render that template as long as it's been done once.
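[Editor's note: the more robust option kwmonroe describes -- re-rendering only when the relation data actually changes -- is the hash-and-compare idea behind charmhelpers' data_changed. This is a self-contained imitation of that idea, not the real charmhelpers API; the connection strings are made up.]

```python
import hashlib
import json

_previous = {}  # stands in for the per-unit kv store charmhelpers uses

def data_changed(key, data):
    """Return True on first call, and whenever `data` differs from the last call."""
    digest = hashlib.sha1(json.dumps(data, sort_keys=True).encode()).hexdigest()
    changed = _previous.get(key) != digest
    _previous[key] = digest
    return changed

renders = []

def request_mongodb(connection_string):
    # Re-render only when the relation data changed, as the graylog handler does.
    if not data_changed('mongodb.uri', connection_string):
        return
    renders.append(connection_string)  # stands in for rendering the template

request_mongodb('mongodb://10.0.0.5:27017')  # initial render
request_mongodb('mongodb://10.0.0.5:27017')  # unchanged -> skipped
request_mongodb('mongodb://10.0.0.9:27017')  # address changed -> re-render

print(len(renders))  # -> 2
```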
<TheAbsentOne> kwmonroe: if you have time would you mind checking my mysql folder too? It uses both the mysql-root and mysql-shared interface. I would love to hear your thoughts. Same remark about the rendering function I need to add a when_not
<kwmonroe> yup, will do TheAbsentOne
<TheAbsentOne> that's true too hmm since it's just minimal example I'll stick to the flags but I will add a note I think
<TheAbsentOne> first some cooking x)
<kwmonroe> +1
<rick_h_> zeestrat: sorry, missed your question. We're waiting for feedback atm. RC's are promised to be able to upgrade to final
<rick_h_> zeestrat: we've got one oracle bug that'll cause us to do a rc2 this week I think?
<rick_h_> zeestrat: and hopefully get some positive feedback and feel good calling it final
<magicaltrout> kev i'm gonna get my chaps going on hadoop storage in a week or so
<magicaltrout> before i do so, anything i need to know in advance other than "currently we don't support it"
<magicaltrout> kwmonroe
<kwmonroe> magicaltrout: first thing i think is probably top priority... don't expect my irc client to highlight "kev".
<magicaltrout> i know
<rick_h_> LoL
<magicaltrout> not sure why i did that
<magicaltrout> can you change your nick?
<magicaltrout> thats the easy fix
<rick_h_> magicaltrout: has a pet name for kwmonroe :p
<magicaltrout> when the lovely kevin comes up in discussion in the office
<magicaltrout> its usually kev
<magicaltrout> I apologise
<magicaltrout> when people refer to me, its usually dickhead, so to be honest you're a step above
<rick_h_> Well we tech types do hate typing long variable names
<kwmonroe> mag, second thing i would start with is right here: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L612.  we hard code hadoop_storage_dirs to those.  i feel like this would be a great place to replace with a "@when storage attached, hadoop_data_dirs = hookenv.storage_get(location)"
<magicaltrout> cool will do
<magicaltrout>  < kwmonroe> mag, second thing i would start with is right here: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L612.  we hard code
<magicaltrout>                   hadoop_storage_dirs to those.  i feel like this would be a great place to replace with a "@when storage attached, hadoop_data_dirs = hookenv.storage_get(location)"
<magicaltrout> meh
<kwmonroe> yes yes, you have mastered the act of middle clicking your mouse
<kwmonroe> now on to hdfs storage!
<magicaltrout> sad times
<kwmonroe> :)
<kwmonroe> magicaltrout: feel free to schedule a hangout when you're ready -- i have some ideas that cory_fu and bdx have helped mull over.
<magicaltrout> is that prior to or post embarking on storage kwmonroe ?
<kwmonroe> magicaltrout: you mean cory_fu bdx and myself having ideas?  that's pre-embarking.  we had a meeting about what it would look like.  at its simplest, the charms that cared (namenode / datanode) would define 2 storage bits in their metadata.yaml -- data1 and data2.  the operator would attach relevant storage to those charms and we would use that location in lieu of that hard coded part in layer-apache-bigtop-base.
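[Editor's note: the two storage bits kwmonroe describes might look roughly like this in a charm's metadata.yaml. The names data1/data2 come from the conversation; the storage type and mount locations are illustrative, not from an actual charm.]

```yaml
# Hypothetical metadata.yaml fragment for the approach described above
storage:
  data1:
    type: filesystem
    location: /data/1
  data2:
    type: filesystem
    location: /data/2
```

The operator would then attach storage to these bindings at deploy time (e.g. via `--storage`) or later with `juju add-storage`.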
<admcleod> 'big data' ?
<kwmonroe> ban admcleod
 * admcleod flee
<kwmonroe> hmph, not working
<magicaltrout> someones gotta do data
<magicaltrout> its not all openstack :P
<kwmonroe> magicaltrout: that approach assumes a fixed set of storage in metadata.yaml -- and that's not ideal.  what if i wanted 5 disks instead of just 2?  what if i wanted a pre-configured mdraid device?  what if i wanted xyz?
<magicaltrout> yeah makes sense
<kwmonroe> so it has flaws, but it gets us *at least* what we have now with the ability to provision storage outside of "mkdir -p /data/1 /data/2", which is all we do now.
<magicaltrout> don't see the problem with that :P
<magicaltrout> i'm moving house this week and next, but it'd be good to get a call with me, you and my 2 interns soon to be full time employees i hope, so they can look really scared and we can talk storage
<magicaltrout> are you around the week of the 25th?
<kwmonroe> yup, i'll iron a tie for maximum professionalism
<kwmonroe> that gives me 2 weeks to learn how to tie a tie
<magicaltrout> cool, i'll see what days they're around and ship over some rough ideas, i can tell you thursday is out cause it appears england will be losing to belgium in the world cup that night ;)
<wallyworld> vino_: here is where the charm (zip) uploads are processed https://github.com/juju/juju/blob/develop/apiserver/charms.go#L205
<wallyworld> and here is where we do the processing to update the charm doc in state https://github.com/juju/juju/blob/develop/apiserver/charms.go#L390
<vino_> wallyworld: ok i will have a look.
<wallyworld> in those places we look inside the zip to get the metadata etc, so can extract version as well
<wallyworld> and use that to update the charm doc
<vino_> ok. nothing has to be done at client side.
#juju 2018-06-14
<bdx> magicaltrout: the storage entry in metadata.yaml "hdfs-devices" will be similar to "osd-devices" in the ceph-osd charm see https://github.com/openstack/charm-ceph-osd/blob/master/metadata.yaml#L32,L37
<seyeongkim> deploying juju with lxd is working correctly? I could deploy it yesterday but not today. lxd is started on the host but juju status shows me it is down or pending. ( I see this issue on artful release for several weeks )
<seyeongkim> and stuck on that status
<srihas> hi guys, we are trying to install openstack with maas and juju. currently we are facing an error which blocks neutron gateway service. The message is like "Services not running that should be: nova-api-metadata". When tried to restart the service manually, there is no unit itself in systemd for nova-api-metadata
<srihas> which charm could be responsible for installing it? This has been a long standing error for 3 weeks now, and could not find necessary data. Can someone please help?
<TheAbsentOne> srihas: I'm not an expert but maybe consider asking on the openstack channel as well I assume they are more experienced ^^ Otherwise you will eventually get an answer here but I think most experts aren't here right now
<manadart> stickupkid: Can you rubber-stamp this? https://github.com/juju/juju/pull/8820 It is the same 5 commits that you and John approved for the develop branch.
<TheAbsentOne> kwmonroe: https://github.com/Ciberth/gdb-use-case/tree/master/mininimalexamples#issues if you would have time could you tell me what stupid mistake I made now? I seem to have missed something with my redis request, and for mysql/mariadb everything seemingly works fine but my second page does not get rendered o.O
<TheAbsentOne> or rick_h_ if you are interested ^^
<stickupkid> manadart: done
<manadart> Ta.
<srihas> TheAbsentOne: thank you, is there another channel for openstack-juju?
<TheAbsentOne> I would try #openstack srihas
<srihas> TheAbsentOne: thank you
<elox> What would be the best way to retrieve node information in a charm about things like number of CPUs, available RAM etc on a running unit ?
<rick_h_> elox: just calling out to system details like /proc/cpu and running stuff like free or the like?
<elox> rick_h: Yeah, I was just curious if there was anything available for one charm to access other units' info. I have a situation where I need to provide info to a "slurm-controller" (master) node from "slurm-nodes" (workers), where their number of CPUs gets written to a configuration file.
<rick_h_> elox: yea, so that could either be built into the relation details sent back/forth which would require updating the interfaces used
<rick_h_> elox: are you using bdx's slurm stuff?
<elox> rick_h: Yeah!
<rick_h_> elox: very cool
<rick_h_> elox: so I'd chat with him, he'll be online in a couple of hours (he's US west coast) and see if it makes sense for the interfaces to be updated to add it
<elox> We are applying the slurm work to build a large HPC cluster at Scania (erik_lonroth <-> elox )
<rick_h_> elox: the other way to go is to use a middle-man like telegraf on those
<rick_h_> elox: and have the master get whatever details telegraf exports over their APIs instead
<elox> rick_h: Sure, I think we'll try our best to do it in some way we can to learn more about juju as well.
<rick_h_> elox: cool, I don't know slurm that well but if you want to do a lot of this and the data you need varies a lot then I'd look at something like telegraf as it provides more data. However, if you just need the basics and it's common for all slurm use cases, updating the charm interface and the relation data sent is probably the smallest change you can do
<elox> rick_h: thanx. Where can I find information about telegraf ?
<rick_h_> elox: https://jujucharms.com/telegraf/ it's a subordinate charm that surfaces machine level metrics and info that you can use to wire up to prometheus for data gathering and such
<elox> rick_h: Thanx a lot! It's a lot to take in here. =D
<rick_h_> elox: <3 let me know how it goes
<hml> stickupkid: do we know if the locking issue is linux specific?  or could it happen with windows?
<stickupkid> hml: that i don't know
<rick_h_> hml: can you sudo with windows though?
<rick_h_> hml: I assumed it was ubuntu centric due to the sudo nature
<hml> rick_h_: stickupkid: not sure there is some sort of permission thing though
<hml> stickupkid: so the reproducer is to bootstrap with sudo?  then try to run juju status without it?
<hml> or any command with sudo, then the rest won't work
<stickupkid> to reproduce, run `sudo juju list-controllers` followed by `juju status`
<stickupkid> hml: if you `sudo juju status`, because of the caching involved you can end up causing all sorts of permission issues in `~/.local/share`
<stickupkid> hml: the idea is to just warn on the lock file and if you can't gain access, at least just let people know with better messaging
<stickupkid> hml: it's a slippery road, allowing sudo, as you have to set the permissions on every file we ever save to the filesystem and that's a lot of work
<stickupkid> hml: rick_h_: you can run a windows command using `runas` and set it to run as administrator, I'm looking at how that changes the current windows code
<rick_h_> stickupkid: cool, good to know.
<manadart> stickupkid: Approved #8818 with a comment
<stickupkid> manadart: perfect, thanks
#juju 2018-06-15
<veebers> wallyworld: FYI bug I mentioned in startup that's holding up roll out of snap lxd 3 in the lab: https://bugs.launchpad.net/apparmor/+bug/1777017
<mup> Bug #1777017: snap install lxd doesn't work within a container <AppArmor:New> <https://launchpad.net/bugs/1777017>
<wallyworld> ack
<veebers> Hah, I doubt this Juju was written in Go: https://www.trademe.co.nz/a.aspx?id=1662397527
<veebers> wallyworld: is this statement correct "Starting with Juju 2.0, users can upload resources to the controller or the Juju Charm Store. . ." I thought it was only the charmstore that would take a resource? (maybe the controller does some caching after first deploy etc.?)
<wallyworld> juju attach-resource
<wallyworld> i haven't got the exact workflow mapped out in my head
<wallyworld> but one scenario is to upgrade an existing resource for a deployed charm
<wallyworld> the controller does cache resources
<veebers> wallyworld: ack, thanks
<blahdeblah> Hi.  Anyone able to point me to a decent reference about how juju 2 configures lxd network interfaces on trusty?
<wallyworld> manadart: ^^^^^ did you have any info we could pass to  blahdeblah?
<wallyworld> or jam?
<blahdeblah> thanks wallyworld
<wallyworld> np, sorry i'm not the best person to ask
<manadart> blahdeblah: There is no "reference" as far as I am aware, but I can answer any questions you have.
<blahdeblah> manadart: So I've got a machine where one of the LXD containers doesn't have the network interfaces I expect, and I am trying to work out what actually controls that.  i.e. How does juju decide which network interfaces to give to a container when it creates one on a host?
<manadart> This is also beginning to differ between the 2.3 series and 2.4+ as we accommodate LXD clustering.
<manadart> blahdeblah: So this is container machines as opposed to the LXD provider, yes?
<blahdeblah> Correct; this is 2.3.4 running MAAS provider to drive an Openstack cloud
<blahdeblah> manadart: ^ In case you missed it
<manadart> blahdeblah: Got it, just doing a quick scan of the 2.3 logic :)
<blahdeblah> thanks - appreciated
<manadart> blahdeblah: OK, so there is a series of checks and fallbacks.
<manadart> The parent machine is interrogated for its interfaces and these are passed to the call for creating the container.
<blahdeblah> manadart: Oh, so the config of the parent is what really drives what the LXDs get?
<manadart> It appears that ones without a parent device are assigned the default LXD bridge.
<blahdeblah> What do you mean "parent device"?
<manadart> For example, when no usable interfaces are passed to the create container call, the default is to create "eth0" with the default LXD bridge as its parent.
<blahdeblah> Hmmm - not sure how these ones got set up then; they're neutron gateways which end up with 4-5 "physical" interfaces.
<manadart> And there are containers on the same machine with *different* network config?
<manadart> blahdeblah: Scratch that. LXD differs from KVM. The host machine does not factor in network config when creating a container.
<blahdeblah> aluria: ^
<blahdeblah> manadart: Oh, so what does decide the network config?
<manadart> So when the machine is initialised for LXD containers, the LXD bridge is configured (via file for the older LXD on Trusty) with a 10.0.*.0/24 subnet.
<manadart> The container is created with an "eth0" device that has this bridge as its parent.
<manadart> Now there is a final fallback that will not attempt to create the container with explicit devices, and instead just pass it the default LXD profile.
<manadart> But that is invoked only when there is no bridge device, and I can't see in the logic how this would come about.
<blahdeblah> manadart: that's really odd, because these containers have 3-4 interfaces in most cases, passed through without being part of the usual NAT-ed lxdbr0.
<blahdeblah> And I would have thought that there would need to be some explicit request from something to make it that way.
<blahdeblah> anyway, dinner time for me - aluria will be able to answer any further Qs on this setup
<manadart> blahdeblah: OK. Let me interrogate this further.
<manadart> It would be good to know what version of LXD is on the machine.
<aluria> manadart: hi o/ -- let me move a couple of dhcp agents out from the failing container and I will grab all the info for you
<manadart> Gah. LXD/KVM is the other way around. The host interface *are* passed via network config.
<aluria> manadart: so "lxd" pkg on metal is 2.0.11-0ubuntu1~14.04.4
<manadart> aluria: Thx
<aluria> "neutron agent-list" shows container names of the type "juju-machine-5-lxc-1", except for 22/lxd/8, which was deployed after juju1->juju2 migration -- "juju-958f87-22-lxd-8"
<aluria> for now, I have checked "ip netns" on the container with issues (22/lxd/8) and removed the instances via "neutron dhcp-agent-network-remove" and "neutron dhcp-agent-network-add" somewhere else
<manadart> aluria: So the errant container is gone? Do we know what interfaces it got vs what the host had?
<aluria> manadart: the errant container is still there, but misses eth2; on a different host, a converted lxc->lxd container shows "convert_net2:" on "lxc config device show <lxdname>"
<aluria> manadart: I see 2 possibilities: 1) after the upgrade from juju1->juju2, containers are only created with a couple of nics;  2) we missed an extra constraint so as to create 22/lxd/8 with 3 ifaces (not 2)
<aluria> for now, I've run -> lxc config device add juju-958f87-22-lxd-8 eth2 nic name=eth2 nictype=bridged parent=br1-dmz  // and I see the 3rd iface on the container
<manadart> aluria: It looks like space constraints would limit the devices passed to the container.
<Cynerva> is there a way to specify ec2 availability zones in a bundle?
<Cynerva> i can do it without bundles, with `juju add-machine zone=us-east-1c` or `juju deploy ubuntu --to zone=us-east-1c`
<Cynerva> if i try to use a similar placement directive in a bundle, i get: invalid placement syntax "zone=us-east-1c"
<gman> I seem to be having an issue with juju etcd 3.2.9, is there a way to install a particular version like etcd 2.3.8
<rick_h_> gman: hmm, that'll be up to the charm and where the charm gets the software from
<rick_h_> gman: looking at the charm readme: https://jujucharms.com/etcd/ it looks like you can use a deb or a snap so maybe there's a snap path you can use or maybe the deb is older than the snap and will work for you
<manadart> If anyone would be so kind... https://github.com/juju/juju/pull/8825. Not a big one.
<stickupkid> manadart: looking
<stickupkid> manadart: LGTM - becoming cleaner
<manadart> stickupkid: Yes; quite satisfying. The next one will remove a hoard of code.
<manadart> stickupkid: Thanks.
<stickupkid> nps
<bdx> hello all
<bdx> can we place some priority on stuck models being able to get --force destroyed or something
<bdx> I've got models stuck in "destroying" on every controller
<bdx> I'm not sure creating 5 new controllers and migrating all my non-stuck models to the new controllers is the correct answer
<bdx> sounds like a rats nest
<bdx> possibly a lion's den
<bdx> bear's cave
<bdx> either way
<pmatulis> bdx, maybe check the logs. you might be able to unstick them if you get more information as to what's going on
<pmatulis> controller model logs is what i would look at first
<bdx> pmatulis: right, I have a good idea as to why each is stuck
<bdx> some are stuck bc machines physically left the model without being removed via juju
<bdx> some are stuck because x-model relations wouldn't remove
<bdx> some are stuck for other reasons
<bdx> either way
<bdx> I've got a week worth of things to do today
<bdx> and everyday
<bdx> chasing these stuck models is not on the agenda
<bdx> nor should it be on anyones
<pmatulis> you can --force remove machines
<bdx> pmatulis: not if they are already physically gone
<bdx> possibly in some cases it works
<bdx> in others not so much
<pmatulis> bdx, did you try?
<bdx> oh my
<bdx> I'm probably 20hrs into trying to get rid of these models
<bdx> giving up
<bdx> I don't care enough to keep chasing this rat tail
<bdx> but I do care enough to bitch about it and make sure it gets nipped
<bdx> pmatulis: thanks for the insight
<rick_h_> bdx: it's on the roadmap for this cycle. we've started the spec and definitely will be getting it going
<bdx> awesome
<bdx> thanks, rick_h
<bdx> _
<rick_h_> bdx: we had some conversations around various cases for this last week and atm we're doing the series upgrade and cluster stuff but it'll come around
<bdx> awesome
<bdx> hey guys
<bdx> is there anyone around who still deals in maintaining the haproxy charm development
<bdx> I've had a PR up for some time that fixes a critical bug
<bdx> if someone could give me a review it would be greatly appreciated
<bdx> maintainership
<bdx> https://code.launchpad.net/~jamesbeedy/charm-haproxy/options_newline
<bdx> thanks
<stokachu> bdx: what's your deal
<stokachu> bdx: https://twitter.com/JamesBeedy/status/1007715556022079488
<stokachu> bdx: what part of the code of conduct do you not understand?
<bdx> I understand the code of conduct
<bdx> If I need to drop a few curse words to get the wheels rolling so be it
<bdx> sue me
<stokachu> ok we're done
<bdx> I'm sorry if I offended you
<stokachu> It's offensive to Ubuntu
#juju 2018-06-16
<KingJ> Is there any sort of 'empty' charm? I want to deploy some subordinate charms to all machines (e.g. lldp, snmp) but don't have a common charm to relate them to. To work around that, I was thinking of deploying an 'empty' charm to all machines that I can use to relate subordinates? If there's a better way to achieve this though i'm open to it!
<BlackDex> KingJ: just use the ubuntu charm :)
<BlackDex> that is as empty as you can get
<BlackDex> the only thing you can configure is an override for the hostname
<KingJ> BlackDex: Perfect! I'll give that a try. Is deploying an 'empty' charm like this then relating the subordinates to it the best way to go about doing this overall, or is there a better way?
<BlackDex> not that i know
<BlackDex> i use it like this also
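A minimal bundle sketch of the workaround above: the ubuntu charm as an empty principal with subordinates related to it. The subordinate charm name, application names, and unit count are illustrative, not taken from the conversation:

```yaml
applications:
  ubuntu:
    charm: cs:ubuntu    # effectively an empty principal charm
    num_units: 3        # one unit per machine the subordinates should land on
  lldp:
    charm: cs:lldpd     # hypothetical subordinate charm name
relations:
  - ["ubuntu", "lldp"]  # subordinate deploys alongside every ubuntu unit
```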
<BlackDex> oke, trying to test something on aws with juju
<BlackDex> how do i get a bigger rootfs?
<BlackDex> my / is just ~7GB
<BlackDex> i need more
<BlackDex> preferably using settings in a bundle file
<KingJ> Is there a way to see a changelog of charm versions?
<BlackDex> oké, i just need to use the right constraint. root-disk that is
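In bundle form, the root-disk constraint mentioned above would look something like this (application name and disk size are illustrative):

```yaml
applications:
  myapp:
    charm: cs:ubuntu
    num_units: 1
    constraints: root-disk=32G   # request a larger root volume on AWS
```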
<BlackDex> KingJ: the only way i know of is checking the sourcecode
<BlackDex> and compare between the different versions
<KingJ> BlackDex: I had a peek, but it's hard to see the relation between the source tree and a given version - not really seeing any tags or branches in the repo that align with the charm version
#juju 2018-06-17
<BlackDex> KingJ: no indeed. would be nice if there is something like a git/bzr hash somewhere
<BlackDex> or better, a list of the charm versions and a link to the commit log :)
<BlackDex> KingJ: I do see that some charms have a repo-info file, but most of them don't
<KingJ> BlackDex: Ahh, shame
<KingJ> When i'm using network spaces, is there a way to get Juju to configure the interfaces to reply on the same interface they received the packet on? When i'm trying to reach a charm outside the subnet and the charm has multiple interfaces, it doesn't seem to use the correct interface.
<KingJ> Actually hm. Could be a haproxy issue. Basically, when I try and hit the keystone API from outside the network, some of the connections timeout. But i've got no problems curl'ing them inside any of the openstack subnets. Hrm.
<BlackDex> KingJ: Are you using MAAS?
<KingJ> BlackDex: Yes
<BlackDex> If so, please check the squid/maas-proxy logs
<KingJ> Ah yeah, that's a point
<BlackDex> i think some traffic is being routed through there
<BlackDex> you need to add the subnet to the no-proxy
<KingJ> I do have no-proxy configured but I wonder if it's being captured transparently
<BlackDex> how have you configured it?
<KingJ> In the Juju model config
<BlackDex> with CIDR?
<KingJ> Yeah
<BlackDex> i think that won't work
<BlackDex> haha
<BlackDex> you have to add every ip
<KingJ> http-proxy/https-proxy aren't set, only apt-http(s)-proxy and no-proxy
<BlackDex> well, that should not be a problem then
<BlackDex> but better check the proxy log of maas if it still want to go through the proxy
<KingJ> And just checking the Squid access logs, not seeing any hits (MaaS proxy disabled)
<BlackDex> hmm
<BlackDex> KingJ: is the network you are coming from known to the receiving end? If not, is the default gateway correctly configured? Sounds like a routing issue
<KingJ> Default gateway is configured correctly, but each interface has a default gateway defined.
<KingJ> I'm somewhat getting past this now, one underlying issue was that not all my charms were bound to the appropriate network spaces. Now most of them can reach their relations via the same subnet, so no routing issues.
<KingJ> And at least going on the public subnet from outside seems to route correctly back?
<thumper> morning
#juju 2020-06-08
<wallyworld> hpidcock: in the TestArrayArraysUnorderedWithExpected test, why doesn't jc.NewMultiChecker().Add("[1]", jc.SameContents) work? why does it need the additional jc.ExpectedValue arg? To me we should just be able to specify jc.SameContents and job done
<hpidcock> wallyworld: because some checkers don't take any arguments, some checkers take more than one
<hpidcock> so this way the syntax matches c.Check
<hpidcock> and it means you declare the compared value to be different than what's in the object you are comparing
<wallyworld> ok, that makes sense
<wallyworld> looks nice
<hpidcock> wallyworld: yeah we can iterate on it to make sure it's useful for all cases in the future
<wallyworld> yup, land and iterate :-)
<oyrogerg> Hi everyone, quite new to Juju. I am comparing two different bundle files for deploying the same final system, and would like to know about 'gui-x' and 'gui-y' settings. Do they do much of interest to a non-GUI user?
<oyrogerg> (Meanwhile, I see there's a Discourse. Searching there...)
<pmatulis> oyrogerg, they are a GUI-only thing
<oyrogerg> Thanks pmatulis, I will ignore those differences then.
<pmatulis> https://juju.is/docs/bundle-reference
<pmatulis> oyrogerg, ^^^
<oyrogerg> Even better, thank you.
<hpidcock> PR please https://github.com/juju/juju/pull/11676
<thumper> landings seems to be faster and less buggy
<thumper> based on my sample size of one
 * thumper looks at PR
<thumper> hpidcock: looks good
<hpidcock> thumper: awesome thanks
<thumper> hpidcock: looking at the 2.7 merge updating gorilla ws, did we have a fix for this yet?
<tlm> wallyworld: got 2 minute for HO ?
<wallyworld> tlm: in another meeting, can ping when done?
<tlm> no rush might see if hpidcock wants to assist ?
<manadart> hml or stickupkid_ or achilleasa: https://github.com/juju/juju/pull/11678
<manadart> I'm EoD. If it's good, feel free to merge it.
<hml> manadart: i'll take a look
<manadart> hml: I'll add a new test specifically for the port change life.
<hml> manadart:  to the pr?
<manadart> hml: Yeah.
<hml> manadart:  ack, working on qa now
<hml> manadart:  that's weird, wonder why everything started failing with build fail in the middle of the make-check-juju
<manadart> hml: Added that test.
<hml> manadart:  k, kicked off !!build!! a bit ago, see how it ran
<manadart> hml: Great. See you tomorrow.
<hml> manadart:  g'night
<tlm> 2.8 => develop PR https://github.com/juju/juju/pull/11679
<tlm> another PR as well https://github.com/juju/juju/pull/11671
#juju 2020-06-09
<pmatulis> apparently when a LXD-backed controller is created there are some LXD profiles that come with it. one is supposedly called 'juju-default'. where can i see these?
<wallyworld> pmatulis: off hand i can't remember but heather will know for sure
<wallyworld> tlm: got 5?
<Chipaca> is 2.7.1 the right cutoff for deciding whether relation-get/relation-set know about --app ?
<Chipaca> https://discourse.juju.is/t/juju-2-7-1-release-notes/2495 seems to imply that but i'm not positive; 2.7.2 already talks about fixing bugs in relation-get --app :)
<Chipaca> and i have reports of it not working on 2.7.0
 * Chipaca realises there aren't many options left in there :)
<tlm> wallyworld: back now
<wallyworld> tlm: dog ok?
<tlm> for the moment :)
<tlm> HO wallyworld  ?
<wallyworld> Chipaca: not sure off hand - but have you considered upgrading to 2.7.6 where any such issues will be mot?
<wallyworld> tlm: i answered my own question, so no need for HO
<tlm> ah cool
<wallyworld> *moot
<Chipaca> wallyworld: i'm running 2.8 here :) but some heathens dare run older versions and still want to run charms
<wallyworld> they need to upgrade cause really, anything < 2.7.6 is not so much supported
<wallyworld> if there's a bug we'd want them to upgrade
<wallyworld> relation-set app came in in 2.7
<wallyworld> but there were early bugs from memory
<wallyworld> so if they use < 2.7.6 they may not be happy
<Chipaca> they're aware it's an issue, but they can't upgrade everywhere at this time
<Chipaca> wallyworld: but, good to know relation-get and relation-set got it at different times
<Chipaca> er, or maybe i misunderstood you
<wallyworld> relation-get/set would have been done at the same time
<Chipaca> ok :)
<wallyworld> it was part of an overall feature
<Chipaca> to be clear i don't think they're using the feature, it's just that we always sent --app=<bool> and it trips up on them
<Chipaca> ok, back to sleep.
<tlm> wallyworld: got 5 min for HO ?
<wallyworld> ok
<wallyworld> manadart: there's been a field critical bug, here's a PR i need your input on; not sure about my new link layer device removal code, i think we need to delete them even if a parent device (future PR will want to compose as a model op as well but for now...). also, i am unclear why we would have excluded containers in the first place, that code seems like it was done way back in 2013 so who knows https://github.com/juju/juju/pull/11682
<manadart> wallyworld: Hmm. That code path is used now by both the instance-poller and the machiner. So I think this will cause flapping if there is a device seen by the machine, but not the provider.
<manadart> I am actually working on this at present in the context of another bug.
<wallyworld> ok, no problem, i can abandon the PR if appropriate
<wallyworld> the quick fix was to remove the container check
<wallyworld> but then we got orphaned entries
<wallyworld> i think we need to then separate out provider vs machine devices, like we do for addresses?
<wallyworld> or at least tag where they came from
<wallyworld> so we can then delete machine vs provider devices properly
<wallyworld> and we aggregate in the getter?
<manadart> wallyworld: Prior patches have introduced the idea of device address origin. This way the provider can effectively relinquish devices to the machiner, and the machiner can safely remove them if not observed.
<manadart> That's the bit I'm on at present.
<wallyworld> sounds good, i was sort of hinting at something similar i think. so this new work is in 2.8 right? we need a quick fix for 2.7
<manadart> wallyworld: I will see if I can back-port.
<wallyworld> ok, ty, i should close my PR then
<wallyworld> did you want to include the small actual fix in yours? ie remove the container check? or i can slim mine down to just do that bit
<achilleasa> manadart: can you take a look at https://github.com/juju/juju/pull/11670?
<manadart> achilleasa: I'm working on a field-critical bug ATM.
<achilleasa> manadart: nw, I will look into another bug for now
<manadart> achilleasa: Actually, can you hop in Daily?
<stickupkid> manadart, CR when you get a chance https://github.com/juju/juju/pull/11684
<jam> manadart or achilleasa did either of you see: https://bugs.launchpad.net/juju/+bug/1882127
<mup> Bug #1882127: nil pointer dereference in 'relation-set' <juju:Triaged> <juju 2.7:New> <https://launchpad.net/bugs/1882127>
<manadart> jam: Yeah, I'll take a look when I've a moment.
<achilleasa> jam: that's an odd one. It means that either r.Settings or r.ApplicationSettings returned nil without an error (https://github.com/juju/juju/blob/2.7/worker/uniter/runner/jujuc/relation-set.go#L150-L160). Is it confirmed that we saw this in 2.7.6?
<jam> achilleasa, that is what was reported, I haven't reproduced it
<achilleasa> asking because we did some work in 2.7.6 to fix some of the issues accessing application data (inc. fixing some err var shadowing issues)
<achilleasa> I will take a closer look later today
<jam> achilleasa, thanks
<achilleasa> jam: also, we do have integration tests that check access to unit/app data in various scenarios: https://github.com/juju/juju/blob/develop/tests/suites/relations/relation_data_exchange.sh
<stickupkid> achilleasa, I bet you this is a Settings(nil) issue
<achilleasa> stickupkid: they should be getting an empty map or an error
<achilleasa> I suspect based on the bug report that this is a nil returned when fetching the ApplicationData
<stickupkid> achilleasa, yeah, i don't get it yet, doing more looking
<stickupkid> achilleasa, I'm just putting my flag in the sand
<achilleasa> if you got a bit of time, can you run the integration tests against 2.7.6?
<achilleasa> (the one linked above)
<jam> achilleasa, stickupkid : yeah given the Python code is grabbing the application data, it would be that ApplicationSettings is returning nil, nil
<stickupkid> achilleasa, sure, let me sort that out
<stickupkid> achilleasa, do it after daily
<stickupkid> there are a lot of places that check that certain things exist in a map, but not actually the value of said item in the map is valid
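A minimal illustration of the failure mode being debated above (plain Go, not the actual juju code): a nil map is indistinguishable from an empty one on reads, so a function returning `(nil, nil)` looks like success to its caller until something writes to or dereferences the result. The variable names here are made up for the sketch.

```go
package main

import "fmt"

func main() {
	// Simulates a settings fetch that returned (nil, nil): reads "work",
	// so callers see no error, and the nil propagates silently.
	var settings map[string]string // nil, as if the fetch quietly failed
	v, ok := settings["key"]       // reading a nil map is legal in Go
	fmt.Printf("v=%q ok=%v nil=%v\n", v, ok, settings == nil)
	// settings["key"] = "boom"    // writing, however, would panic:
	//                             // "assignment to entry in nil map"
}
```

This is why checking only for a non-nil error (or for key presence) is not enough; the map itself must be checked before use.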
<jam> stickupkid, achilleasa for more context, it is a k8s charm where this is happening
<jam> specifically this charm: https://github.com/davigar15/zookeeper/tree/master/src happening in this code: https://github.com/davigar15/zookeeper/blob/master/mod/zookeeper/zookeeper_provides.py
<stickupkid> petevg, I think we can mark it as wontfix tbh https://bugs.launchpad.net/juju/+bug/1882571
<mup> Bug #1882571: juju schema doesn't contain docs <juju:New> <https://launchpad.net/bugs/1882571>
<petevg> stickupkid: agreed. Thx for the comment on it :-)
<pmatulis> hml, i've been told you may help me. apparently when a LXD-backed controller is created there are some LXD profiles that come with it. one is supposedly called 'juju-default'. where can i see these?
<hml> pmatulis:  on the machine the lxd containers are running on
<hml> pmatulis:  there is a profile created per model.  so by default 'juju-default' and 'juju-controller'
<stickupkid> pmatulis, lxc profile list and lxc profile show juju-default
<pmatulis> hml, stickupkid: ok, so juju effectively creates a lxc profile (lxc profile create) along for every model that it creates and will apply that profile for that model?
<hml> pmatulis:  yes.  the profile is applied to all machines in that model
<hml> pmatulis:  the default profile is also used
<hml> for all lxd machines
<pmatulis> hml, i don't understand. two profiles are used?
<hml> pmatulis:  the default one is whatever the user put into the profile.  and contains network interfaces by default
<hml> pmatulis:  the juju one is what juju requires
<hml> pmatulis:  yes, 2 profiles minimum
<pmatulis> hml, wow ok thanks
<stickupkid> pmatulis, think of them as a stack of profiles to apply. You concat them together to give you the container you want/need
<pmatulis> stickupkid, so that default profile should normally never be deleted
<stickupkid> pmatulis, indeed because we use that to find the bridge name
<pmatulis> would an error occur if a container was created w/o the 'default' profile? i mean, even for pure lxd (not juju)
<stickupkid> pmatulis, not sure tbh, would need to test that scenario
<pmatulis> stickupkid, cheers thanks
<hml> stickupkid: doesnât juju verify the network info is in the default profile?  i canât remember if it rewrites the default profile if its not
<stickupkid> hml, it never re-writes the default profile, it just queries it
<hml> stickupkid: so what happens if there isnât a nic defined then?
<stickupkid> hml, there is logic around this, will check
<stickupkid> hml, pmatulis we expect default profile in a lot of places
<stickupkid> hml, oh wow, we do update the default, I was wrong sorry
<stickupkid> https://github.com/juju/juju/blob/develop/container/lxd/network.go#L183
<hml> stickupkid: i thought so… remember being in that code a long time ago
<stickupkid> hml, forgot about that, been a long time
<pmatulis> stickupkid, ok thanks for that follow up
<pmatulis> stickupkid, and if a charm itself included a profile the associated container would use a stack of three profiles?
<stickupkid> pmatulis, correct
<pmatulis> got it thx
<stickupkid> achilleasa, I can't reproduce the bug
<stickupkid> achilleasa, well with the integration tests
<achilleasa> even on 2.7.5?
<stickupkid> achilleasa, 2.7.5 returns     |     |
<stickupkid>     | ERROR invalid value "db:2" for option -r: relation not found
<stickupkid>     | ERROR permission denied
<stickupkid>     | ERROR permission denied
<achilleasa> stickupkid: yeah... that was one of the things that got fixed in 2.7.6
<stickupkid> achilleasa, ho?
<achilleasa> stickupkid: omw
<kelvinliu> wallyworld: ho for azure stuff?
#juju 2020-06-10
<kelvinliu> wallyworld: I can't reproduce the missing sku name even on the 2.6 model upgraded from 2.4
<wallyworld> kelvinliu: hmmm, not sure. i thought i reproduced it a while back. maybe ask him in #juju if he's tried it again with 2.7?
<kelvinliu> yep
<kelvinliu> wallyworld: I think I found the problem here, the model has to be upgraded from 2.3 but not 2.4+.
<kelvinliu> wallyworld: im wondering we should land the fix to 2.7 or?
<wallyworld> kelvinliu: 2.8 is fine
<kelvinliu> ok, cool
<wallyworld> i thought it was 2.4, but maybe we changed sku type in 2.4 and not 2.5
<wallyworld> thumper: so yeah, i reproduced the issue where upgrade-charm hook runs "for no reason" deploying a bundle. WTF. need to figure out why
<tlm> ian got 5 min for HO ?
<tlm> wallyworld:
<wallyworld> sure
<thumper> wallyworld: cool, being able to reproduce is one of the hardest issues
<wallyworld> yup
<thumper> timClicks_: we should create a simple doc around agent ratelimiting with examples
<thumper> something to add to the queue
<kelvinliu> wallyworld:  it's not related with the SKU change at all. It's because we use unmanaged storage which requires a storage account but we changed to use managed storage for all model after 2.3. so no storage account anymore.
<kelvinliu> for no storage account case, we set the SKU.Name to `Aligned`, but set an empty SKU struct for models that do have a storage account (the one upgraded from 2.3)
<wallyworld> kelvinliu: ah i see, nice pickup
<kelvinliu> wallyworld: ah, can i grab u 2mins?
<wallyworld> sure
<flxfoo> Hi there
<flxfoo> I am trying to patch an interface (solr). I have one first issue with current version, calling `add-relation` works fine, but `remove-relation` breaks with ('Unable to determine default scope: no current hook or global scope'
<flxfoo> is that something that is on the interface side or something missing from the charm (which I doubt) ?
<flxfoo> sorry working in the `apache-solr` charm not the interface.
<stickupkid> I guess we use labels on PRs now
<stickupkid> :)
<achilleasa> manadart: on develop, is the core/network.InterfaceInfo populated by the machiner? I want to add a new field to indicate if this is a virtual port (for OVS)
<achilleasa> does that mean that I need to bump the relevant APIs or are they already bumped on develop?
<manadart> achilleasa: Yes, it will be populated by the machiner and instance-poller. I don't think they have been bumped on develop since the 2.8 release, but I may be wrong.
<manadart> achilleasa: Technically it should be bumped right stickupkid? I think there have been instances where we have gotten away with omitempty.
<achilleasa> manadart: stickupkid weren't they bumped when Origin was introduced?
<manadart> achilleasa: Does it matter if it is a virtual port? InterfaceInfo already has IsVirtual returning true if the NIC is VLAN-tagged.
<manadart> achilleasa: Origin went out with 2.8.0.
<achilleasa> manadart: when starting kvm containers you need to pass "<virtualport type='openvswitch' />" to bridge to OVS
<manadart> achilleasa: Righto.
<achilleasa> to get that info in kvm/libvirt I can either add it to NetworkInfo which is decorated as a libvirt.Interface
<achilleasa> or add it to the decorated type as an extra arg
<achilleasa> which means that I need to fetch the OVS bridges and populate that on the fly when I boot the kvm container
<achilleasa> I am doing the latter but thought that maybe if I added this info to NetworkInfo it would be more elegant
<achilleasa> (also my NIC does not use VLANs so I can't use IsVirtual)
<stickupkid> achilleasa, manadart yeah, but I'd need to check
<manadart> achilleasa: We'll need it to flow back and forth right? In to state via InterfaceInfo->NetworkInfo->LinkLayerDevice and back the other way as network config?
<achilleasa> manadart: yes... considerably more effort than detecting on the fly
<achilleasa> but I think it's better from a modeling perspective
<manadart> achilleasa: Yes.
<achilleasa> given that brctl skips ovs bridges I think it's better to have a more accurate view on the hardware (virtual or not)
<achilleasa> plus, getting it in via the machiner will also allow me to use it in the lxd PR
 * manadart nods.
<achilleasa> ok, I will do that then and amend my lxd PR afterwards
<achilleasa> we might be able to get away with omitempty here though given that the machiner will report it correctly eventually, right?
<achilleasa> (so no bump would be required?)
<stickupkid> anybody know where this package went in 2.8 https://github.com/juju/juju/tree/2.7/permission
<stickupkid> ah core
<stickupkid> I wish we had a job for `go mod tidy` <--
<stickupkid> manadart, achilleasa https://github.com/juju/juju/pull/11689
<achilleasa> stickupkid: I can review this in about 1h if that's OK. Trying to get a fix for the panic for the next 2.7 sha. Can you take a quick look at https://github.com/juju/bundlechanges/pull/62?
<stickupkid> achilleasa, this is super super low priority, yeah I'll take a look
<stickupkid> achilleasa, I love how the documentation for SplitN is wrong "The count determines the number of substrings to return"
<stickupkid> achilleasa, well that's a lie, we asked for 2, you gave us 1
<achilleasa> stickupkid: it's all best effort :D
<stickupkid> achilleasa, SplitAtMaxNButProbablyWillBeLess()
<manadart> stickupkid achilleasa: Can I get a review of https://github.com/juju/juju/pull/11690 ? It's just regenerated mocks.
<stickupkid> manadart, ....
<stickupkid> ah, you've added 2.7 as well
<flxfoo> endpoint_from_flag() would return an object from a given flag that can be found in the logs. for endpoint_from_name() what would be the name form?
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/11692?
<achilleasa> petevg: can you also take a look at the warning output and let me know if the wording is OK or tweaking is needed? ^^
<petevg> achilleasa: the warning is a little unclear on the consequences. Will the endpoints show up as dupes, or not show up at all?
<achilleasa> petevg: take a look at the end of the QA steps; basically, you will see them be removed and added
<petevg> achilleasa: got it. I suggested a small change to the wording to reflect that.
<achilleasa> hml: can you take a look at ^
<hml> achilleasa:  i'm knee deep in other stuff… is there a time rush?  i can look later
<achilleasa> hml: no but I was hoping to land this some time today so it's included in the next 2.7 sha
<achilleasa> if you can take a look later and hit merge if it's ok that would be great
<hml> achilleasa:  will do
<stickupkid> I wish `juju upgrade-controller --build-agent` had a `--splat` flag, which would also update the one in the PATH
<skay> postgresql charm question. I've changed the backup_dir setting, but the DUMP_DIR in the pg_backup_job script still points to the old setting
<skay> shouldn't it change?
<skay> aha. it might be a bug that was fixed and I haven't upgraded the charm
#juju 2020-06-11
<wallyworld> tlm: did you find the issue with the controller upgrade?
<tlm> not yet wallyworld just finished having some lunch
<wallyworld> no worries
<tlm> wallyworld: got time for another HO, This isn't going great
<wallyworld> tlm: sure, after current meeting,soon
<wallyworld> tlm: free now
<tlm> wallyworld: think I sorted myself out and found the problem
<wallyworld> awesome ok
<tlm> the trick was to pull a little more hair
<wallyworld> glad it was you cause i have none
<tlm> i've noticed
<wallyworld> \o/
<kelvinliu> thumper: or anyone got one minute to take a look?  https://github.com/juju/juju/pull/11694
<wallyworld> looking
<kelvinliu> ty wallyworld
<wallyworld> kelvinliu: we were going to land in 2.7?
<kelvinliu> will back port to 2.7 in next PR
<wallyworld> kelvinliu: awesome, lgtm. great to see this fixed
<kelvinliu> ta
<kelvinliu> wallyworld: this one goes to 2.7 https://github.com/juju/juju/pull/11695
<achilleasa> manadart: when we convert the interface info into LinkLayerDeviceArgs (setLinkLayerDevicesAndAddresses in networkconfigapi.go), why is only a subset of the information persisted?
<achilleasa> for example, shouldn't the origin be persisted?
<achilleasa> (I am on develop)
<stickupkid> achilleasa, origin was never extended to its fullest
<manadart> achilleasa: Yes. This patch will set it for the provider: https://github.com/juju/juju/pull/11683
<manadart> I have some work locally to add. Should be up for review today.
<achilleasa> cool. I will also be adding the virtual port type to the link layer devices doc
<stickupkid> I wish juju passed context.Context through the stack
<manadart> achilleasa: Was just looking at bringing 2.7 forward again. The patches are 11695, 11682 and 11682. The only one we want is the last one (your bundle-diff changes).
<manadart> I had a crack, but couldn't update the dependency. You mentioned this in standup, so do you want to bring it into 2.8?
<achilleasa> manadart: you mean to cherry-pick it and land it standalone to 2.8?
<achilleasa> manadart: doing the ovs detection as we discussed allows the bridge policy to work out its magic and the lxd containers get assigned to the bridge. However, I noticed that when the container gets removed, the veth device remains attached to my switch; guess we need to fix that as well
<manadart> achilleasa: Do veth devices usually go away immediately? This is only for the OVS case?
<manadart> achilleasa: And yes to the question above, just cherry-pick forward.
<achilleasa> manadart: I happened to notice this for OVS; I will check a normal lxd to see what happens
<achilleasa> btw, you can review https://github.com/juju/juju/pull/11697 instead of the LXD one which I will close now
<achilleasa> manadart: or stickupkid https://github.com/juju/juju/pull/11698 (forward port of diff-bundle fix and a drive-by for a spurious warning by lxd container)
<achilleasa> manadart: the lingering veth issue only seems to affect the OVS bridge; the nic is dropped as expected from lxdbr0
<achilleasa> quick sanity check on a clean 2.8 -> develop PR https://github.com/juju/juju/pull/11699
<stickupkid> how do you know if a charm has been upgraded from status
<stickupkid> is there a revision somewhere
<stickupkid> charm-rev
<achilleasa> stickupkid: removed the gitignore bit; can you take another look?
<stickupkid> hml, https://github.com/juju/juju/pull/11700
<hml> stickupkid: suggest an upgrade-charm hook as well
<stickupkid> hml, yeah I'll add, it takes time haha
<hml> at first quick glance
<stickupkid> we should totally point the lxd image server to the host image server... we download the images twice
<wallyworld> hml: does the latest comment on bug 1829393 look like it's something we need to be worried about?
<mup> Bug #1829393: model upgrade tries to upgrade the lxd profile of kvms <juju:Fix Released by hmlanigan> <https://launchpad.net/bugs/1829393>
<hml> wallyworld:  reading
<hml> wallyworld:  something to fix… looks like a situation we haven't seen before… not a breakage of the bug fixed at first glance
<wallyworld> i wonder if a blocker for 2.7.7, i guess there's no workaround
<hml> wallyworld:  no work around is coming to mind right now
<hml> wallyworld: how often do we have an lxd cloud with manual kvm machines?
<wallyworld> shrug
<wallyworld> how big is the fix? can we do it quickly?
<hml> wallyworld:  looking
<hml> wallyworld:  need to thread a check for manual machines thru to the instancemutater… from state to the worker. in theory not too difficult.   pondering if fix to deploy needed as well…
<hml> wallyworld:  nope, since manual machines don't go thru the provisioner afaik
<wallyworld> hml: "nope" as in no change needed to deploy as well?
<hml> wallyworld: i do not believe so.  would have to double check
<wallyworld> k
<timClicks> where are the default machine constraints specified? they're not available via get-model-constraints
#juju 2020-06-12
<pmatulis> i remember asking about that a year ago. iirc, they are provider-dependent
<timClicks> pmatulis: ty
<timClicks> that's the conclusion I have come to
<wallyworld> thumper: https://github.com/juju/juju/pull/11703
 * thumper headdesks at 2.8 branch needing apiserver to test state package
 * thumper looks at pr
<thumper> ignores
<thumper> 15 files?
<wallyworld> small changes
 * thumper nods
<thumper> ah FFS
 * thumper wonders about 2.7 all watcher bits
<thumper> wallyworld: this isn't going to work
<thumper> did you want to jump in a HO to discuss
<wallyworld> sure
<thumper> 1+1
<wallyworld> thumper: tweaked the cache lookup, just need to test with libjuju
<wallyworld> and need to fix one unit test
<thumper> wallyworld: is it ready for a rereview?
<wallyworld> thumper: HO?
<thumper> sure
<thumper> hpidcock: did your new multichecker land in 2.7?
<thumper> hpidcock: I have a perfect usecase for it
<hpidcock> thumper: I don't think so
<thumper> hpidcock: boo
<hpidcock> Feel free to update the juju/testing dep
<thumper> hpidcock: hmm...
<thumper> hpidcock: may well do
<thumper> hangon
<thumper> I'm in 2.8
<thumper> yay
<thumper> hpidcock: what is it called again
<hpidcock> jc.NewMultiChecker()
<hpidcock> there is an example of its use somewhere in 2.8
<thumper> found it
<thumper> now I need to work out how to get it to ignore the right field
<thumper> hpidcock: got 2 minutes to help out?
<thumper> I'm not sure how to map this thing through
<hpidcock> sure
<hpidcock> standup?
<hpidcock> thumper:^
<thumper> hpidcock: yeah
<wallyworld> thumper: you good with the PR then? i want to land it so i can forward port
<thumper> wallyworld: sorry, will do another quick pass over
<wallyworld> all good
<thumper> and you push while I'm looking?
<wallyworld> thumper: sorry, there was a go fmt thing
<wallyworld> an extra ' '
<thumper> approved with a question
 * thumper is done and heading off now
<thumper> leaving with 15 failing tests in state package
<thumper> get to them next week
<thumper> later peeps
<achilleasa> manadart: to fix the lingering port issue with ovs we could override the port name via the 'veth.pair' option to a known (based on the container id or similar) value so we can remove it when we destroy the container
<manadart> achilleasa: Yep.
<stickupkid> yay, managed to replicate the LXD bug, finally
<achilleasa> now, we have deterministic iface naming for lxd veths ;-) https://pastebin.canonical.com/p/ysbqGHWJVV/
<achilleasa> manadart: stickupkid comments/concerns about format? ^^ (it's "machID-ifaceIdx" where machID has / converted to _ to distinguish from last dash)
<stickupkid> achilleasa, i'm never sure if you even need _, but otherwise looks ok to me
<stickupkid> achilleasa, i.e. 0lxd2-0
<stickupkid> achilleasa, lxd is the separator
<stickupkid> achilleasa, like doing `1_,_2_,_3` for csv ;-)
<manadart> achilleasa: Agree with stickupkid. This is cool BTW.
<manadart> achilleasa: This will apply to all LXD veths created for Juju right? not just connected to OVS?
<achilleasa> stickupkid: makes sense; I will just trim it off. manadart yes; I wanted to put a juju prefix but there is a 16 char limit so I tried to keep it shorter
<manadart> achilleasa: I love that "ip a" will now give a less opaque picture of what's going on.
<manadart> achilleasa: And in the state link-layer data for that matter.
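A sketch of the naming scheme just discussed, as a standalone Go helper (the function name is hypothetical, not juju's actual implementation): "/" in the container machine ID is mapped to "_", so the final "-" before the interface index stays unambiguous, and typical names remain well under the 16-character kernel limit for interface names.

```go
package main

import (
	"fmt"
	"strings"
)

// vethName is a hypothetical helper mirroring the "machID-ifaceIdx"
// scheme: "/" in the container machine ID becomes "_" so the last "-"
// cleanly separates the interface index.
func vethName(machineID string, ifaceIdx int) string {
	return fmt.Sprintf("%s-%d", strings.ReplaceAll(machineID, "/", "_"), ifaceIdx)
}

func main() {
	fmt.Println(vethName("0/lxd/2", 0)) // 0_lxd_2-0 (9 chars, within the limit)
}
```

With a deterministic name like this, the veth can be found and removed from the OVS bridge when the container is destroyed, fixing the lingering-port issue.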
<hml> stickupkid: pr review changes made to 11701.  i'm finalizing k8s testing now
<stickupkid> hml, been looking
<hml> stickupkid: testing my k8s myself
<stickupkid> hml, approved again
<hml> stickupkid:  cool, ty
<hml> stickupkid: ffs: https://pastebin.canonical.com/p/J3Tx9pF6Wc/. will have to fix this
<hml> stickupkid: and k8s is now failing to install *sigh
<stickupkid> hml, weird
<hml>  stickupkid actually no… the LXDProfileRequire doesn't check machine for KVM or manual etc… only if the charm has a profile in it
<hml> *facepalm*
<hml> not sure on the k8s change yet
<stickupkid> hml, if you've got 5 minutes before my EOD around https://github.com/juju/juju/pull/11700
<hml> stickupkid: had a question for you on there… forgot to click send.  ho?
#juju 2020-06-14
<thumper> make check in the 2.8 branch now doesn't seem to be running all the tests
