#juju 2012-01-02
<nijaba> Good morning
 * SpamapS shakes the rust off his IRC client
<nijaba> Hey SpamapS, Happy new year
<SpamapS> likewise
 * SpamapS has not even been reading email.. slogging through it now
<nijaba> SpamapS: no worries.
#juju 2012-01-03
<rog> mornin' all, and happy new year!
<mpl> rog: same to you Roger :)
<rog> mpl: hiya
<nijaba> HNY rog & all
<koolhead11> hello nijaba
<nijaba> Hey koolhead11!  hope you had a nice break
<koolhead11> yes sir. :)
<TheMue> A stormy HNY from Germany too. ;) I hope the trees in front of our house will survive those gusts.
<koolhead11> TheMue:
<koolhead11> :P
<TheMue> koolhead11: Hey   :S
<TheMue> koolhead11: But so far everything is ok. ;)
<koolhead17> TheMue: that sounds good.
<TheMue> koolhead17: yep
<freeflying> wondering how can I get juju + lxc works on a laptop, anyone has any clues?
<therve> freeflying, https://juju.ubuntu.com/CharmSchool should have the necessary bits
<freeflying> therve: I did follow what that page tells, but no luck at all on oneiric; bootstrap is fine, but afterwards all machines are just pending there
<therve> freeflying, did you reboot in the process?
<freeflying> therve: after libvirt-bin installed, yes
<therve> I don't know then
<therve> it used to work at least
<jcastro> woo, back to the grind folks!
<koolhead17> hey jcastro :)
<m_3> morning gang
<marcoceppi> Morning
<freeflying> still have problem with juju + lxc on laptop, container started successfully after deploy, but juju status shows them in pending, can ssh into the container, anyone has ideas on this?
<koolhead17> freeflying: it takes some time i suppose :)
<freeflying> koolhead17: usually how long does it take? :)
<koolhead17> freeflying: pastebin juju -v status
<koolhead17> freeflying: on machine like mine with 2 gigs RAM long enough :P
<freeflying> koolhead17: http://paste.ubuntu.com/791723/
<freeflying> koolhead17: I'm going to leave my laptop running alone there :), and get into bed, hope everything will be going as expected in 8 hrs :P
<koolhead17> freeflying: another thing you can do is check log inside your data-dir
<koolhead17> freeflying: hehehe. i guess it will take less time :P
<koolhead17> freeflying: if you're interested in debugging i can show u the way :P
<freeflying> koolhead17: I'd love to, since I'm going to give a talk on juju in a couple of days at a conference :P
<koolhead17> jcastro: won't it be a good idea to modify the documentation and add some starred note about the memory and time it takes for juju to bring up an instance?
<jcastro> yeah, I need to talk to hazmat about docs
<koolhead17> freeflying: awesome, well in that case there are other folks here who can help you as they are more knowledgeable than me :P
<jcastro> freeflying: is this the first time you're running it?
<jcastro> the first time takes a while
<jcastro> after that when everything is cached it's like a few seconds for me
<freeflying> jcastro: actually I tried before, but not sure if I gave it enough time before I destroyed them :)
<koolhead17> freeflying: one simple suggestion
<koolhead17> try installing only mysql charm
<koolhead17> and unless/until it comes in running mode dont deploy wordpress charm
<koolhead17> i remember facing the same thing my first time :P
<freeflying> koolhead17: ok, just destroyed previous environment, and re-deploy mysql only
<koolhead17> freeflying: cool
<freeflying> koolhead17: how can I check whether the charm has been deployed into the container?
<koolhead17> freeflying: juju -v status will give you information about the container <id>
<koolhead17> in the environments.yaml you must have provided juju data-dir
<koolhead17> go there
<koolhead17> you will find something like <username>-sample
<koolhead17> inside there u will have a directory units
<koolhead17> it has all the meat u require :P
<hazmat> jcastro, pong
<freeflying> koolhead17: sshed into the container just deployed (not with juju ssh), and nothing running inside the container shows mysql will be deployed
<koolhead17> freeflying: is mysql charm finally running?
<freeflying> koolhead17: no, juju status told its still in pending
<hazmat> freeflying, do you see the containers twice in the output of lxc-ls ?
<hazmat> lxc-ls shows all extant containers, and does a separate line for running containers (so its listed twice if running)
<freeflying> koolhead17: yes
<freeflying> hazmat: yes
<freeflying> freeflying@x200:~$ sudo lxc-ls
<freeflying> freeflying-lxc-0-template  freeflying-lxc-mysql-0
<freeflying> freeflying-lxc-mysql-0
<m_3> marcoceppi: do you have a gist for using the ch downloader/verifier?
<koolhead17> jcastro: around
<marcoceppi> m_3: Not really, I could make one if you like
<m_3> marcoceppi: nah, just wondering if you had one in place already... thanks tho!
<koolhead17> freeflying: did you install Juju from PPA?
<freeflying> koolhead17: yes
<koolhead17> freeflying: juju -v status
<koolhead17> if any luck
<koolhead17> :P
 * koolhead17 being optimist
<hazmat> freeflying, inside of $data-dir/units/mysql-0 there should be logs from the unit agent that might be helpful here
<hazmat> also worthwhile to verify the unit agent is running via  ps aux | grep juju
<hazmat> should show the machine agent for local host and the unit agent for the container
<freeflying> hazmat: only the one for on host is running
<hazmat> freeflying, hmm.. could you pastebin the master-customize.log in $data-dir
<hazmat> freeflying, typically if the unit agent doesn't come up its a problem getting the initial container setup correctly
<koolhead17> nijaba: i have been able to get a few of the issues in the owncloud charm you pointed out fixed :P
<nijaba> koolhead17: sounds great! Tell me when you need another review
<koolhead17> nijaba: by this week end sir :)
<nijaba> koolhead17: looking forward to it
<koolhead17> thanks
<freeflying> hazmat: http://paste.ubuntu.com/791761/
<koolhead17> hazmat: any idea when we're getting done with the juju bug so it works with eucalyptus?
<hazmat> freeflying, it looks like the install failed into the template container
<hazmat>  Hash Sum mismatch and failed to fetch some archives
<jcastro> oh neat, cacti over the break
<jcastro> marcoceppi: thanks for the review
<marcoceppi> jcastro:  Yeah, still writing the full review, there's a lot left to do for that one
 * jcastro nods
<jcastro> I'm just happy to see a guy show up!
<SpamapS> koolhead17: I'm not sure there is a bug in the latest Ubuntu+juju
<SpamapS> koolhead17: IIRC the problem was that txaws was too old on lucid
<koolhead17> SpamapS: so we need to try juju+eucalyptus+oneiric to get that particular doubt/issue whatever fixed
<SpamapS> koolhead17: eucalyptus can be on any platform, but juju should be run on oneiric talking to the eucalyptus box
<SpamapS> We should probably try to get a euca and an openstack setup and run our ftests against it
<koolhead17> SpamapS: that would be awesome.
<koolhead17> SpamapS: also as you mentioned i need to get an oneiric instance running via eucalyptus and get juju configured on that? am i correct sir?
<SpamapS> koolhead17: um, no. ;)
<SpamapS> koolhead17: you just need oneiric *somewhere* that can communicate with eucalyptus
<SpamapS> koolhead17: it could be an instance, or a VM on a laptop, or something else.
 * koolhead17 scratches his head 5 times
<nijaba> SpamapS: do you have time to work on my buildd issue?
<SpamapS> nijaba: sure, remind me where to start.
<nijaba> I'll resend my last email which is a nice summary
<nijaba> SpamapS: see your inbox
 * SpamapS recoils in horror upon seeing his swollen bloated spam infested inbox
<nijaba> SpamapS: I can pastebin my email if you prefer :)
<SpamapS> nijaba: regarding your question, I think I should change the Makefile to only run tests on 'make check' instead of also on 'make install'. This makes sense, make install is run with fakeroot, while make check is not.
 * SpamapS does that
 * nijaba does happy dance
<SpamapS> nijaba: ok, pushed
<SpamapS> nijaba: you should be able to kick off the auto-build from code.launchpad.net/charm-tools
<nijaba> SpamapS: I am, but I need to request a merge of take-trois first
<nijaba> SpamapS: merge review requested
<SpamapS> hm looks good, testing now.. but I just noticed a bug in my code..
<SpamapS> using exit 0 where the test should really return 0 or something. Hrm.
<SpamapS> may have to refactor the tests a bit to stick them each in their own function so they can skip themselves.
<nijaba> ah, true...
<SpamapS> Fine for now though
<SpamapS> tests/helpers/helpers.sh: 39: /home/clint/src/charm/buildd-fixes-take-trois/tests/helpers/test_peer.sh: cannot create /tmp/tmp-juju-log: Permission denied
<SpamapS> looks like that needs to be a mktemp -d
<SpamapS> tests/helpers/helpers.sh: 190: /home/clint/src/charm/buildd-fixes-take-trois/tests/helpers/test_peer.sh: cannot create /tmp/result: Permission denied
<nijaba> SpamapS: it's a file actually, and I wanted to keep the same name over multiple runs
<SpamapS> nijaba: then you'll need to put the username in it
<nijaba> SpamapS: ??
<SpamapS> I can't run "as a test user"
<SpamapS> because I ran as my regular user already
<nijaba> SpamapS: ah, right! will fix
<SpamapS> should probably clean those files up on success
<SpamapS> /tmp/result is also a bad file name. ;)
<nijaba> SpamapS: I truncate on each run
<SpamapS> yeah but you can't just use /tmp/result .. mktemp
<nijaba> SpamapS: you are right
<SpamapS> (and then make sure that file is cleaned up in your trap if it exists)
<SpamapS> Otherwise it works fine once I rm those two files from my main user
<nijaba> SpamapS: ok, changes pushed
<jcastro> m_3: I'm going to give zenoss one day or so to catch up with the new years and then I'm going to ask for a call this week or next
<SpamapS> nijaba: you can't just rm tempfiles after you're done with them. You have to do it in the exit trap.
<SpamapS> nijaba: otherwise an error leaves files sitting in /tmp
<SpamapS> not the end of the world
<SpamapS> but bad form. ;)
<SpamapS> actually I think you're ok here..
<SpamapS> there's no possibility for error.
<SpamapS> unless echo fails. ;)
<SpamapS> nijaba: note that you don't actually need that result file.. you could just | grep
<SpamapS> 2>&1 | grep anyway
<nijaba> SpamapS: noted
<nijaba> thanks
<m_3> jcastro: sounds good on zenoss... anytime's good for me
<m_3> jcastro: maybe before budapest tho?
<jcastro> yeah so, this week
<m_3> jcastro: should I talk to the guys I know there beforehand?
<jcastro> oh I didn't know you knew people
 * m_3 gets around
<jcastro> m_3: sigh, I failed to include you on CC
<jcastro> fwding the info now
<m_3> (beach boys stuck in my head now)
<mchenetz> Good afternoon and Happy Holidays!
<jcastro> hey mchenetz!
<jcastro> how were your holidays?
<mchenetz> jcastro: they were good and yours?
<jcastro> good good!
<mchenetz> I was busy during the holidays… However, i am getting back to the OSX implementation now
<jcastro> SpamapS: Got time soonish?
<SpamapS> jcastro: now's good
<jcastro> SpamapS: I just sent you a mail with the text.
<jcastro> G+?
<SpamapS> sure let me grab my G+ machine
<SpamapS> jcastro: invite when ready
<SpamapS> I hate technology
<jcastro> SpamapS: I can hear the feedback when I yell
<jcastro> so I think it's getting to your laptop
<jcastro> phone?
<jcastro> or irc is fine
<EvilBill> Wow, back here from the holidays.
<EvilBill> Big fun.
<SpamapS> EvilBill: w/b
#juju 2012-01-04
<SpamapS> hrm.. I think we need a charm helper for dbconfig-common
<SpamapS> marcoceppi: looks like you had to jump through some hoops to turn it off for phpmyadmin
<marcoceppi> SpamapS: It wasn't pleasant
<SpamapS> the whole idea of dbconfig-common actually isn't very pleasant
<SpamapS> its a constant source of bug reports
<marcoceppi> Trouble is, it'll likely be different for each service
<marcoceppi> SpamapS: that's why I wanted to use upstream install for the charm :)
<marcoceppi> The solution was easy enough after playing around for a while
<SpamapS> So I'd like to reduce it down to 'ch_dbconfig_install packagename' and then in the hooks 'ch_dbconfig_config answer1=foo answer2=bar'
<marcoceppi> What would ch_dbconfig_install do?
<SpamapS> marcoceppi: install it but leave it unconfigured
<marcoceppi> How would that be an advantage over this? https://gist.github.com/1557729
<SpamapS> marcoceppi: taking advantage of the packager's work to configure for the database seems useful.
<SpamapS> marcoceppi: I suppose another option would be to ch_dbconfig_disable packagename .. and after that it would be up to the user.
<SpamapS> marcoceppi: one can always assume $packagename/dbconfig-(install|upgrade|reinstall) for disabling
 * SpamapS is still resisting the urge to use git.
<marcoceppi> I like the idea of having it simplified down to "auto-filling" based on package name, so ch_dbconfig_config <package> selection1=value selection2=value which would just translate it out
<marcoceppi> I would have liked to maybe use dbconfig and set db-host, db-user, db-pass when db-admin-relation-changed was fired, but it just seemed too... flaky to me
<marcoceppi> Not very idempotent
<marcoceppi> at least, from what I understand
<SpamapS> postinsts and configs are idempotent by nature
<SpamapS> so you would just re-run debconf-set-selections, and then 'dpkg --reconfigure foo'
<SpamapS> err
<SpamapS> might be --configure foo
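A rough Python sketch of the `ch_dbconfig_config` helper being proposed, assuming the debconf-set-selections-then-reconfigure flow described above. The helper name is from the conversation, but everything else here (the generic `string` debconf type, the noninteractive reconfigure) is an assumption, not actual charm-helpers API; real dbconfig questions use specific types such as boolean or select.

```python
import subprocess

def dbconfig_selections(package, answers):
    """Render debconf preseed lines for a package's dbconfig questions.

    Real questions have specific types (boolean, select, password, ...);
    this sketch uses `string` throughout for simplicity.
    """
    return "".join(
        f"{package} {package}/{question} string {value}\n"
        for question, value in sorted(answers.items())
    )

def ch_dbconfig_config(package, **answers):
    # Hypothetical helper: re-apply the answers, then re-run the package's
    # config scripts. Both steps are idempotent, so hooks can safely repeat
    # them, which is the property discussed above.
    subprocess.run(["debconf-set-selections"],
                   input=dbconfig_selections(package, answers).encode(),
                   check=True)
    subprocess.run(["dpkg-reconfigure", "-f", "noninteractive", package],
                   check=True)
```

(For the record, the command SpamapS is reaching for is `dpkg-reconfigure`, not `dpkg --configure`.)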
 * SpamapS is still re-learning his job after a week detached from the internets.. :-P
<marcoceppi> :)
<smoser> hey.
<smoser> wondering if someone could point me at the most current instructions (or where i should start) for using the orchestra backend
<koolhead11> hi all
<_mup_> juju/ssh-known_hosts r460 committed by jim.baker@canonical.com
<_mup_> orchestra knows the dns_name; no need to look it up (and change corresponding mocks)
<SpamapS> smoser: around now.. I think there are some instructions on wiki.ubuntu.com .. but I forget. They *should* be on https://juju.ubuntu.com/docs
<_mup_> juju/ssh-known_hosts r461 committed by jim.baker@canonical.com
<_mup_> Fixed bad mock setup
<_mup_> juju/ssh-known_hosts r462 committed by jim.baker@canonical.com
<_mup_> Properly mock saving launch state
<hazmat> yikes the review queue is huge
<SpamapS> hazmat: https://code.launchpad.net/~daker/juju/small-fix/+merge/68038 , should we mark that as WIP ?
<hazmat> SpamapS, i suppose. i ended up contributing the unit test there, but lacking the agreement it's not clear that it can go in. it's a trivial one though
<hazmat> yeah
<hazmat> wip
<SpamapS> hazmat: I have a better idea for how to automate destroy-environment btw
<SpamapS> hazmat: -y is scary, like rm -rf. What about requiring  UUID of some kind?
<hazmat> SpamapS, uuid?
<hazmat> like pass in a secret on the cli for confirmation
<hazmat> seems odd, the unix'e way is to just pass a flag
<SpamapS> hazmat: not a secret..
<SpamapS> hazmat: the environment should have a UUID that is generated at bootstrap, and shown in status. Then you have to specify *that* UUID to destroy-environment
<SpamapS> hazmat: that way you don't accidentally destroy an environment you didn't mean to destroy.
<SpamapS> hazmat: if no UUID is expressed, then the y/n question is still asked (and if there's no TTY, then flat out refusal happens)
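A minimal Python sketch of the safeguard SpamapS is proposing, purely hypothetical (juju had no environment UUID at the time): a UUID generated at bootstrap must be repeated to destroy-environment, with an optional interactive confirmation and flat refusal otherwise.

```python
import uuid

def bootstrap():
    """Hypothetical: generate an environment UUID at bootstrap
    (it would later be shown by `juju status`)."""
    return str(uuid.uuid4())

def destroy_environment(stored_uuid, given_uuid=None, confirm=None):
    """Refuse to destroy unless the caller repeats the environment's UUID.

    `confirm` is an optional y/n callable (the TTY prompt); with no UUID
    and no way to confirm, refuse outright -- the automated-oops case.
    """
    if given_uuid is not None and given_uuid == stored_uuid:
        return True   # caller proved which environment it meant
    if confirm is not None:
        return confirm(stored_uuid)
    return False      # no UUID, no TTY: flat refusal
```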
<SpamapS> hazmat: I'm still worried about automated oopses
<hazmat> SpamapS, hmm.. as a check it seems sound; as for usage, it seems overwrought. i'd rather just spec the flag.
<hazmat> SpamapS, at the end of the day humans are always a weak link ;-)
<SpamapS> hazmat: one other thought is to require the environment name be specified explicitly (ignore the default)
<SpamapS> this is not humans, this is code
<SpamapS> hazmat: I'm just concerned about automated scripts that have no idea whats in ~/.juju/environments.yaml and get run accidentally.
<SpamapS> hazmat: effectively we are encouraging people to write scripts that do 'rm -rf .'
<hazmat> SpamapS, good point
<hazmat> but scripting environment destruction is what they're trying to do
<hazmat> we can either make that command non-orthogonal wrt the environment spec, come up with an arbitrary uuid that's not used anywhere else, or just do the common thing, which is request a flag for specifying force/assume-yes
<SpamapS> yeah, we should recommend that -e xxxxx always be used with destroy-environment
<jimbaker> reminds me of gmail goggles
<SpamapS> hazmat: training wheels off.. I think thats the way to go
 * SpamapS wishes his man page had been accepted so he could update it.. ;)
<hazmat> SpamapS, that seems like a reasonable safety belt if using -f
<hazmat> SpamapS, is it in the review queue anymore?
<SpamapS> no
<SpamapS> it was rejected since we should, in theory, be able to coerce argparse into generating the man page
<jimbaker> that was back from dublin days
<hazmat> ah
<SpamapS> I sat down to do that a few weeks ago..
<SpamapS> and argparse made me throw up in my mouth a little
<hazmat> i think we've given up on that
<hazmat> SpamapS, yeah.. its not pretty to extend
<SpamapS> I think actually what we *can* do is parse out the --help .. as its structured enough to be machine parsable
<SpamapS> sometimes the way python classes are written makes me really really confused
<SpamapS> coming from C++ -> java -> perl -> php .. extending and overriding is so much more natural in most cases.
<SpamapS> python classes seem to just do weird stuff
<jimbaker> SpamapS, i don't think you can use argparse as an exemplar. probably too much attempt in that codebase to support the old optparse approach, however
<SpamapS> $ juju deploy --help
<SpamapS> error: Environments configuration error: /home/clint/.juju/environments.yaml: environments.sample.region: expected 'us-east-1', got 'sa-east-1'
<SpamapS> Heh.. thats annoying.
<SpamapS> --help shouldn't care. :)
<_mup_> juju/ssh-known_hosts r463 committed by jim.baker@canonical.com
<_mup_> Do not assume current test code is correct ;)
<SpamapS> hazmat: https://code.launchpad.net/~nijaba/juju/autocomplete/+merge/86159 .. do you think that counts as a trivial?
 * hazmat peeks
<hazmat> SpamapS, indeed
 * nijaba frowns to see his first juju core contrib counted as trivial :D
<SpamapS> we need to use a different word
<SpamapS> how about 'elegant' ? ;)
<nijaba> hehe MUCH better
<SpamapS> hazmat: ok I'll just commit that
<_mup_> juju/ssh-known_hosts r464 committed by jim.baker@canonical.com
<_mup_> Fixed remaining orchestra provider test
 * SpamapS notices a second trivial change that is needed... remove --placement
<SpamapS> nijaba: https://code.launchpad.net/~charmers/+archive/charm-helpers/+recipebuild/149052
<SpamapS> nijaba: cross your fingers!
<jcastro> m_3: so hey, crazy jorge idea validation time
<jcastro> I was reading on planet devops about this: http://newrelic.com/
<jcastro> it's saas but it has agents and whatnot
<jcastro> https://github.com/newrelic/rpm
<SpamapS> jcastro: a friend of mine has been using that and said its really nice.
<jcastro> right, ditto
<_mup_> juju/ssh-known_hosts r465 committed by jim.baker@canonical.com
<_mup_> Removed debugging
<jcastro> so my question is, is this a place where juju can help?
<jcastro> juju add-relation newrelic myservice does the magic and then you just go to the website and you have it all set up kind of thing.
<SpamapS> jcastro: feels a lot like something you would subordinate into your machines
<jcastro> yeah so this is why I asked
<jcastro> SpamapS: http://www.jedi.be/blog/2012/01/04/monitoring-wonderland-moving-up-the-stack-application-user-metrics/
<jcastro> then I started wondering how many of these are open and could be juju'ed
<jcastro> or if it's lower in the stack
<SpamapS> "Part of their secret of success is the easy at how developers can get metrics from their application by adding a few files, and a token."
<SpamapS> bad translation?
<SpamapS> jcastro: we'd have to look into what they do, but if they are passive and just observe existing output of ruby/php/etc. apps, then it might be an easy win once we have subordinate services
<jcastro> statsd looks cool
<SpamapS> hazmat: where are we with lp:juju/docs ? still generating from trunk or is it fully split now?
<SpamapS> jcastro: didn't westby tackle that or something?
<jcastro> dunno, but it must be awesome, it's node.js
<jcastro> :)
<SpamapS> sooooo hot
<avoine> I'm working on a charm and I'm not sure how much configuration I should add in the config.yaml. Is more better than less?
<avoine> for a public charm
<SpamapS> avoine: depends entirely on the service.
<jcastro> what service? It would be nice to know so I can track it!
<avoine> it's already there it's python-moinmoin
<SpamapS> avoine: but for the most part, I say if it *is* tweakable, put it in config.yaml.
<avoine> ok
<SpamapS> avoine: if there are like, 100's of variables, then it may be best to have a single config option.. "override values" which users can just set after reading moinmoin's docs. But if there are only like, 20 or so, its nice to have them spelled out in config.yaml.
<jcastro> oh rock and roll, that would be a nice one
<avoine> SpamapS: "override values" is a good idea
<avoine> I'll do that
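The "override values" idea SpamapS describes could look something like this in a charm's config.yaml; the option name and description are illustrative, not from the actual python-moinmoin charm.

```yaml
options:
  override_values:
    type: string
    default: ""
    description: >
      Raw moinmoin configuration overrides, one "name = value" per line,
      applied verbatim after the options spelled out individually above.
      See moinmoin's own documentation for the available names.
```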
<jcastro> good one, captured! https://juju.ubuntu.com/CharmGuide
<SpamapS> jcastro: ^5
<jcastro> marcoceppi: I am pretty sure you have like, dozens of opinionated things you could add there, heh
<marcoceppi> jcastro: heh, yeah :)
<SpamapS> please keep it to your charm opinions.. "And whats with Burritos? They're not little burros, so why burritos?!"
<marcoceppi> A charm review check list could be nice
<m_3> jcastro: newrelic is ok... I used it for a couple of clients in the past.  there're a few different tools in that space
<m_3> jcastro: have to think about how juju could best help... the monitoring setup is pretty lightweight (it'd be a trivial charm) but the app integration is where the meat really is
<m_3> don't even know if they have an option to set up your own monitoring service with it
<m_3> i.e., your own mother ship
<jcastro> m_3: yeah as I peruse some of these I am seeing some that could be awesome, and some not.
<jcastro> I just hadn't thought of measuring tools in such a detailed way, now I am seeing all these tools and wondering.
<jcastro> also, the more I learn about things like this, the more I wish I could go back in time and fire myself from my old jobs for being dumb.
<m_3> seems like sensu would be a better way to spin up your own... newrelic does the full saas package for you
<m_3> ha!
<m_3> yeah, hindsight huh
<hazmat> SpamapS, not sure its being generated at all atm
<SpamapS> hazmat: oi. Ok, well there are a few docs/* things waiting to be merged.. we should make it clear where they go and get them merged and out of the queue.
<hazmat> SpamapS, i only see one from mbp?
 * hazmat looks around
<SpamapS> hazmat: there's one from Ahmed that I put back to WIP but will be simple to correct, and one from rog as well
<SpamapS> anyway, I have to run, bbiab
#juju 2012-01-05
<SpamapS> hazmat: https://launchpadlibrarian.net/89092620/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr440-1juju2~precise1_FAILEDTOBUILD.txt.gz
 * SpamapS sometimes wonders if the buildds are just too hostile to run juju's test suite inside of
 * SpamapS retries it
<jelmer> SpamapS: urgh, spurious failures suck
<jelmer> SpamapS: that last failure looks odd - "dictionary changed size during iteration" while you're already taking a list() of the dictionary items to iterate over
<SpamapS> jelmer: I'd pinpoint it to a precise library problem if I could get the automatic builds to work on oneiric/natty/maverick/lucid ... but the java build-dep causes us to hit the 1G limit about 90% of the time.
<jelmer> ah, bug 693524 ?
<_mup_> Bug #693524: Daily builds of Java packages fail: "Could not reserve enough space for object heap" <recipe> <Launchpad Auto Build System:Triaged> < https://launchpad.net/bugs/693524 >
<SpamapS> jelmer: indeed
<SpamapS> jelmer: most frustrating about that is that because it is seemingly random.. the PPA has 4 different versions of juju in it.
<_mup_> juju/ssh-known_hosts r467 committed by jim.baker@canonical.com
<_mup_> Test concurrent update of known_hosts file
<_mup_> juju/ssh-known_hosts r468 committed by jim.baker@canonical.com
<_mup_> Cleanup of concurrent process test
<wrtp> mornin' all
<koolhead17> hi all
<koolhead17> jcastro: ping
<nijaba> koolhead17: would be surprised if jcastro was already awake, he is east coast based
<koolhead17> nijaba: sorry, i always get confused with the timezone slaughter :(
<TheMue> koolhead17: My little friend here is http://www.worldtimebuddy.com/. Here I've stored the locations of the whole team. ;)
<koolhead17> TheMue: cool
 * hazmat yawns
<lynxman> hazmat: whenever you got your coffee with you, what was the file again where the juju client writes its zookeeper node address?
<hazmat> lynxman, its the key 'provider-state'  in the provider storage
<lynxman> hazmat: hmm okay, any recommendation in how to alter it through the command line? :)
<hazmat> lynxman, um.. there isn't one, what's the problem you're trying to solve?
<lynxman> hazmat: just continuing on what we talked, trying to instantiate a machine, setup stunnel and a juju client through cloud-init and then connect back to zookeeper through stunnel
<hazmat> lynxman, ic, so in this case you just want to get the zk addr?
<lynxman> hazmat: exactly, just tell the new instantiated machine "your zookeeper is at localhost:37173" for example
<lynxman> hazmat: which will be the endpoint of the stunnel connection
<hazmat> lynxman, it really depends on the provider. typically a machine is set up to launch the agents with the zk address, ie. the cli for the agent has the JUJU_ZOOKEEPERS environment var set to the address; it doesn't poke directly at the provider storage
<lynxman> hazmat: aaah so it'd be as easy as making sure the variable is setup in the permanent environment of the machine then?
<hazmat> lynxman, for testing sure
<lynxman> hazmat: k, the var is a dictionary? "1.2.3.4:1234,2.3.4.5:1234" and such?
<hazmat> ie. you could just launch the agent process and hand it another address. the place where we give the machine agent process its address is in providers/common/launch.py via cloud-init.. it (the cli or provisioning agent) will poke at the provider storage to figure out the extant zk servers to set up in cloud-init
<hazmat> lynxman, its a comma-separated string, of the form you just posted
<lynxman> hazmat: excellent, many thanks :)
<hazmat> lynxman, np
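The JUJU_ZOOKEEPERS format hazmat describes, a comma-separated list of "host:port" entries, can be split like this. This is an illustrative helper, not juju's actual parsing code.

```python
def parse_zookeepers(value):
    """Split a JUJU_ZOOKEEPERS-style string such as
    "1.2.3.4:1234,2.3.4.5:1234" into (host, port) pairs."""
    servers = []
    for item in value.split(","):
        # rpartition tolerates hostnames; the port is after the last colon.
        host, _, port = item.strip().rpartition(":")
        servers.append((host, int(port)))
    return servers
```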
<wrtp> hazmat: is there a team meeting today?
<hazmat> wrtp, good question, we can do one, but i'm not sure we have quorum
<hazmat> it looks like gustavo, william and myself are on vacation.. i'm game though
<wrtp> hazmat: ah, i wondered who was still on holiday.
<TheMue> wrtp: I've got to miss the meeting, we've got an important parents' evening here. Our younger daughter has a class trip the next week (no, not to Budapest *lol*).
<wrtp> TheMue: that's fine - i'd assumed it's not happening anyway
 * TheMue too
<hazmat> wrtp, yeah... let's skip it.. we'll all be around next week in person
<wrtp> hazmat: k
 * hazmat takes dog to vet
<koolhead17> jcastro: around?
<jimbaker> hazmat, sounds good
<jcastro> koolhead17: yo
<koolhead17> jcastro: am still waiting for mail sir!! :P
<jcastro> sorry I've been slammed
<jcastro> I'll do it now
<koolhead17> jcastro: awesome.
<koolhead17> jcastro: got it!! :P
 * SpamapS works through the juju MIR
<SpamapS> hazmat: have you tried running the txzookeeper test suite? I get quite a few failures on it right now
<hazmat> SpamapS, checking
<hazmat> SpamapS, the txzk test suite is a little odd, it requires a running zookeeperd it doesn't manage it the same way that juju does
<SpamapS> hazmat: working on running test suites on all package builds.. but 'trial txzookeeper' is looking pretty gruesome on the precise version
<SpamapS> oh
<hazmat> SpamapS, basically apt-get install zookeeperd
<SpamapS> so just install zookeeperd package an it will start working?
<hazmat> yup
<SpamapS> ok I can do that.
<SpamapS> still getting tons of fail
<SpamapS> ew and it creates a statically named temp dir
<SpamapS> hazmat: ugh, there's a bug in zookeeper where it listens on :::2181 and not 0.0.0.0:2181
<hazmat> SpamapS, it doesn't listen on all ifaces?
<hazmat> SpamapS, i put in a bug to switch over the txzk tests to manage their own server
<_mup_> juju/ssh-known_hosts r469 committed by jim.baker@canonical.com
<_mup_> Test key update in known_hosts
<SpamapS> hazmat: no it doesn't.. bug 888643 ..
<_mup_> Bug #888643: Zookeeper listen only to IPv6 interface <zookeeper (Ubuntu):Confirmed> < https://launchpad.net/bugs/888643 >
<SpamapS> hazmat: or rather, it does, but only all ipv6 interfaces
<SpamapS> there seem to be other problems
<SpamapS> its creating /var/lib/zookeeper wrong as well
<_mup_> juju/ssh-known_hosts r470 committed by jim.baker@canonical.com
<_mup_> Test key update in known_hosts
<SpamapS> adduser: Warning: The home directory `/var/lib/zookeeper' does not belong to the user you are currently creating.
<SpamapS> ahh
<SpamapS> they've listed the dir in the package
 * SpamapS fixes
<jimbaker> oops, way too quick on choosing which command to repeat ;)
<SpamapS> jimbaker: restoring from backups is just an opportunity to go get coffee. :)
<jimbaker> SpamapS, ;)
<jimbaker> running our unit tests fully is a perfect amount of time to make some tea, for sure
<SpamapS> thats true
<_mup_> juju/ssh-known_hosts r471 committed by jim.baker@canonical.com
<_mup_> Missing file from prev commit
<_mup_> juju/ssh-known_hosts r472 committed by jim.baker@canonical.com
<_mup_> Sample keys use a fake user, not my account
<SpamapS> hazmat: would the txzookeeper tests be destructive to an existing zookeeper or does it use some kind of unique namespace so it won't trash whats already there?
<SpamapS> hazmat: ahh, I see now.. its extremely dangerous!
<SpamapS> hazmat: deletes the entire thing. Doh.
<jimbaker>  SpamapS, could use chroot for zk
<SpamapS> jimbaker: I'd have to build another chroot, inside my chroot, to install zookeeperd.
<jimbaker> SpamapS, no i mean in the connection descriptor for zk itself
<jimbaker> (also called chroot)
<SpamapS> jimbaker: makes *far* more sense for txzookeeper to spin up its own ephemeral zk and manage it.
<jimbaker> SpamapS, sure. just an option
<SpamapS> jimbaker: I don't understand zk or txzk enough to do that. :P .. just going to mark it as affecting the package so we know why we can't run the test suite on build. :)
<jimbaker> SpamapS, no worries. but it's as simple as changing the connection string to append the desired root for the nodes to be put in
<hazmat> SpamapS, yeah.. its dangerous, the chroot trick is just a matter of  specifying the connection string  a little different, but per the bug txzk should just setup the zk instance
<hazmat> its already got the code to do so, just not been rewired into an appropriate test layer.
<SpamapS> hazmat: yeah, medium importance.. we'll get to it at some point but its not uber critical since txzk doesn't change that much.
<SpamapS> hazmat: have you taken a close look at the quality of the 'epsilon' python module used in txaws ?
<SpamapS> hazmat: its kind of abandonware ... the usage is so thin too.. just some date parsing stuff that I'm sure is doable by something already in main... :p
<SpamapS> actually, wow.. ISO8601 is like, the easiest date format ever to parse.
<SpamapS> aha.. python-dateutil .. author: Gustavo Niemeyer. :)
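To SpamapS's point that ISO 8601 is easy to parse: the basic 'Z' form can be handled with the stdlib alone, as in this sketch (the actual txaws change discussed here swapped epsilon for python-dateutil; real EC2 timestamps may also carry fractional seconds, which this minimal version does not accept).

```python
from datetime import datetime, timezone

def parse_iso8601(stamp):
    """Parse the basic ISO 8601 'Z' form, e.g. "2012-01-05T12:00:00Z",
    into a timezone-aware UTC datetime."""
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%SZ").replace(
        tzinfo=timezone.utc)
```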
<nijaba> looks like https://code.launchpad.net/~nijaba/charm-tools/peer-replay/+merge/86721 is finally ready to be reviewed. it builds and runs nicely too :)
<nijaba> if anyone got some spare time to review..
<nijaba> SpamapS: why do we need charms to work with lucid again?
<SpamapS> nijaba: we don't need the charms themselves to work with lucid, but charm-tools we do.
<SpamapS> nijaba: so we can technically just skip the charm helper test suites on anything before oneiric I suppose.
<nijaba> SpamapS: can you elaborate?  I thought we could only run guests which are oneiric or >
<nijaba> SpamapS: ah, juju tools, not the helpers need to run on 10.04... makes sense
 * SpamapS patches dateutil in for epsilon and jumps for joy when all tests pass.
<SpamapS> seems like we need to have a txaws release at some point soon.
#juju 2012-01-06
<SpamapS> hazmat: https://code.launchpad.net/~clint-fewbar/txaws/drop-epsilon/+merge/87700 ... nice juicy txaws change for you to review. :)
<hazmat> tasty
<hazmat> SpamapS, nice
<SpamapS> next on the hit list is pytz
<hazmat> that ones harder
<SpamapS> not really
<SpamapS> only used for UTC
<hazmat> not sure what the usage is, but as far as replacements..
<SpamapS> which dateutil also has
<hazmat> cool
<SpamapS> if we're going to pull txaws into main, might as well get txaws using the existing main stuff rather than expand main even further
<hazmat> SpamapS, sounds good to me
<hazmat> hmm.. utc support should be builtin
<hazmat> oh.. its there for constant
<hazmat> sad
<hazmat> ah.. there tzinfo object instances
<SpamapS> yeah, just need datetime.datetime.utcnow()
<nijaba> I seem to run into cases where hooks are skipped when running juju on lxc.  anyone seeing this?
<SpamapS> hm, its not quite the same thing.. datetime.utcnow() will return a naive datetime.. I need it to be a UTC datetime
<hazmat> SpamapS, as you said dateutil includes the nesc utc tzinfo impl
<hazmat> nijaba, skipped ? or not observed ;-)
<hazmat> nijaba, which hooks in particular (start/install)?
<nijaba> hazmat: skipped as in state: running but obviously the install hook was never called
<SpamapS> yeah exactly, just had to wrap my head around it
<hazmat> nijaba, is the hook executable?
<hazmat> nijaba, i'd check the unit agent log on unit's disk to verify, it will skip if not found
<nijaba> hazmat: it is.  and what's funny is that it happens when I start two units of the same service in a short time period.  One does it fine, the other one does not
 * nijaba looks at the log
<hazmat> nijaba, can you pastebin the unit agent log from one thats being skipped?
<hazmat> nijaba, there shouldn't be any way for that to happen, to get to started it passes through install/start transitions which execute relevant hooks. i'm happy to take a look though
<hazmat> SpamapS, its a drop in replacement for the imports UTC/tzutc, the only delta of note is the unit test
<hazmat> for the FixedOffset test
<SpamapS> hazmat: yeah, I don't even think I need dateutil .. as datetime.utcnow().replace(tzinfo=tzinfo('UTC')) seems to work fine
<hazmat> SpamapS, cool.. i could have sworn it was builtin
<nijaba> hazmat: ok, I think I know where it comes from.  I started debug-hooks before the unit was ready, went to have dinner, came back, the debug-hooks session timed out on the "accept key" prompt and install was never actually run
<SpamapS> hazmat: you get the UTC time, but to be identical, you need tzinfo to be set just like pytz does it
<hazmat> "An object d of type time or datetime may be naive or aware. d is aware if d.tzinfo is not None and d.tzinfo.utcoffset(d) does not return None. If d.tzinfo is None, or if d.tzinfo is not None but d.tzinfo.utcoffset(d) returns None, d is naive."
<nijaba> hazmat: http://pastebin.ubuntu.com/794426/
<hazmat> nijaba, ah.. so debug-hooks was running and exited, so the hooks were run and there was no start hook
<hazmat> thats still a little odd
<hazmat> ah the cli client connection to zk timed out, removing debug mode
<hazmat> nijaba, did your laptop sleep or network get interrupted?
<nijaba> hazmat: well, it is quite logical. for the unit, I was in debug hook mode, so nothing was run (was waiting for me to launch it).  Since the session timed out  before I got back, it continued without ever running it
<nijaba> hazmat: nope, just left alone for 3h
<hazmat> there isn't an explicit timeout on debug hook sessions, but if the client loses connectivity for any reason it is ended
<hazmat> time for dinner bbiab
<nijaba> hazmat: the time out is on the client side for me to accept connecting to a host with a new key
<nijaba> is there a way to tell juju (lxc) to create the guest in a tmpfs ?  Would speed up tests quite a bit I think
<nijaba> gah, I'll just mount /var/lib/lxc/ in a tmpfs...
<hazmat> nijaba, that should work
<hazmat> nijaba, there's support in lxc-clone for snapshots but its not something we've explored
<hazmat> on an ssd after the first unit (which also does the master template creation) takes just a few secs
<hazmat> at least for me
<hazmat> er.. template lxc container creation
<nijaba> hazmat: sorry, only a good old spinner 7200rpm here :)
<SpamapS> hazmat: indeed, dateutil.tz's tzutc == pytz.UTC and tzoffset ~= pytz.FixedOffset
<nijaba> whaaa.  That was fast!
<nijaba> I have to be REALLY quick to be able to start juju debug-hook before the install hook is called
 * nijaba just added "tmpfs /var/lib/lxc tmpfs defaults,size=3G 0 0" to his /etc/fstab
<hazmat> nijaba, yeah.. deploy really needs an option for running with debug-hooks to help against the install/start case
<hazmat> ie.. deploy --debug
<nijaba> hazmat: would be nice indeed
 * nijaba juju deploy local:bed && juju deploy local:nick && juju add-relation nick bed
<SpamapS> hazmat: woot.. managed to replace pytz, zope.datetime, and epsilon with dateutil
<hazmat> SpamapS, score
<hazmat> nijaba, just don't destroy the env ;-)
 * hazmat hibernates
<koolhead17> nijaba: the limesurvey charm has been very helpful 2 me in understanding charms writing :)
<nijaba> koolhead17: I am glad :)
<koolhead17> nijaba: will you have time to review my charm later in day? i am almost done with all the blockers you mentioned in the review. I need some more time to get mysql part talking!! :P
<nijaba> koolhead17: If not today, I'll try this we.  Ping me when you think you are good
<koolhead17> nijaba: perfect. :)
<koolhead17> i had accidentally shut down my machine while juju was still running and i kept getting this http://paste.ubuntu.com/794719/
<koolhead17> the only solution i found for this was to remove the files inside my data-dir
<koolhead17> is it right way?
<fwereade> koolhead17, I don't know for sure; but what do you get if you destroy-environment?
<koolhead17> fwereade: well i got no solution even after trying that, so i applied the hard way :P
<fwereade> koolhead17, fair enough -- so destroy-environment just didn't do anything?
<koolhead17> fwereade: destroy-environment has worked for me in normal condition
<fwereade> koolhead17, indeed, just not with a restart in between
<koolhead17> this time my poor system hanged while juju was spawning some containers
<koolhead17> fwereade: i would say its my system issue and 4 that solution was what i did. :P
<fwereade> koolhead17, well, glad it's fixed anyway ;)
<koolhead17> fwereade: indeed!! :D
<rbasak> Has anyone yet tried running juju on arm?
<rbasak> I get http://paste.ubuntu.com/794755/ but not sure if it's an arm issue or it's me doing something wrong.
<koolhead17> fwereade: i was going through my unit's log and saw http://paste.ubuntu.com/794757/  is it okey? my charm is running well
<nijaba> argh, looks like my lxc is stuck on something.  Getting a weird error on juju destroy-environment http://pastebin.ubuntu.com/794769/
<koolhead17> nijaba: last time we were discussing the possibility of having a mail-server kind of charm so we can simply use it for a CMS
<nijaba> koolhead17: yep.  And I hinted that this may not be the only option we want to offer.  it is often a better choice to point the CMS to an external SMTP server rather than forcing to deploy one in the juju environment
<nijaba> koolhead17: which is what I did for roundcube and limesurvey
<fwereade> koolhead17, I'm afraid I don't know about that at all :(
<koolhead17> nijaba: yeah am going through the charm!! :P
<koolhead17> nijaba: fwereade in my charm case, the mail server simply needed to send password reset msg.
<nijaba> koolhead17: so either use a local smtp server or point to an external one
<koolhead17> nijaba: okey.
<koolhead17> nijaba: http://bazaar.launchpad.net/~charmers/charm/oneiric/limesurvey/trunk/view/head:/config.yaml  i see information about the mailserver settings, i can use something equivalent
<nijaba> koolhead17: yes, that's what I would suggest
<koolhead17> nijaba: but i don't see any of the mail server related packages getting installed via the Install file of the charm, am i missing something?
 * koolhead17 is bit confused :P
<koolhead17> shit, we can use remote host for that as you suggested nijaba , stupid me
<nijaba> koolhead17: right, but let's keep the s**t out :)
<koolhead17> nijaba: :)
<rbasak> running juju locally (lxc), juju is calling lxc is calling debootstrap without a proxy. Any idea how I can set one? debootstrap will accept an http_proxy environment variable AFAIK, but I don't know how to set it in the middle
<koolhead17> nijaba: so the limesurvey mysql db, when imported into mysql, contains authentication credentials like user: admin password: password; am i reading the readme.txt right?
<nijaba> koolhead17: yes
<koolhead17> nijaba: aah nice, which means we are importing the db and accordingly adding the credentials in its config file :)  This makes me feel goooooooood
<nijaba> looks like I hit a juju bug when I have an unbound var in a peer-relation-joined.  once I have this condition, relation events won't be processed anymore until I restart my environment http://pastebin.ubuntu.com/794903/
<nijaba> hazmat: want me to open a bug for this ^?
<m_3> reminder to update your lxc caches (P _and_ O) and whatever you need in apt-cacher-ng caches before travel
<nijaba> m_3: may sound silly, but what are P and O lxc caches
<m_3> nijaba: sorry, local juju deployment caches for precise and oneiric
<nijaba> m_3: thanks, much clearer!
<m_3> i.e., bootstrap and deploy something with each series so the base install gets built out
<nijaba> right
<m_3> the harder one to handle is apt-cacher-ng... gotta spin up whatever charms you might wanna work with offline
<_mup_> Bug #912812 was filed: Error condition on relation hooks locks events processing <juju:New> < https://launchpad.net/bugs/912812 >
<hazmat> hmm
<nijaba> hmhmm?
<hazmat> nijaba, do you have the complete log for that unit agent
<nijaba> hazmat: killed it, but will reproduce
<hazmat> nijaba, cool, i think i've worked it out, thats a rather serious bug imo
<nijaba> hazmat: looked like it to me.  I opened a bug, if you did not see
<hazmat> nijaba, got it just commenting on it now
<nijaba> hmmm, looks like this bug is even worse than I thought.  After destroying my env and bootstrapping a new one, a new deploy seems to block
<nijaba> short of rebooting my system, what do I need to wipe to restart from a clean state?
<koolhead17> nijaba: i have fixed most of the blockers you mentioned in 1st review, if you have time please review it  :)
<koolhead17> https://code.launchpad.net/~koolhead17/charm/oneiric/owncloud2/trunk
<nijaba> koolhead17: looking
<nijaba> koolhead17: just out of curiosity, why in start are you using 'service apache2 start' and in stop '/etc/init.d/apache2 stop'?  just curious, as both should work, but I would tend to use the same form in both.
<hazmat> nijaba, huh? simply destroying the env should resolve it
<hazmat> actually just removing the unit would suffice, its a local unit problem
<koolhead17> nijaba: :P lemme check it.
<nijaba> hazmat: well, it might be something else, but my first unit in the new env has been pending for 15min now.  it's a bit long for lxc in a tmpfs :(
<koolhead17> nijaba: because i wrote that part long time ago :P just modified it
<nijaba> koolhead17: your changes look good to me :)
<nijaba> koolhead17: can't wait for the mysql part!
<nijaba> koolhead17: do you want to wait for that until we promulgate? if not, please change the bug status to "fix-committed" and comment on the changes you made
<nijaba> hazmat: you interested in my current lock state or should I go ahead and reboot?
<hazmat> nijaba, yes i'd like a pastebin of the unit agent log in complete
<hazmat> nijaba, oh the locked state, sure
<koolhead17> nijaba: it currently works without any issue and uses sqlite, i will change it to fix-committed with a comment that the mysql integration part is being worked on.
<hazmat> nijaba, can you pastebin a ps aux process listing
<nijaba> hazmat: sure
<nijaba> hazmat: http://paste.ubuntu.com/795106/
<hazmat> nijaba, yeah.. i've seen that occasionally lxc-wait basically hangs..
<hazmat> lxc-wait -n nbarcet-lxc-roundcube-0 -s RUNNING
<hazmat> nijaba, what's the output of lxc-ls ?
<hazmat> that should show if the container is running
<nijaba> hazmat: http://paste.ubuntu.com/795107/
 * nijaba loves pastebinit
<hazmat> nijaba, me too.. last one.. could you pastebin the machine agent log
<nijaba> hazmat: the one in the unit?  Can't ssh to it at this point
<hazmat> nijaba, /home/nbarcet/tmp/juju/nbarcet-lxc/machine-agent.log
<hazmat> nijaba, the unit log files are on local disk as well and symlinked in to the data-dir, although not sure that works across disk partitions like you've setup
<SpamapS> nijaba: 11.10 or precise?
<SpamapS> I believe there have been quite a few fixes to lxc in precise
<nijaba> hazmat: http://paste.ubuntu.com/795113/
<hazmat> i think this is an lxc issue, but not one reported upstream yet
<nijaba> SpamapS: 11.10
<hazmat> hmm.. so the container start fails, and then the lxc-wait hangs because the container isn't started
<hazmat> i guess the question is why the container start failed, which should be in the container console log under /home/nbarcet/tmp/juju/nbarcet-lxc/units/roundcube-0/
<hazmat> nijaba, okay.. last one ;-) could you pastebin the files in that dir?
<SpamapS> hazmat: and why lxc-wait doesn't return an error when start has failed.
<nijaba> hazmat: http://paste.ubuntu.com/795117/
<hazmat> SpamapS, we should probably lxc-wait on more than just start to detect the failure and avoid the machine agent hang, the lxc processes have completed, but understanding why the container fails would be helpful as well
<hazmat> hmm.. that looks normal
<koolhead17> Is it a good idea to have charm for splunk?
<nijaba> hazmat: arg...  truncated by pastebinit
<nijaba> hazmat: the end is not normal, wait a sec
<nijaba> hazmat: http://paste.ubuntu.com/795123/
<SpamapS> hazmat: indeed.. RUNNING|STOPPED maybe?
<SpamapS> koolhead17: the splunk server, yes. The splunk agent? not until subordinate services land (soon I think)
<nijaba> SpamapS: splunk was my first use case to justify "virtual" charms, which we now call subordinates ;)
<jcastro> koolhead17: dude nice work there with owncloud
<koolhead17> SpamapS: i have two questions, 1) it has separate pkgs for 32 and 64 bit.  2) it asks you to fill in company details and your name before it lets you actually download them.  How to overcome this part :D
<jcastro> we just need the mysql parts right?
<koolhead17> jcastro: thanks sir!!
<koolhead17> jcastro: yes i will get it hopefully by monday/tuesday.
<SpamapS> nijaba: I actually don't think they're at all similar, virtual and subordinate. We just happen to be able to mimic the end-result of virtual charms using subordinate ones.. but I'm still not really "happy" with that sort of hack. :p
<nijaba> jcastro: actually, I think we can promulgate as is, without the mysql part, as the limit is clearly stated in the readme.  do you agree?
<SpamapS> nijaba: a virtual service would be a charm that doesn't need a machine... so the hooks would run once, in one place. using a subordinate charm, we have to run the hooks on every machine that gets related to it..and, IMO, they will end up being more complicated.
<jcastro> I agree, I mean, if it's for personal use, I don't need mysql
<jcastro> the person who wants to deploy it for tons of  users will want mysql and will find the bug report and/or fix it.
<koolhead17> jcastro:  i can write a separate charm for mysql if its a good idea :P
<jcastro> but we do have a bug on the charm for mysql
<nijaba> SpamapS: well, not really, as long as we do not have to start a container for it, I think it is the right way to go after quite a bit of brainstorming with jimbaker
<jcastro> koolhead17: config option?
<koolhead17> jcastro: currently it just works with sqlite
<nijaba> jcastro: what bug for mysql?
<SpamapS> nijaba: this means that for an ELB charm, you have to copy the ELB credentials to every single machine.
<hazmat> SpamapS, yup re RUNNING|STOPPED
<nijaba> koolhead17: yes, we want to avoid duplicating charms, use config options instead
<jcastro> nijaba: oh hey, check this out: https://code.launchpad.net/~marcoceppi/charm/oneiric/owncloud2/mysql
<nijaba> SpamapS: why not provide them in config?
<jcastro> here we go, nijaba, let me find marco and see if that's review worthy
<SpamapS> nijaba: they'll be in config.. and copied, to *every* machine.
<jcastro> and we'll just fix it
<SpamapS> nijaba: a virtual charm would have those credentials isolated to a single machine, most likely a provisioning machine.
<SpamapS> I realize this is moot while we have no ACL isolation for ZK
<hazmat> nijaba, thanks thats helpful
<SpamapS> nijaba: its a workaround, not a solution, thats all.
<nijaba> SpamapS: agreed
<nijaba> hazmat: np
 * nijaba reboots
<SpamapS> Most places currently have similar problems.. root on any puppet box can read the entire corpus of puppet manifests and files in the puppet master.
<hazmat> SpamapS, really?
<hazmat> so they get transport level security with certs but no resource access granularity ?
<SpamapS> That may be different now..
<SpamapS> but in the past, it was true, the cert process was just to verify that you were allowed to read the puppet manifests and files
<SpamapS> Its then entirely up to your puppet run to do whatever you will with that information.
<_mup_> Bug #912879 was filed: Machine agent hangs if lxc container start fails <juju:New> < https://launchpad.net/bugs/912879 >
<SpamapS> hazmat: the key difference between that and our ZK problem is that its global *read*
<hazmat> ic
<koolhead17> SpamapS: splunk asks you to register an account to download their pkg, do u think its a good idea to work on its charm?
<nijaba> koolhead17: <Canonical hat on>I have add some discussion with them, so I would wait for subordinates to land and the discussion to conclude<canonical hat off>
<SpamapS> koolhead17: IMO, no, I'd focus on 100% open source software ..
<nijaba> s/add/had
 * hazmat lunches
<nijaba> bon appetit
<koolhead17> SpamapS: nijaba thanks. :)
<nijaba> koolhead17: just promulgated your owncloud charm!
<nijaba> jcastro: blogging material? ^^
<jcastro> yep
<jcastro> already working on it
<jcastro> Friday, close to EOD? That's when all the new charms land.
<jcastro> :)
<SpamapS> wow.. ZK's internal test suite is pretty comprehensive
<SpamapS> jcastro: friday is hack day. :)
 * nijaba hopes secretly that it will also be a bit of a review/merge day for SpamapS :]
 * koolhead17 kicks himself, why on earth he was not aware about pastebinit
<nijaba> SpamapS: it looks like we may have an issue with versioning in charm-helpers-daily: the last daily build failed because of that...
<jcastro> SpamapS: hey, can you do a "charm get owncloud" and tell me what happens?
<jcastro> mine comes up with an empty bzr branch?!
<SpamapS> jcastro: you did something wrong.. or charm tools is broke
<SpamapS> jcastro: works fine for me
<SpamapS> $ charm get owncloud
<SpamapS> Branched 8 revisions.
<jcastro> nm, I am not able to reproduce
<jcastro> weird
<jcastro> ok so this threw me off
<jcastro> koolhead17: the charm is "owncloud"
<jcastro> but the service is owncloud2
<koolhead17> jcastro: it should be owncloud2 because its the latest version, owncloud by default is older version 1 in our repo
<koolhead17> jcastro: it would be good if it is owncloud2
<jcastro> oh, so we have owncloud1 in the archive I see as "owncloud"
<koolhead17> jcastro: no, owncloud as in pkg on ubuntu repository, this charm is working with owncloud version 2 so i have used owncloud2 everwhere
<koolhead17> *everywhere
<jcastro> ah, except "charm get owncloud2" doesn't work
<koolhead17> jcastro: i think that should be its proper path :)
<koolhead17> but  i see nijaba has used https://code.launchpad.net/~charmers/charm/oneiric/owncloud/trunk
<jcastro> yeah I am just saying it should be consistent one way or the other
<jcastro> whether it explicitly says 2 or not doesn't matter to me
<koolhead17> jcastro: what should i do now sir :)
<jcastro> nijaba: or SpamapS: can we make it so "charm get owncloud2" works, I'd like my video to be consistent when I record it
<jcastro> koolhead17: I think they fix this in the store, not sure, asking now. :)
<koolhead17> jcastro: :P
<SpamapS> $ charm get owncloud2
<SpamapS> owncloud2 does not exist in official charm store.
<SpamapS> why would we have 'owncloud' and 'owncloud2' ?
<jcastro> making it so it's just all "owncloud" is fine by me
<SpamapS> nijaba: about the charm-tools versioning thing.. that happens sometimes, I don't think its a real problem tho
<jcastro> SpamapS: ok so what should we do?
<jcastro> (sorry to be annoying but I'd love to videocast this right now. :)
<adam_g> whats the environments.yaml option to install from a specific juju PPA?
<hazmat> juju-origin: ppa
<adam_g> hazmat: danke
 * hazmat double checks
<hazmat> yup
<SpamapS> jcastro: it looks like the 'owncloud' charm downloads and installs 2.0.1 .. whats there to change?
<SpamapS> jcastro: if its not packaged.. sobeit. :-P
<jcastro> SpamapS: I mean the namespace
<jcastro> charm get is "owncloud"
<jcastro> but the service and stuff for the charm is "owncloud2"
<SpamapS> AH
<SpamapS> yes fix that, I'd suggest by changing the charm itself
<SpamapS> easier to fix the contents of a branch than rename it
<jcastro> ok
<jcastro> koolhead17: got time to rename it now?
<koolhead17> SpamapS: but when i say owncloud, as per our repo <owncloud pkg availability> it means the older version, sorry if i am wrong
<koolhead17> ubuntu package
<jcastro> well, at some point the package "owncloud" in the repo is just 2.x
<SpamapS> packages and charms are in a different namespace
<koolhead17> apt-get install owncloud will as of now give 1.1
<SpamapS> in this case, its just a version difference
<SpamapS> so I say, change name: owncloud2 to name: owncloud , and be done with it.
<koolhead17> SpamapS: okey, so am doing it as you suggested :P
<koolhead17> jcastro: 2 mins
<jcastro> \o/
<koolhead17> jcastro: what changes am i supposed to do other than replacing owncloud2 to owncloud inside charm?
<jcastro> that should be it?
<jcastro> SpamapS: right?
<SpamapS> we need to fix readme.txt too
 * SpamapS ponders making 'charm promulgate' ensure that this doesn't happen again
<koolhead17> SpamapS:  readme needs to be modified to charm get owncloud, instead of my bzr repo url ?
<nijaba> looks like I messed up a bit my first promulgate...  maybe before making a tool, we could have a checklist?
<nijaba> koolhead17: yep, that would be better
<koolhead17> nijaba: i will fix it right away, looking at the roundcube file for help :)
<jcastro> I have a spot for a checklist here: https://juju.ubuntu.com/CharmGuide
<SpamapS> koolhead17: the readme doesn't need to even mention how to "get" the charm
<SpamapS> just how to use it
<koolhead17> SpamapS: okey, sorry for missing on that :(
<SpamapS> if you have the readme, you already got the charm.. or you're on the charm browser site, which should maybe have something copy/pastable on it.. like 'to get this charm,   charm get owncloud'
<nijaba> jcastro: yup, but can we edit it?
<SpamapS> koolhead17: smile, you're about to get blogged about. :)
<jcastro> nijaba: you should be able to, try it and lmk
<nijaba> jcastro: immutable page
<jcastro> grr, ok, when we figure out a list I will just add it
<koolhead17> jcastro: i have added the modification in my 9th revision just now :)
<koolhead17> SpamapS: nijaba modified the files as suggested
<adam_g> hey guys, it looks like the precise  juju ppa is stuck at r434 while oneiric is up to r440. any chance of syncing up? there happens to be  a critical fix for orchestra missing :)
<SpamapS> adam_g: I believe the daily builds are failing because of java problems. :-P
<SpamapS> adam_g: I suppose we can do a manual upload
<adam_g> SpamapS: ah, makes sense
<adam_g> SpamapS: if you wouldn't mind, it'd be helpful (especially in preparation for next week)
<SpamapS> adam_g: its quite unfortunate, but it has something to do with java and the way it overcommits memory; you can't even install it on the virtual buildds
<SpamapS> sometimes
<SpamapS> which is the frustarting part
<SpamapS> frustrating even
<koolhead17> SpamapS: please put my latest revision in the charm repository, i have added few more details in readme.txt :)
<SpamapS> koolhead17: branch?
<koolhead17> SpamapS: bazaar.launchpad.net/~koolhead17/charm/oneiric/owncloud2/trunk
<koolhead17> how will i explain juju to someone who thinks juju is like puppet/chef? i tried explaining juju is a layer above it. i would love to get some info on the same
<SpamapS> koolhead17: show them how your charm can work with the haproxy charm without either of them ever sharing code
<SpamapS> koolhead17: and without ever learning ruby, or a new language like puppet's DSL
 * koolhead17 notes down
<koolhead17> SpamapS: also another question, in order to run a charm --> application, juju has to keep running? If we upgrade the juju version, will it affect the working/running of existing instances which are deployed via the charms
<SpamapS> koolhead17: upgrades are still up in the air. Eventually we'll need to go to each box and upgrade its agent.. stop/start..etc.
<SpamapS> bzr: ERROR: Not a branch: "bzr+ssh://bazaar.launchpad.net/~coolhead17/charm/oneiric/owncloud2/trunk/".
<SpamapS> doh
<SpamapS> k
<koolhead17> SpamapS: :P
<SpamapS> +> juju get owncloud
<SpamapS> thats incorrect
<koolhead17> +> juju get owncloud ?
<SpamapS> koolhead17: thats from your readme
<koolhead17> SpamapS: i used > juju get owncloud
<koolhead17> what should i modify it to ?
<SpamapS> koolhead17: the + is not what I mean
<SpamapS> it shouldn't be there
<SpamapS> if they have the readme, they already have the charm
<koolhead17> ooh ok
<koolhead17> so removing that whole part
<SpamapS> also thats not even valid advice.. it would be 'charm get'
<SpamapS> and really, you mean 'juju deploy
<koolhead17> ooh ok
<SpamapS> Step 1 is 'juju deploy --repository=charms local:owncloud' ..
<SpamapS> step 2 would be to expose it
<koolhead17> k
<SpamapS> 'juju expose owncloud'
<koolhead17> k
<SpamapS> then step 3 would be Access.. 4 user account.. etc. etc.
<koolhead17> SpamapS: 1 minute modifying it
<SpamapS> koolhead17: you have tested this on your eucalyptus, right?
<koolhead17> SpamapS: no on LXC
<SpamapS> ahh ok, it doesn't have a firewall ;)
<koolhead17> SpamapS: yeah :P
<koolhead17> i been pissed and i failed using juju on my openstack because of internal network
<koolhead17> a friend of mine will be here, he has setup of eucalyptus running
<koolhead17> on mon/tue
<koolhead17> SpamapS: revision 13 should have all changes you suggested
<SpamapS> koolhead17: merged, pushed. Thanks!
<SpamapS> jcastro: ^^
 * koolhead17 can finally go sleep :)
 * SpamapS curses that launchpad still doesn't support jira external bug trackers. :-P
<_mup_> juju/ssh-known_hosts r474 committed by jim.baker@canonical.com
<_mup_> Test key layout for ssh-keygen
#juju 2013-12-30
<maxcan> does the haproxy charm support SSL?
<maxcan> i can't find any relevant configs
<freeflying> trying to deploy a service into a container on an existing machine, but when it creates the container, it uses br0 as the bridge device; in ubuntu the default bridge is lxcbr0
<freeflying> is it a known bug?
<freeflying> or my own corner case
<marcoceppi> freeflying: not sure, most Canonical folks are on vacation. You may want to open a bug anyways just to get feedback
<freeflying> marcoceppi, noted, thanks
<l1l> I am having a problem. A juju status just hangs, and it has something to do with mongodb.
<_bjorne> l1l juju status --show-log --debug :)
<l1l> It says unable to connect, even though the node is now allocated to me (from the maas webui). It says to check the credentials.. yet still allocates it to me.
<l1l> After a bootstrap is successful, a juju status still hangs..
<l1l> I am working with pxe boot and maas. I have the nodes declared, then commissioned. This is when they are ready, is that when the bootstrap should occur for juju?
<_bjorne> declare, commission, start the node(s) and see
<manjiri> Hello! Where can I find juju-deployer documentation that explains "weights" for relations?
#juju 2013-12-31
<manjiri> Hello! Where can I find the most up-to-date documentation for juju-deployer?
<manjiri> Hello! Just by trying it out, I was able to use "local": "cassandra" in the juju-deployer json configuration file. Where is "local" documented? (only "branch" is documented as far as I can tell).
#juju 2014-01-01
<michal_s> hello everyone, happy new year at first!
<michal_s> can anyone confirm, that wp-content git handling works in wordpress-21 charm?
<michal_s> I'm deploying on Windows Azure but I don't think that platform matters
<michal_s> I'm deploying WP as it is in charm, with no tuning
<michal_s> I'm using command: juju set wordpress wp-content=https://github.com/smereczynski/jujuwp-content
<michal_s> and... nothing
<natefinch> #juju-dev
#juju 2014-01-02
<marcoceppi> michal_s: can you provide the logs from /var/log/juju/juju-wordpress-0.log ?
<michal_s> marcoceppi: I will deploy a clean WP today and copy the logs. Thank you for the reply.
<fwereade> marcoceppi, ping
<marcoceppi> fwereade: pong
<jcastro> marcoceppi, heya
<jcastro> Welcome back everyone!
<hatch> wb jcastro!
<marcoceppi> jcastro: o/
<jcastro> Man dude, there's a foot of snow on the ground
<jcastro> marcoceppi, I was in the caribbean a week ago. What happened.
<marcoceppi> jcastro: sounds like you should head back to the caribbean. It's been quiet around here, most people enjoying the holidays
<jcastro> nod
<jcastro> hmm, did you do a tools release over the break?
<jcastro> got a new one in the PPA
<marcoceppi> jcastro: maybe, I don't remember. I've got a new amulet release I'm pressing right now
<marcoceppi> and there were some updates to python-charmworldlib
 * jcastro nods
<fwereade> marcoceppi, hey dude
<fwereade> marcoceppi, I wanted a quick read on https://bugs.launchpad.net/juju-core/+bug/1100076
<_mup_> Bug #1100076: subordinate require juju-* relations should be deprecated <juju-core:Triaged> <https://launchpad.net/bugs/1100076>
<fwereade> marcoceppi, basically I would like to claw back the "juju-*" namespace for relations
<marcoceppi> fwereade: looks exciting, I'll take a peek
<marcoceppi> fwereade: oh, that was a lot more straightforward than I thought
<fwereade> marcoceppi, yeah, I didn't think it seemed *that* exciting, but who am I to judge? ;p
<marcoceppi> fwereade: so, I noticed a lot of charms doing this already. Just specifying a different interface name and then having the scope variable set (or whatever it is)
<fwereade> marcoceppi, yeah, it's really just a bunch of trivial name changes
<marcoceppi> fwereade: My question, what about subordinates that don't share any inheritance?
<fwereade> marcoceppi, but it's all more work for busy people
<marcoceppi> fwereade: like the rsyslog-forwarder charm
<fwereade> marcoceppi, not sure -- restate please?
<marcoceppi> fwereade: so juju-info is used to say "this is the fallback interface, if there is no matching interface name on the parent charm of the relation"
<marcoceppi> that way the charm the sub is attaching to doesn't have to have an interface for a subordinate, juju-info is the implicit interface name for that matching
<marcoceppi> How would that be handled in a 2.0 where juju-* were removed?
<fwereade> marcoceppi, we'd still be providing juju-info, that'd just require (say) "info" -- and still require the juju-info *interface*, and it should keep working just fine
<fwereade> marcoceppi, the falling back happens anyway
<fwereade> marcoceppi, juju only matches implicit relation when it can't match an explicit one
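The fallback being described looks roughly like this in a subordinate charm's metadata.yaml (the charm and relation names below are made up for illustration; only `interface: juju-info` and `scope: container` carry the behaviour discussed):

```yaml
name: log-tailer          # hypothetical subordinate charm
subordinate: true
requires:
  container:              # relation name is free-form
    interface: juju-info  # implicit interface every charm provides
    scope: container      # co-locate with the principal unit
```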
<marcoceppi> fwereade: right, so what's the question here?
<marcoceppi> What's the goal of the bug, if juju-info will remain an option?
<marcoceppi> maybe I missed something, it's still early in the year :)
<fwereade> marcoceppi, the goal is to prevent people from naming things "juju-"
<fwereade> marcoceppi, so that we have that namespace to play in ourselves
<marcoceppi> fwereade: as a relation name?
<fwereade> marcoceppi, without fear of collision
<fwereade> marcoceppi, yeah
<marcoceppi> oh, yeah that's totally cool then
<marcoceppi> I can add that as a proof item, just an INFO output, so we can start steering people away from that now
<marcoceppi> fwereade: but it makes sense
<fwereade> marcoceppi, that would be *great*, yes please
<fwereade> marcoceppi, and I'll throw the bug over the wall at arosales and leave the getting-it-scheduled to him, I guess?
<marcoceppi> fwereade: I can add it to our board, and I'll just generate a new bug on charm-tools with a reference to this one
<marcoceppi> fwereade: as it firms up and becomes an actually reserved relation name in juju we'll move the status of the proof item from I to E
<fwereade> marcoceppi, that's fantastic -- can you think of a good way to make us remember to actually disallow it in juju when your side of the bug is fixed?
<fwereade> marcoceppi, ie the fix-existing-charms side
<marcoceppi> fwereade: once it's in proof, it'll get caught during our audit that's ongoing
<marcoceppi> so the majority of charms should be addressed with this
<marcoceppi> fwereade: I'll make a note about the bug to have you ping/comment on the bug once the audit is over
<fwereade> marcoceppi, ok, awesome
<fwereade> marcoceppi, I'll drop this one to "low" then with a note that it's an easy fix once the ecosystem is updated
<marcoceppi> fwereade: https://bugs.launchpad.net/charm-tools/+bug/1265524
<_mup_> Bug #1265524: juju-* relation names to be depreciated <Juju Charm Tools:Triaged by marcoceppi> <https://launchpad.net/bugs/1265524>
<manjiri> Happy New Year! Just by trying it out, I was able to use "local": "cassandra" in the juju-deployer json configuration file. Where is "local" documented? (only "branch" is documented as far as I can tell).
<rick_h_> manjiri: https://juju.ubuntu.com/docs/charms-deploying.html#local might be a good start.
<manjiri> rick_h_: I am referring to "juju-deployer" for which the documentation is here: http://pythonhosted.org/juju-deployer/
<rick_h_> manjiri: right, but it's just doing a deploy of those services specified
<rick_h_> manjiri: so understanding how to deploy a local charm fits into the usage you're doing
<manjiri> rick_h_: I think I understand how to use the documentation for "juju deploy", but it is not clear to me how to map that to juju-deployer. I feel like I am missing a link between the two.
<rick_h_> manjiri: what are you trying to do that you're not sure about?
<manjiri> rick_h_: juju-deployer documentation mentions "branch: lp:charms/precise/wordpress". There is no mention of "local: <charm>". Yet, when I tried it, it worked! That seems mysterious.
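For reference, the keys being compared look roughly like this in a juju-deployer config. This is a hedged sketch: "branch" is the documented key, "local" works in deployer 0.2.5 but is undocumented, and the stack/service names are illustrative:

```yaml
# Illustrative juju-deployer config (YAML form of the JSON discussed above).
my-stack:
  series: precise
  services:
    wordpress:
      branch: lp:charms/precise/wordpress  # documented: fetch the charm from a bzr branch
    cassandra:
      local: cassandra                     # undocumented: use a charm from the local repository
```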
<rick_h_> manjiri: ah, well there's some limitations currently. The juju gui, which accepts deployer files, does not work with local charms.
<rick_h_> manjiri: I'm not sure where the docs on the deployer are as far as up to date-ness. A bug on that might be worthwhile.
<manjiri> rick_h_: Actually, what I am saying is, that it DID work. It works but there is no documentation about it. (I am aware that the charm icon won't show).
<rick_h_> manjiri: right, understand.
<rick_h_> manjiri: I'd suggest filing a documentation bug https://bugs.launchpad.net/juju-deployer
<manjiri> rick_h_: I will file a bug. I have already filed one for "weights" which is not documented either. Thanks!
<manjiri> rick_h_: About the icon not showing, I was aware that an icon that has not been reviewed will not show. But even ones that are officially published e.g. keystone do not show if juju-deployer is used. I have filed a bug for that as well.
<rick_h_> manjiri: it will only show if it's the official reviewed version of the charm.
<rick_h_> manjiri: from the charm store itself
<rick_h_> manjiri: otherwise, a forked charm, local charm, etc, it will not
<rick_h_> manjiri: note that only one has the icon https://jujucharms.com/fullscreen/search/?text=keystone
<manjiri> rick_h_: The configuration file for juju-deployer says "branch": "lp:charms/keystone" - I got this file from https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle
<rick_h_> manjiri: hmm, yea that's via the lp branch name and not the cs: url. I wonder if that's the issue then.
<rick_h_> manjiri: since it's not the cs:precise/keystone url I don't think it'll show.
<manjiri> rick_h_: I have tried to ask the person - jorge.castro@canonical.com  - who last updated that configuration questions about it - but I have not received any response. I am not sure what is expected to work and how well.
<manjiri> rick_h_: All of the bugs I have filed against juju/juju-deployer/openstack are based on that configuration file. But those bugs have yet to be looked at. Perhaps the holiday season is why the progress has been slow. I am looking forward to more information on that configuration file!
<rick_h_> manjiri: definitely, most people were well away and slowly getting back
<jcastro> manjiri, hi, can you reforward me your mail? I don't see a mail about the openstack bundle, maybe I missed it?
<manjiri> jcastro: Sure thing. You can ignore the particular question in that email - I found a bug that suggests that console support is known to be lacking in the nova compute charm.
 * jcastro nods
<manjiri> jcastro: I have re-sent the email.
<manjiri> rick_h_: jcastro:  Btw, I am working with Kapil, Adam and few others on this project. They suggested that IRC and Bugs are the way to get answers to my questions.
 * jcastro nods
<rick_h_> manjiri: definitely.
<jcastro> manjiri, most of us got back to work today; we're not normally slow
<jcastro> well, not _too_ slow anyway. :)
<manjiri> jcastro: rick_h has helped me out a couple of times... It's all good.
<jcastro> sweet!
<manjiri> rick_h_: Just tried cs:precise/keystone with juju-deployer. Does not like it. Only likes "lp:"
<rick_h_> manjiri: ah, did you use the branch: keyword?
<rick_h_> manjiri: I think it only works with charm: cs:precise/keystone. /me goes to check
<rick_h_> manjiri: yea, see http://bazaar.launchpad.net/~jorge/charms/bundles/mediawiki-scalable/bundle/view/head:/bundles.yaml for an example
<manjiri> rick_h_: jcastro: It would be nice to know what in https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle is expected to work and what is not.
<manjiri> rick_h_: charm: keyword didn't work either
<marcoceppi> manjiri: where did you get juju-deployer from, what version is the package?
<manjiri> marcoceppi: 0.2.5-0ubuntu1~ubuntu12.04.1~juju1
<marcoceppi> manjiri: huh, yeah that's the latest
<manjiri> marcoceppi: My biggest concern is the documentation on juju-deployer. The icon issue is negligible at this time.
<marcoceppi> manjiri: have you checked this? http://bazaar.launchpad.net/~juju-deployers/juju-deployer/trunk/files/head:/doc/
<marcoceppi> manjiri: possibly a bit out of date
<manjiri> marcoceppi: Sorry, but I don't see the "manual" doc here
<marcoceppi> manjiri: config.rst
<manjiri> marcoceppi: Ah! That does not mention "local" or "weight" keywords. I tried "local" on a hunch and it worked (!) and "weight" is actually used in https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle. I am not stuck at the moment, but knowing that "latest" documentation isn't complete warns me that future issues will be hard to figure out.
<marcoceppi> manjiri: dev seems to be going faster than docs on this
<marcoceppi> manjiri: so, IRC is probably your best bet, and other people's examples
<manjiri> marcoceppi: Sounds like it. Thanks for your help!
<marcoceppi> manjiri: I saw you mentioned keystone, here's a few openstack examples in deployer format: http://bazaar.launchpad.net/~james-page/charms/bundles/openstack-on-openstack/bundle/view/head:/bundles.yaml
<manjiri> marcoceppi: It appears that yaml is more popular than json for deployer? Is that correct?
<marcoceppi> manjiri: it's subjective. both are supported equally given one is a subset of the other
<marcoceppi> manjiri: yaml is generally preferred and used within the juju project (environments.yaml, config.yaml, etc)
<marcoceppi> as such deployer uses yaml and most of the examples are in yaml
<marcoceppi> but I know json is def supported if you're more comfortable with that format
<manjiri> marcoceppi: My starting point was json file posted on https://help.ubuntu.com/community/UbuntuCloudInfrastructure/JujuBundle, but since then I have only seen yaml examples... I will switch to yaml.
<marcoceppi> manjiri: huh, interesting
<bic2k> If I wanted to move the juju CLI configs from my personal machine to a more central machine, what would I do? I have copied over the contents of my ~/.juju but it insists on wanting to bootstrap still
#juju 2014-01-03
<lazypower> bic2k: Are you still working through moving your juju install? I just recently did that.
<bic2k> lazypower: I think I got it figured out, added the new machines ssh key to the authorized_hosts on machine/0. So far so good, but haven't tried any fancy stuff yet
<lazypower> Yeah, That was my gotchya I ran into. I ended up just sharing the SSH key with the new host and all was well with the world.
<lazypower> Considering i've only done this with AWS and LOCAL environments I cannot speak to the openstack specifics.
<michal_s> marcoceppi: I have deployed clean WP on Azure, with no tuning and with juju set wordpress wp-content=git@github.com:smereczynski/jujuwp-content.git. Again nothing. I have logs from machines 0 and 1 here: https://gist.github.com/smereczynski/8236592
<michal_s> marcoceppi: could You please tell me if I'm doing something wrong?
<marcoceppi> michal_s: huh, that's odd. One min
<michal_s> marcoceppi: no problem, You don't need to hurry
<michal_s> I wish to present Juju and Windows Azure to students and this one thing seems to not work properly :/
<marcoceppi> michal_s: yeah, the code looks sound, as it should work, I'm going to spin it up - it's just bailing before doing the do_vcs function
<michal_s> marcoceppi: If You wish, I can give You code for Windows Azure test account
<michal_s> with this code You can create subscription without credit card
<marcoceppi> michal_s: no, that's okay. I've got accounts in all the clouds
<michal_s> ok :)
<marcoceppi> michal_s: can you describe "doing nothing"
<marcoceppi> if you ssh and move to /var/www/wp-content do you not see your git stuff?
<michal_s> marcoceppi: none of my stuff is in wp-content
<marcoceppi> michal_s: is the deployment still running?
<michal_s> yes
<marcoceppi> I think there's an idempotency issue with the charm
<marcoceppi> michal_s: have you completed the setup?
<marcoceppi> via the web interface
<michal_s> yes, setup is completed
<michal_s> site is running
<marcoceppi> michal_s: run `juju set wordpress wp-content=https://github.com/smereczynski/jujuwp-content.git`
<marcoceppi> just so that the command is a different value, and the config-changed hook will run
<michal_s> agent-state-info: 'hook failed: "config-changed"'
<michal_s> from juju status
<michal_s> and no changes in wp-config
<michal_s> share juju config output?
<michal_s> juju status*
<michal_s> https://gist.github.com/smereczynski/8237758
<marcoceppi> michal_s: well, a config-changed error, that's interesting. Can you pastebin the /var/log/juju/unit-wordpress-0.log from wordpress/0?
<michal_s> ok, moment
<michal_s> marcoceppi: https://gist.github.com/smereczynski/8237809
<marcoceppi> michal_s: try `juju resolved --retry wordpress/0`
<michal_s> marcoceppi: ok, it works!
<marcoceppi> michal_s: interesting, there's an idempotency error in the charm during configuration changes.
<marcoceppi> michal_s: I'll file a bug and try to follow up with a patch next week
<michal_s> great :) Thank You very much for Your help
<tclarke> having trouble d/ling charms due to https/ssl intercept firewall. Is there a way to disable cert checking (alternately, where is the cacert file juju uses so I can add the firewall certs)
<marcoceppi> tclarke: downloading charms where?
<tclarke> ERROR cannot get latest charm revision: Get https://store.juju.ubuntu.com/charm-info?charms=cs%3Aprecise%2Fjuju-gui: failed to parse certificate from server: x509: RSA modulus is not a positive number
<tclarke> juju deploy juju-gui
<marcoceppi> fwereade: Is there a way to disable cert checking during deploy?
<marcoceppi> tclarke: One way around this is to deploy from local
<marcoceppi> tclarke: you can download the charm to your computer, then run a deploy using that version
<tclarke> where does juju get its cacerts from? I can always add the local ones there
<marcoceppi> tclarke: it's in the public chain
<tclarke> the standard /etc/ssl chain?
<marcoceppi> yeah, they use a cert for all of *.ubuntu.com
<tclarke> I'll try installing the firewall certs there (the problem is ssl intercept proxies rewrite the cert chain for all ssl)
<tclarke> is there information on how to setup a local mirror? the only thing I see re: local repositories involves writing your own, nothing on how to pull a cs charm to a local mirror  and deploy
<marcoceppi> tclarke: you can use charm-tools to download charms
<marcoceppi> from the stable ppa (ppa:juju/stable)
<marcoceppi> once installed, and you have your directory structure, just `juju charm get <charm-name>`
<marcoceppi> there's a juju charm get-all but it takes forever to run, better to just grab the ones you're interested in
<tclarke> juju charm get juju-gui
<tclarke> Error: Could not locate charm in store.
<marcoceppi> tclarke: oh, haha, it uses SSL as well
<marcoceppi> to look up things via the API
<marcoceppi> tclarke: if you have bazaar installed, try `bzr branch lp:charms/juju-gui`
<surgemcgee> Is it possible to set up a meeting or maybe discuss our cloud infrastructure needs here? It is hard to wrap one's mind around the concept of cloud scalability. I would be licensing the charm GPL v3.
<hatch> surgemcgee this would be a good place but it looks like most people are offline atm, maybe try back tomorrow or Monday
<marcoceppi> surgemcgee: check pm
#juju 2014-01-04
<Gris> hello, I need help: when I do a "juju add-machine lxc:0" it creates the container, but it's stuck in pending state forever. Can someone explain?
<jalcine> Question: under http://cloud-images.ubuntu.com/vagrant/precise/20131223/ there's vagrant boxes and then there's juju vagrant boxes
<jalcine> Obviously, juju's preinstalled; but is there anything else on there?
<jalcine> Just curious
<jalcine> because it doesn't work with Vagrant, it's missing vbox files
<g_hpc_geek> anyone using juju to deploy django ?
#juju 2014-01-05
<ghartmann> hi
<thumper> o/
#juju 2014-12-29
<weblife> Anyone interested in reviewing a tutorial's code for an Express-based CRM application? I want to ensure I am using some of the best practices around before I start to elaborate on the code itself in the article I am writing. It works fine, but I want to teach only good practices, even though I think mine are great!
<weblife> It is going to be part of a bigger series for deployment with Juju
<weblife> If anyone has input on code for a tutorial guide I am putting together for an Express based CRM using Node.js and MongoDB, I would greatly appreciate it. The code: https://github.com/webbrandon/simple-secure-auth
<thumper> avoine: how close is the python-django charm that supports django 1.7 and python 3?
<thumper> avoine: bonus points for virtual envs too...
 * thumper would like to use it now...
#juju 2014-12-30
<mwak> ping marcoceppi
<marcoceppi> mwak: pong
#juju 2015-01-01
<lazyPower> Happy  New Year all you charming people! May 2015 bring you all the best.
#juju 2015-01-02
<ayr-ton> happy new year, my friends \o/
<mmance> Do I need to bootstrap every node?
#juju 2016-01-04
<blahdeblah> What's the name of the API that charms.reactive uses to send data between layers?  Is it charms.reactive.bus?
<blahdeblah> Or maybe it wasn't in charms.reactive - does charmhelpers.core.unitdata ring any bells?
<marcoceppi> blahdeblah: unitdata.kv is best
<blahdeblah> marcoceppi: thanks!
<lazypower_> hellooo 2016. o/ bring on your juju questions, your tired charmers, and lets make this the year of juju
<rick_h_> lazyPower: :P
<lazyPower> little pep talk to get me back in the spirit of things :)
<rick_h_> lazyPower: wfm
<marcoceppi> \o/
<bdxbdx> Hey whats going on guys? Happy New Year!
<bdxbdx> I have a few questions regarding the review queue....I feel like I have tried to get a charm into the queue by adding "charmers" to the reviewers of my charm branch on the MR
<bdxbdx> although, I can't get it to populate in the review queue, is there something I'm missing here?
<marcoceppi> bdxbdx: is it an openstack charm?
<bdxbdx> marcoceppi: no
<marcoceppi> link?
<bdxbdx> marcoceppi: https://code.launchpad.net/~jamesbeedy/charms/trusty/fiche/trunk
<marcoceppi> bdxbdx: where's the bug?
<bdxbdx> marcoceppi: there is none
<marcoceppi> bdxbdx: well, you need one to get a new charm into the review queue
<bdxbdx> marcoceppi: exactly....
<marcoceppi> bdxbdx: https://jujucharms.com/docs/stable/authors-charm-store#submitting-a-new-charm
<marcoceppi> bdxbdx: here's a better link: https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms
<bdxbdx> marcoceppi: awesome, that is just what I was looking for
#juju 2016-01-05
<suchvenu> Hi
<suchvenu> Do all the charms that are tested on local containers and AWS work on Canonical MAAS and VMs if we just configure these environments?
<lathiat> suchvenu: generally speaking the charms should work on any cloud provider
<suchvenu> ya. So if we have the environment configured for any of the Juju supported environments, it should ideally work, right?
<suchvenu> Also is Softlayer supported by Juju ?
<lathiat> yeah, i am sure there are some exceptions, i.e. some less well supported/third party charms might make a bad assumption or may depend on some cloud specific service.. but that would be the exception and unlikely found in any of the major supported charms
<lathiat> certainly if it works on a local container, it would likely work in anything maas, vm/metal/containers
<suchvenu> ok
<suchvenu> Also is Softlayer supported by Juju ?
<lathiat> if you google, it would seem to suggest there is a third party plugin for it but it's possibly outdated
<lathiat> you can see a list of providers at https://jujucharms.com/docs/stable/getting-started if you expand the "Install & Configure" section with the + symbol
<suchvenu> Ya i had a look at this section, but couldn't find Softlayer there.
<suchvenu> Thanks lathiat for your response.
<jamespage> gnuoy, morning
<jamespage> any chance of a review of https://code.launchpad.net/~james-page/charm-helpers/lp1531102/+merge/281589
<jamespage> our version detection code is a little foobar for >= liberty
<gnuoy> jamespage, sure
<jamespage> gnuoy, do we have that auto-resync process yet?
<jamespage> ;)
<gnuoy> jamespage, we do not I'm afraid
<jamespage> gnuoy, OK I'll raise the MP's now then
<marcoceppi> lazyPower: you around?
<icey> anybody have time to take a look soon at a new charm layer before I push it out to the world?
<marcoceppi> icey: I could take a look but it wouldn't be immediate
<icey> marcoceppi https://github.com/ChrisMacNaughton/juju-layer-rails is where it lives for now
<icey> and thanks!
<lazyPower> marcoceppi i am
<sharan> Hi kevin
<sharan> i was implementing a peer relation
<sharan> when i remove a unit, i want to restart the server on all the units
<sharan> where can i implement this piece of code?
<sharan> do i need to implement this code in the relation-departed hook?
<sharan> Hi
<sharan> i am implementing a peer relation; once i remove a unit, the server has to be restarted on all the containers. how can i achieve this?
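One way to sketch what sharan is asking about, assuming a classic hook-based charm: Juju runs the peer relation's `-relation-departed` hook on every remaining unit when a unit is removed, so a restart there reaches all survivors. Everything below (the relation name "cluster", the service name) is illustrative, not from the log:

```python
#!/usr/bin/env python
# Illustrative hooks/cluster-relation-departed for a peer relation named
# "cluster": restart the local service so it re-reads peer membership.
import os
import subprocess

SERVICE = "myserver"  # placeholder for the real service name


def departed_unit():
    """Name of the peer that is leaving, taken from the hook environment."""
    return os.environ.get("JUJU_REMOTE_UNIT", "")


def main():
    # juju-log records a line in the unit log; then bounce the service.
    subprocess.check_call(
        ["juju-log", "peer %s departed; restarting %s" % (departed_unit(), SERVICE)])
    subprocess.check_call(["service", SERVICE, "restart"])


if os.environ.get("JUJU_RELATION"):  # only act when juju invokes this as a hook
    main()
```

If the restart should happen only once the whole relation is gone, the same logic could live in the relation-broken hook instead.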
<bdx> marcoceppi: I've added my charm to the review queue, it looks like some jenkins tests ran and failed for aws and lxc, it also looks like jenkins deployed the tests on precise. Also, I can't seem to login to the review queue.
<bdx> marcoceppi: is this all expected behavior?
<sharan> i am implementing a peer relation; once i remove a unit, the server has to be restarted on all the containers. how can i achieve this?
<jose> tvansteenburgh: ping
<tvansteenburgh> jose: hey
<jose> tvansteenburgh: just wondering, is it usual for the CI infrastructure to delete test results after a certain period of time?
<tvansteenburgh> jose: it's expected, yeah
<jose> hmm ok
<tvansteenburgh> not necessarily ideal, but it will be addressed
<tvansteenburgh> plan is to link to the parsed results page instead of the ci log
<tvansteenburgh> since those are archived in a db and won't go away
<jose> ok, awesome
<jose> was worried because when I was checking at some test results for unreviewed MPs they were gone
<tvansteenburgh> yeah, sorry about that. feel free to rerun tests when that happens until this is improved
<jose> thanks!
<jose> marcoceppi: hey! just as a reminder, if there's anything related to charms that could use a hand, there's some GCI students eager to help
<marcoceppi> jose: yes, I'm about to submit some more tasks
<jose> woot woot!
<jcastro> marcoceppi: would submitting some charms consuming like the php layer be GCI appropriate?
<marcoceppi> jcastro: maybe?
<marcoceppi> I mean, we want people to maintain charms, not really drive by
<stokachu> is there docs other than the email that describe the layer.yaml options?
<stokachu> marcoceppi, ^
<stokachu> never did see that followup post to the announce email on an example
<rick_h_> stokachu: https://jujucharms.com/docs/devel/authors-charm-building have anything?
<stokachu> rick_h_, nah doesn't have anything describing the new layer.yaml options feature
<rick_h_> stokachu: oh, :(
<stokachu> it supposedly uses jsonschema
<stokachu> but the layer config is yaml
<stokachu> bcsaller, any docs on this ^?
<marcoceppi> stokachu: sorry, not yet, but ehre's an example
<marcoceppi> stokachu: https://github.com/juju-solutions/reactive-base-layer/pull/18/files
<marcoceppi> stokachu: while building the example I found a bug
<marcoceppi> https://github.com/juju/charm-tools/issues/85#issuecomment-168428138
<stokachu> marcoceppi, nice so defines: then the options in jsonschema form?
<marcoceppi> stokachu: yes, and the key name is the key that's used
<marcoceppi> stokachu: it follows actions.yaml format almost to a T
<stokachu> so packages would be the option name
<stokachu> and it would be prefixed?
<stokachu> so like nodejs-packages as the config option to query
<stokachu> marcoceppi, ^
<marcoceppi> stokachu: see this for example:
<marcoceppi> https://github.com/juju/charm-tools/issues/85#issue-124516900
<marcoceppi> stokachu: options is a dictionary of layers whose keys are dictionaries of key-val
<marcoceppi> stokachu: so it's options: nodejs: packages:
<marcoceppi> stokachu: however, packages we agreed should be a basic layer functionality
<marcoceppi> stokachu: we just have a problem where lists and dicts are not additive
<marcoceppi> we're working on a fix
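Pieced together from marcoceppi's description and linked examples, the shape is roughly this. A sketch only: exact keywords may differ across charm-tools versions, and the layer and option names are illustrative:

```yaml
# In the layer that declares an option (e.g. a nodejs layer's layer.yaml):
# "defines" holds jsonschema-style fragments, keyed by option name.
defines:
  packages:
    type: array
    default: []
    description: Extra packages this layer should install.

# In a charm layer consuming it, "options" maps layer name -> key/values,
# i.e. options -> nodejs -> packages, as described above.
options:
  nodejs:
    packages: [nodejs-legacy, npm]
```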
#juju 2016-01-06
<stokachu> marcoceppi, ok cool
<jamespage> gnuoy, thanks for those +1's and merges
<jamespage> poking those test failures for swift-*, glance and keystone now
<gnuoy> kk
<jamespage> gnuoy, beisner, https://code.launchpad.net/~james-page/charm-helpers/systemd-support/+merge/281742
<jamespage> I'll raise a temp merge for glance with that branch synced
<jamespage> to test it out
<jamespage> like
<jamespage> no I won't - I just did that locally and its good
<jamespage> gnuoy, beisner: and https://code.launchpad.net/~james-page/charm-helpers/haproxy-stats-1.6/+merge/281747
<jamespage> for mitaka support
<Geetha> @mbruzek: Hi, I got a comment for WAS base to include a reverse proxy to exercise the website relationship. But WAS base does not support clustering; it is standalone. Does it make sense to include a reverse proxy for standalone WAS base?
<lazyPower> Geetha: even if it doesn't support clustering, you  may still want to expose the endpoint elsewhere in your environments
<mbruzek> Hello Geetha.  Are you certain it does not support clustering?  I thought all WebSphere products could cluster?
<lazyPower> especially when we land cross environment relationships. I can see having a DMZ environment to expose load balancers, and running the apps in another environment. This isn't as distant future as you might think :)
<Geetha> Websphere ND does support clustering. And I confirmed with product team that Websphere base product does not support clustering.
<mbruzek> Geetha: If that is true (and I believe you) I would look for other opportunities for relations, such as a relation to DB2 or _something_ to relate to.  A charm that does not relate to anything is not very useful all by itself.
<mbruzek> Geetha: So my comment was meant to say that a charm with zero relations is not very useful, but if you could find any other relations (I listed a few without knowing that information) then we should implement those
<Geetha> ok, I will try to find other possible relations that I could implement with the WebSphere base product.
<icey> how do I configure juju to bootstrap an environment through a socks proxy
<jamespage> coreycb, hey - just looking at your aodh changes for ceilometer charm
<jamespage> coreycb, do we need to start generating some aodh config files?
<coreycb> jamespage, yeah it looks like at least aodh.conf
<coreycb> jamespage, tests are failing on that mp btw and I've not looked into them yet
<jamespage> if there is anyone around with the right memberships, I need a review on:
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/systemd-support/+merge/281742
<jamespage> and
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/haproxy-stats-1.6/+merge/281747
<beisner> coreycb, re-triggered your ceilo amulet test (looked like an undercloud instance issue).  looks like a lil lint cleanup to do on the unit tests while that cooks ;-)
<coreycb> beisner, ah thanks!
<beisner> yw!
<coreycb> jamespage, there's a patch in python-keystoneauth1 that's not yet been uploaded to debian (we have it in ubuntu), so that should fix it if you wouldn't mind uploading python-keystoneauth1 first
<coreycb> wrong channel
#juju 2016-01-07
<AlecTaylor> hi
<marcoceppi> Hi AlecTaylor
<AlecTaylor> hi
<AlecTaylor> Each of the charm dependencies and juju in general, are they available as seperate packages? - E.g.: kubernetes, etcd, mesos
<jamespage> gnuoy, I need a +1 on https://code.launchpad.net/~james-page/charm-helpers/systemd-support/+merge/281742 if you have time
<gnuoy> looking
<jamespage> that will unblock amulet tests on keystone glance and swift-*
<jamespage> gnuoy, and then:
<jamespage> https://code.launchpad.net/~james-page/charm-helpers/haproxy-stats-1.6/+merge/281747
<jamespage> that fixes mitaka/xenial and a long standing security concern from elmo
<gnuoy> ack
<jamespage> gnuoy, ta for the +1 on the first
<gnuoy> jamespage, want me to land it or will you?
<jamespage> gnuoy, I'll land them and then re-sync the failing branches
<gnuoy> kk
<jamespage> gnuoy, those last four blocking mps for resyncs now test ok so landing them
<gnuoy> tip top
<durschatz> Is there an issue with ppa:juju/stable?
<durschatz> sudo add-apt-repository ppa:juju/stable
<durschatz> Cannot add PPA: 'ppa:juju/stable'.
<durschatz> Please check that the PPA name or format is correct.
<durschatz> sorry the problem was my DNS config
<jamespage> gnuoy, thedac: time for a review? - https://code.launchpad.net/~james-page/charms/trusty/cinder/lp1521604/+merge/281799
<gnuoy> jamespage, looking
<gnuoy> jamespage, +1
<jamespage> gnuoy, ta - I also have a cherry pick of the version detection fix to land into stable charms - just raising the MP's now
<cmars> can layers include layers, or is layer.yaml only valid at the top level?
<rick_h_> cory_fu_: ^ ?
<marcoceppi> cmars: you can include as many layers as you'd like
<cmars> so let's say i have layers a, b, and c. b has a layer.yaml that includes a, c has a layer.yaml that includes b, result gets a+b+c?
<marcoceppi> cmars: yes
<cmars> awesomesauce
<cmars> thanks
<lazyPower> cmars: be careful about recursively linking charm layers - there are scenarios where the builder will get caught in an infinite loop
<lazyPower> eg if layer a and c depend on b, but b includes a, you'll get caught in a never-ending loop of fetching those layers/wheels/etc.
<cmars> lazyPower, thanks for the warning. no cycles in this graph :)
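cmars's a/b/c example would look something like this, one layer.yaml per layer. A sketch: the `layer:` prefix follows charm-tools naming conventions, and a cycle among these includes is exactly what lazyPower warns can hang the builder:

```yaml
# a/layer.yaml -- base layer, includes nothing
includes: []

# b/layer.yaml -- pulls in a
includes: ['layer:a']

# c/layer.yaml -- pulls in b, so building c yields a+b+c
includes: ['layer:b']
```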
<natefinch> urulama, rick_h_: not sure who to address this to but... is it me, or is the charm search kinda... wacky?
<urulama> natefinch: oh?
<urulama> natefinch: what seems to be wrong?
<natefinch> urulama: well, searches seem to return a bunch of random charms that have nothing to do with the search term
<natefinch> urulama: like, I can search for "natefinch" and it returns 100 charms, only 3 of which actually seem to reference my username anywhere
<urulama> natefinch: if nothing relevant is found, the search term is broken into n-grams of different sizes
<urulama> natefinch: so, nch is a 3-gram for vbench ... not good, but if nothing relevant is found, all is equally bad :)
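urulama's fallback can be sketched in a few lines of Python (illustrative only, not the store's actual implementation):

```python
# Minimal sketch of n-gram fallback matching: when a query has no exact hit,
# break it into character n-grams and treat any shared gram as a weak match.
def ngrams(text, n=3):
    """All length-n character substrings of text, as a set."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def weak_match(query, name, n=3):
    """True if query and name share at least one n-gram."""
    return bool(ngrams(query, n) & ngrams(name, n))

# "natefinch" and "vbench" share the 3-gram "nch", which is why vbench can
# surface in a search for natefinch despite the names being unrelated.
```

For example, `weak_match("natefinch", "vbench")` returns True, matching the behavior described above.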
<rick_h_> natefinch: it's the whole "promulgated shows up first" problem
<rick_h_> natefinch: agreed jujucharms.com needs to move past that as we get going
<rick_h_> natefinch: e.g. not showing all user's charms but showing users might work out.
<natefinch> urulama: I'd rather see no hits than things that are very very very unlikely to be what I'm looking for.  Or at least say "limited results, here are some potential other charms that matches searches like yours"
<rick_h_> natefinch: and ratcheting up the scoring for matches so that fewer promulgated 'potential matches' are displayed in these bad cases
<rick_h_> natefinch: the first ones in the "Recommended" section are the real best matches
<rick_h_> natefinch: but we never show user material above promulgated
<rick_h_> urulama: weren't those going to get renamed?
<urulama> natefinch: yeah, agree, and search is being redefined
<rick_h_> urulama: did that deploy not get out yet?
<jrwren> a stemming search instead of ngram?
<urulama> rick_h_: with new deploy, yes
<rick_h_> urulama: ok, I wasn't sure when the deploy timeline was
<urulama> rick_h_: it goes on staging next week and then production
<rick_h_> natefinch: but in answer to your question, a bug filed with replication results and expected behavior is the 'who to adress this'
<rick_h_> urulama: gotcha cool
<natefinch> rick_h_: cool, will do
<rick_h_> natefinch: but yes, it's a known shortcoming of search and something that'll be tweaked as the store moves forward and expands
<natefinch> rick_h_: ok, good to hear.  'cause right now, the impression is "search is (almost) completely broken"
<natefinch> (at least if you're searching for a non-promulgated charm and/or your search methodology is poor :)
<rick_h_> natefinch: it all depends on what you look for
<rick_h_> search for 'open' and you get a ton of openstack as you should
<rick_h_> natefinch: but usernames, not generally great, but that'll get better with new publish and users 'owning' their promulgated stuff
<natefinch> rick_h_: the first thing I searched for was a charm name that I wanted to know if it had been picked up yet or not... I expected either 0 or 1 result. I got 100
<rick_h_> natefinch: right, and we want to be amazingly helpful and see if you typod, or if you meant something else, or ...
<natefinch> rick_h_: all it takes is a "no exact matches found, here's some others..." I think.  BUt I'll file a bug.
<bdx> thedac: sup
<thedac> hey bdx how's it going?
<bdx> hey whats up man? going well....I was wondering if I can get your take on an issue I'm having
<thedac> bdx: I'll try I am currently juggling a few issues.
<natefinch> rick_h_: uh... dumb question.  where do I file the bug?
<bdx> thedac: its cool man...no worries...
<rick_h_> natefinch: footer, "Report a bug on this site"
<natefinch> rick_h_: oh man.  It's snuck in with the copyright.  Never would have found that.
<bdx> I guess I could just let er' rip
<natefinch> rick_h_: (I even looked in the footer)
<rick_h_> natefinch: good, most folks will not report bugs then bwuhahaha!
<bdx> openstack-charmers: I'm experiencing an issue where, when I try to attach > 1 networks to a neutron router....everything breaks
<bdx> openstack-charmers: I've vlan tenant networks configured, and flat external
<bdx> using dvr
 * natefinch reports a bug about how hard it is to find the report a bug link.
<bdx> openstack-charmers: I have had success connecting multiple vlan networks to the same router before in kilo, just not with openstack deployed by juju
<bdx> I'm hesitant to say, but I think there is a misconfiguration somewhere within the neutron config
<thedac> bdx: best bet is to file a bug https://bugs.launchpad.net/charms/+filebug with as much detail on how to recreate the failure as possible. (probably against neutron-api)
<bdx> openstack-charmers: I'm currently in the process of writing up a bug for this, but I was hoping to get some insight from someone asap
<bdx> I'm super bummed, because I only created a /24 network, and now need to expand ip space
<thedac> bdx: sorry, I have not run into this. so no off the cuff ideas
<bdx> openstack-charmers: I can create other neutron routers, and add networks to them, and then create routes to each other through the use of another router, static routes on each network, and static ports
<bdx> but it's hacky
<bdx> thedac: its cool man....I just am at a point where I can't be taking down the stack, neutron, or network anymore, so dealing with this is slightly more touchy
<bdx> and nerve-wracking
<lazyPower> bdx: understandable
<bdx> thedac: I was hoping you might help put some heat on this as you have in the past, but if you are busy, no worries
<bdx> lazyPower: you trying to get some of this?
<lazyPower> no but i follow what you're asking
<thedac> bdx: yeah, we have a number of priorities at the moment. So the bug report with logs is the best way to get a response
<lazyPower> and i dont have any off the cuff ideas either.
<bdx> lazyPower, thedac: my approach to this is to replicate the issue on a test stack, and test/apply fixes there
<lazyPower> bdx: sounds reasonable
 * bdx is sad
 * bdx can easily build a new test stack and replicate the issue w/o wasting time or effort
 * bdx is thankful for juju
<bdx> openstack-charmers: ok bug time, look out
<natefinch> urulama: is the charm store ingesting stuff from launchpad still?  I pushed a charm to my local namespace 8 hours ago and it's still not showing up.
<lazyPower> natefinch: any proof errors when you run `charm proof`?
<lazyPower> natefinch: and the store is still ingesting from launchpad unless that changed today
<natefinch> lazyPower: does my charm have to pass charm proof for it to get published to my personal namespace?
<lazyPower> i feel like that was myth-busted in the past, but it doesn't hurt to give it a go
<rick_h_> natefinch: use charm2 and no
<rick_h_> natefinch: highly suggest trying the charm2 publish work vs ingestion and it'd be great to expand the folks dog fooding internally
<rick_h_> natefinch: yes, the old ingestion will skip things that have an E in charm proof
<natefinch> ah that may well be it
<natefinch> I'll try charm2, though. Certainly better than push and wait with no feedback :)
<rick_h_> natefinch: that's the plan :)
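(For reference: a rough sketch of the charm2 push/publish flow rick_h_ is suggesting, based on the subcommands that shipped in later charm-tools 2.x releases; the exact syntax in the yellow PPA build being dogfooded here may have differed, and the charm name is hypothetical.)

```shell
# upload the charm directly to the store under your namespace (no ingestion wait)
charm push . cs:~natefinch/mycharm

# publish the returned revision so others can see it
charm publish cs:~natefinch/mycharm-0

# open up the read ACL if you want it publicly deployable
charm grant cs:~natefinch/mycharm everyone
```

Unlike the old Launchpad ingestion, `charm push` gives immediate feedback on failure instead of silently skipping charms.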
<marcoceppi> rick_h_: if you use charm2 publish, deployer won't work
<rick_h_> marcoceppi: oh? do we know why?
<marcoceppi> rick_h_: it's not in the v3 api
<marcoceppi> we didn't roll forward because we were expecting juju deploy in 1.26
<rick_h_> marcoceppi: huh? why does deployer look at v3 api for a deploy to work with a cs url?
<marcoceppi> rick_h_: /me shrugs
<marcoceppi> rick_h_: I thought that was the reason
<marcoceppi> it might not be
<rick_h_> marcoceppi: I'd hope not. I wasn't aware of it sorry.
<rick_h_> marcoceppi: a charmstore charm uploaded, published, and public ACLs I'd expect juju to deploy with deployer just fine
<marcoceppi> rick_h_: it doesn't atm, otp but I'll get some debug info
<marcoceppi> kwmonroe: ^^
 * natefinch has never in his life used deployer, so no problem there
<rick_h_> marcoceppi: k, getting bac to join so we can look into it
<bac> hi rick_h_
<rick_h_> bac: marcoceppi was telling me of an issue with charm2 and deployer: http://paste.ubuntu.com/14432496/
<rick_h_> bac: can you work with uros to schedule investigating this please?
<rick_h_> bac: I'd expect it to work fine, but must be missing something
<bac> rick_h_: sure
<rick_h_> bac: marcoceppi will get some more debug info on specific charm/bundle file used, etc
<rick_h_> bac: ty!
<urulama> marcoceppi: which charm is that? also, is it multiseries charm? if it is, not sure juju handles that already
<rick_h_> urulama: go EOD :P
<urulama> rick_h_: waiting for my summer clothes to get out of the washer :)
<urulama> not much use for them lately :)
<rick_h_> urulama: heh same here
<rick_h_> last load in the dryer!
<marcoceppi> urulama rick_h_ benchmark-gui
<marcoceppi> not multi-series
<urulama> marcoceppi: https://api.jujucharms.com/charmstore/v4/benchmark-gui/meta/extra-info
<urulama> marcoceppi: is this latest?
<marcoceppi> urulama: yup
<urulama> marcoceppi: hm, have just deployed that charm
<urulama> using 1.25 that is
<natefinch> so... how do I get charm2?
<marcoceppi> urulama: put it in a bundle and use deployer
<urulama> marcoceppi: ha, i think i know what it might be ... do "rm ~/.go-cookies" please
<urulama> natefinch: it's under PPA in yellow. if you want to be tester, we can add you
<natefinch> urulama: would love to help test.  No idea what yellow is, though :)
<rick_h_> natefinch: think LP teams
<rick_h_> natefinch: that have never been renamed over the years because that's work
<natefinch> lol
<rick_h_> now you know what yellow is :)
<natefinch> I think I joined just as the color teams were being phased out.
<rick_h_> natefinch: ah ok
<urulama> marcoceppi: did that help?
<marcoceppi> urulama: in a meeting, will have to try later
<urulama> kk
<kwmonroe> urulama: marcoceppi:  quick test.yaml with benchmark-gui fails with deployer giving a 404: http://paste.ubuntu.com/14432687/
<kwmonroe> bac: fyi  ^^ in case you're looking at deployer failing with charm2 publish
<bac> thanks kwmonroe
<kwmonroe> np bac, fwiw, the store_url it's hitting is https://store.juju.ubuntu.com/charm/trusty/benchmark-gui-0
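(A reconstruction of roughly what kwmonroe's `test.yaml` would have looked like in the juju-deployer v3 bundle format; the deployment name and charm URL here are illustrative, not copied from the actual file.)

```yaml
# test.yaml -- minimal deployer bundle exercising a charm2-published charm
test:
  series: trusty
  services:
    benchmark-gui:
      charm: cs:trusty/benchmark-gui
      num_units: 1
```

Run with `juju-deployer -c test.yaml`; the 404 in the paste comes from deployer resolving the charm URL against the legacy store.juju.ubuntu.com endpoint rather than api.jujucharms.com.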
<urulama> hm
<urulama> kwmonroe: deployer uses legacy charm store
<urulama> PING store.juju.ubuntu.com (91.189.95.67) 56(84) bytes of data.
<urulama> 64 bytes from pupunha.canonical.com (91.189.95.67): icmp_seq=1 ttl=128 time=35.4 ms
<kwmonroe> also fwiw, "juju quickstart test.yaml" deploys the gui and then doesn't seem to do anything.  notification area in the gui says "deployment in progress", but the benchmark-gui charm is never deployed.
<urulama> jujucharms is on 162.213.33.121
<urulama> so, no wonder it doesn't exist, you're not using the charmstore to which it was uploaded
<kwmonroe> that's with juju-quickstart-2.2.4+bzr147+ppa42~ubuntu15.04.1
<d4rks1d3> Hi
<d4rks1d3> Anyone know if it is possible to have different settings for units
<urulama> kwmonroe: ok, we'll look into it
<kwmonroe> thanks urulama - do you want me to try a different store url in deployer?
<urulama> kwmonroe: try with https://api.jujucharms.com/charmstore
<kwmonroe> yup urulama, that works when deployer grabs https://api.jujucharms.com/charmstore/charm/trusty/benchmark-gui-0
<kwmonroe> would something similar be causing the apparently stalled deployment with quickstart?
<urulama> quickstart should be using the new store already
<kwmonroe> hm, ok.  i'll open a bug with the potential fix for deployer and repro steps for quickstart
<d4rks1d3> Does anyone know whether or not is possible to have different config settings for 2 instances/units of the same service? thanks in advance for your support
<blahdeblah> d4rks1d3: configs apply to all units of a service; to have different ones you need to create a new service with the same charm and set the config differently for that service
<d4rks1d3> Thanks blahdeblah, you mean to copy the whole charm and modify the "name:" field in metadata.yaml?
<blahdeblah> d4rks1d3: No, there's no need to copy the charm; you just deploy it using a different service name
 * blahdeblah digs for exact syntax
<d4rks1d3> Thanks a lot blahdeblah
<blahdeblah> d4rks1d3: Looks like you just add the service name on the end of the deploy command.  By default it matches the charm name, but you can specify something different.  juju deploy --help should show you all you need to know.
<d4rks1d3> Thanks a lot blahdeblah
<blahdeblah> d4rks1d3: You're welcome! :-)
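(A minimal sketch of what blahdeblah describes, using juju 1.x syntax from this era; `mysql` and the `dataset-size` option are just examples, not taken from the conversation.)

```shell
# deploy the same charm twice under two different service names
juju deploy mysql mysql-a
juju deploy mysql mysql-b

# each service now carries independent config
juju set mysql-a dataset-size=512M
juju set mysql-b dataset-size=2G
```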
<kwmonroe> bac: urulama: marcoceppi, bugs for deployment failure with charm2 charms for quickstart (https://bugs.launchpad.net/juju-quickstart/+bug/1532005) and deployer (https://bugs.launchpad.net/juju-deployer/+bug/1531999)
<mup> Bug #1532005: deployment failure when bundle includes charm2 published charm <juju-quickstart:New> <https://launchpad.net/bugs/1532005>
<mup> Bug #1531999: deployer 404s on new published charms <juju-deployer:New> <https://launchpad.net/bugs/1531999>
<frankban> kwmonroe: the quickstart bug does not really seem a bug, did the deployment fail?
<kwmonroe> frankban: it didn't deploy any charm from the bundle.. it just stops after juju-gui.
<frankban> kwmonroe: oh I think it will fail because it still uses the deployer under the hood
<kwmonroe> ah, gotcha frankban.. so a deployer fix would fix quickstart too.. i'll note that in the quickstart bug.
<frankban> kwmonroe: I am updating quickstart bug
<kwmonroe> ack - thanks frankban!
<frankban> kwmonroe: done, ty
<lazyPower> blahdeblah nice pickup. :+1:
<blahdeblah> lazyPower: why thank ya! :-)
<blahdeblah> lazyPower: BTW, still trying to get my head around your DNS-Charm; still keen for a hangout or something to pick your brain on it sometime.
<lazyPower> sure
<lazyPower> I'd really like to re-write it using layers
<lazyPower> as implementing new providers as layers would be much easier than the python module rig-a-maroo i have in there now
<lazyPower> but thats an exercise in time management i cant afford this cycle
<blahdeblah> Yeah - layers makes a huge amount of sense for that
#juju 2016-01-08
<jose> marcoceppi: is there anyone who's got op access to this channel to fix ^?
<marcoceppi> jose: fix what?
<jose> marcoceppi: there's someone who's joining and parting massively due to ping timeouts, been like that for almost a week now
<marcoceppi> apuimedo, you mean? What would you like me to do about it?
<jose> yep, /mode +b$##fix_your_connection
<marcoceppi> jose: I'm not a channel operator, I'll find out who
<jose> says niemeyer but there's probably someone else
<niemeyer> jose, marcoceppi: have you tried talking to him?
<jose> niemeyer: he won't get messages, he's timing out just after he connects
<lazyPower> I speak to apuimedo_  bi-weekly give or take. probably a bouncer issue
<niemeyer> Let's try that first..
<lazyPower> niemeyer: i've got a reminder set, i'll reach out in the AM
<lazyPower> EU hours
<niemeyer> lazyPower: Thanks!
<lazyPower> np
<lazyPower> cargonza - so your messages aren't coming through?
<lazyPower> mbruzek: https://bugs.launchpad.net/juju-core/+bug/1532063
<mup> Bug #1532063: Unit seems unable to proceed after watcher died in the middle of hook execution <juju-core:New> <https://launchpad.net/bugs/1532063>
<philip_stoev> Hi, I wanted to ask if there is a trick that would serialize the execution of the hooks so that they run sequentially across the entire environment, rather than just within a machine?
<philip_stoev> In other words, only one hook is running at any given time within the environment
<tiagogomes_> Hello! Is it possible to use JuJu with multiple providers? Or use Juju on an OpenStack installation with multiple regions?
<stub> tiagogomes_: I don't know about multiple regions. But for multiple providers, you currently need to choose one and add vms from the other provider manually (using 'juju add-machine ssh:... ')
<stub> tiagogomes_: It works, but you need to handle networking stuff yourself. I use it for hybrid OpenStack/MAAS deployments.
<tiagogomes_> stub cool, and does the JuJu GUI support multiple environments as well?
<tiagogomes_> looks like no :(
<coreycb> jamespage, gnuoy:  can one of you review this? https://code.launchpad.net/~corey.bryant/charm-helpers/git-1531612/+merge/281994
<gnuoy> coreycb, ack, I can take a look
<coreycb> gnuoy, thanks
<lazyPower> tiagogomes_ the gui will support multiple environments (the term will change to models soon) ~ when juju 2.0 lands
<lazyPower> tiagogomes_: we debuted this feature at the last ODS, the videos are still floating around on youtube :)
<tiagogomes_> lazyPower nice! is there an expected release date?
<lazyPower> tiagogomes_ dont quote me on this but i think its in march
<lazyPower> its coming soon, but i forget the dates. a hair preoccupied with the work leading up to that launch :)
<tiagogomes_> lazyPower thanks!
<d4rks1d3> Hi
<rick_h_> tiagogomes_: lazyPower the GUI will be out this month
<rick_h_> tiagogomes_: lazyPower but the multi-models support in Juju is 2.0 in 16.04 in april (testable in alphas currently)
<lazyPower> o/ d4rks1d3
<lazyPower> rick_h_ fan-tastic!
<lazyPower> rick_h_ correct me if i'm wrong, but the new gui is currently beta-able as well right?
<rick_h_> lazyPower: yes
<rick_h_> lazyPower: there's a charm in the store to test
<lazyPower> when i'm not buried in TLS libraries, i'm soooo on board
<rick_h_> search the juju mailing list for title "Juju GUI 2.0 - public beta"
<rick_h_> with various instructions on getting/testing/filing bugs and such
<lazyPower> flagged, and earmarked
<lazyPower> thanks rick
<rick_h_> lazyPower: np
<coreycb> gnuoy, I have another little follow-on mp for charm-helpers if you could take a look: https://code.launchpad.net/~corey.bryant/charm-helpers/depth-clone-only/+merge/282016
<coreycb> gnuoy, it's a corner case so I don't think we need to pick it up in the openstack charms until the next larger sync
<Odd_Bloke> lazyPower: Forgive my ignorance, but where can I find the test code that produced the failure in https://bugs.launchpad.net/charms/+source/ubuntu-repository-cache/+bug/1526928 ?
<mup> Bug #1526928: Test fails due to peer having rsync cron job <ubuntu-repository-cache (Juju Charms Collection):New> <https://launchpad.net/bugs/1526928>
<Odd_Bloke> lazyPower: Oh, it's in the charm.  *hangs head*
<lazyPower> Odd_Bloke <3
<lazyPower> and <3 as well
<natefinch> rogpeppe: first off, lemme say, holy crap, charm upload is awesome. First time I tried it I thought something was broke because it worked so fast and flawlessly.
<natefinch> rogpeppe: however, I can't actually deploy the charm... I keep getting "resolved charm URL has no series"
<pmatulis> trying to test 1.26-alpha3 with LXD. anyone know what's going on here? http://paste.ubuntu.com/14440220/
<marcoceppi> pmatulis: you're not on wily or xenial, probably trusty
<marcoceppi> pmatulis: it does not work on trusty
<marcoceppi> unless you do a whole lot of hacks
<kwmonroe> is personal namespace ingestion working? ~ibm-charmers had a few revs to ibm-db2/README.md in the last 3 weeks.. none of them are showing on https://jujucharms.com/u/ibmcharmers/ibm-db2/trusty/
<kwmonroe> charm proof shows 5 info issues, but no W or E.
<aisrael> kwmonroe: I'm wrapping up a review of that charm now. I didn't think it was promulgated?
<aisrael> Oh, just in their namespace
<lazyPower> kwmonroe: this is the second report of an issue w/ ingestion. :( we had one yesterday from natefinch about his charm, but it had proof errors
<lazyPower> probs should file an ingestion bug so they take a look
<natefinch> lazyPower: btw, my charm proof blows up with an exception (dumping a stack trace to the command line no less... bad form).  How do I update it?  I presume I have some ancient version
<lazyPower> natefinch: the charm-tools package has that
<natefinch> hmm... says it's already at the newest version
<natefinch> lazyPower: thoughts? http://pastebin.ubuntu.com/14440929/
<lazyPower> natefinch can i get a bug filed for that? https://github.com/juju/charm-tools/issues
<lazyPower> because i have no idea why pyopenssl is barfing :(
<natefinch> will do
<lazyPower> natefinch looks like you need to update requests though
<lazyPower> http://stackoverflow.com/a/29081240
<natefinch> lol... exception uninstalling the old requests
<natefinch> oh, permission denied.... someone couldn't have put in a catch for that? geez
<natefinch> oh man, charm proof is way too strict, geez
<natefinch> E: Unknown root metadata field (series)
<lazyPower> natefinch: thats a relatively new thing
<natefinch> the check isn't, though
<natefinch> who cares if I put unknown stuff in the root of the yaml?
<lazyPower> obviously proof does
<natefinch> sure, make it a warning... but don't prevent me from putting it in the charm store under my own namespace :/
<lazyPower> I disagree
<lazyPower> it should prevent you from putting garbage in the metadata
<lazyPower> but the lack of ingestion was a bummer, sure
<natefinch> if it deploys to juju, it should be able to exist in the store
<natefinch> perhaps not promulgated, but geez, in my own namespace?
<lazyPower> natefinch: juju doesn't validate policy though
<lazyPower> proof is our only line of automatically enforcing policy
<lazyPower> when charm2 lands, and you can just upload your charm - this issue goes away
<lazyPower> and it will only be enforceable on ~recommended
<natefinch> which makes me happy :)
<lazyPower> just a friendly reminder that what's frustrating you is a source of frustration for the ~charmer team as well :)   Imagine the correspondence we have when proof just lets things go and we nack a review on many many items that proof could, and should, be catching.
<lazyPower> <3
<natefinch> lazyPower: btw, thanks for the tip, updating requests fixed charm proof (perhaps this was obvious by my complaining about charm proof ;)
<lazyPower> np :)
<lazyPower> google-fu ftw
<kwmonroe> lazyPower: ingestion issue filed - you reckon this is the best place? https://github.com/CanonicalLtd/jujucharms.com/issues/189
<lazyPower> kwmonroe yep, thats where we were directed to file any store related bugs
<kwmonroe> and while i'm grumping around, *my* charm hasn't ingested either :/  (but it's only been a few hours)
<lazyPower> kwmonroe: rattle cages! shake trees!
<kwmonroe> :)
<pmatulis> marcoceppi: does not work on trusty? oh wow ok
#juju 2016-01-09
<sidDevOps> Am I at the right place for asking juju related questions?
<sidDevOps_> anybody there?
<pmatulis> marcoceppi: is that due to the kernel? b/c i have bleeding edge juju and lxd installed
<SiPL> hi guys
<SiPL> is this channel about big data?
<jrwren> pmatulis: it's because of packaging and SRU(?) things. trusty only had golang 1.2 in main, but lxd-provider requires newer. I've heard it will be backported and will work in trusty, it just isn't yet.
<pmatulis> jrwren: golang, ok, that makes sense. i am indeed interested to know whether these "things" will be backported to trusty eventually
<pmatulis> re kernel, i know that a modern one (4.4) is needed for some LXD operations (copying/moving containers)
#juju 2017-01-02
<kjackal> Good morning Juju world and a Happy 2017!
<zeestrat> BlackDex: Regarding what you wrote on the 27th, with multiple subnets, just wanted to let you know that we ran into the same issue.
<zeestrat> BlackDex: It does indeed sort the IP's and choose the lowest one. Here are the bugs I found to be closest to the issue at hand: https://bugs.launchpad.net/juju/+bug/1512875 and https://bugs.launchpad.net/juju/+bug/1603473
<deanman> good morning
#juju 2017-01-03
<kjackal> Good morning Juju world
<BlackDex> zeestrat: Thx for the info!
<zeestrat> BlackDex: No worries. Feel free to add some logs to the fire (bug) :)
<BlackDex> will do when i can! :)
<CarlFK> marcoceppi: do you plan on merging the branch you made for me, or should I keep a local copy? https://github.com/marcoceppi/charm-ubuntu/tree/carlfk                                                        "example of hostname "
<marcoceppi> CarlFK: I wasn't sure if you were interested in keeping it around, was looking for a +1 on the PR :)
<CarlFK> marcoceppi: I think I just submitted a PR to myself.. https://github.com/CarlFK/charm-ubuntu/pull/1/commits/0022fb1f424168f60ac2a34c37e99701b1bf7137
<marcoceppi> CarlFK: I merged that into my pull request
<marcoceppi> https://github.com/marcoceppi/charm-ubuntu/pull/1
<CarlFK> also, can you give me the 'right way' to do this: https://github.com/xfxf/video-scripts/blob/master/carl/ansible-misc/mk-hosts.py#L70-L73  # ssh ubuntu@streambackend3.video.fosdem.org "sudo cp .ssh/authorized_keys /root/.ssh
<CarlFK> brb, need to grab some breakfast
<marcoceppi> CarlFK: `juju scp` is one way
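(A sketch of the `juju scp` approach marcoceppi mentions, replacing the raw-ssh line from CarlFK's script; this assumes juju 2.x and a unit named `ubuntu/0`, both illustrative.)

```shell
# copy the key file to the unit over juju's managed ssh
juju scp ./authorized_keys ubuntu/0:/home/ubuntu/.ssh/authorized_keys

# then run the privileged copy on the unit itself
juju run --unit ubuntu/0 'sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/'
```

The advantage over plain `ssh ubuntu@<hostname>` is that juju resolves the machine address and credentials for you.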
<themagicaltrout> marcoceppi: i know i've asked before, how do i register an interface on juju.solutions?
<bugg> oops
<tvansteenburgh> magicaltrout: click the + next to the "interface:" header (assuming you're logged in)
<magicaltrout> never noticed that login button in all my life
<magicaltrout> thanks tvansteenburgh
<magicaltrout> nice
<magicaltrout> works and everything
<CarlFK> marcoceppi: I am still getting "unauthorized: access denied for user "carlfk"  from:  juju deploy ubuntu --channel edge
<CarlFK> I think you said you did something, but I never tested
<CarlFK> I did get a browser login ... "login successful as user carlfk"
<marcoceppi> CarlFK: about to push it to the stable channel
<arosales> jcastro: is the k8 sig starting @ zoom?
<jcastro> it was supposed to start 10 minutes ago
<jcastro> but there's no one here, so I posted on the mailing list
<jcastro> I'm not crazy right, it's 10:40 PST right now right?
<jcastro> and it's tuesday
<arosales> jcastro: I am there, but it says "waiting for host to start"
<arosales> It is 10:40 pst
<arosales> and it is Tuesday, Jan 3 :-)
<jcastro> same with me
<jcastro> sigh, they did this last time too
<jcastro> they apparently haven't had a meeting since 12/13
<arosales> jcastro: ok, thanks for confirming. I'll close down zoom. Also thanks for posting to the list and following up/
<jcastro> I don't really have a choice
<jcastro> it's like, I need to escalate to them
<jcastro> because my thing has been sitting in github for over a month
<lazyPower> blerg :|
<lazyPower> being blocked on others is the pits
<jcastro> ugh
<jcastro> it's a biweekly meeting
<lazyPower> oh :) thats fun then. so we're probably just off skew by a week right?
<jcastro> yeah
<jcastro> "This branch has no conflicts with the base branch
<jcastro> I AM SO READY CHUCK
<lazyPower> i wish i had the button clicking privs man
<lazyPower> i'd love to merge that monster doc pr
<lazyPower> we're aout to do the same thing to core, we have all this backlog of CDK work that needs to land upstream
<jcastro> I just want one thing
<jcastro> let us rev as fast as we want in /ubuntu
<jcastro> like, doing multiple reviews, etc. is fine
<lazyPower> yeah the fact we're blocked and have bad info in those docs is disconcerting
<jcastro> but like, matt and you should be able to ping pong PRs off each other for example
<jcastro> without waiting for some dude who has no time to comment on your thing
<lazyPower> yar
<lazyPower> i agree with you
<lazyPower> the effort to organize reviewers is still WIP though
<lazyPower> which is understandable given the size of the project
<jcastro> on the plus side
<jcastro> there's only 67 PRs now
<jcastro> it was like 110
<lazyPower> noice! I didn't notice that
<lazyPower> i'm still clearing the 400+ notifications in github
<jcastro> we're half way on page 2 now, so I think we're moving up lol?
<jcastro> rick_h: wanna sync up tomorrow and bust out this wikipedia page?
<CarlFK> marcoceppi: "Message": "not found: URL has invalid charm or bundle name: \"~marcoceppi/xenial,trusty,precise\"",  https://api.jujucharms.com/charmstore/v5/~marcoceppi/xenial,trusty,precise/ubuntu/archive/layer.yaml
<rick_h> jcastro: can see if we can find space
<marcoceppi> CarlFK: what version o fjuju?
<CarlFK> 2.0.2-xenial-amd64
<marcoceppi> CarlFK: so you're typing `juju deploy ubuntu`?
<CarlFK> marcoceppi: er.. fjuju what?
<marcoceppi> s/o fjuju/of juju/g ;)
<CarlFK> no - saw that on the jujucharms page
<marcoceppi> CarlFK: you should just be able to `juju deploy xenial/ubuntu <name of app>`
<CarlFK> that works.  I was clicking around https://demo.jujucharms.com/?store=cs%3Aubuntu-8   and saw that message
<marcoceppi> CarlFK: weird
<rick_h> mbruzek: I can't seem to find the original conversation: https://bugs.launchpad.net/juju/+bug/1623217
<mup> Bug #1623217: juju bundles should be able to reference local resources <juju:Triaged> <https://launchpad.net/bugs/1623217>
<rick_h> mbruzek: I know you hit me up in some channel heh
<CarlFK> marcoceppi: is there a similar thing that uses debian?  (I am fumbling with debops, wondering "maybe it would work with debian?" )
<marcoceppi> rick_h: do we have debian in juju agent support?
<marcoceppi> CarlFK: anything I could help out with? I mean, Debian and Ubuntu are so similar, but I've only really used Ubuntu so I might be able to help clarify
<rick_h> marcoceppi: no, we don't at the moment.
<rick_h> marcoceppi: I think that it's something that we're very interested in community involvement for enabling debian agents
<CarlFK> marcoceppi: welp.. sure.. this stuff is kinda nifty...   https://docs.debops.org/en/latest/debops/docs/index.html
<marcoceppi> Oh, debops, I thought that was a devops typo
<marcoceppi> I've never heard of debops /me reads
<CarlFK> marcoceppi: I am on day 2 with this..  the lead guy has been helping me in  #debops
<marcoceppi> CarlFK: sweet, I'll join there as well
<magicaltrout> marcoceppi or someone who might know, is it possible to get relation info inside an action?
<marcoceppi> magicaltrout: technically, yes
<marcoceppi> magicaltrout: you can get relation info whenever you want, if you have the right incantation
<magicaltrout> i like the "technically" bit.. makes me suspicious
<magicaltrout> if an action needed an ip of a service marcoceppi would you set a kv variable in the main charm reactive code, or pick it up inside the action?
<marcoceppi> I would query directly
<marcoceppi> so, there's two hook tools that make this possible. The first is relation-ids, the second is relation-list. `relation-get` has two extra parameters, that are taken as environment variables when in a relation context, but can be set in a hook context (or action) on the CLI in order to scope the call correctly, it's the `-r` flag and the JUJU_REMOTE_UNIT positional argument
<marcoceppi> relation-ids gives you the `-r` flag values, or all the JUJU_RELATION_ID that exist for a given relation. For example, if you have a relation called "database" you could run `relation-ids database` which would return a list of >=0 items of the unique lines for that relation. So if you have the database relation connected to two applications, you'd get back two ids. If it's only one, then one, none - none, etc
<magicaltrout> cool yeah found the stuff lurking in the docs
<magicaltrout> thanks
<marcoceppi> magicaltrout: you can then use the relation-list command to list all the units in a given relation id, `relation-list -r $JUJU_RELATION_ID` is a list of the units there
<marcoceppi> magicaltrout: finally, there's charm-helpers that make all that easy to manage
<marcoceppi> here's an example of that in action
<marcoceppi> magicaltrout: https://gist.github.com/marcoceppi/193a8e4c37463cae95807499160ea7df
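(Putting marcoceppi's explanation together: a minimal sketch of querying relation data from inside an action, where the hook tools and flags are the ones named above but the relation name "database" and the `private-address` key are assumptions. This only runs inside a unit's hook/action context.)

```shell
#!/bin/bash
# assumed relation name from metadata.yaml: "database"
for rid in $(relation-ids database); do
  # every unit on the far side of this relation id
  for unit in $(relation-list -r "$rid"); do
    # outside a relation context, both -r and the remote unit
    # must be passed explicitly to scope the call
    addr=$(relation-get -r "$rid" private-address "$unit")
    echo "$unit -> $addr"
  done
done
```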
<magicaltrout> thanks marcoceppi
<magicaltrout> also regarding actions as i have you on the horn. I saw the openstack folk write bash scripts and then call a python script from within them
<magicaltrout> is that the sensible/correct way, or should I just write a python script with a main function?
<magicaltrout> i've only written bash actions before but I may as well python-ise them
<marcoceppi> magicaltrout: you can do either
<marcoceppi> magicaltrout: I prefer all python
<magicaltrout> cool
<magicaltrout> seemed a bit ott to have a  bash script  just run a  python script but figured i should check
<magicaltrout> okay i'm trawling the random requirements tonight
<magicaltrout> marcoceppi: is there a `juju scp` alternative for charmhelpers?
<magicaltrout> or another way to pass files between charms
<magicaltrout> lazyPower you must know!
<lazyPower> oh oh
 * lazyPower reads backscroll
<lazyPower> ah
<lazyPower> err
<lazyPower> no, we tend to either proxy data over the relation wire (text based). If you're wanting to push files we dont have a really good pattern for that
<magicaltrout> hmm nice
<lazyPower> unless i'm mistaken, has the big software team done any pioneering work around that question kwmonroe? (re: juju scp for charms to copy files among themselves)
<lazyPower> i know we haven't over here in k8s land, we're using resources to ensure everything is present before it kicks off (save for tls certs and the like)
<magicaltrout> yeah what i'm wanting though is to start Solr then when the relation is joined the other end can copy in its own config and stuff into solr
<magicaltrout> technically its all text based but its a directory with a bunch of arbitrary text files in and its just easier to send over a zip or something
<lazyPower> oh sure, sure, i understand the desire
<lazyPower> i just dont think we've established a good, functional, repeatable solution for this.
<petevg> lazyPower: afaik, we're using resources or relation data everywhere, too.
<magicaltrout> aww you all make me so sad
<lazyPower> yeah, i thought that was the case
 * petevg sheds tears
<lazyPower> magicaltrout openstack is still a hope, they have done some cool stuff in charmhelpers that might be lingering to help
<vmorris> juju 2.0, manual provider and a few added machines -- if I wanted to setup LXD bridges to the physical network for each of these machines, what would be the appropriate method? I am already altering the default lxd-profile for juju on each of the machines, but the LXD containers are just picking up a 10.0.0.0/24 address
<kwmonroe> magicaltrout: lazyPower:  super simple solution for sharing charm data.  simply deploy hadoop for all your workload needs.  everybody can see hdfs://tmp/myresource.
<magicaltrout> you make me even sadder kwmonroe
<kwmonroe> when all you have is a hammer, everything looks like hadoop.
<vmorris> ^^
<CarlFK> not sure this is juju problem - I need help connecting a .. container to a bridge network so that I can pxe boot a vm from the dhcp server running in a container
<vmorris> to answer my own question, I think I need to be changing /etc/default/lxd-bridge to suit on each of the manually added machines... perhaps there's a better way
<CarlFK> the container started with juju deploy ubuntu t3, installed dnsmasq dhcp server into it.  now I want to test it with a vm
<kwmonroe> vmorris: i *think* it's all in how you setup the lxd bridge (sudo dpkg-reconfigure lxd).  that's where you can answer questions like "what subnet to use?" and "do i need to NAT my ip4 addresses" etc...
<vmorris> kwmonroe: ah yea.. i was kinda hoping that there was something in juju that would let me do this when adding the machines, but i suppose this is appropriate & the same solution i'm looking at now
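(For context: a sketch of the `/etc/default/lxd-bridge` file vmorris and kwmonroe are discussing, as it looked in the lxd packages of this era; all addresses are illustrative, and `dpkg-reconfigure lxd` writes these same keys interactively.)

```shell
# /etc/default/lxd-bridge (values are illustrative)
USE_LXD_BRIDGE="true"
LXD_BRIDGE="lxdbr0"
LXD_IPV4_ADDR="10.0.8.1"
LXD_IPV4_NETMASK="255.255.255.0"
LXD_IPV4_NETWORK="10.0.8.1/24"
LXD_IPV4_DHCP_RANGE="10.0.8.2,10.0.8.254"
LXD_IPV4_NAT="true"
```

To bridge containers onto the physical network instead, `LXD_BRIDGE` can be pointed at an existing host bridge and the NAT/DHCP settings disabled, then the lxd-bridge service restarted.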
<kwmonroe> CarlFK: your vm may need to be on the same subnet as your container.  iirc, pxe blasts out over udp (at least for the tftp part) and doesn't cross subnets so well.
<CarlFK> yup.  so they would be all on the same.. something
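For reference, the file vmorris mentions looks roughly like the fragment below on a Xenial-era machine. The values are illustrative assumptions only, not a drop-in config: the usual way to attach containers to the physical network is to stop the lxd package from managing its own NATed 10.x bridge and point LXD at a pre-existing bridge instead.

```
# /etc/default/lxd-bridge (example values only)
# Don't let the lxd package manage its own NATed bridge...
USE_LXD_BRIDGE="false"
# ...and attach containers to an existing bridge on the physical network.
LXD_BRIDGE="br0"
```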
<magicaltrout> lazyPower: i was thinking one option currently might be to write a charm that is subordinate of solr which basically just has the resource for my other charm
<magicaltrout> it would be a bit $hit but works I guess
<lazyPower> i dont like that solution
<lazyPower> it seems clunky
<magicaltrout> well the other solution currently is a bunch of juju scp && juju run blocks
<lazyPower> yeah neither are appealing
<kwmonroe> cough... hdfs... cough.
<lazyPower> magicaltrout i'll have a deeper think on this, but I dont know that i'll have a good suggestion. We've experimented with NFS and SSH in the past
<kwmonroe> magicaltrout: i'm only 80% kidding about hdfs.  is a shared filesystem of any kind an option?  nfs?
<lazyPower> and it was not elegant
<kwmonroe> s3 over sshfs
<magicaltrout> kwmonroe: yeah i know but its a one shot event, adding in a subsystem like that seems like overkill to ship a tarball from x to y
<kwmonroe> magicaltrout: netcat | gzip /etc/foo is always fun.. assuming you care nothing of the integrity of your payload.
<kwmonroe> fwiw, i also don't like the subordinate approach.  that feels like you're making 2 charms just to ship a tarball from x to y
<magicaltrout> well.... i would be :P
<kwmonroe> lazyPower: what's the max size of relation data?  65k?
<magicaltrout> but I also want to make this stupidly simple for some DARPA love and buy in so I'm trying to avoid anything more than juju deploy my-bundle
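The netcat-and-gzip idea above can be made integrity-checked with very little code. A minimal sketch (not anything the charms discussed here actually ship), assuming a plain TCP socket is already open between the two units:

```python
import gzip
import hashlib
import socket

def send_payload(sock, data):
    """Gzip-compress data, send it length-prefixed, return its sha256."""
    compressed = gzip.compress(data)
    sock.sendall(len(compressed).to_bytes(8, "big"))
    sock.sendall(compressed)
    return hashlib.sha256(data).hexdigest()

def _recv_exact(sock, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before payload completed")
        buf += chunk
    return buf

def recv_payload(sock):
    """Receive and decompress a payload; return (data, sha256 hex digest)."""
    size = int.from_bytes(_recv_exact(sock, 8), "big")
    data = gzip.decompress(_recv_exact(sock, size))
    return data, hashlib.sha256(data).hexdigest()
```

Comparing the sender's and receiver's digests addresses kwmonroe's "integrity of your payload" caveat.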
<jhobbs> is there a way to set default constraints for all new models after bootstrap?
<kwmonroe> jhobbs: juju set-model-constraints i think
<kwmonroe> oh wait.. nm.. that's not gonna handle new models.
<kwmonroe> magicaltrout: is the thing connecting to solr always going to have the same resource?
<rick_h> jhobbs: model-defaults
<kwmonroe> magicaltrout: iow, can you just include that in the solr charm, and then on relation, move it or enable it in some way?
<rick_h> jhobbs: juju help model-defaults for help/etc
<magicaltrout> in this context kwmonroe then yeah i could put it in the solr charm, but then they use solr for loads so you'd end up having a bunch of bespoke resources for different things in a generic solr charm
<magicaltrout> guess it would work for now though
<magicaltrout> enough to blag them through the demo phase
<kwmonroe> and that's what we're shooting for in 2017.  just enough to blag.
<magicaltrout> hehe
<petevg> We of the soaring ambition.
<magicaltrout> was 2016 the year of the out of arms reach demo?
<magicaltrout> 2017 just enough to blag through it when people start to use things
<magicaltrout> 2018 maybe approaching usable... ;)
<petevg> Something like that :-)
<magicaltrout> i like it
<jhobbs> rick_h: that doesn't work for me, i get this warning and then the constraints don't apply
<jhobbs> rick_h: http://paste.ubuntu.com/23736102/
<petevg> cory_fu, kwmonroe: speaking of breaking things in the name of preparing us for our glorious future, I've got that that log dumping, crash reporting branch of matrix working: https://github.com/juju-solutions/matrix/pull/63
<petevg> At least, it works great on my computer. Would appreciate some verification :-)
<jhobbs> hmm maybe that was because i didn't make a new model, hold
<rick_h> jhobbs: or sorry, I thought you meant config. I missed "constraints"
<rick_h> jhobbs: so...no, I don't think so. I think it defaults to a set of constraints and then is overridden at the set-model-constraints level
<jhobbs> ok
<jhobbs> thanks rick_h
<jhobbs> rick_h: bug filed https://bugs.launchpad.net/juju/+bug/1653813
<kwmonroe> jhobbs: rick_h:  isn't there a bootstrap-constraints that can be different than future model constraints?
<kwmonroe> jhobbs: i think "juju bootstrap --bootstrap-constraints mem=2G --constraints mem=4G" would mean your bs node gets a 2gb instance, and all future machines get 4gb.
<rick_h> kwmonroe: right but he wants to change them after bootstrap?
<kwmonroe> i don't think he knows what he wants
<kwmonroe> crazy texans
<jhobbs> kwmonroe: my understanding is that bootstrap constraints applies to bootstrap in addition to constraints
<jhobbs> kwmonroe: so that the constraints i specify with 'constraints' also apply to the bootstrap node, in addition to the bootstrap constraints
<jhobbs> i do not want the 'constraints' to apply to the bootstrap node - if that was the case, that would solve my problem too
<kwmonroe> ahhh, i'm really not sure jhobbs.  gimme 2 minutes.  i'll bootstrap with -bs-c and -c and see what happens.
<kwmonroe> maybe more than 2 minutes:  ERROR detecting credentials for "azure" cloud provider: credentials not found
<jhobbs> i will test kwmonroe
<jhobbs> thanks
<kwmonroe> oh, nm, i typed the region name wrong.. i'm on it.  start the clock!
<jhobbs> ah ok :)
<kwmonroe> aight jhobbs, bootstrap-constraints were honored.. i did this:   juju bootstrap azure/centralus --bootstrap-constraints mem=2G --constraints mem=8G  and got a bootstrap node with 3.5G (smallest size that fulfilled mem=2G).  i'm deploying ubuntu now to see if my default model constraints are set to 8.
<jhobbs> kwmonroe: i tested and it doesn't seem to work the way i want
<jhobbs> kwmonroe: juju bootstrap --config agent-stream=devel integrationmaas --to hayward-00 --constraints="tags=hw-jhobbs" --bootstrap-constraints=""
<jhobbs> kwmonroe: i don't want the tags requirement to apply to bootstrap but it does
<jhobbs> "No available machine matches constraints: mem=3584.0 name=hayward-00 tags=hw-jhobbs"
<kwmonroe> jhobbs: what about --bootstrap-constraints="tags=''"
<jhobbs> ahh good call
<jhobbs> i'll try that
<kwmonroe> not saying that's right, but i wonder if it'll override the value if given a key.
<jhobbs> success! it didn't like the single quotes, but this worked: juju bootstrap --config agent-stream=devel integrationmaas --to hayward-00 --constraints="tags=hw-jhobbs" --bootstrap-constraints="tags="
<jhobbs> thanks kwmonroe
<kwmonroe> np jhobbs.. now to determine if that's by design or not.  it seems like you're right -- constraints are passed as bootstrap-constraints if not explicitly overridden.
<kwmonroe> i dunno if that's how rick_h wanted it or not.
<jhobbs> seems like you should be able to change that setting after bootstrap still
<jhobbs> i updated the bug with that work around though
<jhobbs> thanks!
<kwmonroe> np
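The workaround jhobbs landed on suggests a simple mental model of the flag interaction. This is a sketch of the behavior observed in the exchange above, not Juju's actual implementation:

```python
def effective_bootstrap_constraints(constraints, bootstrap_constraints):
    """Model of the observed behavior: --constraints flow through to the
    bootstrap node unless the same key is explicitly given in
    --bootstrap-constraints, where an empty value (e.g. "tags=")
    clears the constraint entirely."""
    merged = dict(constraints)
    merged.update(bootstrap_constraints)
    # An empty value acts as an explicit override that drops the key.
    return {k: v for k, v in merged.items() if v != ""}
```

Under this model, `--constraints tags=hw-jhobbs --bootstrap-constraints tags=` leaves the bootstrap node unconstrained by tags, matching the successful bootstrap above.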
#juju 2017-01-04
<justicefries> o/
<babou> hi, has anyone seen an issue where juju just entirely refuses to work?
<babou> as in, I type "juju status" and it just hangs
<babou> i am running ubuntu 16, I have restarted my computer, I have reinstalled juju
<babou> i had been using it successfully to follow tutorials, but it has since stopped working
<blahdeblah> babou: There are a few bugs around which cause jujuds to become very busy; restarting jujud-machine-0 should sort them out
<blahdeblah> ^ That's for juju 1.x; for 2.x it might be called something slightly different
<babou> i'm running 2.0.2
<blahdeblah> It will be called something similar (maybe the same); ssh directly to your juju controller and systemctl restart 'juju*'
<babou> i'm running locally with lxd
<babou> in that case I would enter that command on my local machine, right?
<blahdeblah> babou: Within your lxd container
<babou> aah
<blahdeblah> babou: This could be something different as well, like your networking not set up quite right or something.
<babou> that doesn't seem to have worked
<babou> tried what you said and tried restarting the container externally
<babou> i can use a cloud container though, so I will just use that for now
<blahdeblah> babou: It could easily be something else then
<babou> any other ideas?
<blahdeblah> have a look in ~/.local/share/juju
<blahdeblah> there's a file called controllers.yaml
<blahdeblah> find your controller, and there should be a line in its section like this: api-endpoints: ['10.1.10.7:17070']
<blahdeblah> check that you can "nc -vz 10.1.10.7 17070"
<babou> i have no file called controllers.yaml
<blahdeblah> hmmm
<blahdeblah> what does "juju list-controllers" say?
<babou> wait yes i do
<blahdeblah> phew
<babou> i found the line and tried this command
<babou> nc -vz 10.187.163.241:17070
<babou> but it just printed the help message
<blahdeblah> babou: no colon between IP & port
<babou> nc: connect to 10.187.163.241 port 17070 (tcp) failed: No route to host
<blahdeblah> babou: Sounds like network setup then
<babou> ok thank you for your help
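blahdeblah's `nc -vz` check can also be scripted. A minimal sketch (assumes you've already pulled the `api-endpoints` value out of controllers.yaml yourself):

```python
import socket

def endpoint_reachable(endpoint, timeout=3.0):
    """Rough equivalent of `nc -vz HOST PORT` for an api-endpoints entry
    such as "10.1.10.7:17070". Note the split on the last colon: nc itself
    wants host and port as two separate arguments, as noted above."""
    host, _, port = endpoint.rpartition(":")
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            return True
    except OSError:
        return False
```

A "No route to host" result here, as babou saw, points at network setup rather than the controller process itself.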
<kjackal> Good morning Juju world
<mattyw> morning folks, reading through the docs on constraints (https://jujucharms.com/docs/stable/charms-constraints) was wondering if it was possible to specify a constraint in a file ahead of time, so for example, all controllers bootstrapped on the given cloud will have that constraint?
<holocron> mattyw: did you read this here? https://jujucharms.com/docs/stable/charms-constraints#using-constraints-when-creating-a-controller
<mattyw> holocron, I did thanks, I was hoping to have a file somewhere so I could not have to remember to do it on bootstrap
<holocron> Matty I see.. you want to do it post bootstrap. That I'm not sure about
<holocron> Ah, you want to apply it to all controllers. Now I got it
<mattyw> holocron, the other way around, I want it to happen to all future bootstraps
<mattyw> holocron, all future bootstraps for a given cloud anyway
<mattyw> holocron, if for example I'm too cheap to want to use m3.medium for all my instances
<mattyw> holocron, but I'm also too lazy to remember how cheap I am
<holocron> I see.. sorry I don't know of a way. All my juju use cases are local LXD or manual
<rick_h> mattyw: alias cheapbootstrap=juju bootstrap --constraints... ? :P
<rick_h> mattyw: no, there's no "constraints via file" currently.
<rts-sander> help I do "juju deploy mysql" but it says connection refused
<magicaltrout> what says connection refused rts-sander ?
<Merlijn_S> Hi @kjackal
<kjackal> Merlijn_S: Hello!
<kjackal> Merlijn_S: How can I help?
<Merlijn_S> I'm working on a setup for a spark course. ~20 teams of students, each get their own jupyter notebook with pyspark integration that's connected to a shared spark-on-yarn + hdfs cluster
<kjackal> Merlijn_S: sounds nice!
<Merlijn_S> now the issue is that while a sparkcontext is active, it is a running job in yarn. This means that it blocks resources, even though nothing is running.
<Merlijn_S> ~20 teams means ~20 spark contexts
<kjackal> yes, true. Is there any parameter we can play with, let me see if I find something.
<Merlijn_S> each sparkcontext has its own yarn job that marks a certain amount of resources as "used", so the cluster runs out of resources.
<rts-sander> magicaltrout, https://justpaste.it/12280
<rts-sander> I guess I could download the charm and then deploy it from local
<magicaltrout> rts-sander: looks like the server you're trying to run it on is behind a firewall
<magicaltrout> or missing a route to the interwebs
<rts-sander> juju version 1.25.6-xenial-amd64
<Merlijn_S> I'm also not sure how the yarn resources work exactly. Does each "application container" get a fixed amount of resources or does this depend on the job itself?
<magicaltrout> what you doing running 1.25 anyway ?! :P
<rts-sander> because we were going to use it in production before 2.0 was out of beta
<rts-sander> but now we switched to 2.0 but I'm too lazy to upgrade
<magicaltrout> hehe
<magicaltrout> well
<magicaltrout> if I go to the url in your error in my browser
<magicaltrout> it downloads the charm
<magicaltrout> so i guess its connectivity of some sort
<rick_h> rts-sander: magicaltrout exactly
<rts-sander> same here
<rts-sander> mysql.zip
<rick_h> rts-sander: it looks like you can't reach the api.jujucharms.com to get the zip of the charm from the store
<rick_h> rts-sander: so the only thing to do is to get the charms locally and deploy that way if you can't augment the connectivity
<rts-sander> I can wget it
<rick_h> rts-sander: there's a download link here: https://jujucharms.com/mysql/ on the right side to get the zip
<rick_h> rts-sander: right, but can where the controller lives get to it?
 * rick_h goes back to read where the juju controller is sitting (local or on an openstack or?)
<rts-sander> it's a local deployment
<rick_h> rts-sander: hmm, so is lxc setup with a bridge where it can reach the internet through your computer then?
<rts-sander> yes
 * rick_h thought we set that up in 1.25
<rick_h> rts-sander: maybe an intermittent issue? did you try more than once and get it?
<rts-sander> I tried multiple times
<rts-sander> when I first installed it I could get charms through it
<rts-sander> I'll deploy from local, it will take less time than trying to figure this out
<rick_h> rts-sander: hmm, that sounds like something intermittent then perhaps. I guess can you try now? If you can wget the file and lxc is setup to work.
<kjackal> Merlijn_S: this looks like a good description of YARN resource management: https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_ig_yarn_tuning.html
<kjackal> Merlijn_S: And this is how you set the cores and mem from spark https://www.mapr.com/blog/resource-allocation-configuration-spark-yarn
<kjackal> Merlijn_S: If I understand your request correctly you need to either overcommit resources so that the utilisation increases or do some kind of dynamic allocation/resource preemption
<Merlijn_S> kjackal: We're running the slaves inside LXD containers; running multiple slaves in multiple lxd containers solves the issue by overcommitment, but I worry that when they start to run real queries everything will bork...
<Merlijn_S> kjackal: So if I understand correctly, It's indeed spark that decides what the size of the container is. The size of the container is the same no matter how much is actually used in the jobs.
<Merlijn_S> kjackal: So ideally, I find a way to make the sizes of these containers dynamic depending on what actual jobs are running. If that doesn't work then I'll just have to find a way to make the spark containers smaller.
<kjackal> Merlijn_S: give me some time. In a meeting
<Merlijn_S> kjackal: Related: How do the bigtop charms calculate what the "total memory" of a machine is?
<Merlijn_S> kjackal: aight :)
<petevg> cory_fu: I updated https://github.com/juju-solutions/matrix/pull/63/files, with code that catches the TestFailure in a slightly better place. (Also set the exit code to 1 if we have a generic uncaught Exception.)
<kjackal> Merlijn_S: I am back. Not sure how bigtop queries for the available memory. I will have to look at the code, If you want me to.
<cory_fu> petevg: Excellent
<kjackal> Merlijn_S: I would start with the configuration of spark. Like spark.executor.cores as described here: http://spark.apache.org/docs/1.6.1/configuration.html
<kjackal> Merlijn_S: with --conf you are able to set the config variables: in spark-submit
<Merlijn_S> kjackal: I'm using pyspark; any idea how those config values translate to that?
<kjackal> Merlijn_S: I see there is a SPARK_CONFIGURATION variable in jupiter configuration (never used that)
<kjackal> Merlijn_S: are you using this pyspark? http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkConf
<Merlijn_S> kjackal: I'm using findspark: https://github.com/minrk/findspark
<kjackal> nice
<Merlijn_S> kjackal: so that translates to: http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext
<kjackal> Seems so
<kjackal> Merlijn_S: and you can set the configuration there
<Merlijn_S> kjackal: k, thx, I'll take a look at that.
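For the pyspark route kjackal points at, the executor sizing can be set programmatically. A hypothetical helper (the property names are standard Spark configuration keys; the values are illustrative, not tuned recommendations):

```python
# Keep per-team executors small so that ~20 idle SparkContexts reserve
# less of the shared YARN cluster. Values here are examples only.
def small_executor_conf(executor_mem="1g", executor_cores=1, num_executors=2):
    """Build (key, value) pairs suitable for pyspark.SparkConf().setAll()."""
    return [
        ("spark.executor.memory", str(executor_mem)),
        ("spark.executor.cores", str(executor_cores)),
        ("spark.executor.instances", str(num_executors)),
    ]
```

With findspark this would be applied as roughly `pyspark.SparkContext(conf=pyspark.SparkConf().setAll(small_executor_conf()))`.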
<Merlijn_S> kjackal: I'm also interested in the bigtop available memory logic. Currently, it only uses 2/3 of the real memory on the machines.
<Merlijn_S> kjackal: and it differs a lot between machines that have the same hardware.
<Merlijn_S> kjackal: scrap that, it's the same; it's 2/3 of the total memory of each machine.
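The heuristic Merlijn_S observed works out to a one-liner. This is an observation from the discussion, not a confirmed bigtop implementation detail:

```python
def yarn_memory_mb(total_mb):
    """Memory the charms appear to hand YARN: roughly 2/3 of each
    machine's total memory (as observed above, not verified in code)."""
    return (2 * total_mb) // 3
```

So a 12 GiB machine would expose about 8 GiB to YARN under this rule.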
<Merlijn_S> kjackal: ever heard of 'dynamic resource allocation' of spark contexts? That might be what I'm looking for: https://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
<kwmonroe> Merlijn_S: dyn res allocation won't be easy.  it looks like it requires a jar + yarn-site.xml config changes to all nodemanagers, and we're currently not exposing those config options :/
<kwmonroe> Merlijn_S: sorry i'm playing catch-up here.. but you also said "now the issue is that while a sparkcontext is active, it is a running job in yarn".  are you in yarn-client mode?  if so, the driver (which instantiates a sparkcontext) does not run on the yarn cluster.  it runs wherever you start the spark application.
<kwmonroe> Merlijn_S: if you're in yarn-cluster mode, of course, the driver will indeed take one of your yarn nodemanagers to do its thing.
<petevg> cory_fu: pushed one more update to that gating PR (I wrote some better .matrix tests, and found out that we couldn't turn off gating -- the update contains the fix + the tests.)
<petevg>  cory_fu: I kind of want to integrate those .matrix tests into tox, but your clever method of running main only works once -- the second time we try to do it, we've already set a bus and done other globally relevant stuff to matrix, so it doesn't work. I'm kind of tempted to write a script that sniffs out .matrix files in the test dir, and runs them via
<petevg> Python subprocess, checking for a 0 or non zero exit code. Don't want that to delay the gating stuff, though. Will open a ticket ...
<cory_fu> petevg: Couldn't you just add them as additional test plans in the existing functional test?
<petevg> cory_fu: they have to be completely separate matrix runs.
<cory_fu> petevg: Oh
<Merlijn_S> kwmonroe: sorry, was out to get food. I'm talking about the sparkcontext itself, not the driver. The sparkcontext itself takes up space while it is alive. So every interactive spark session blocks unused resources.
<Merlijn_S> kwmonroe: So basically, a multi-tenant interactive spark cluster is more or less a no-go without dyn res allocation..
<larrymi_afk> I'm trying to remove a relation to re-add it and juju remove-relation seems to work but when I do juju add-relation .. it keep saying relationship exist. Is there a way to force break the relation?
 * larrymi tries juju destroy-relation
<larrymi> hmm.. destroy-relation doesn't work either
<lazyPower> larrymi - is the unit halted in a hook context after removing the relation?
<lazyPower> as in, debug-hooks or some such
<larrymi> lazyPower: ah yes I see a config-changed hook failure
<larrymi> lazyPower: not on the unit itself but on the other endpoint
<lazyPower> yeah, that will halt the removal of the relationship in the controller
<larrymi> lazyPower: ah but that's not from the removal .. that was there before
<lazyPower> so it's in progress but not completed; status is right that it's been removed, but it "isn't really removed" yet
<lazyPower> yep
<lazyPower> its in queue
<lazyPower> if you resolve that failure and step that remote unit through, it should hit the -departed and -broken hooks and clear itself up
<larrymi> lazyPower: right
<larrymi> lazyPower: ah that makes sense. Thanks! :-) I'll give that a try.
<marcoceppi> rick_h: SHOWTIME?
<arosales> rick_h: got a link to the show ?
<marcoceppi> accidental caps
<rick_h> marcoceppi: arosales working on it
<rick_h> hmm, won't let me check the "hangouts on air" button :/
<rick_h> marcoceppi: arosales https://hangouts.google.com/hangouts/_/ytl/VusL3EZZPRDpX4QUibgOwiPCpKQKzhydHyEehU55OXw=?eid=103184405956510785630&hl=en_US&authuser=0
<rick_h> to join the hangout
<rick_h> had to create a new one :(
<rick_h> https://www.youtube.com/watch?v=n86tRu4xpYU to watch
<arosales> rick_h: thanks, omw
<rick_h> anyone else coming? jcastro
<rick_h> natefinch: or redir or anyone want to join feel free
<CoderEurope> https://www.youtube.com/watch?v=AfBnrLvDFy8     The Juju Show #3     |    Last streamed live on 5 Oct 2016
<CoderEurope> cfgmgmt session
<CoderEurope> Why aren't we showing this today ?
<rick_h> arosales: coming or should we go on?
<rick_h> CoderEurope: things and stuff, have to watch! :P
<CoderEurope> rick_h: I have been waiting 3 days for this show = Is it happening, or not ?
<rick_h> CoderEurope: yes, we were waiting for arosales
<rick_h> but he's gone awol so we're going on now
<marcoceppi> CoderEurope: the link to watch is here: https://www.youtube.com/watch?v=n86tRu4xpYU
<CoderEurope> okay - I shall await ....
<natefinch> rick_h: I have a fix for the lxd thing, so i'll be making a test and committing it.  have fun, though!
<arosales> rick_h: please start without me
<marcoceppi> https://github.com/juju/python-libjuju
<bdx_> rick_h: I neglected to see the examples directory in libjuju
<lazyPower> is that a community member that just joined?! hype!
<bdx_> there are a bunch of examples that I seem to have missed, but it was a good learning experience nonetheless
<lazyPower> rick_h  is it unfiled if we just discovered it today? :)
<skay> hehe
<CoderEurope> QUESTION: First of all thankyou for the video \o/ - they're fabulous for the community ! 2ndly, @Discourse juju charm (Modern Forum Type) is really needed for the lug I am involved with - What are the chances ?
<lazyPower> oooo its mreed hype
<lazyPower> CoderEurope - we had a very old discourse charm, however we're interested if there's a community member thats interested in maintaining the discouse charm
<arosales> CoderEurope: there is a community developed Discourse charm @ https://jujucharms.com/u/marcoceppi/discourse/precise/27
<bdx_> thejujushow: the libjuju work I've been cutting has already made it into my jenkins post build step .... I'm simply using it to run actions on my juju deployed applications following successful build step completion
<CoderEurope> lazyPower: I would but it may be beyond me .. but I have spoken about this before :D
<lazyPower> CoderEurope - well, if your LUG is interested in learning, we can certainly host a charm school for anyone interested
<lazyPower> free training to get you up and moving with juju with a focus / eye towards developing the discourse charm
<marcoceppi> http://summit.juju.solutions
<Merlijn_S> bdx_: are you James? Have you looked at the ci pipeline bundle? It's more or less what we wanted to build..
<arosales> bdx_: https://jujucharms.com/u/juju-solutions/cwr-ci/ for reference, is what I think Merlijn_S  is referencing
<bdx_> Merlijn_s: yes, its me
<Merlijn_S> bdx_: Yes. + they're working on support for testing and publishing full bundles. preview here: https://github.com/juju/python-libjuju
<bdx_> Merlijn_s: niceeeee
<lazyPower> marcoceppi - juju status-history
<sparkiegeek> juju show-status-log
<lazyPower> ^
<lazyPower> that
<lazyPower> show-status-log
<lazyPower> sparkiegeek high5 on gmta
<sparkiegeek> lazyPower: o/ :)
<Merlijn_S> bdx_ : sorry, preview ci for bundles here: https://github.com/juju-solutions/layer-cwr/tree/build-bundles
<magicaltrout> 32 days....
<magicaltrout> bugger, better come up with some content
<bdx_> alias wjst="watch -n 0.2 -c juju status --color"
<bdx_> Merlijn_s: for me its more about integrating application ci with juju though
<lazyPower> marcoceppi - this may be a good time to bring up layer-debug
<bdx_> Merlijn_s: not so much CI on the charms themselves, which I think is more your use case right?
<skay> I am not familiar with layer-debug.
<skay> definitely bring that up
<CoderEurope> QUESTION: Not really a question- But , but ... I would just like to say that Beards' win in the show, today :D
<lazyPower> skay - we (The eco team) wrote a layer to provide scaffolding for debug reporting on a unit. In our case we wanted network info dumps, container / file descriptor info dumps, fs dumps, etc.
<lazyPower> so i hope they cover it, might b fun :D
<Merlijn_S> bdx_: I'd think that's more or less the same issue. Since charm tests can be whatever. I'd think you would just create a charm test that starts the tests of the actual application inside the charm..
<sparkiegeek> https://bugs.launchpad.net/juju/+bug/1645422 - retry-provisioning bug
<mup> Bug #1645422: retry-provisioning doesn't retry failed deployments on MAAS <landscape> <maas-provider> <retry-privisioning> <juju:Triaged> <https://launchpad.net/bugs/1645422>
<bdx_> Merlijn_S: application CI vs charm CI .... we have a team that writes selenium test suites for each of our apps
<bdx_> rick_h: true that
<Merlijn_S> bdx_: I have no experience with application CI so I'll just shut up :)
<bdx_> Merlijn_S: lol ... oh man .. lets catch up soon
<sparkiegeek> rick_h: FYI: the Google logo is obscuring the first line of your terminal
<sparkiegeek> QUESTION: Does interactive juju add-cloud pre-seed potential MAAS API endpoints from ~/.maasclidb ?
<sparkiegeek> ooh nice
<sparkiegeek> no more tickling Juju in three different ways to bootstrap in MAAS
<petevg> cory_fu: updated the crashdumping/gating PR to address your comments: https://github.com/juju-solutions/matrix/pull/63/files
<petevg> cory_fu: if you have the branch checked out locally, you'll need to delete it and refetch -- someone's "trivial" indentation fixed created conflicts that I needed to clean up with a rebase :-p
<cory_fu> petevg: :)  Sorry about that
<lazyPower> Marco: I really think it would be solid to talk about layer-debug even if only briefly
<petevg> Is cool.
<lazyPower> marcoceppi ^
<lazyPower> if you have k8s deployed, you can just run it :)
<sparkiegeek> QUESTION: are there any ACLs planned for actions?
<sparkiegeek> I might not want everyone to be able to get debug information
<sparkiegeek> cover it next time, all good :)
<CoderEurope> bye guys !
<lazyPower> Great show fellas
<lazyPower> \o/
<CoderEurope> lazyPower: Was it you that was to ping me about discourse Xenial charm ?
<lazyPower> CoderEurope - thats marcoceppi, but i'm happy to sit in and provide feedback/assistance
<CoderEurope> okay cheers.
<marcoceppi> CoderEurope: o/
<CoderEurope> o/
<marcoceppi> so I wrote this a long /long/ time ago. But they've moved to docker distribution. So it should be pretty straight forward to re-charm again where we run their bits in their docker package and then bolt on the rest from the outside
<CoderEurope> right -oh
<marcoceppi> CoderEurope: would you have time later this week, or next week, to hash it out a bit?
<CoderEurope> marcoceppi:   - I have a new laptop coming in this/next week - Had to stretch to the chromebook just to tune in today :D
<rick_h> mbruzek: do you have a link for the layer to stick in the show notes?
<marcoceppi> CoderEurope: cool, give me a ping here when you're set up
<mbruzek> yes
<CoderEurope> marcoceppi: cool-beans !
<rick_h> mbruzek: ty
<mbruzek> rick_h: https://github.com/charms/layer-debug
<mbruzek> welcome
<rick_h> marcoceppi: jcastro either of you know why the comments are disabled when I look at the video, but if I go to the edit/advanced settings, allow comments is checked?
<marcoceppi> rick_h: is it published?
<rick_h> marcoceppi: hmm, says "public" not seeing anything addressing "published" specifically
<sparkiegeek> I still have problems accessing the links in the show notes in iOS - the long links get truncated by YouTube (fine) but it actually changes the href to be truncated too :(
<sparkiegeek> rick_h: think you could use a link shortener to work around their bug?
<rick_h> sparkiegeek: doh
<rick_h> sparkiegeek: you're asking for an increased production level here. I feel like I need to go to film school for this :P
<sparkiegeek> rick_h: well normally I wouldn't pay it any attention, but there's this guy I follow on Twitter who keeps harking on about them :)
<rick_h> sparkiegeek: lol
<magicaltrout> in python world can i do status_get and get the message?
<magicaltrout> oh yeah it appears it already does
<justicefries> hey all, picking up after a few weeks. is juju on OSX compiled with Go 1.7 yet?
<lazyPower> justicefries it is not
<lazyPower> i still get the random go stacktracing here and there
<lazyPower> magicaltrout yep :D
<lazyPower> rick_h - wasn't there some talk about the go 1.7 bit and how we handle that for brew?
<cory_fu> petevg: I added a comment reply to your PR, but I really think that we need to keep the test with glitch in and figure out how to make it work with the gating.  That was the entire point of adding the gating logic in the first place.
<petevg> cory_fu: I thought that we had decided not to run glitch by default.
<petevg> cory_fu: ... because it conflicts with the goal of providing a reliable automated testing framework for now.
<cory_fu> petevg: The reason we added the gating flag was because we wanted to still be able to run the glitch test but not have it influence the success or failure
<cory_fu> So it would be informational but not affect pass / fail
<petevg> cory_fu: yes. But unless we add a generic Exception check, we can't do that, because we also fail on uncaught Exceptions.
<petevg> cory_fu: we could also just not throw an error on uncaught exceptions if the task doesn't gate, but that feels more likely to mask genuine issues to me.
<cory_fu> We need to address that, then.  And I don't think a blanket catch is the right solution.  What exception is being thrown and why?  Is it a bug in glitch, or is it an expected failure case?  If the latter, then it should be converted to a TestFailure or added to the gating logic
<petevg> cory_fu: I'm talking about things that you posted -- the NoneType surfacing because of a bug in core.
<petevg> cory_fu: I've caught that by checking for TypeError, along with some other general Python errors, but I don't know what other errors libjuju or core will throw (especially as libjuju is under rapid development).
<petevg> cory_fu: I want matrix to surface those errors, actually. But I don't want to do it in a way that messes the automated testing of charms bit.
<petevg> cory_fu: I think that the best way to wind up with a glitch that gates in a controlled manner is to turn any Exception that comes up during a glitch run into a TestFailure.
<petevg> cory_fu: that way, if we gate on glitch, we see the error. If not, it's in the logs, but it doesn't break stuff.
<cory_fu> petevg: How is that NoneType error not a bug in glitch / matrix?
<petevg> cory_fu: it's a bug in core/libjuju.
<petevg> cory_fu: it's one of the things that can happen if you try to send a command to a machine that is rebooting.
<petevg> cory_fu: you periodically complain about it in PRs, but then you also complain about it when I try to fix it in a generic way :-p
<cory_fu> I don't think that NoneType error is a bug in libjuju.  I think it's a bug in matrix in that it continues to try to perform glitch operations after encountering a TestFailure
<petevg> cory_fu: The NoneType error happens in libjuju.
<cory_fu> petevg: Sure it does, because we're trying to call model methods after the connection was closed
<petevg> cory_fu: and if I mask it in glitch by adding a check for it in the reboot action, I disable glitch's ability to tell us whether or not that error is fixed.
<petevg> cory_fu: let's jump into the hangout.
<petevg> cory_fu: wrote code, tested, and pushed the restored matrix.yaml, and Exception catching glitch.
<cory_fu> Thanks
<cory_fu> I'll take a look in a minute
<petevg> cory_fu: ack. Didn't check in the "gating: false" lines in the matrix.yaml. Fixed and pushed.
<cory_fu> tvansteenburgh: Quick PR for you: https://github.com/juju/python-libjuju/pull/40
<jhobbs> do bundles need metadata.yaml's to be in the charm store?
<marcoceppi> jhobbs: nope, just a readme and bundle.yaml
<jhobbs> ahh it was the README i was missing
<jhobbs> thanks marcoceppi
#juju 2017-01-05
<derekcat> Happy Wednesday #juju!
<derekcat> Anyone seen this error when trying to deploy an OpenStack bundle to MAAS?
<derekcat> Deploying charm "cs:ceph-269"
<derekcat> ERROR cannot deploy bundle: cannot deploy application "ceph": cannot add application "ceph": unknown space "storage-cluster" not valid
<pixelate> fun
<stokachu> marcoceppi, hmm cannot assign unit "ghost/0" to machine 0/lxd/0: adding storage to lxd container not supported
<marcoceppi> stokachu: you can't use storage in lxd (it would seem)
<stokachu> works just fine with localhost though, its only when deploying to a lxd container on a bare metal machine via maas
<marcoceppi> stokachu: sounds like some aggressive check code
<stokachu> yea
<babou> how do i remove a charm which failed to install?
<babou> i deployed a local charm, its install failed, i fixed the problem, but when i try to deploy it again it says the application has already been deployed
<babou> when i try remove-application, there is no output and the application remains on the controller
<stokachu> babou, juju resolved <app>/<unit>
<stokachu> oh you probably want to juju debug-hooks <app>/<unit> and fix it there so you can do a juju upgrade-charm from your working copy
<babou> sorry im quite new to all of this
<babou> juju resolved had no effect
<babou> debug-hooks ssh me into the container
<babou> you're saying I should edit the code from within the container instead of externally and redeploying?
<babou> the debug-log just keeps looping
<babou> saying the install hook is failing again and again
<babou> restarting the machine had no effect, it immediately began failing the install hook again
<stokachu> babou, juju debug-hooks <app>/<unit> install
<stokachu> then in another terminal type juju resolved <app>/<unit> and the debug-hooks window will show up an "install" tab
<stokachu> from there you can edit the currently installed hook to make it work
<stokachu> when you ctrl+d out of that terminal juju should continue on through the other hooks
<babou> i was able to upgrade it by running this command
<babou> juju upgrade-charm --force-units --repository=./buses buses
<stokachu> if all else works you'll get an active status and can then run `juju upgrade-charm`
<babou> where buses is the name of my charm
<stokachu> ah ok
<babou> and then running resolved
<stokachu> that works too :D
<babou> onward to the next error!
<stokachu> i should probably look at the help output more
<stokachu> babou, cool gl
<babou> got help from this page which i had to translate to v2
<babou> http://marcoceppi.com/2015/01/force-upgrade-best-juju-secret/
<marcoceppi> babou: o/
<stokachu> ah yes i remember that post
<stokachu> maybe time to update it :D
<marcoceppi> yes, *updates post*
<stokachu> your posts aren't on github are they?
<marcoceppi> stokachu: no, wordpress
<babou> @marcoceppi i have no idea who you are, but i feel like im meeting a famous person
<stokachu> marcoceppi, ah ok
<stokachu> babou, he is famous
 * marcoceppi blushes
<marcoceppi> babou: thanks for the updated commands, I've refreshed my blog post
<rick_h> marcoceppi: stokachu storage on lxd is on this cycle's roadmap of items. Couple of folks are working on it
<rick_h> marcoceppi: stokachu that's both lxd provider and lxd on another cloud (maas, etc)
<babou> does anyone know how to install a modern version of node to a controller?
<babou> the way this one does it, it installs 0.10.3
<babou> https://api.jujucharms.com/charmstore/v5/precise/node-app-11/archive/hooks/install
<kjackal> Good morning Juju world
<Spaulding> morning!
<anrah> question about layers and interface-layers
<magicaltrout> goooo anrah gooooooooo!
<anrah> I have a interface-layer which I (perhaps) want to peer on all nodes on my model
<anrah> then i have base-layer on which all other apps are build on
<anrah> apparently i can't use my peer-interface on all nodes?
<anrah> as now it seems that all different apps have their own peering, even though the interface is the same on all applications
<magicaltrout> https://github.com/juju-solutions/interface-spark-quorum/blob/master/peers.py you have something like that?
<anrah> yes
<magicaltrout> and you have the scope set?
<anrah> yes, global
<anrah> Relation  Provides          Consumes          Type
<anrah> test    app-control  app-control  peer
<anrah> test    app-worker   app-worker   peer
<anrah> it shows like that
<anrah> what I would like to have is that all different applications would peer with each other
<anrah> throught that layer
<anrah> I might have the wrong idea in my mind and need to change the approach, but that is how I planned it :)
<magicaltrout> that output looks right from my limited knowledge of peer relations.
<magicaltrout> You're wanting to peer app-control to app-worker as well?
<anrah> Yes
<magicaltrout> yeah thats not happening afaik
<magicaltrout> kjackal:
<anrah> Without requires-provides
<magicaltrout> help the poor person
<anrah> I think that I'll have a beer and think this otherway around :)
<kjackal> hello magicaltrout
<magicaltrout> marcoceppi is answering emails
<magicaltrout> he'll know as well
<kjackal> let me see give me a sec to catchup
<marcoceppi> wat
<magicaltrout> peer relations across different charms
<magicaltrout> I think in a nutshell
<marcoceppi> does not exist
<marcoceppi> peers are only for unit <-> unit communication within a single application
<magicaltrout> figured as much
<magicaltrout> there you go anrah
<marcoceppi> you can reuse the interface for a peer relation in the provides/requires though
<marcoceppi> but it's a separate relation
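marcoceppi's point can be sketched in metadata.yaml: the same interface name may back both a peer relation and a provides/requires endpoint, but Juju treats them as separate relations (all names below are illustrative, not anrah's actual charm):

```yaml
name: app-control
peers:
  cluster:
    interface: app-control   # unit <-> unit, within this application only
provides:
  control:
    interface: app-control   # same interface, but a separate relation
                             # that other applications relate to
```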
<anrah> I thought so
<anrah> Then I'll have a beer and think other way around :)
<magicaltrout> have 5
<magicaltrout> its more effective
<anrah> basically the need is just to get ip-addresses of all units within a model
<anrah> inside the charm
<magicaltrout> there must be a way to get that that marcoceppi can python up
<marcoceppi> anrah: ah, so, juju-info will be your savior
<kjackal> there is the juju-info interface that is implicit to all charms
<marcoceppi> anrah: but it's still a bit, dull
<marcoceppi> anrah: basically, juju-info is an interface you can require in your charm that allows you to connect to any application deployed. It only sets the `private-address` field so you can relation_get('private_address') but it requires the bundle or operator to add a relation to each component
<marcoceppi> anrah: without using the API, that's the best you can get
<anrah> thanks! I'll try that
<caribou> Hello, when creating actions for a charm in python, if I keep the .py suffix, the action is not identified by the script name w/o the .py
<caribou> what is the best approach to create actions in python ? just drop the .py ?
<magicaltrout> i did some yesterday
<magicaltrout> drop the .py and make sure you have a shebang in the top of the script
<caribou> if I use a symlink (layered charm), when the charm is built, it drops the symlink & copy the file
<caribou> magicaltrout: then it becomes an issue with unittests
<magicaltrout> tests are for wimps
<caribou> :)
<magicaltrout> dunno, i asked marco the other day, he said, just write them in python so I did it like that
<magicaltrout> i've not tried testing them yet
<mattyw> rick_h, ping?
<rick_h> mattyw: otp pong
<marcoceppi> caribou: magicaltrout you could do this, a pattern some charms have produced: https://github.com/juju-solutions/vpe-router/
<marcoceppi> that's a python stub for the actions/<action> and then a start of the reactive loop
<marcoceppi> it's not super clean, but it's something we're looking at cleaning up in reactive
<marcoceppi> caribou magicaltrout alternatively, put the heavy lifting in lib/charms/layer/ and import it from the action stub
<marcoceppi> unit test the library and don't worry about the stub
<caribou> marcoceppi: hmm, good idea
<stub> or land stub's branch to avoid the stub
<marcoceppi> stub's stub is one approach
<marcoceppi> fwiw stub, we'll be looping back around to reactive 2.0 in the coming weeks, actions is one of the items on this list
<caribou> magicaltrout: marcoceppi: that seems to do the trick as well : collect = imp.load_source('collect', 'actions/collect')
<caribou> at the top of the test script, instead of "import actions.collect"
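caribou's `imp.load_source` trick still works, but `imp` is deprecated; the same idea with `importlib` looks like this (a generic sketch — the extension-less `collect` action is a stand-in written to a temp dir, not the real charm script):

```python
import importlib.machinery
import importlib.util
import os
import tempfile

# Stand-in for an extension-less action script such as actions/collect.
SCRIPT = "def main():\n    return 'collected'\n"

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "collect")
    with open(path, "w") as f:
        f.write(SCRIPT)

    # SourceFileLoader loads by path and does not care about the missing
    # .py suffix, so unit tests can import the action script directly.
    loader = importlib.machinery.SourceFileLoader("collect", path)
    spec = importlib.util.spec_from_loader("collect", loader)
    collect = importlib.util.module_from_spec(spec)
    loader.exec_module(collect)

print(collect.main())  # -> collected
```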
<stub> marcoceppi: I need to be writing actions in under two weeks time, in reactive 1.
<mattyw> anyone around for a question about contraints?
<pmatulis> mattyw, dunno, best ask
<mattyw> pmatulis, according to the docs it's acceptable to set the arch constraints to no value. but: juju deploy ./ msql --constraints "arch=""" --series win2012r2 returns ERROR cannot add application "msql": invalid constraint value: arch=
<aisrael> marcoceppi: stub: Unless I'm missing some context here, actions in reactive aren't terribly difficult, just a little clunky. Totally functional, though.
<pmatulis> mattyw, yeahh i saw something similar with 'mem'
<pmatulis> mattyw, i think something changed recently
<stub> aisrael: I need actions that tie into the reactor. I can't do that without my patch, and I've seen no alternatives.
<aisrael> stub: This is what I've been doing for actions: https://github.com/AdamIsrael/vnfproxy
<stub> aisrael: Hmm... You might have reimplemented my branch from 6 months ago, but without the charms.reactive patches
<aisrael> stub: Dunno. I've been using actions with reactive like this almost since reactive was released.
 * aisrael smells a blog post
<stub> aisrael: I don't recall it being brought up in Pasadena, and haven't heard it mentioned as an alternative
<stub> aisrael: (which is surprising, since I've been escalating this the last month)
<aisrael> stub: I'll make a point to call it out more. I didn't realize it was a pain point. I've been head-down working on open source mano and must have missed the discussion. :(
<stub> aisrael: I'll need to revisit my old branch, but I think all that is missing is the @action decorator and that is just a style choice.
<aisrael> stub: I'm happy to take a look at what you've got. I definitely think we can make actions easier to implement
<stub> aisrael: https://github.com/juju-solutions/layer-basic/pull/69/files was my template, which might look very familiar :)
<aisrael> Hah, sure does. Yours is pulling the action name directly from hookenv, which is something I wanted to add to my template. :D
<stub> aisrael: oh, my reactive patch is adding a new phase so the main action body is guaranteed to kick in before the rest of the reactive handlers. But I don't think I need that, at least for now.
<stub> aisrael: Ta. This unblocks me nicely :)
<aisrael> stub: Excellent, good to hear!
<stub> (amusingly, I used to have states too for actions but removed that from the patch on request ;) , https://github.com/juju-solutions/charms.reactive/pull/66 )
<aisrael> stub: I'll try to  weigh in on that discussion today or tomorrow.
<marcoceppi> cory_fu: I'll merge, but I could use a review https://github.com/juju/charm-tools/pull/298
<cory_fu> marcoceppi: Comment added: Do you think you could add a trusty series to the mysql layer's metadata.yaml to ensure that de-dupe works across layers as well (since you ran in to that while testing)?
<marcoceppi> cory_fu: sure
<kjackal> cory_fu: kwmonroe: Here is the error http://pastebin.ubuntu.com/23747121/
<stub> marcoceppi: I think you lose the declared series ordering, so the preferred series (first) might change when you 'charm build'
<marcoceppi> probably should use ordered set then stub ?
<stub> marcoceppi: There is an ordered set?
<marcoceppi> I mean, "yes"
<stub> (hey, there is an ordered dictionary... wouldn't surprise me if there was an ordered set lurking around somewhere in stdlib)
<marcoceppi> it's not in stdlib, but there's a generally accepted class that implements it
<kjackal> cory_fu: kwmonroe: something very important: cwr failed but it did not return an error code, so the script tried to release the charms (even if the tests failed)
<cory_fu> kjackal: There's a card on the board for that
<kjackal> cory_fu: so how can we move forward with this PR then? Should we have a check (eg grep FAILURE) against the output directory and stop the jenkins job?
<cory_fu> kjackal: We should address the card on the board and make cwr return a proper exit code
<cory_fu> kjackal: Ok, I know what the problem is.  Your invocation with the charm doesn't have a reference bundle
<cory_fu> kjackal: http://54.205.68.190:8080/job/charm-apache-tomee/2/console vs http://54.205.68.190:8080/job/charm-topbeat/2/console
<kwmonroe> ahhh. nice!
<cory_fu> PM'd you the login
<cory_fu> kjackal: Note the "echo 'bundle: /tmp/bundles/'" vs "echo 'bundle: /tmp/bundles/cs_beats_core'"
<cory_fu> We need to detect no reference bundle and use the charm path / name instead
<cory_fu> That, or make the reference bundle required
<cory_fu> I'm leaning toward the latter
<kjackal> cory_fu: if we make the reference bundle required then we are losing all our customer base :)
<kjackal> I am not sure, I am a bit puzzled
<marcoceppi> stub: good catch on the ordering
<kjackal> cory_fu: so we will say if you want to make a charm you should first have a bundle for it before using the CI we are building?
<cory_fu> kjackal: Part of the justification for reference bundle is that we want to heavily push for testing of bundles instead of charms.  I don't think it's unreasonable to make that a requirement, but I'm not 100% set on it.
<cory_fu> arosales: What's your opinion on that?  ^
<cory_fu> kjackal: Pretty much, yes
 * arosales reads back scroll
<cory_fu> kjackal: Otherwise, you're back to having to write amulet tests to get any testing done.  If you have even a trivial bundle, you can test it with matrix
<arosales> we had talked about policy in making a reference bundle mandatory, thus I would be a +1 on making a reference bundle mandatory
<arosales> a charm that doesn't have a bundle doesn't make much sense
<arosales> I guess at a very minimum if someone did have a use case for a single charm they could still make a bundle of 1 charm. I wouldn't suggest it, but there is still that option
<arosales> cory_fu: kjackal  ^
<kjackal> arosales: cory_fu: It is a chicken and egg issue here. Let's say we want to have a CI for juju from day 1 of development. That means that day 0 should start with creating a bundle, and day 2 you can create a charm.
<kjackal> arosales: cory_fu: anyway, I am out for today
<kjackal> On Monday, i will look at the multiple controllers
<kjackal> bye
<marcoceppi> cory_fu: I found another problem, the way combine works
<marcoceppi> it's appending series list, instead of prepending
<cory_fu> marcoceppi: Why is that wrong?
<marcoceppi> cory_fu: if lower layer is trusty, precise, and upper layer is xenial, trusty, order will be trusty, precise, xenial
<marcoceppi> but upper layer should take precedence
<cory_fu> Ah, yeah
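The fix being discussed — upper-layer series first, duplicates dropped, declared order preserved — needs nothing more than a plain dict as the "ordered set" (insertion-ordered in Python 3.7+). A sketch under that assumption; `combine_series` is an illustrative name, not charm-tools' actual function:

```python
def combine_series(upper, lower):
    """Merge series lists from two layers.

    Upper-layer entries come first (taking precedence), duplicates are
    dropped, and the remaining order is preserved.  dict.fromkeys acts
    as an ordered set without any third-party dependency.
    """
    return list(dict.fromkeys(upper + lower))

# Upper layer: xenial, trusty.  Lower layer: trusty, precise.
print(combine_series(["xenial", "trusty"], ["trusty", "precise"]))
# -> ['xenial', 'trusty', 'precise']
```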
<stokachu> rick_h, thanks re: storage on lxd
<rick_h> stokachu: <3
<arosales> kjackal: I think in that case we could still just have a bundle of 1 charm
<arosales> at a very minimum if that charm doesn't have any relations yet
<smgoller> hey all, so I'm using 2.1 beta3 to talk to maas 2.1. When I try to bootstrap, juju errors out because MAAS doesn't support the 1.0 api anymore. Is there a workaround for this?
<lazyPower> marcoceppi - got a second? mbruzek and I are looking to share a controller we have hosted for the team, but we want the shares to be admin level users
<lazyPower> can you grant controller wide admin access?
<rick_h> lazyPower: grant superuser https://jujucharms.com/docs/2.0/users-models#controller-access
<lazyPower> rick_h ta
<smgoller> anybody? How do I tell juju to use api version 2?
<smgoller> (for maas)
<smgoller> this is the error i'm getting
<smgoller> 2017-01-05 17:57:23 ERROR cmd supercommand.go:458 new environ: Get http://<ipv4address>/MAAS/api/1.0/version/: dial tcp <ipv4address>:80: getsockopt: connection timed out
<smgoller> ERROR failed to bootstrap model: subprocess encountered error code 1
<smgoller> but curl on that url from the host works just fine
<lazyPower> smgoller what version of juju?
<magicaltrout> so here's a weird one
<magicaltrout> that is completely left field at the moment, but I need to figure out how to deploy Juju stuff to this bad boy... https://www.tacc.utexas.edu/systems/wrangler
<magicaltrout> thats quite a powerful computer...
<cory_fu> marcoceppi: Are you working on the append vs insert issue or is #298 ready for review again?
<h00pz> hey folks got a mysql question regarding the openstack deployment from juju
<lazyPower> magicaltrout hoooo, DSSD rack scale? I bet that disk IO is *in-sane*
<lazyPower> analytics with bandwidth of 1TB/s and 250M IOPS (6x faster than Stampede)
<marcoceppi> cory_fu: working on it
<h00pz> in the past (non juju deployed openstack) I would be able to log into my mysql servers and do a "show databases;" and voila my openstack dbs
<h00pz> now I juju ssh mysql/0 and then log into mysql and do the same and all I see is schema and test
<lazyPower> h00pz - not to be daft, but i'm fairly certain our openstack offering uses percona
<lazyPower> wouldn't that then be warehoused in percona/0 not mysql/0?
<h00pz> yah it does hence my complete lack of brain cells
<lazyPower> well I wasn't going to go that far :D  Its a common mistake
<lazyPower> i do ask, as some people modify the bundle, and its not always obvious if thats the case
<h00pz> ubuntu@maas:~$ juju status | grep percona
<h00pz> mysql                  5.6.21-25.8  active          1  percona-cluster        jujucharms  246  ubuntu
<h00pz> ubuntu@maas:~$ juju ssh percona/0
<h00pz> ERROR unit "percona/0" not found
<h00pz> so mysql/0 exists but not percona/0
<lazyPower> ok, they aliased the percona charm as mysql
<lazyPower> makes sense to me so far. When you log into the percona unit you dont see any databases related to openstack right?
<rick_h> h00pz: right, you can name the applications once deployed whatever you want.
<h00pz> correct
<lazyPower> hmmm
<rick_h> h00pz: e.g. you might deploy mysql twice, once as mysql-leader, and a second time as follower
<h00pz> i did a first-timer juju deploy of the basic openstack bundle and didn't fiddle with it.
<rick_h> h00pz: so to ssh to each you'd use your names you gave it, mysql-leader and such
<h00pz> services are working fine i just need to edit a couple fields (i tied to delete vms when rabbitmq was down)
<rick_h> h00pz: right, so juju ssh mysql/0 should work for you in this case?
<h00pz> yes that does and once in there no darn dbs
<rick_h> h00pz: this was running before or is a new deploy?
<h00pz> been running fine (and still is) for over a month
<h00pz> im just trying to find the darn dbs to edit some stuff
<rick_h> h00pz: k, and how are you listing the dbs on the mysql?
<h00pz> ubuntu@maas:~$ juju ssh mysql/0
<h00pz> ubuntu@juju-e039b2-0-lxd-1:~$ mysql -u mysql
<h00pz> mysql> show databases;
<h00pz> +--------------------+
<h00pz> | Database           |
<h00pz> +--------------------+
<h00pz> | information_schema |
<h00pz> | test               |
<h00pz> +--------------------+
<h00pz> 2 rows in set (0.00 sec)
<rick_h> h00pz: can you get an interactive mysql shell and try "SELECT User FROM mysql.user;"
 * rick_h dusts off some old mysql know-how
<bildz> heh
<bildz> im sitting next to h00pz btw
<rick_h> bildz: howdy
<bildz> hey rick_h
<h00pz> its telling me to faq off, clearly no adequate perms
<rick_h> bildz: h00pz so my suspicion is that it's not showing all databases, but only the ones you have permission to see
<rick_h> bildz: h00pz so the ones for the OS aren't shown as the charms will create users/dbs with permissions in case that mysql is reused with other applications
<rick_h> e.g. a blog and a drupal and a ...
<rick_h> h00pz: bildz so try using "sudo mysql
<bildz> yeah that's very odd
<rick_h> to get mysql running as root
<rick_h> sorry, w/o the "
<bildz> well, you can mysql -u root -p
<bildz> same thing
<rick_h> ok, same thing
<rick_h> bildz: but not mysql -u mysql?
<rick_h> bildz: so might be completely
<bildz> im guessing we could parse the configs of services using mysql to see what it's using :)
<rick_h> sudo mysql
<rick_h> use mysq;
<rick_h> select User from mysql.user;
<bildz> yeah basing that root has no passwd
<rick_h> bah, use mysql;
<bildz> but its appearing to have one
<rick_h> right, it's ubuntu so you should be able to sudo from the ubuntu user
<smgoller> lazypower: I figured it out. it's not an api problem
<smgoller> it's a network connectivity problem.
<smgoller> the problem being the command was being executed on the bootstrapped controller, not the machine i was running the bootstrap from
<lazyPower> smgoller ok good :)
<smgoller> and the controller couldn't talk to maas
<lazyPower> glad you figured out the blocker
<lazyPower> i wasn't entirely sure where to start but i figured best to start with what juju
<lazyPower> and go from there
<lazyPower> +1 for your debugging routine :)
<h00pz> yah so im thinking that the only way to get into the openstack table space is from the root user WITH PASSWORD which we dont know or setup during the juju deploy
<lazyPower> h00pz thats in /var/lib/mysql  or /var/lib/percona respectively
 * rick_h goes to deploy the percona-db charm and will be back in a sec
<lazyPower> h00pz specifically /var/lib/mysql/mysql.passwd
<h00pz> on the controller ?
<lazyPower> That should exist on any mysql unit you have deployed
<lazyPower> thats a convention baked into the db charms when it auto-generates a password, it leaves a little cache file around on disk with the contents of the root users password, only readable by root
<h00pz> i have /var/lib/percona-xtradb-cluster but no mysql.passwd
<h00pz> i did a find and a grep for that file name, no dice
<lazyPower> hmm
<lazyPower> perhaps something changed and i wasn't aware; it's been a bit since i've looked at those charms
<lazyPower> beisner, or thedac - any suggestions on how to get the root password for the percona charm these days?
<rick_h> lazyPower: h00pz bildz this is the only thing I can find atm https://help.ubuntu.com/community/MysqlPasswordReset
<rick_h> lazyPower: h00pz bildz lots of things say that if not provided during install it creates one in the log file
<rick_h> but my deploy doesn't have a password line I can tell
<beisner> howdy lazyPower, we typically set it on deploy, but i think there is a retrieval method.  pretty sure it's in rel data.  thedac may have more detail?
<rick_h> https://jujucharms.com/percona-cluster/#charm-config-root-password yea
<rick_h> hmm, /me wonders if you can change it via charm config?
<h00pz> bwahahahaha are you telling me I can set it through the juju gui
<rick_h> h00pz: I'm testing it out :)
<rick_h> h00pz: hmm, doesn't look like it took
<h00pz> this â> Root password for MySQL access; must be configured pre-deployment for Active-Active clusters. <â
<rick_h> yea...that seems about right
<h00pz> lemme try
<h00pz> nope, might have to try lazyPower's suggestion to break the password
<lazyPower> beisner ok, i hadn't thought about the rel data being the key holder. Percona is non-reactive, correct?
<rick_h> h00pz: bildz yea, have to suggest to go the password recovery route
<rick_h> seems this is the way it works these days to keep things secure
<h00pz> k will do thanks guys.
<kjackal> cory_fu: I am doing two minor changes in "Store and serve CWR reports and build artifacts"
<beisner> lazyPower, indeed, it's old style
<cory_fu> kjackal: What changes?
<magicaltrout> indeed lazyPower that is some serious compute power
<magicaltrout> not sure how i'm gonna get charms deployed on int
<magicaltrout> it
<magicaltrout> but always like a challenge
<kjackal> cory_fu: the ref-bundle param should be mandatory in the build-on-release as well, and something is not right in the description
<cory_fu> kjackal: OK, thanks
<kjackal> cory_fu: last show stopper in this PR (at least for me) is the CWR not reporting a failure. How do we want to handle this in respect to the PR. Should we merge the PR now knowing that this needs to be fixed or should we keep the PR pending?
<cory_fu> kjackal: It will have to be fixed in the CWR repo anyway, so I'd say treat it as a separate issue.
<cory_fu> kjackal: However, hold on one sec, I have one more minor change to make
<cory_fu> kjackal: Ok, pushed
<kjackal> cory_fu: Ok, so we move forward with the merge (as soon as you are done). One last question. What is more important: multiple controllers vs cwr returning an error code on failure? If you ask me the latter, because with that PR the build-on-* are not safe to be used.
<kjackal> Basically I am asking for priority of tasks for Monday morning
<cory_fu> kjackal: Well, they're safe to use if you don't use push-to-channel, and that won't affect our demo next week, so I'd actually say the multiple clouds is higher.  But they're both important.
<kjackal> Ok, multiple controllers it is then (unless someone else goes for it tomorrow)
<magicaltrout> marcoceppi: i get the feeling i'll know the answer to this but.... if i wanted to use the manual provider, apart from the controller, do the other units need to be Ubuntu?
<lazyPower> magicaltrout nah you can manually enlist other series
<lazyPower> i'm like 80
<lazyPower> % certain of this
<lazyPower> its that last 20% that might bite you
<magicaltrout> lol
<magicaltrout> well i'll have a bunch of other issues as well so i'll need to fork and extend
<magicaltrout> but its worth investigating
<kjackal> cory_fu: Sorry to bother you again. Do we export the cloud credentials so that Matrix knows about them? What should happen in the case of multiple controllers?
<cory_fu> kjackal: Ugh
<cory_fu> kjackal: You're right.  And now I realize that my work-around of using the env var won't work
<cory_fu> kjackal: I can't believe I missed that
<cory_fu> kjackal: Ok, I removed the exports.  I'll have to figure out another way to do it
<kjackal> cory_fu: multiple controllers seems an easy task. Should be one day.
<kwmonroe> kjackal: cory_fu: where did the "usage: cwr [-h] [--result-output RESULT_OUTPUT]" get fixed?  kjackal reported it in pr21 because the cwr invocation was --result-dir (not --result-output).  was there a commit in layer-cwr to update the install location of cloud-weather-report?
<cory_fu> kwmonroe: kjackal somehow had an old version of cwr
<kwmonroe> right cory_fu.. now i've got it.
<kwmonroe> and iirc, pypi wasn't updated, so layer-cwr did a clone/install (or wget master.zip or something)
<kjackal> kwmonroe: cory_fu: yes, I probably had an old cwr. Probably because I reused a jenkins deployment
<sfeole> hey guys, i'm working on a new charm and the install hook keeps failing with the following:   http://pastebin.ubuntu.com/23748751/     looks to be related to wheelhouse, has anyone ran into this before?
<thedac> lazyPower: looking that up for you now. The file is written on disk somewhere. Not where I expected. Give me a few.
<thedac> lazyPower: no, it is shipped off to leader settings: juju run --unit percona-cluster/0 leader-get
<lazyPower> thedac aha! thats excellent. Thanks for the follow up
<lazyPower> h00pz - still poking around?
<thedac> no problem
<kwmonroe> kjackal: did you fix it by manually updating cwr on the jenkins slave?  i ask because my cwr charm (on the jenkins unit) says to pip_install_from_git, so i know that's correct.  however, the cwr installed on the jenkins system is *not* the same as the github repo.
<kwmonroe> it's like cwr is getting pip installed prior to the 'pip_install_from_git' and being skipped.
<kwmonroe> cory_fu: do you have pypi release rights to cwr?
<kwmonroe> (not that that's the right answer, just curious)
<cory_fu> kwmonroe: I do not.  Only ses
<kjackal> kwmonroe: I am back. So I fixed it by going to bed and woke-up realising what I was doing wrong
<kwmonroe> :)
<kjackal> kwmonroe: let me parse what you are saying
<kwmonroe> kjackal: let me help:  i deployed the latest cwr-ci bundle and upgraded the cwr charm with the feature/reports-and-artifacts branch.
<kwmonroe> the jenkins principal has an old version of cloud-weather-report pip installed
<kjackal> yeap, that will not work.
<kwmonroe> it doesn't seem to be using the "pip_install_from_git(..., http://repo/c-w-r)
<kjackal> I just destroyed the model and tried manually from scratch
<cory_fu> kwmonroe: It's working fine for me
<kwmonroe> not possible!
<cory_fu> kwmonroe: As kjackal said, did you re-deploy the jenkins, or did you upgrade it from an old deploy?
<cory_fu> kwmonroe: Of course, this would be moot if we had the workloads containerized.
<kwmonroe> cory_fu: this is a new jenkins deploy.. from ~kos.<greek-name>
<cory_fu> Very strange
<kwmonroe> cory_fu: should cwr be in the /var/lib/juju/agent/cwr/.venv or on the system?  it feels like i'm getting cwr installed outside of the cwr charm.  and that feels bad.
<cory_fu> I've deployed many times and not hit that, but it might be a heisenbug
<cory_fu> kwmonroe: It should be in the system now, because it's not python3 so isn't compatible with the venv.  Really, it should be in the containerized job env
<kwmonroe> heh, you keep saying "containerized" like we have time to do something about that
<cory_fu> Yeah.  :(
<magicaltrout> fix it!!!! fiiiiix iiiiiiit!
<kwmonroe> well, i had a stale ciuser preventing me from doing the build-on-commit, so let me tear all the way down again.  if i deploy cs:~kos.greek/jenkins + local cwr from feature/reports-and-artifacts, i should have a cwr that supports --results-dir?
<cory_fu> Yes
<kwmonroe> awww heck.  i know what happened.  i didn't upgrade jenkins, but i did 'cwr' (hence cwr-ci.installed was already set).  damn.
<cory_fu> kwmonroe: :)
<kwmonroe> totally not my fault.  you asked "did you re-deploy the jenkins ... or upgrade it".  you said nothing of the cwr subordinate.
<cory_fu> tvansteenburgh, kwmonroe, petevg: https://github.com/juju/python-libjuju/pull/43
<kwmonroe> i don't look at PRs until travis is done
<cory_fu> :)
<tvansteenburgh> i don't look at PRs when ppl ping me about them right after i get the email from github
<cory_fu> lol
<kwmonroe> cory_fu: i'll approve this part: https://github.com/juju/python-libjuju/pull/43/files#diff-9aa2102e02a5e20ed59d041e8ee2b2e4L91, tvansteenburgh can approve the rest.  python async isn't really my thing.
<tvansteenburgh> haha
<tvansteenburgh> that line was there for a reason, -1
<kwmonroe> good point.  lack of comment for why 2 blank lines weren't needed.  -1.
<cory_fu> tvansteenburgh: Was the reason to give lint errors?
<cory_fu> kwmonroe: Do you want lint errors?  Because this is how you get lint errors.
<kwmonroe> lol
<tvansteenburgh> cory_fu: confirmed
<cory_fu> :)
<cory_fu> Ok, travis passed.  Someone feel free to merge it so I can submit my next PR
<cory_fu> ;)
<cory_fu> kwmonroe, petevg: https://github.com/juju-solutions/matrix/pull/66
<cory_fu> petevg: Ok, looking at your PR now.  Sorry for the delay
<kwmonroe> whitespace seems legit
<petevg> That's my bad, apparently.
<petevg> I usually catch that sort of nonsense.
<kwmonroe> petevg: emacs has a box for "add superfluous whitespace".  maybe check that out.
<cory_fu> petevg: Technically, that indentation was correct, because the timeout value went with the wait_for(), but I changed it to make it less confusing
<petevg> cory_fu: ah. You are right.
<cory_fu> Sometimes getting indentation to be reasonable while sticking to < 80 columns is a PITA
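The indentation point is easier to see with stdlib `asyncio` (a generic sketch, not matrix's actual code): the `timeout` keyword belongs to `wait_for()`, not to the awaited coroutine, and splitting the call across lines keeps that clear while staying under 80 columns:

```python
import asyncio

async def slow():
    await asyncio.sleep(10)
    return "done"

async def main():
    try:
        # timeout= is an argument to wait_for(), not to slow(); the
        # line break makes that obvious without blowing the 80-col limit.
        return await asyncio.wait_for(
            slow(),
            timeout=0.01,
        )
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # -> timed out
```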
<cory_fu> petevg: Crap, that's a good catch.  I think that log message is rather important, so I'm +1 to blocking that until I can figure out a fix.
<petevg> cory_fu: cool.
<petevg> I don't have any immediate suggestions. unfortunately :-/
<petevg> cory_fu: you might actually be able to do try: else: finally, actually (all on the same indentation level).
<petevg> ... though I'm not sure how the nested Exception will interact with it ...
<cory_fu> petevg: Actually, the way it's structured, crashdump won't happen if add_model fails
<petevg> That's not good either.
<petevg> cory_fu: thx for merging the output dir PR. :-)
<petevg> cory_fu: darn. If an exception is raised during an "else" after a try-except, it doesn't look like the "finally" executes :-(
<petevg> cory_fu: actually, hang on ... it does work. Flaw in my script.
<cory_fu> petevg: Too late, it's updated
<cory_fu> Yeah, I tested it and it seems to work properly with the else
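petevg's finding is easy to verify in plain Python: an exception raised in the `else` clause still triggers the `finally` block, which is what lets the crashdump step fire even if the main body blows up. The names below mirror the discussion, not matrix's real code:

```python
log = []

def run():
    try:
        log.append("add_model")       # succeeds, so else runs
    except Exception:
        log.append("except")          # only for failures in the try body
        raise
    else:
        log.append("run_once")
        raise RuntimeError("boom")    # raised in else, NOT caught above
    finally:
        log.append("crashdump")       # still runs before propagating

try:
    run()
except RuntimeError:
    pass

print(log)  # -> ['add_model', 'run_once', 'crashdump']
```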
<cory_fu> Though, TBH, if run_once kicks up an uncaught exception, I'm pretty sure it will hang Matrix
<cory_fu> But it has internal exception catching, so it should be ok
<petevg> cory_fu: I think that run_once won't hang matrix. An uncaught Exception in the "finally" block definitely will, but there's not too much to be done about that other than being careful.
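The control flow they settled on can be sketched in a few lines. This is a standalone illustration of Python's try/except/else/finally semantics, not actual matrix code; the step names are placeholders for the real setup/test/crashdump steps.

```python
# Standalone illustration: an exception raised in the "else" clause
# still runs the "finally" clause before propagating.
events = []

def run_test():
    try:
        events.append("add_model")      # setup step
    except Exception:
        events.append("setup failed")   # cleanup skipped if setup fails
    else:
        events.append("run_once")       # main body
        raise RuntimeError("test blew up")
    finally:
        events.append("crashdump")      # runs even when "else" raises

try:
    run_test()
except RuntimeError:
    events.append("propagated")

print(events)
# -> ['add_model', 'run_once', 'crashdump', 'propagated']
```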
<petevg> cory_fu: in any case, I am +1 on this.
<cory_fu> Sweet, gonna merge it
<cory_fu> We might actually have a demo yet.  ;)
<cory_fu> Ok, I have to EOD now.
<tvansteenburgh> cory_fu: https://github.com/juju/python-libjuju/tree/local-charm-bundle-support
<tvansteenburgh> i just deployed a bundle with local charms using that branch
<cory_fu> tvansteenburgh: That's fantastic!
<tvansteenburgh> needs some cleanup before merge, but feel free to test it out
<tvansteenburgh> i have to eod now too, will finish it up in the morning
<cholcombe> how does the charm review process work again with promulgated charms and their code living at github?
<rick_h> cholcombe: use the review queue to submit a request for review
<cholcombe> rick_h, is it still review.juju.solutions?
<cholcombe> rick_h, nvm i found the link
<rick_h> cholcombe: I thought the old one was going away but that doesn't look right
<cholcombe> rick_h, https://review.jujucharms.com/reviews/81 looks good :)
<rick_h> cholcombe: ok cool
 * rick_h thought there was a big request review button but is blind atm
<cholcombe> yeah you're correct rick.  i was looking at the old site again.  it comes up in my search history.  i need to delete it
<rick_h> cholcombe: gotcha
<rick_h> ah right, there you go
<rick_h> I missed the url change from first paste to the other
<cholcombe> rick_h, do you know if amulet works with juju storage yet?  that was my blocker last time
<rick_h> cholcombe: not that I'm aware of
<cholcombe> gah
<marcoceppi> cholcombe: so amulet will slowly be deprecated in favor of python-libjuju. Amulet will eventually adopt libjuju instead of deployer, at which point you'll have access directly to the model
<cholcombe> marcoceppi, interesting
<cholcombe> marcoceppi, so should i start developing against libjuju?
<marcoceppi> cholcombe: if you're comfortable with async io
<cholcombe> i am but it looks like libjuju is also missing storage support
<marcoceppi> cholcombe: it shouldn't; it implements every aspect of the Juju API
<cholcombe> hmm alright i'll dig deeper
#juju 2017-01-06
<ubabu> Hi all, are juju charms only for containers (i.e., does deploying a charm create a container for that charm)?
<ubabu> or can we deploy our charm directly on a VM or server?
<blahdeblah> ubabu: You can definitely deploy charms in VMs & bare metal servers
<blahdeblah> ubabu: (and containers)
<ubabu> blahdeblah: could you please provide some reference links about deploying charms on VMs & bare metal servers
<blahdeblah> ubabu: https://jujucharms.com/docs/stable/clouds-maas <-- bare metal
<blahdeblah> https://jujucharms.com/docs/stable/help-openstack or https://jujucharms.com/docs/stable/clouds-manual for VMs
<blahdeblah> ubabu: Also https://jujucharms.com/docs/stable/getting-started is pretty good
<ubabu> Hi all, how does juju get the OS images for containers?
<BlackDex> ubabu: That is taken care of by lxd/lxc :)
<lazyPower> mornin juju o/
<Spaulding> hm, how can i achieve a scenario where only the leader fills the DB from the init file? is there any decorator that I can use?
<Spaulding> or any condition?
<Spaulding> or @only_once will do the job?
<lazyPower> Spaulding if you're using layer-leadership there's an @when('is-leader') state
<lazyPower> mind you i have zero context on what you're attempting to do, i only see the last 3 messages
<Spaulding> lazyPower: it's still a good tip!
<Spaulding> i'll definitely check that
<Spaulding> wow, it looks like what i needed!
<Spaulding> thx lazyPower
<lazyPower> np Spaulding happy to help
<cory_fu> tvansteenburgh: Would you mind rebasing your local charm branch to get the get_cloud?  I need that to test it in our workflow
<tvansteenburgh> cory_fu: yeah one sec
<tvansteenburgh> cory_fu: done
<cory_fu> tvansteenburgh: Thanks!
<tvansteenburgh> cory_fu: i have one more small change to make and then i'll put up a PR for that branch
<cory_fu> Awesome
<jcastro> lazyPower: I'm fixing the readme that bmullan pointed out on the list, you'll have a PR soon.
<jcastro> for the kube-core bundle
<lazyPower> jcastro thanks!
<lazyPower> jcastro there was a few i landed in the last day i put on cdk, that i managed to not put in kube-core
<lazyPower> can i link you at those and collab push those in the same swoop?
<jcastro> sure
<jcastro> lazyPower: after we land in the upstream docs
<jcastro> I want to trim these pages down considerably
<jcastro> I don't want to have to update 3 READMEs
<jcastro> https://github.com/juju-solutions/bundle-kubernetes-core/pull/47
<lazyPower> Merged
<lazyPower> jcastro https://github.com/juju-solutions/bundle-canonical-kubernetes/pull/170/files -- did you see this one sneak by on CDK?
<jcastro> that seems fine to me
<stokachu> jcastro, small comment on that PR
<jcastro> on it
<jcastro> https://github.com/juju-solutions/bundle-kubernetes-core/pull/48
<stokachu> jcastro, \o/
<jcastro> hmm, I wanted to tell him about updated CDK on the thread as well
<jcastro> stokachu: next week I'm going to do a new video for conjure-up/LXD
<jcastro> to replace the one we have now
<stokachu> jcastro, pimp
<jcastro> stokachu: think we can nail that ipv6/juju issue by then?
<stokachu> jcastro, yea so I updated conjure-up code already to add that Note in there about ipv6
<stokachu> its already in the ppa
<jcastro> oh awesome
<stokachu> and i also handle if the user is not in the LXD group
<stokachu> i wanted to try and see about getting deis working with a conjure-up deployment, having some issues though
<jcastro> I have a plan for approaching deis
<jcastro> now that we spent a bunch of time with that deis guy yesterday at the sig ops meeting
<stokachu> nice, yea if you are able to manually get it working I can wrap that all up in the spell
<jcastro> I more want them to help for real, like a proper charm with integration
<jcastro> but I want to go with them with "this is a little hack we did, but we could be so awesome"
<stokachu> yea that would be nice
<stokachu> yep agreed
<stokachu> would be cool if on their website they just did 'conjure-up deis'
<stokachu> :D
<jcastro> too small
<stokachu> aww
<jcastro> "Using Canonical Kubernetes? You have it."
<jcastro> click here to pay us.
<stokachu> ah i see what you did there
<lazyPower> thats slick
<lazyPower> i like it :)
<jcastro> come on google, my stuff is passing, merge me
<jcastro> so _close_, then we won't have to have the docs spread out everywhere
<petevg> cory_fu: whenever you get a chance: https://github.com/juju-solutions/matrix/pull/67
<cory_fu> petevg: Looks good at first glance, but should we move the full_stack test to follow that pattern?  Or do we want to keep it where it is for code coverage?
<petevg> cory_fu: leave it where it is for code coverage, I think.
<cory_fu> Cool
<cory_fu> petevg: In exchange: https://github.com/juju-solutions/matrix/pull/68
<petevg> cory_fu: looks good. merged.
<cory_fu> petevg: Cool.  The functional test stuff looks good, too.  I'm running them now.  About how long do they take?
<petevg> cory_fu: they should take about as long as the full stack test -- the gating tests all skip actually deploying things.
<petevg> cory_fu: it isn't spitting the matrix log to stdout, and I don't know how to make py.test/tox verbose enough to output it. You can tail /tmp/<hash>/matrix.log if you get impatient :-)
<cory_fu> petevg: It finished.  Merged.  Thanks
<petevg> cory_fu: awesome. Thx :-)
<kwmonroe> cory_fu, i just noticed that nothing checks if cwr was successful -- it always pushes if push-to-channel was specified: https://github.com/juju-solutions/layer-cwr/blob/master/templates/BuildMyCharm/config.xml#L104
<kwmonroe> shouldn't we only push on a successful cwr run?
<cory_fu> kwmonroe: Correct
<cory_fu> kwmonroe: Though, I think the job is set up to fail fast (set -e) so it shouldn't actually get to that part of the code if cwr sets its return value properly
<kwmonroe> cory_fu: it seems cwr doesn't think bundletester failures are fatal:  (line 82): http://paste.ubuntu.com/23754237/
<cory_fu> kwmonroe: Yeah, that's why there's a high priority card on the board for that
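The fail-fast behavior cory_fu describes is easy to demonstrate. The step names below are stand-ins for the real job commands, not the actual Jenkins job body.

```shell
# With `set -e`, any step exiting non-zero aborts the script, so a
# failing cwr run never reaches the push step (assuming cwr sets its
# exit code correctly). Step names are stand-ins.
output=$(bash -c '
set -e
echo "running cwr"
false                     # stand-in for cwr exiting non-zero
echo "pushing to channel" # never reached under set -e
' || true)
echo "$output"
```

Which is exactly why cwr swallowing bundletester failures (exiting 0 anyway) defeats the guard.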
<magicaltrout> sod matrix, don't test for failure, just assume it all works \o/
<cory_fu> :)
<cory_fu> "With one simple change, we have made it to 100% passing tests"
<magicaltrout> woop
<magicaltrout> i assume you two are coming to gent?
<magicaltrout> bdx: you coming to gent this year to get off your face?
<magicaltrout> sorry i mean, talk about monitoring
<kwmonroe> magicaltrout: i think it's just petevg and kjackal from the big team going.  it'll be too cold for cory_fu and myself.
<magicaltrout> losers
<magicaltrout> fair enough
<magicaltrout> petevg's easier to wind up than cory_fu so thats acceptable
<cory_fu> lol
<magicaltrout> fscking kjackal and his endless talking about cars... might have to bring earplugs
<magicaltrout> well cory_fu you have no excuse to not do apachecon this year
<magicaltrout> well... unless mark doesn't want to pay your expenses which will be basically nothing
<kwmonroe> it's in miami in may, right magicaltrout?
<magicaltrout> correct
<magicaltrout> 18th i think
<kwmonroe> i'm there.  just have the pesky task of getting a talk accepted.
<magicaltrout> aye, i need to submit some shit
<kwmonroe> ... i suppose i should check the cfp closing
<magicaltrout> see if they go for it
<magicaltrout> feb the something for the CFP
<magicaltrout> you've got a few weeks
<kwmonroe> phew - feb 11 deadline.
<kwmonroe> yup
<cory_fu> Ugh.  But Miami
<kwmonroe> will smith says it's a nice place
<cory_fu> I lived there for a year, and that was one year too many
<magicaltrout> yeah never been.  I believe most people in seville shrugged and sighed
<cory_fu> It's probably great for visiting
<magicaltrout> well we only ever see a hotel and 1 block each side
<cory_fu> As long as you want to spend a whole lot of money
<magicaltrout> i think i'm safe
<magicaltrout> i've been told JPL are planning a whole science track, dunno how likely that is to happen, but hopefully i'll get some apache - science - juju stuff accepted with this sparkler charm and some other bits
<magicaltrout> "Tracking down the crims on the dark web with Juju"
<magicaltrout> and get kwmonroe to say that in his british accent
<magicaltrout> it'll go down a treat
<kwmonroe> too right!
<spaok_> hello
<spaok_> if I have an issue with OpenStack deployed via juju and cloud-init/neutron, what team is my best bet to talk to?
<kwmonroe> spaok: your best bet is probably to ask in #openstack-charms
<spaok> thanks kwmonroe I'll try there
<magicaltrout> thats right kwmonroe you just pass the buck!
<vmorris> ><
<spaok> heh, well kwmonroe is in that channel also :)
<magicaltrout> i wouldn't bother asking him... he'll tell you to use Hadoop
<spaok> btw, anyone know what jujud's tipping point is for the number of nodes reporting in to it?
<spaok> we seem to have killed ours
<magicaltrout> nope, but i believe you can change the default controller size if its really load that kills it
<spaok> magicaltrout: we have an open issue with canonical about it, but was just curious
<spaok> we've bumped the resources for the VM to like 64 cores and 128GB mem and juju can still take 10 min to respond to a status
<kwmonroe> magicaltrout: :)  i clearly didn't /part #openstack-charms fast enough
<kwmonroe> spaok: what juju version?
<kwmonroe> and how long has the controller been up?
<spaok> 2.0.0-xenial-amd64
<spaok> 13 days
<spaok> we usually end up rebooting it, waiting 5min or so for it to settle, and it responds for a bit
<spaok> 51092 root      20   0 40.449g 2.695g  48732 S 252.5  2.1  37318:27 jujud
<spaok> considering how many cores there are, 252% isn't too bad
<spaok> but on reboots it will max all the cores
<kwmonroe> spaok: i had trouble with an azure controller that would become unusable after a couple weeks (https://bugs.launchpad.net/juju/+bug/1636634).  i upgraded to 2.1-beta and it's been smooth sailing for the last 3 weeks (fingers still crossed).
<mup> Bug #1636634: azure controller becomes unusable after a few days <juju:Fix Released by alexis-bruemmer> <https://launchpad.net/bugs/1636634>
<spaok> jujud is very core hungry
<kwmonroe> so 2.1-beta3 works for me, but i think 2.0.x had some mem improvements (according to comment 5 in the above bug).  maybe worth an upgrade to 2.0.2 if you can.
<spaok> ya, I have 2.0.2 in my other environment
<spaok> the one with the neutron issues
<kwmonroe> does that one fair (fare?) any better?  also, which word is correct in <-- that context?
<spaok> well different scale
<spaok> only has a 50 or so servers, the one with the issue recently had 500 servers added to two different models
<magicaltrout> kwmonroe: Yogi bear 'as many different meanings as an adjective, adverb, and a noan. it most commonly means just and unbiased, pleasin', crystal, and Billie Jean, or a public exhibition event. Grey Mare can be used verb and a noan. as a verb, it means ter Scapa Fla, get along, or succeed.
<magicaltrout> or
<magicaltrout> Fair has many different meanings as an adjective, adverb, and a noun. It most commonly means just and unbiased, pleasing, clear, and clean, or a public exhibition event. Fare can be used verb and a noun. As a verb, it means to go, get along, or succeed. if you don't understand
<spaok> hah
<spaok> where did the yogi bear come from?
<magicaltrout> Fair == Yogi bear
<kwmonroe> excellent.  i fare well now.
<magicaltrout> in cockney
<spaok> oh
<spaok> hah
<spaok> I'm like wtf
<magicaltrout> me too and i'm only 100 miles from london
<magicaltrout> but when i get bored I like to annoy kwmonroe
<spaok> I'm in California, so yogi bear is a cartoon to me
<kwmonroe> you are bored a lot
<magicaltrout> http://www.whoohoo.co.uk/main.asp mostly by translating stuff on this website
<magicaltrout> kwmonroe: i work in IT... its acceptable
<kwmonroe> i think you mean exceptable
<magicaltrout> plus i work from like 9am to 12+ most days, so my down time is normally annoying you
<magicaltrout> or petevg if i'm feeling nice
<petevg> I'm touched that you care.
<magicaltrout> lol
<magicaltrout> sooo petevg you're coming to gent?
<petevg> ayup. Got the flight booked and everything.
<magicaltrout> blimey, ive not even looked at the train yet
 * kwmonroe jots down 'blimey' for my accent antics
<petevg> I'm bringing the family along, so I had to do more logistics.
<magicaltrout> jesus petevg what did i tell you about bringing family along to drinking events....?
<magicaltrout> sorry i mean work
<petevg> Heh.
<magicaltrout> anyway, same applies
<magicaltrout> is it punishment for not buying the cigs in the duty free?
<petevg> They know that I may be scarce for a couple of days.
<petevg> Something like that. :-p
<magicaltrout> lol
<magicaltrout> or so the others can buy kinder eggs.... legally!
<magicaltrout> \o/
<petevg> Exactly. :-)
<magicaltrout> make sure you know the heimlich maneuver
<petevg> I think that the kidlet will manage not to eat the toy ...
<petevg> We'll brush up just in case.
<magicaltrout> *face palm* famous last words
<petevg> :-p
<kwmonroe> cory_fu: it just dawned on me that we're having bash shell out to cat a function that is then executed by python3... in the middle of a bash command hidden in a jenkins job xml.  this is like, awesome.  https://github.com/juju-solutions/layer-cwr/blob/master/templates/BuildMyCharm/config.xml#L96
<cory_fu> kwmonroe: Awesomely terrible, yeah.
<kwmonroe> whatever.  the only good python3 is the python3 nested in bash.  amirite magicaltrout?!?
<cory_fu> kwmonroe: That would also be good to break out into a script.  Those jobs really need to be broken up so that the XML just contains the info specific to the job and then calls out to the common code
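For the curious, the pattern being teased looks roughly like this. It's a toy stand-in, not the actual job XML: bash hands python3 a function at runtime (here via a heredoc rather than cat-ing it out of a file), and the function name is hypothetical.

```shell
# Toy stand-in for bash-wrapping-python3: bash feeds python3 a function
# definition and a call to it, then captures the output.
result=$(python3 - <<'EOF'
def push_to_channel(channel):
    # stand-in for the real release logic
    print("pushing to " + channel)

push_to_channel("edge")
EOF
)
echo "$result"
```

Breaking this out into a real script, as cory_fu suggests, avoids the quoting and escaping hazards of nesting it inside job XML.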
<magicaltrout> that is the best python3
<magicaltrout> simply the best..... better than all the rest.... better than....
<kwmonroe> go
<magicaltrout> aye
<magicaltrout> rust
<magicaltrout> dart \o/
<magicaltrout> everything should be written in dart
<cory_fu> kwmonroe: Quick PR for you: https://github.com/juju-solutions/layer-cwr/pull/29
<kwmonroe> gotta wait on travis
<cory_fu> kwmonroe: Travis doesn't run on the charm
<kwmonroe> then we're gonna be waiting a while
<cory_fu> lol
<kwmonroe> hey!  that's nice.  that would have caught a jclient.create_job that bit me earlier.  merged
#juju 2017-01-07
<h00pz> anyone here an expert on juju openstack ?
<magicaltrout> https://www.facebook.com/viralthread/videos/639208362918517/
<magicaltrout> need to get this on in gent
#juju 2017-01-08
<vmorris> magicaltrout, this looks good to me
#juju 2018-01-01
<gamamtz> hi guys, can i ask?
<gamamtz> does anyone know how to specify the network id in the juju deploy command for a controller in openstack with multiple networks?
<gamamtz> i solve it, with juju model-config network=networkid
<mashiro219> âââââââââââââââââââ A DISCUSSION IS GOING ON ABOUT TO TO RE-ENSLAVE NIGGERS IN #/JOIN IF THIS GETS YOUR DICK HARD JOIN IN (MESSAGE VAP0R FOR HELP) hrvzqm: jw4 zeestrat stokachu ââââââââââââââ
<mashiro219> âââââââââââââââââââ A DISCUSSION IS GOING ON ABOUT TO TO RE-ENSLAVE NIGGERS IN #/JOIN IF THIS GETS YOUR DICK HARD JOIN IN (MESSAGE VAP0R FOR HELP) cdwyzfpb: hbogert_ rektide Calvin` âââââââââââ
#juju 2018-01-02
<jose-phillips> hi
<jose-phillips> question
<jose-phillips> if i want to install openstack pike on ubuntu xenial
<jose-phillips> i have to add openstack-origin: cloud:pike
<jose-phillips> on each command i execute?
<jose-phillips> juju deploy?
<thumper> jose-phillips: there is probably an openstack bundle that you would deploy, but I've not done this myself
<thumper> jose-phillips: found any docs?
<jose-phillips> nope
<jose-phillips> is there a way to destroy a controller
<jose-phillips> or force the destroy
<jose-phillips> it always keeps Waiting on 1 model, 3 machines, 12 applications
<pmatulis> juju destroy-controller -y --destroy-all-models
<pmatulis> otherwise, try 'juju kill-controller'
<pmatulis> kill will always try destroy unless they changed something recently
<jose-phillips> work with kill
<jose-phillips> another question, i'm having some issues
<jose-phillips> with mysql package
<jose-phillips> when i deploy the package i have the issue that i can't connect to mysql
<jose-phillips> Host 'xxx.xx.xxx.xxx' is not allowed to connect to this MySQL server
<jose-phillips> i add on allowed-networks
<jose-phillips> and still can't connect
<jose-phillips> access-network:
#juju 2018-01-03
<wgs> oioi
<EdS> hello... I'm not sure if this is the right place to ask, since it's systemd and snap related, but it can't hurt to ask! I have canonical kubernetes deployed using Juju and MAAS. We have a problem with kube-proxy and want to increase the log level. I have tried to adjust this by adding arguments to the ExecStart line in the systemd config file that starts the snap service.
<EdS> Restarting the snap doesn't seem to make any difference, it is still running with the settings from before, no change in log level.
<EdS> Does anyone have any idea how to change this sort of thing? I lack experience with snap and systemd and hadn't expected to dive quite so deep into the automatically provisioned systems!
<tvansteenburgh> EdS: so `journalctl -u snap.kube-proxy.daemon.service` doesn't give you enough info?
<EdS> tvansteenburgh: unfortunately it's telling me that kube-proxy is unable to watch the api server, over and over. Next debugging step appears to be raising the logging level, that's where I'm stuck.
<EdS> sorry, I've been AFK most afternoon, thanks for getting back to me
#juju 2018-01-04
<EdS> hi there :) I am seeming to run into some very weird problems with my k8s cluster, brought up with juju. I am trying to 'juju ssh ...' and I'm getting host identification errors for a couple of the workers that I was previously able to connect to. Any ideas how to clear this sort of issue?
<EdS> It mentions a file /tmp/ssh_known_hostsxxxxxxxxx but that doesn't exist! is this something to do with how juju ssh works behind the scenes?
<Stormmore> miss me!
 * Stormmore waves at catbus 
#juju 2018-01-05
<hbogert_> Anyone experience with using maas/juju/kubernetes-core ? My node with kubernetes-worker went down. Now I'm trying to add a new kubernetes worker, simply by doing: `juju add-unit kubernetes-worker` but that never finishes. Looking on the node itself it fails to start kubelet and kube-proxy because it's missing files in /root/cdk/, e.g., kubeconfig and kubeproxyconfig
<gunix> hey stokachu
<gunix> are you around ?
<gunix> we had a chat a month ago about juju, ubuntu, rancher, kubernetes and so on
<gunix> you told me to try everything else on the market and after that come back to you when i am disappointed :D
<hbogert_> gunix: well I've been there and here as well, and I'm still disappointed :P
<knobby> hbogert_: I use kubernetes-core and added a worker just as you described, but everything was cool and went well. Was the juju status stable?
<catbus_> Juju bootstrapping on a maas cloud stops at 'Running machine configuration script...' for a long time. What does it do there? Running the juju command with --debug doesn't give much info.
<gunix> hbogert_: this should be done manually
<gunix> i mean in theory it is possible, but there is no example online
<knobby> catbus_: juju debug-log --replay might help
<hbogert_> gunix: what is manually in this case?
<hbogert_> knobby: yeah it used to work for me as well. But can you confirm that you also went from zero to one worker?
<Stormmore> morning juju world!
<agprado> Morning Stormmore
<jose-phillips> hi
<jose-phillips> question
<jose-phillips> is it possible, when i use manual deployment,
<jose-phillips> to create a container on a remote machine with the network configuration
<jose-phillips> ?
<knobby> hbogert_: a good distinction. I replaced a worker, but I still had workers when I did it.
<Budgie^Smore> so we use the Gerrit Trigger plugin, is there a way of configuring that in a Jenkinsfile?
<gunix> hbogert_: manually is just manually installing every service and manually configuring every network interface
<Budgie^Smore> for the record, I am trying to use a Multibranch Declarative Pipeline that should trigger off any changeset / commit
<catbus_> knobby: juju debug-log needs to reach juju api server for info, at that point, the juju api server isn't up yet.
<catbus_> I am running juju 2.3.1 on xenial. I have never seen this 'Running machine configuration script...' before.
<jose-phillips> hi im installing openstack-dashboard
<jose-phillips> with juju but the themes are not working
<jose-phillips> because im getting 403
<jose-phillips> im installing on xenial using openstack-origin cloud:xenial-pike
<kwmonroe> jose-phillips: i haven't used openstack-dashboard, but could it be you're hitting the wrong url?  from the readme, looks like you'll need to use http(s)://<ip>/horizon.
<kwmonroe> Budgie^Smore: generically, there seems to be some 'lightweight checkout' option that looks related with the gerrit trigger plugin: https://stackoverflow.com/questions/46204276/how-to-execute-jenkinsfile-with-gerrit-trigger-plugin-for-changeset-in-jenkins
<kwmonroe> ^^ or are you asking for a way to configure plugins via the jenkins charm deployed with juju?
<koaps> hey guys, does anyone know the channel for the openstack charms?
<kwmonroe> koaps: #openstack-charms
<koaps> thanks
<kwmonroe> np
<koaps> ceph is giving me issues
<kwmonroe> not possible!
<kwmonroe> ;)
<koaps> seems like the install is broken
<koaps> it tries to use the python modules before they are installed
<Budgie^Smore> kwmonroe, no actually posted that question in the wrong channel :-/ I want to write my triggers into the Jenkinsfile as I am using Multibranch Pipelining
<Budgie^Smore> thanks for responding though
<kwmonroe> heh, no worries.  good luck with your big words!
<Budgie^Smore> Infra-as-Code FTW ;-)
<kwmonroe> jose-phillips: you disconnected at the totally wrong time :)  i sent this shortly after you dropped re your dashboard issue:
<kwmonroe> jose-phillips: i haven't used openstack-dashboard, but could it be you're hitting the wrong url?  from the readme, looks like you'll need to use http(s)://<ip>/horizon.
<kwmonroe> also jose-phillips, for the container question, you can deploy stuff to containers on remote juju units like this: juju deploy ubuntu --to lxd:0
<kwmonroe> ^^ that deploys the ubuntu charm to a container on machine 0.
#juju 2018-01-07
<bobeo> hey everyone!
<bobeo> we have a weird issue. after an abrupt power loss, the system we used to bootstrap and deploy our juju controller no longer receives any form of response from the controller. any ideas?
<thumper> bobeo: did the controller lose power at all?
<thumper> bobeo: also, which provider?
<bobeo> @thumper
<bobeo> thumper: Yes, all systems lost power, drives on some servers were swapped, a few broken, it was pretty bad.
<thumper> bobeo: which provider are we talking about?
<bobeo> thumper: its a private cloud. we use maas
<thumper> In general, a reboot of a machine shouldn't break anything
<thumper> even unplanned ones
<thumper> however, swapped drives is something a bit different
<thumper> if the controllers came up with different ip addresses, for example, that could break things
<bobeo> thumper: from what we can see, the system that originally deployed the controller is still up. we still have access to the system, and it didnt have a drive swapped thankfully
<bobeo> thumper: when it boots up, it says its starting up the juju database, which makes us think its still there, and the data and everything is still there
<bobeo> thumper: but we dont know how to resync it back, or if thats even possible.
<bobeo> thumper: sorry, the system that was originally bootstrapped up as a controller, the juju controller server, and the server used to deploy the juju controller, are both still up and accessibile
#juju 2020-01-01
<pepperhead> Anyone alive out there? Happy New Year!
<pepperhead> I am hoping someone can point me to a document that explains how to do a juju conjure-up of a single-machine Kubernetes, where the deployed containers are accessible to the host network. Too many moving parts to wrap my head around, especially with a hangover.
<pepperhead> As far as I can figure, the container is exposed to the local machine, but I can't figure out how to access that container from another machine on the host network. I have a feeling the key is in the bridge?
