#juju 2012-06-18
<m_3> lifeless: cool thanks
<twobottux> aujuju: How to SSH into local juju instance? <http://askubuntu.com/questions/152428/how-to-ssh-into-local-juju-instance>
<shang> hi, anyone knows where I can find more documents on MAAS? like BIOS update, firmware upgrade... etc
<hazmat> lifeless, i'm not really understanding what you're looking for re "so, what I'd really like is to be able to shove juju into an lxc and give it a single callback to a 127.0.0.1 only service to fire up more lxc's, that would let me run multiple juju environments"
<jml> hello all
<jml> I have a django app that I'm developing that I'd really like to get into a charm
<jcastro> jml: mhall119 is working on a django charm generator thing
<jml> jcastro: ah thanks. I was sort of hoping to get something up today. noodles pointed me at his blog post (http://micknelson.wordpress.com/2011/11/22/a-generic-juju-charm-for-django-apps/)
<jml> I'll work from that and from the bug he linked me to (https://bugs.launchpad.net/charms/+bug/1012942)
<_mup_> Bug #1012942: Charm Needed: python-django (with WSGI container) <Juju Charms Collection:New> < https://launchpad.net/bugs/1012942 >
<jcastro> http://mhall119.com/2012/06/charming-django-with-naguine/
<mhall119> jml: IMO, a single django charm won't work, there's too many different ways to use and deploy django
<mhall119> it's more of a library than a service
<jml> mhall119: yeah, that makes sense to me
<jcastro> SpamapS: ping me when you're around pls
<jml> "1) It requires Puppet, which isn't natural for a Python project" -- hah. hah. hah.
<SpamapS> jcastro: wassup
<jcastro> Hey is juju broken in quantal?
<SpamapS> Shouldn't be
<SpamapS> evidence?
<jcastro> I tried to deploy "ubuntu" but it couldn't find it in the cs:precise/ubuntu, then I realized I forgot I had moved the box to quantal
<jcastro> try "juju deploy ubuntu"
<jcastro> and juju says it can't find it in the store, it's in the browser though
<SpamapS> oh that charm is broken in the charm store
<SpamapS> We need to get access to the importer's logs
<jcastro> oh ok, so it's not just me, whew
<jcastro> SpamapS: ok so I am thinking we should link up mims' jenkins hotness in that "tools" menu on the charm browser
<negronjl> 'morning all
<bloodearnest> heya all
<bloodearnest> am playing with juju on lxc, with mysql<->wordpress<->haproxy and my haproxy isn't working
<bloodearnest> I did juju ssh haproxy/0, but I can't find where the haproxy logs are at?
<SpamapS> bloodearnest: I don't think haproxy logs much.
<bloodearnest> SpamapS, right
<SpamapS> bloodearnest: perhaps syslog?
<bloodearnest> SpamapS, nothing in there
<SpamapS> bloodearnest: I am no haproxy expert, I wrote the charm as a proof of concept.. perhaps the manual will help in this case?
<bloodearnest> also, the juju-generated stanza for proxying to wordpress has this line: server localhost localhost:80 check
<bloodearnest> which seems odd
<bloodearnest> SpamapS, ok
<bloodearnest> I get a 503 with the default config
<SpamapS> bloodearnest: yeah it looks like wordpress is using the wrong hostname
<bloodearnest> SpamapS, is that an lxc issue?
<SpamapS> relation-set port=80 hostname=`hostname -f`
<bloodearnest> (i.e. juju on lxc)
<SpamapS> no its a charm issue
<SpamapS> hostname -f is forbidden ;)
<SpamapS> should be `unit-get private-address`
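The fix SpamapS describes can be sketched as follows; `unit-get` and `relation-set` are mocked here so the snippet runs outside a real juju hook (the real tools talk to the unit agent), and the address is made up:

```shell
# Sketch only: mocks stand in for juju's hook tools.
unit-get() { echo "10.0.3.42"; }            # mock; real unit-get asks the agent
relation-set() { echo "relation-set $*"; }  # mock; real one records relation data

# Forbidden form (breaks under the local/LXC provider):
#   relation-set port=80 hostname=`hostname -f`
# Corrected form:
relation-set port=80 hostname=$(unit-get private-address)
```

The point is that `hostname -f` reports whatever the container thinks its name is, while `unit-get private-address` reports the address juju wants other units to use.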
<SpamapS> bloodearnest: I'll fix that right now
<bloodearnest> SpamapS, sweet
<SpamapS> even tho I think we're getting a totally rewritten wordpress charm this week :)
<bloodearnest> when will that be available?
<bloodearnest> SpamapS, right
<bloodearnest> FYI, canonical ISD is having a juju sprint this week, so you might get bugged a bit :)
<SpamapS> COOL
 * SpamapS shuttles the toddler off to preschool ... bbl
<newz2000> I'm with bloodearnest
<SpamapS> bloodearnest: fixed in lp:charms/wordpress
<bloodearnest> SpamapS, legend, thanks
 * mars <- afk, need another cup of tea
<newz2000> How do I know if I'm using an updated charm or not?
 * newz2000 wants to tryout SpamapS's new wordpress charm
<jml> sorry, newbie questions. I've deployed 'python-moin' to lxc on my laptop. How do I get to it?
<james_w> jml, you mean to the moin web interface?
<jml> james_w: I guess.
<jml> james_w: I really mean any actual proof of its upness
<james_w> jml, juju status should show an ip address for the "machine" it is running on, hitting that in your browser might be fruitful
<jml> james_w: it does not.
<jml>         public-address: null
<james_w> jml, what's the full juju status?
<jml> http://paste.ubuntu.com/1047479/
<james_w> jml, looks like it is still coming up
<james_w> agent-state: not-started
<james_w> agent-state: pending
<jml> oh wow, that does take a while
<jml> how can I see progress / get an eta
 * jml has taken the garbage out, posted a bunch of thank you letters, cooked and eaten some lamb kebabs, made a request for clearer requirements and checked facebook about 6 times since running 'deploy'
<hazmat> jml, ps aux
<hazmat> | grep lxc
<hazmat> jml, or check the agent log
<jml> hazmat: what agent log?
<jml> 120       1232  0.0  0.0  25964   840 ?        S    Jun17   0:00 dnsmasq -u lxc-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/lxc/dnsmasq.pid --conf-file= --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0
<jml> root     27369  0.0  0.0  21156  1032 ?        Ss   14:37   0:00 lxc-start -dn lpdev
<hazmat> jml, machine agent log
<jml> hazmat: doesn't exist.
<hazmat> hmm
 * hazmat gets out of meeting
<hazmat> jml, so on a broadband connection it shouldn't take more than 15m
<hazmat> jml, wrt to logs, they're in the $data-dir specified in environments.yaml
<hazmat> jml, could you  ps aux | grep juju | pastebinit
<jml> hazmat: http://paste.ubuntu.com/1047493/
<jml> http://paste.ubuntu.com/1047492/ is the contents of my data-dir
<hazmat> jml, huh.. did you reboot it?  the machine agent should be logging to /tmp/juju-local/jml-local/machine-agent.log
<hazmat> per its cli params
<jml> hazmat: "reboot it"? I don't know. When I ran 'juju bootstrap', that failed because the directory was owned by root. So I trashed the existing directory and ran 'juju bootstrap' again
<hazmat> jml, ah.
<hazmat> jml, you have to destroy-environment after first bootstrap failed..
<jml> on the understanding that it didn't conform with the behaviour on CharmSchool so some local thing had probably happened to screw it up, and no one would have told me to store something important in /tmp/, would they?
<jml> hazmat: ok. do I do that now? or should I trash it again?
<hazmat> jml, yeah. pls destroy-environment, and bootstrap again
<hazmat> it looks like the previous bootstrap (first) created the machine agent upstart job but never had a place to log to.. but it's odd... it shouldn't care about root perms, because it runs as root to start up the lxc
 * jml shrugs.
<newz2000> Hmm, had a working config of Wordpress + MySQL, then destroyed everything and started over, now (2x) juju deploy mysql results in agent-state: start-error
<newz2000> any idea what's going on here?
<newz2000> by everything, I mean all the services, not the environment
<bloodearnest> newz2000, did you destroy-environment?
<newz2000> no
<newz2000> should I?
<bloodearnest> dunno :)
 * newz2000 gives it a shot
<jml> ok. now I get an IP address in 'juju status', but I can't connect to it (and netstat shows no listening ports on that IP)
<jml> jcastro: hey, for my lxc experience should I just paste the backlog here? :P
<jcastro> I had a todo to do it this cycle
<jcastro> but seeing you today I decided to clear my schedule and just document all the things
<hazmat> hmm.. getting the docs reviews are proving difficult
<jcastro> it's always the easy ones
<mgz> anyone know what getting ConnectionRefusedError in watch_machine_changes on the master is a symptom of?
<mgz> <http://pastebin.ubuntu.com/1047526/>
<mgz> juju thinks it launched an instance, but actually didn't, and I don't see anything else of relevance in the juju logs
<hazmat> mgz is the service provider responding?
<hazmat> mgz,  connection refused would imply it couldn't talk to the provider api endpoint
<mgz> it might just be more borked routing in canonistack, I'll check
<jml> jcastro: thanks :)
<hazmat> mgz, just use something known working.. like hpcloud..
<hazmat> special casing canonistack because it doesn't have enough working infrastructure to resemble an openstack deployment feels like its working around the wrong problems
<newz2000> SpamapS: I think there might have been a regression in the wordpress charm that was recently updated. Now when I start an instance I get "it works!"
<mgz> bah, I know what it is... the routing actually works inside, but my local workaround is getting on to the provider and breaking it
<newz2000> SpamapS: oh, wait, i forgot to expose it
<fugue88> newz2000: That's the wp-config.php thing I mentioned.
<fugue88> fugue88: So, maybe a problem with the wordpress charm not expecting to have different hostnames passed into it.
<newz2000> SpamapS oh, that didn't change it. Still "it works!"
<fugue88> But not a problem with the haproxy charm.
<newz2000> fugue88: yes, but I'm just hitting the wordpress IP
<newz2000> not using haproxy or anything yet
<mgz> hazmat: where does the conf get stored on the master?
<hazmat> mgz, conf?
<mgz> hazmat: as in, it's not in /home/ubuntu/.juju/environments.yaml but was serialised and passed across somewhere
<hazmat> mgz its in zk
<mgz> dammit :)
<hazmat> mgz /usr/share/zookeeper/bin/zkCli.sh
<hazmat> mgz interactive console ls /    .. get /environment
<mgz> ta, poking.
<hamptonpaulk> ping SpamapS
<jml> mhall119: why does naguine think my django project uses sqlite3?
 * SpamapS is back
<SpamapS> newz2000: the wordpress charm is broken w.r.t. hostnames
<SpamapS> newz2000: I thought I had fixed it so the "default" host was wordpress tho
<mgz> well this is daft, java.lang.NumberFormatException on setting any string with a space in it...
<newz2000> SpamapS: I see! I think this is somewhat inherited from the debian package
<mgz> must be a way of passing content of a file
<newz2000> SpamapS I think I've foobar'd my juju setup so take my current feedback with a grain of salt.
<SpamapS> newz2000: well when in doubt, destroy and re-deploy :)
<newz2000> SpamapS: I've just done that and now I can't successfully deploy mysql. Gonna do dist-upgrade and reboot. Then I'll test and get back to you.
<hazmat> jcastro, fixed re docs in review queue
<jcastro> <3
<hazmat> although the display could use some work
<jcastro> oh, those are core branches
<hazmat> jcastro, no, they're doc branches
<hazmat> just displayed badly
<jcastro> oh
<jcastro> oh nice, so really, more good targets then
<hamptonpaulk> ping SpamapS: I believe I am having an issue with https://bugs.launchpad.net/juju/+bug/920454 - is there currently a workaround of any sort?
<_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:Confirmed> < https://launchpad.net/bugs/920454 >
<hazmat> jcastro, fixed
<mgz> sorry to everyone who's about to get a very large diff in email.
<bloodearnest> SpamapS, I think the issue w/wordpress is when putting haproxy in front - forwarded hostname is different so fails to find the right config in /etc/wordpress
<marcoceppi> bloodearnest: that is a known issue with the current charm
<hazmat> mgz, woot!
<bloodearnest> marcoceppi, k thanks, got a bug I can watch?
<marcoceppi> let me find it
<jcastro> ooh happy, there's the openstack provider incoming, woo!
<robbiew> woot!
<mgz> :)
<SpamapS> marcoceppi: are you done w/ the new one yet?
<marcoceppi> SpamapS: not yet, should be done with it this week
<SpamapS> marcoceppi: let me rephrase that. Does the new one work for even the most basic use case yet?
<mhall119> jml: does it say it uses sqlite in your settings.py?
<SpamapS> marcoceppi: at this point, I'd rather have "not done but actually works sort of" over "thing that basically doesn't work at all"
<marcoceppi> SpamapS: It's not at the point where I'd feel comfortable having people use it. A lot of stuff is still being shuffled around in it
<marcoceppi> and the subordinate isn't even anywhere near operational yet
<jml> mhall119: oh right. I thought I'd blatted over that, but I was getting confused with a different project.
<SpamapS> marcoceppi: mmk. :-/
<marcoceppi> yeah
<bloodearnest> SpamapS, I think your earlier change fixed this bug - or at least it did for me https://bugs.launchpad.net/charms/+source/wordpress/+bug/903312
<_mup_> Bug #903312: Inocorrect Server is specified in Haproxy when using haproxy with Local Provider <wordpress (Juju Charms Collection):New> < https://launchpad.net/bugs/903312 >
<SpamapS> bloodearnest: right. Not sure if that will fix the haproxy issue
<SpamapS> bloodearnest: if it does though, huzzah
<bloodearnest> SpamapS, the haproxy works correctly, but the wordpress doesn't :)
<SpamapS> right
<SpamapS> man.. the old wordpress charm kind of sucks (no offense to the authors). I think it might have been *the first charm ever written* .. but.. we've learned so much since then
<bloodearnest> marcoceppi, is this the bug in question? https://bugs.launchpad.net/charms/+source/wordpress/+bug/958204
<_mup_> Bug #958204: Apache doesn't know about the server's private address <wordpress (Juju Charms Collection):Confirmed> < https://launchpad.net/bugs/958204 >
<marcoceppi> bloodearnest: that's it
 * SpamapS does a little cleaning up even tho marco is days away from fixes
<hamptonpaulk> SpamapS: did you happen to see the above question about Bug #920454? Switching away from vmware for the time being, just would love any input you may have.
<_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:Confirmed> < https://launchpad.net/bugs/920454 >
<bloodearnest> marcoceppi, kk, thx
<bloodearnest> SpamapS, is "what you've learned since then" documented anywhere?
<bloodearnest> :)
<marcoceppi> bloodearnest: there are a series of posts that outline what we've learned (and it was oh so much) but the best documentation is the next release of the charm
<SpamapS> bloodearnest: sadly no, we have a lot of work to document those best practices
<SpamapS> marcoceppi: I'm not even talking about that
<SpamapS> just basic stuff
<SpamapS> like what to do in install vs. config-changed
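That install-vs-config-changed split can be sketched roughly like this; it is not from any real charm, and the package and file names are made up. `install` does one-time setup, while `config-changed` can run many times and so must be idempotent (`config-get` is mocked so the sketch runs outside juju):

```shell
# Illustrative hook bodies; hook tools mocked for standalone use.
config-get() { echo "8080"; }   # mock of juju's config-get hook tool

hook_install() {
    # one-time setup: packages, users, directories
    apt-get install -y example-app      # "example-app" is illustrative
}

hook_config_changed() {
    # idempotent: regenerate config from current settings, then restart
    echo "listen_port=$(config-get port)" > /tmp/example-app.conf
    echo "would restart example-app"
}
```

Running `hook_config_changed` twice leaves the same config behind, which is exactly the property config-changed needs.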
<marcoceppi> SpamapS: oh yeah, sadly there isn't a like charming 101 kind of post
<bloodearnest> marcoceppi, is the next version available for instruction purposes?
<SpamapS> we don't need posts
<SpamapS> we need books really
<SpamapS> We should mirror the upstart cookbook
<SpamapS> or rather, follow its model
<marcoceppi> the upstart cookbook is huge though
<SpamapS> If I want to do X, do it this way. If I want to do Y, do it that way.
<marcoceppi> and I found it difficult to navigate btw
<SpamapS> Hrm thats a bummer.
<marcoceppi> bloodearnest: it's really not ready yet (to the point that it doesn't run at all without lots of interventions) but that's because it's mid-refactor
<marcoceppi> SpamapS: I agree that there should be a resource for this though
<marcoceppi> I really like how Go does their intro to Go http://tour.golang.org/#1
<marcoceppi> Not sure how it could be adapted to Charming though
<bloodearnest> marcoceppi, kk - we're going to be writing a bunch of charms in order to a) learn how to and b) hopefully charmify Ubuntu SSO/Pay  so that kinda stuff (best practice, what to do where) would be useful
<SpamapS> marcoceppi: a tour would actually be fantastic
<SpamapS> We kind of do it for juju usage in all our talks
<marcoceppi> I think a tour for juju would be boss
<SpamapS> hmmmm whats this then https://code.launchpad.net/~gz/juju/openstack_provider/+merge/110860
<marcoceppi> but since charms are all so unique - it's hard to be like HERE'S A TOUR OF CHARMING, but it's in bash - or it's all in python - or it's all in C#
<marcoceppi> SpamapS: Oh, hello there
<SpamapS> marcoceppi: we're like construction workers who just saw Charlize Theron walk by in a sun dress.. ;)
 * marcoceppi whistles loudly
 * SpamapS gets tired of chasing local provider fail and goes back to EC2.. again
 * avoine can't wait to test the new OpenStack provider
<jcastro> SpamapS: heh ... yeah
<SpamapS> avoine:  should be able to test it just by branching that
<avoine> I'm looking at the diff now
<avoine> I'll do that
<jcastro> negronjl: mira, around today?
<negronjl> jcastro: here ... working on the Structure/MaaS/Juju thing
<jcastro> SpamapS: hey so we were missing the docs submissions in the queue, hazmat fixed it, check out the review queue now!
<jcastro> negronjl: oh, so not a good time to ask about graphs then
<SpamapS> /tmp/juju-createRVPUAG: line 28: /etc/resolvconf/run/resolv.conf: No such file or directory
<negronjl> jcastro:  lol ... not yet ...
<SpamapS> anybody else see this while trying to use local provider?
<jcastro> negronjl: we should G+ with m3 later today though, talk velocity?
<SpamapS> oh wait, n/m .. needed latest juju
<negronjl> jcastro:  sure ...
<negronjl> jcastro: let me know when  .... I also have the Ubuntu Developer Contributor thing today in about 1 hour and 15 minutes or so
<jcastro> oh ok
<jcastro> we can punt to tomorrow if you want
<negronjl> jcastro: better
<SpamapS> kees: FYI, just merged all of lp:~kees/charms/precise/mumble-server/trunk into lp:charms/mumble-server. Thanks for adding maintainer. :)
<hazmat> jcastro, there mouseovers for the origin branch name fwiw on the merges
<hazmat> argh. their
<hazmat> they're
 * hazmat finds a new language
<kees> SpamapS: thanks! I've got an update for my 'sbuild' charm as well, if you want to snag that too.
<SpamapS> kees: \o/
<SpamapS> I think we're down to 20 that need maintainers
 * SpamapS needs to pick that torch back up.. 
<kees> SpamapS: note that the sbuild charm is more than just maintainer addition -- it's a refresh for precise, and makes the config control better.
<SpamapS> kees: excellent. I tend to just accept anything from maintainers that doesn't introduce malware. :)
<kees> SpamapS: \o/
<SpamapS> I figure if you're willing to put your name on it, you are willing to hear from angry users :)
<kees> hehe
<kees> SpamapS: I did it as a normal bug rather than a merge request because I didn't realize the oneiric charms all got moved forward to precise
<SpamapS> kees: yeah we're still figuring out how thats going to work long term anyway.
<SpamapS> Looking more and more like it will work a lot like the distro, except with aggressive backporting for new stuff.
<kees> SpamapS: cool
<SpamapS> hazmat: trying to test out local provider + proposed .. seems like juju-create is getting cached somewhere
<SpamapS> hazmat: any ideas where that might be?
<hazmat> SpamapS, its the first container in the env
<hazmat> SpamapS, lxc-ls
<SpamapS> hazmat: destroyed and bootstrapped.. should be gone right?
<hazmat> yes
<pindonga> SpamapS, hi there, so I'm playing a bit with juju, done the basic wordpress stuff bla bla
<pindonga> so next I added an haproxy in front of the wordpress instance (so I can later on add more units there)
<pindonga> and I know wordpress charm doesn't play very nicely along with the haproxy one
<pindonga> and I know you guys are about to rewrite everything , so this is not a support request :)
<pindonga> except for some help in understanding the charm
<SpamapS> pindonga: sure, I'm looking at it right now also so.. :-P
<pindonga> I see haproxy requires the other service to provide a reverseproxy relation
<pindonga> which wordpress doesn't do
<pindonga> so I was trying to make wordpress expose such a relation
<pindonga> for that I just need to edit the metadata and provide a reverseproxy-relation-{joined,changed} hooks right?
<pindonga> in the wordpress charm
<SpamapS> err
<SpamapS> no you' have it backwards
<pindonga> ah :)
<pindonga> hence the question
<SpamapS> haproxy requires the other service to have an *http* relation
<SpamapS> which nearly all the webapps in the charm store do
<SpamapS> the name, reverseproxy, is irrelevant outside the hooks dir
<pindonga> ah, and it names that http relation 'reverseproxy'
<SpamapS> well irrelevant to the other side I should say
<pindonga> k
<SpamapS> pindonga: haproxy names it reverseproxy. Most charms name the provider side 'website'
<pindonga> kk
<SpamapS> that interface.. http.. is very "proof of concept" .. we need to work out something better
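The naming SpamapS describes can be sketched as two metadata fragments (written via heredocs so the sketch is self-contained; the contents are illustrative, not copied from the real charms). The relation *names* differ on each side, but both declare the same `http` interface, which is all that has to match:

```shell
# haproxy side: requires an http interface it locally calls "reverseproxy"
cat > /tmp/metadata-haproxy.yaml <<'EOF'
requires:
  reverseproxy:        # haproxy's local name for the relation
    interface: http
EOF

# wordpress side: provides the same interface under the name "website"
cat > /tmp/metadata-wordpress.yaml <<'EOF'
provides:
  website:             # wordpress's local name for the same interface
    interface: http
EOF
```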
<pindonga> so, the idea I had was that when the relation gets established we just symlink /etc/wordpress/config-<haproxy-ip>.php -> /etc/wordpress/config-<local ip>.php
<pindonga> that would make it work (for my purposes)
<pindonga> so I need to add that to the "something"-relation-{joined,changed} hook on wordpress right?
<SpamapS> pindonga: I believe there's a "default" possible as well
<SpamapS> pindonga: which is what should be done
<SpamapS> pindonga: but yes for your purposes.. I'd add that link wherever the main config-... is made
<pindonga> I am just checking
<pindonga> the main config is created during db-relation-changed
<pindonga> however that's not where this should happen
<lifeless> hazmat: I'm here if you want me to expand on what I said
<pindonga> the extra symlink would then happen when website-relation-changed probably
<pindonga> SpamapS, thanks, I'll try it out and let you know
<SpamapS> pindonga: yes that would work
<SpamapS> pindonga: website-relation-joined would be fine actually
<SpamapS> pindonga: you'll have the public-address at that point
<SpamapS> which is what you want
<pindonga> yep,
<SpamapS> tho I think changed is "more correct"
<SpamapS> since in the future, that may change
<pindonga> it needs to happen in both
<pindonga> probably
<SpamapS> pindonga: changed is always called once after joined no matter what
<SpamapS> so, just in changed is appropriate
<pindonga> ah, good to know
<pindonga> is that documented in the hooks page?
<SpamapS> pindonga: https://juju.ubuntu.com/docs/charm.html#hooks
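pindonga's symlink idea, written up as a hedged sketch of a `website-relation-changed` hook: `unit-get` and `relation-get` are mocked, the config directory is a temp stand-in for `/etc/wordpress`, and all addresses are made up.

```shell
# Sketch of website-relation-changed; hook tools mocked for standalone use.
unit-get()     { echo "10.0.3.5"; }   # mock: this unit's private address
relation-get() { echo "10.0.3.7"; }   # mock: remote (haproxy) unit's address

CONFIG_DIR=/tmp/etc-wordpress          # stand-in for /etc/wordpress
mkdir -p "$CONFIG_DIR"

local_ip=$(unit-get private-address)
proxy_ip=$(relation-get private-address)

# config-<local ip>.php was created earlier by db-relation-changed:
touch "$CONFIG_DIR/config-$local_ip.php"
# make requests arriving under the proxied hostname find the same config:
ln -sf "config-$local_ip.php" "$CONFIG_DIR/config-$proxy_ip.php"
```

Putting this in `-changed` rather than `-joined` follows SpamapS's point above: changed is always called once after joined, and may be called again if the remote address changes.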
<pindonga> yep, thx
<hazmat> lifeless, awesome
<hazmat> lifeless, so  i saw rabbitmq queues and pika clients to juju..
<lifeless> hazmat: huh... ?
<hazmat> lifeless, sorry.. crossed the streams that was ted
<hazmat> lifeless, what are you trying to do?
<lifeless> me, I had a chat with m_3 and filed a bug as a result, about juju's degree of entanglement with my machine, for the 'local' provider.
<lifeless> its much more intrusive than I had anticipated
<hazmat> lifeless, the upstart job, and the libvirt network?
<lifeless> to the extent it won't work on half my machines here, and will duplicate basic functionality like my squid cache
<lifeless> hazmat: running apt-cacher-ng, running zookeeper
<lifeless> hazmat: the latter, AIUI, constraining me to one local environment at a time
<hazmat> lifeless, it doesn't
<hazmat> lifeless, we run each env's zk on an offset port
<hazmat> er. random port
<hazmat> listening on the bridge i believe
<lifeless> ah, well thats something; however my lxc bridge is bridged to my LAN :)
<SpamapS> it uses the libvirt default network
<SpamapS> not the lxcbr
<lifeless> sorry for being imprecise.
<lifeless> I have a grunty workstation. its what I run $stuff on.
<lifeless> to make getting at its kvms and lxcs easy, its libvirt bridge is my LAN.
<SpamapS> yeah, thats a bug that needs fixing for sure.. should allow an option to override the default network
<lifeless> how is the traffic securing work coming? I presume its trusted-traffic at the moment still ?
<hazmat> lifeless, demo of multiple local envs.. fwiw http://paste.ubuntu.com/1047850/
<hazmat> lifeless, so transport level nothing yet
<hazmat> lifeless, securing zk storage, i've hit a libzk issue, i need to debug, getting some deadlocks
<hazmat> re transport i don't see much outside of stunnel  or a twisted tls proxy.
<hazmat> lifeless, that's wrt to zk communication.. machine level security we need to switch out to firewalls. etc
<lifeless> hah https://issues.apache.org/jira/browse/ZOOKEEPER-235
<SpamapS> hazmat: heh, figured out my proposed+local issue.. r542 incoming. ;)
<lifeless> :( sadface.
<hazmat> lifeless, yeah..
<hazmat> lifeless, there are people who run it with stunnel over untrusted networks.. but its a serious issue.
<lifeless> and yay, dup : https://issues.apache.org/jira/browse/ZOOKEEPER-1000
<lifeless> hazmat: I agree :>
<hazmat> lifeless, atm, most of the developer bodies are on gojuju
<lifeless> hazmat: so, what does juju bootstrap locally do - adds an upstart job which runs apt-cacher-ng + zk, creates a single seed lxc node ?
<pindonga> SpamapS, if you don't mind one more question, I'm trying to debug the relation-changed hook... I exported the environment settings (JUJU_UNIT_NAME, JUJU_RELATION, JUJU_REMOTE_UNIT) but it's saying JUJU_AGENT_SOCKET is not set... how can I debug the relation-changed hook?
<hazmat> lifeless, install apt-cacher-ng, upstart job for machine agent, and upstart job for a simple file storage
<SpamapS> pindonga: juju debug-hooks servicename/# is the best way
<pindonga> SpamapS, I'm in there
<lifeless> hazmat: any way to tell it to shove off w.r.t. apt-cacher-ng ? I have caching already :)
<hazmat> lifeless, on adding the first service, it creates an lxc container that it customizes and uses as a template for deploying units
<pindonga> but now I went to where the hook is stored, and run it (following the tutorial)
<pindonga> however I get these errors
<hazmat> lifeless, atm no.
<pindonga> while it tries to run unit-get and relation-get
<lifeless> hazmat: and there is something that restarts units on laptop reboot, I believe? Where does that live ?
<hazmat> lifeless, there is not .. local envs are toast on restart
<hazmat> local provider has not seen much love
<lifeless> hazmat: toast == not running or toast == deleted ?
<hazmat> lifeless, not running
<lifeless> hazmat: how do you cleanup such an environment?
<lifeless> or restart it ?
<SpamapS> pindonga: no you have to wait till the hook is executed, you can't just force it
<hazmat> lifeless, juju destroy-environment
<SpamapS> pindonga: remove/add the relation
<SpamapS> pindonga: you'll get a new tmux window per hook execution
<lifeless> hazmat: is it possible to restart it ?
<pindonga> SpamapS, so before I do that: I run debug-hooks and it opened the ssh session to my unit
<pindonga> that's ok?
<pindonga> SpamapS, now I remove/add the relation, yes?
<SpamapS> pindonga: that's exactly right
<pindonga> SpamapS, thx, sorry for the noise
<pindonga> :)
<SpamapS> pindonga: no, thanks for the noise. :)
<hazmat> lifeless, you'd have to start zk, and the containers
<hazmat> the zk is not upstartified
<hazmat> neither are the env containers
<pindonga> SpamapS, didn't do anything afaict
<pindonga> just did juju remove-relation but it finished
<pindonga> same for add-relation
<pindonga> or do I have to run these from within the unit I got opened via debug-hooks?
<lifeless> so, the heart of my bug was, I guess, that we're special casing LXC more than we need to - doing a -tiny- cloud api for local use should be doable as a standalone project, and that would let 'local' have nothing special to do.
<hazmat> lifeless, agreed..
<lifeless> Said cloud API could remember state like 'this lxc was running' as well.
<hazmat> lifeless, SpamapS made the original lxc provider that way
<hazmat> but it didn't fly for what are at this point histerical raisons
<lifeless> so local juju config would become, install lxc-cloud-api, create a user, put user details in environments.yaml, profit. Install a cache and configure that in environments.yaml if you need one.
<hazmat> the other delta for the local provider is that its too intimate with its customization of the container
<hazmat> it should use cloud init and cloud-images ala every other provider
<hazmat> that's what lp:~hazmat/juju/local-cloud-img is a start on (the latter)
<lifeless> yeah
<SpamapS> pindonga: when the window opens, you have to run the hook yes
<lifeless> is it worth someone writing such a local cloud API ?
<SpamapS> pindonga: the idea is that you can run it with debuggers or strace or whatever
<pindonga> SpamapS, ah, sorry, I think I start to understand
<pindonga> wasn't seeing the fact that it's a screen session
<SpamapS> or just run it over and over fixing it :)
<pindonga> and it's opening up more tabs
<hazmat> lifeless, hmm.. interesting as a separate project?
<hazmat> or as a new local provider impl?
<lifeless> separate project.
<lifeless> It clearly has nothing to do with juju.
<lifeless> arguably its related to libvirt in fact
<SpamapS> libvirt has lxc
<lifeless> yes
<lifeless> I don't think libvirt has a securable HTTP(s) interface though, no ?
<hazmat> lifeless, it has a net api
<SpamapS> sure, its called OpenStack :)
<hazmat> that too ;-)
 * m_3 would love to see a libvirt provider
<SpamapS> a bit heavy tho
<pindonga> SpamapS, , now when I run the add-relation command, I see this in the debug-hooks window: [wordpress 0:bash- 1:website-relation-broken*   ...
<lifeless> SpamapS: 'bit'.
<pindonga> SpamapS, however I can't change tabs, like with pure screen (ctrl-a 0, ctrl-a 1)
<marcoceppi> So now that Openstack provider support is almost done, when are we going to see Azure :)
<SpamapS> pindonga: if it doesn't exist, you can just exit the shell
<hazmat> lifeless, i'd eschew libvirt.. seems reasonable though
<SpamapS> pindonga: its not screen, it is byobu
<lifeless> SpamapS: thats like 'I had a 'bit' too much to drink at UDS'
<hazmat> marcoceppi, tbd, but don't hold your breath
<SpamapS> pindonga: and I think its tmux byobu
<marcoceppi> hazmat: ;)
<pindonga> ah, now I exited that and it came up again with website-relation-joined (so this was actually queued up waiting for the other hook)
<pindonga> SpamapS, k, starts to make sense (a bit weird, but progress :))
<SpamapS> lifeless: well we did do the hat, and the nose.. but she has got a wart..
<lifeless> :>
<lifeless> SpamapS: seriously though, would running nova + swift locally really be appropriate here? Seems like they have massive overkill.
<SpamapS> lifeless: they do. You'd have to spin up a container just for them.
<lifeless> even if nova grows lxc backend support, you're looking at a pretty tall stack once you bring up its message bus, authentication etc.
<SpamapS> nova has lxc support already
<lifeless> ok, nice to know.
<SpamapS> nova also has grown 0mq support, so rabbit can be disposed of at least
<SpamapS> and sqlite can be used for the db
<lifeless> 0mq is still a queue :P
<SpamapS> lifeless: but its a peer to peer queue, and in a local setting can be defaulted to localhost :)
<lifeless> SpamapS: I'm not sure if you're saying 'its a good idea' or 'technically feasible but I still wouldn't like to do it'
<SpamapS> I'm hashing it out in my head, and dumping the results in here
<SpamapS> no goal
<lifeless> fair enough
<lifeless> While you do that, I'm going to go get cynthia out of bed
<SpamapS> I should probably eat actually. :)
 * SpamapS goes looking for the Truck Norris gourmet burger truck
<SpamapS> they're spicy.. just a little KICK
<negronjl> SpamapS: what happened to that diet thing you were going for ? :)
<m_3> gourmet burgers are diet... right?  huh?
<lifeless> negronjl: did you get hold of moser ?
<lifeless> (and good morning :P)
<negronjl> lifeless:  yup ... I've been working with a few guys on this already ..
<negronjl> lifeless:  I'm at #ubuntu-meeting atm ... trying to get Ubuntu Contributing Developer ... brb
<negronjl> lol ... denied Ubuntu Contributing Developer .... AGAIN
<negronjl> I must really suck at this
<m_3> negronjl: no way!
<negronjl> m_3: yup ... denied
<m_3> dang, I was thinking of going up before long... maybe I'll rethink that
<negronjl> My juju contributions don't count
<negronjl> this is the second time i try this ... no more.
<m_3> man, that sucks
<pindonga> is it possible to call upgrade-charm using a different repository than the one you used for the original deploy?
<pindonga> I get an error when doing so
<pindonga> I've installed a charm from the charm store, and now I want to upgrade it from a local repo
<pindonga> for toying around
<SpamapS> pindonga: heh, there's a bug for that
<SpamapS> pindonga: IMO the charm store is inherently broken until we have a 'switch-charm' command
<hazmat> jimbaker, fwiw you've got a couple of approved doc merge proposals that have been sitting around for  a while
<hazmat> http://jujucharms.com/review-queue
<pindonga> SpamapS, so you actually recommend deploying everything from a local repo?
<pindonga> is there a workaround for that bug, or do I have to recreate my instances from a local charm repo?
<pindonga> and is it possible to add the local repo as the default option so I don't have to constantly pass in the --repository option?
<SpamapS> pindonga: I do
<SpamapS> pindonga: export JUJU_REPOSITORY=/home/you/charms :)
<SpamapS> pindonga: you can't convert a service unit from one charm to the other, but if you're using ec2, you can destroy the service, and re-deploy onto the same machines. However, some charms will break on that.
<pindonga> SpamapS, I'm doing it locally, and have just done that
<pindonga> :)
<pindonga> thx for the env var
<pindonga> the only problem with local is that it's slow :) (on my machine)
<pindonga> but other than that it works , which is already a long way to go
<SpamapS> pindonga: I agree.. SSD helps a lot
<pindonga> I'm actually cheating against myself...
<pindonga> running juju inside of a qemu-kvm vm deploying to lxc instances inside of that, and all of that on a laptop 7200rpm disk
<pindonga> can't expect it to be too fast :)
<SpamapS> yeah even natively it will suck though ;)
<pindonga> so, out of curiosity, what are the plans to support multiple services per host machine?
<jimbaker> hazmat, true, i will move those into an appropriate specs directory in the docs
<SpamapS> I'm not sure "specs" is right
<SpamapS> Perhaps just headers saying that a spec is unimplemented
<jimbaker> SpamapS, but these specs are implemented in this case
<SpamapS> jimbaker: well in that case its just regular documentation
<jimbaker> SpamapS, they just are not great for actual docs
<SpamapS> jimbaker: whats the point of them then?
<jimbaker> SpamapS, they do describe the implemented behavior. but they are not user friendly imho. on the other hand, maybe they are the first pass of what should be cleaned up for that purpose
<jimbaker> SpamapS, however we do have some other internals oriented docs in juju docs, assuming they're still around
<SpamapS> jimbaker: users need to have defined behavior before they need to have easy to read behavior definitions :)
<jimbaker> SpamapS, right :)
<SpamapS> Ok, I just pushed a fix to wordpress in the charm store which makes it the only thing on port 80
<SpamapS> which, consequently, makes it work w/ haproxy
<SpamapS> marcoceppi: no, I dare you to make that fix irrelevant :)
<pindonga> SpamapS, cool, I'll try it out
<pindonga> however I'll first make sure I get it working locally via another way (to learn charms better)
<SpamapS> make sure you have bzr rev54, or charm rev 34
<marcoceppi> SpamapS: well...it technically won't work with multisite, but since you can't really expose/configure that with the current charm I guess that makes it okay :P
<SpamapS> marcoceppi: right, multi-site is a whole other thing
<marcoceppi> SpamapS: new charm will support it, with site-specific subordinates
<SpamapS> marcoceppi: thats great... like I said, make my change irrelevant :)
<marcoceppi> It'll be hard, but I'll try not to :P
<pindonga> SpamapS, final question of the day
<pindonga> so my hook is trying to do a sed on an apache config file, however, from the hook itself I cannot read that file, but in the shell from debug hooks I can
<pindonga> ie, ls -l /etc/apache/sites-enabled shows the file, but doing cd /etc/apache/sites-enabled within the hook gives me an error
<SpamapS> pindonga: thats a bit odd, but is it possible you're trying to edit it before apache is installed?
<SpamapS> pindonga: also it should be /etc/apache2
<pindonga> SpamapS, no, it was me not reading carefully enough
<pindonga> :/
<pindonga> sorry, and thanks
<pindonga> it worked now
<pindonga> s/apache/apache2/
<pindonga> cool, my charm now works with haproxy \o/
<pindonga> SpamapS, what I did was to add a website-relation-changed hook and inside there I do 3 things
<pindonga> 1. sed the apache config to include ServerAlias ${remote_ip}
<pindonga> 2. ln -s /etc/wordpress/config-${local_ip}.php /etc/wordpress/config-${remote_ip}.php
<pindonga> 3. apache2ctl graceful
<pindonga> how does that sound?
<pindonga> that way you still can keep the wordpress stuff in a virtualhost entry
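The three hook steps pindonga lists can be sketched as a small shell function. Everything here is hypothetical: in the real hook, `local_ip`/`remote_ip` would come from `unit-get private-address` and `relation-get`, and the vhost/config paths would be the actual apache2/wordpress ones; they are parameters here so the file-editing logic can be exercised outside juju.

```shell
#!/bin/sh
# Hypothetical sketch of pindonga's website-relation-changed steps.
add_site_alias() {
    vhost=$1; confdir=$2; local_ip=$3; remote_ip=$4
    # 1. sed the apache config to include ServerAlias ${remote_ip}
    sed -i "s/ServerName ${local_ip}/&\\n    ServerAlias ${remote_ip}/" "$vhost"
    # 2. ln -s the existing wordpress config to one for the remote address
    ln -sf "${confdir}/config-${local_ip}.php" "${confdir}/config-${remote_ip}.php"
    # 3. (in the real hook) reload apache without dropping connections:
    #    apache2ctl graceful
}
```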
<SpamapS> pindonga: thats pretty cool :)
<pindonga> SpamapS, should I submit my charm for review somewhere?
<pindonga> :)
<SpamapS> pindonga: since marcoceppi is totally rewriting the whole thing.. probably not. But I really do appreciate the effort. :)
<pindonga> on the contrary, it was a really useful learning experience
<pindonga> thanks for all the help
<SpamapS> marcoceppi: hey, while you've been redoing wordpress.. have you come up with any insight on whether the endpoint host would be better off configured on the load balancers or on the app servers?
#juju 2012-06-19
<lifeless> SpamapS: around ?
<lifeless> SpamapS: can you perhaps lurk in #maas, plenty of cross over i suspect, and we get lots of juju q's in there.
<EvilMog> I have officially figured out the killer Juju app, although I doubt I can get the package maintainers to make one
<EvilMog> John the ripper + MPICH2
<EvilMog> build a charm for that and you'd have an instant password cracking cluster, and every infosec junky would have it in their private cloud
<SpamapS> EvilMog: I'll take that challenge
<EvilMog> that would be awesome
<SpamapS> EvilMog: they're both packaged, so thats at least easy
<EvilMog> yup
<EvilMog> you'll want to use the latest jtr source
<EvilMog> the packaged one doesn't have mpich support
<EvilMog> need to edit the makefile and uncomment a few lines to add it in
<EvilMog> actually I'll be honest I'd even be happy with an auto mpich setup and pre-setup the hosts files and node information for mpich2
<SpamapS> EvilMog: why isn't the latest one packaged?
<EvilMog> because it changes so often
<SpamapS> ah so you're just one of those devs who thinks "latest" == "best" ;)
<EvilMog> what you'd want is the latest code-train with unofficial jumbo-5 patch
<EvilMog> theres 2 releases, theres the stable but missing tons of features, then theres the jumbo patch which supports MPI and way more password formats
<EvilMog> I'm just a heavy john user, and virtually every cluster I've built has used the latest code train with most recent jumbo release
<EvilMog> this also greatly improves crack performance
<SpamapS> EvilMog: it just baffles my mind that an OS released < 2 months ago would have something so out of date
<EvilMog> the debian base package uses the stable tree
<SpamapS> EvilMog: sounds to me like john+mpi is a fork
<EvilMog> its not
<EvilMog> its an official build
<EvilMog> theres 2 releases stable and unstable
<EvilMog> the debian base package uses stable
<SpamapS> I only see "official free" and "community enhanced"
<EvilMog> same thing
<EvilMog> official free = stable
<EvilMog> which is whats packaged
<EvilMog> community enhanced is where the good stuff is
<EvilMog> including all the performance enhancements
<SpamapS> Ok https://launchpad.net/~pi-rho/+archive/security seems to have jumbo-5 packaged
<EvilMog> nice
<EvilMog> it never used to
<SpamapS> just have to copy that source package and verify the sources
<EvilMog> so the only thing you need to do then is verify they compiled it with mpich support
<EvilMog> http://openwall.info/wiki/john/parallelization
<EvilMog> theres 2 lines that need to be uncommented in the make file
<SpamapS> https://launchpadlibrarian.net/107245212/buildlog_ubuntu-precise-amd64.john_1.7.9-jumbo-5-1ubuntu1~ppa2~p_BUILDING.txt.gz
<EvilMog> CC = mpicc -DHAVE_MPI -DJOHN_MPI_BARRIER -DJOHN_MPI_ABORT
<EvilMog> MPIOBJ = john-mpi.o
<SpamapS> no mention of mpich
<imbrandon> morning
<SpamapS> DEB_MAKE_EXTRA_ARGS := SYSTEMWIDEFLAGS="-DJOHN_SYSTEMWIDE=1 -DJOHN_SYSTEMWIDE_EXEC=/usr/lib/john"
<SpamapS> EvilMog: looks like it will need some work
<SpamapS> imbrandon: alo
<EvilMog> yeah, I'm working on a build script
<EvilMog> thinking wget the source, apt-get install the pre-reqs
<SpamapS> imbrandon: You'll be excited to know I'm uploading PHP 5.4.4 to quantal right now
<EvilMog> find and edit makefile
<imbrandon> sweet
<EvilMog> then make and install package
<EvilMog> so if the mpich2 environment is pre-setup I can just call the build script as a post install
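The build steps EvilMog outlines hinge on uncommenting the two MPI lines he quotes above (`CC = mpicc ...` and `MPIOBJ = john-mpi.o`). A hedged sketch of that tweak, with the surrounding steps left as comments because the exact release tarball isn't fixed here:

```shell
#!/bin/sh
# Sketch of the Makefile tweak for building john with mpich support.
# Fetch/install steps are illustrative only:
#   wget <latest jumbo source tarball from openwall>
#   sudo apt-get install build-essential libssl-dev mpich2

enable_mpi() {
    # strip the leading '#' from the commented-out MPI lines
    sed -i -e 's/^#\(CC = mpicc\)/\1/' \
           -e 's/^#\(MPIOBJ = john-mpi\.o\)/\1/' "$1"
}

# then, roughly:
#   enable_mpi src/Makefile
#   make -C src clean linux-x86-64
```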
<imbrandon> gonna try for backport ?
<imbrandon> ahhaha
<SpamapS> imbrandon: you already know what the problem will be
<imbrandon> yup
<SpamapS> EvilMog: *edit* the makefile?
<SpamapS> EvilMog: no way to configure it w/ cmdline args?
<SpamapS> -CFLAGS = -c -Wall -O2 -fomit-frame-pointer -I/usr/local/include $(OMPFLAGS)
<SpamapS> +CFLAGS = -c -Wall -O2 -fomit-frame-pointer -fPIC -I/usr/local/include $(OMPFLAGS) $(SYSTEMWIDEFLAGS)
<SpamapS> thats the diff the package has
<imbrandon> SpamapS: is there a little SpamapS yet ? hehe
<SpamapS> imbrandon: 2 weeks old now
<imbrandon> oh wow, i'm so behind
<imbrandon> Congrats man
<SpamapS> imbrandon: and this would be the 3rd child process I've forked
<EvilMog> that works too
<imbrandon> :)
<EvilMog> I'm just a very hacky analyst who does things the hard way out of ignorance
<EvilMog> http://download.openwall.net/pub/projects/john/contrib/mpi/John_the_Ripper_on_a_Ubuntu_10.04_MPI_Cluster.pdf
<SpamapS> EvilMog: the GPG key that openwall uses is pretty crusty
<EvilMog> if that helps
<EvilMog> more than likely
<SpamapS> pub   1024R/295029F1 1999-09-13
<EvilMog> ick
<SpamapS> yeah its like they don't care
<imbrandon> 1999
<EvilMog> you'd think the authors of hash cracking software would give a damn :P
<SpamapS> EvilMog: actually I'd think the authors of hash cracking software would run a single evil mirror just to see if anybody checked
<EvilMog> sadly though I have to sneak off to work, I'll be back on in about 8 hours
<EvilMog> you know that would be entertaining
<SpamapS> I have a reasonably well filled out web of trust here.. and I can't find any Debian developers who have signed it yet
<imbrandon> time for fooood, my tummy is growling
<SpamapS> or even any transitive
<SpamapS> Wow finally I found one like 3 levels deep thats in my keys
<imbrandon> mmm coco-puffs
<SpamapS> EvilMog: looks to have built fine
<SpamapS> alright.. skeleton john the ripper charm done
<SpamapS> complete w/ mpich2 scale-out (hopefully)
<SpamapS> If I can add gluster/ceph .. I am a god ;)
<SpamapS> EvilMog: lp:~clint-fewbar/charms/precise/john/trunk  .. needs to setup the SSH keys between hosts, but mpi kinda sorta works
<SpamapS> thinking this should just be a generic 'mpich2' charm
 * SpamapS goes to bed
<jcastro> jml: the "ubuntu" charm I just recommended to you on askubuntu had some problems yesterday, I don't know if we fixed it
<imbrandon> jcastro: what was wrong with it ?
<koolhead17> jamespage, the Juju on LXC issue is still giving issues
<koolhead17> in one day it created a good few GB of logs
<jml> just trying to find documentation on how to set up environments.yaml for ec2
<jml> looks like it's not in the package docs or on the wiki. trying code tree instead.
<imbrandon> jml: https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment-using-ec2
<jcastro> jml: https://juju.ubuntu.com/docs/getting-started.html#introduction
<jcastro> imbrandon: it didn't launch
<jml> ta
<jml> jcastro: how would I have found that from https://juju.ubuntu.com?
<imbrandon> jcastro: ahh
<jcastro> on the Documentation link
<jcastro> but you're right, the webpage sucks
<jcastro> we're supposed to have a new one
<imbrandon> jml: there is a Documentation link on the right, but its not clear, i'm gonna try and fix that today
<jcastro> I wonder if we can get rid of all that marketing crap on it and put the useful information
 * jcastro will ask
<jml> what are control-bucket and admin-secret?
<jcastro> look at the note underneath
<jcastro> Note If you already have an AWS account, you can determine your access key by visiting http://aws.amazon.com/account, clicking "Security Credentials" and then clicking "Access Credentials". You'll be taken to a table that lists your access keys and has a "show" link for each access key that will reveal the associated secret key.
<jcastro> oh
<jcastro> I see what you mean
<jcastro> those are generated for you
<imbrandon> its a s3 bucket name
<imbrandon> if its not made it will make it for you
<jml> so I don't have to specify those?
<imbrandon> yes you have to specify those but they are filled in if you followed the doc
<jml> imbrandon: oh right. I followed another, different doc first that had me specify a local environment.
<jml> fun times, eh?
<imbrandon> hehe
<imbrandon> right :)
<jcastro> ugh, this doc needs some work
<imbrandon> basically they can be anything as long as they are unique to the env
<jcastro> this LXC section sucks, I'll work on it today
<imbrandon> jcastro: yes , yes it does, i've been doing bitesize chunks as i had time
<imbrandon> heh
<imbrandon> but the juju.ubuntu.com wiki needs fixing as well
<imbrandon> can you get me access to edit that ?
<jml> how do I set a default environment?
<jml> jcastro: also, I get this on 'juju deploy ubuntu': Error processing 'cs:precise/ubuntu': entry not found
<imbrandon> jml: add default: env-name to the top
<jml> imbrandon: thanks.
<imbrandon> jml: try "juju deploy cs:~charmers/precise/ubuntu"
<EvilMog> thanks SpamapS I'll give it a shot in the morning shortly
<jml> imbrandon: same problem, except cs:~charmers/precise/ubuntu
<imbrandon> hrm
<jcastro> yes, we noticed yesterday it was broken
<jml> oh, that's what broken meant.
<jcastro> jml: no unfortunately the wiki is acl'ed to ~charmers because we enable html to put whiz bang things on it
<jcastro> jml: you should be able to bzr branch the ubuntu charm and deploy it manually
<jcastro> bzr branch lp:charms/precise/ubuntu
<imbrandon> jcastro: i'm in ~charmers and cant edit
<jcastro> then "juju deploy --repository . local:ubuntu" should work
<jcastro> imbrandon: huh, really?
<imbrandon> yea i tried to fix it up some over the weekend and it said i had no rights
<jml> jcastro: thanks.
<jcastro> imbrandon: k, I'll ask to fix that
<jcastro> jml: when clint wakes up I'll ask him what's up with that
<imbrandon> jcastro: rockin let me know, cuz i have time to do some fixin on it today if it gets worked out
<jcastro> imbrandon: lp:juju/docs could always use love in the meantime. :)
<imbrandon> yup already have that open :)
<imbrandon> been working on the navigation
<imbrandon> err secondary nav
<imbrandon> jcastro: we should just have sphinx generate the whole juju.ubuntu.com page , its controlled by the same acl ... wouldn't take that much to add the front page stuff in
<hazmat> hmmm..
<imbrandon> err i guess its not
<imbrandon> morning hazmat
<imbrandon> hazmat: what ya think ? wouldn't be too much work
<imbrandon> pkgme.net is doing it that way i see
<imbrandon> :)
<jml> heh
 * imbrandon waves to jml :)
<jml> jcastro: have updated the question with the qualifiers to my success.
<jml> I think I'm going to write something to do this for me :(
<jcastro> jml: wait for smoser or someone else on the server team to respond
<jcastro> you can't be the first guy to want to do this
<imbrandon> jml: i hadn't looked at your ask question yet but there are helper tools in juju-jitsu and charm-tools
<jml> imbrandon: thanks. I'm not going to look those up right now, I'm afraid.
<jml> jcastro: good call.
<smoser> what are we responding to, jml
<smoser> jcastro,
<jcastro> smoser: http://askubuntu.com/questions/153029/tool-to-fire-up-ec2-instance-and-ssh-in
<smoser> i swear, this seems like a trap
<jcastro> no, a proper trap would include getting you to maintain something new
<smoser> imo ubuntu-ec2-run does the correct thing and lets you create security groups and pass them through.
<smoser> jml, see kirkland's cloud-sandbox in bikeshed
<imbrandon> hazmat: ping, any idea why the copyright at the bottom of juju.u.c/docs won't show when generated automatically, but when i run "make html" locally it shows just fine? there is a huge difference in the sphinx versions, maybe that's it?
<smoser> but having a tool create a security group and upload ssh keys for you seems overkill to me.  you might as well just do those things (euca-create-group, euca-import-keypair) and then use them. having each tool do that for itself doesn't make sense.
<jml> smoser: I guess I don't see why I should have to think about those things at all when a tool could just create a disposable security group and keypair and then clean up after itself.
<hazmat> imbrandon, dunno offhand
<imbrandon> kk
<hazmat> imbrandon, i imagine some delta between the build steps.. but i didn't setup the j.u.c/docs build
<imbrandon> i'll just pull the if/else out for now, was going to move the js to the bottom anyhow
<hazmat> imbrandon, i did setup this one.. http://jujucharms.com/docs/  and the copyright shows
<imbrandon> rockin, kk yea likely a delta in the sphinx version then
<hazmat> imbrandon, yeah.. 0.6.4 vs 1.1
<imbrandon> and thats old :)
<imbrandon> heh
<smoser> well, you're certainly welcome to write such a tool.  patches to ubuntu-ec2-run would be welcomed, although i think it would actually better fit in the bikeshed tool.  the reason being that ubuntu-ec2-run is a single "run something" command rather than a long lived environment command (which would be required to properly manage security groups and ssh keys).
<imbrandon> hazmat: btw i did split out the theme into its own
<imbrandon> lp project under .... one sec
<imbrandon> https://launchpad.net/ubuntu-community-webthemes/light-sphinx-theme
<hazmat> smoser, a flag to just enable web and ssh seems like a nice option without mucking with security groups, ditto for lp id  or manual specify ssh key for a key to use instead of keypairs..
<hazmat> its not about long lived, its about convenience.
<hazmat> patches welcome i know ;-)
<smoser> well, it is long lived.
<smoser> if you're making temporary things, then you have to clean up those things.
<smoser> and that means you have to be around when the instance terminates to cleanup
<hazmat> smoser, you mean the sec group as a transient thing to clean.. fair enough
<ToyKeeper> Hi, um, I'm trying to deploy a blank unit for charm development, but I'm getting 'not found' on both cs:precise/ubuntu and cs:oneiric/ubuntu.  Any idea what I'm missing?
<hazmat> imbrandon, cool
<smoser> hazmat, or keypairs if you were uploading them (import-keypair). but thats not necessary, really.
<smoser> (and you're right that 'ssh-add -L' probably is going to get you the right thing by default)
<smoser> so i guess the security group is the only thing that would need to be temporary, and fwiw, the 'default' security group can be modified to include 'ssh'
<smoser> at which point, you
<jml> there's also some prior art in lp:lp-dev-utils which creates security groups and keypairs, and cleans them up on following runs.
<smoser> you'd have nothing to clean up, and ubuntu-ec2-run "just works" . the one command extra that jml had to type would be "euca-authorize default -p 22".
<smoser> in the end, i use the 'default' group
<smoser> so that isn't an issue for me, and i have wrappers that specify '--key brickies' (which i agree would be a nice add to ubuntu-ec2-run)
<smoser> why they default to the 'default' security group, but provide no way to default to keypairs is strange.
<hazmat> mgz, ping
<mgz> pong
<hazmat> mgz, would you mind using the lbox command to submit the ostack branch?
<hazmat> mgz, it should be a simple thing of just installing lbox , and lbox propose from within the branch
<mgz> I'm on it, battling through go being weird about things
<mgz> I believe I have it defeated now though
<hazmat> mgz, k, ping me if you need any help wrt
<hazmat> mgz, slay the dragon :-)
<mgz> hm, don't have exp/html
<mgz> hosted on google code after it got removed for 1.0 apparently
<hazmat> mgz, huh.. are you installing the package or from bzr?
<hazmat> it should be statically linked in the lbox bin
<mgz> I'm trying to build the tools myself
<mgz> which has been... interesting
<mgz> okay, done. amusingly, it turns out you need a trunk copy of the go source anyway.
<mgz> ...and the lbox webbrowser launching script is broken?
<mgz> ...ah, nope, just couldn't cope with giving up the term to lynx.
<mgz> ...unknown files don't mean the branch is unclean
<mgz> hazmat: done, not sure what exactly, but something. it posted a comment as me at least.
<hazmat> mgz, did it ask you for google login info?
<mgz> nope.
<hazmat> mgz first time round i'd run with -v.. it is reentrant/idempotent incidentally
<hazmat> mgz.. oh.. sorry it needs a flag to enable the reitveld integration..
<hazmat>  lbox propose -cr -v
<mgz> ah, now I see that the subcommands have --help too
<mgz> ...and it failed in a fun manner
<mgz> 2012/06/19 15:35:55 error: Failed to send patch set to codereview: Issue creation errors: {'subject': [u'Ensure this value has at most 100 characters (it has 204).']}
<mgz> how do you spell a string slice in go...
<mgz> dammit, there are two rietveld.go files, edited the wrong one
<mgz> hazmat: really done now.
<zirpu> how close is juju to being ready for production use?
<hazmat> mgz thanks
<robbiew> zirpu: define "production"
<zirpu> used to build a set of instances for running a service. mostly i want to setup a set of mongodb shards and keep it running.
<robbiew> zirpu: juju runs this -> http://www.omgubuntu.co.uk/
<robbiew> I guess my suggestion would be to try it in a test environment, let it run a bit, and if you are satisfied with it...roll it out
<zirpu> i'll troll the bug list too. that should help me avoid gotchas etc.
<zirpu> s/troll/trawl/  oops. :-\
<mgz> troll is valid, it might just be misconstrued
<m_3> zirpu: we've been developing a set of practices for long-running juju environments... writing them up is still tbd for this cycle tho
<m_3> zirpu: most is pretty much what you'd expect... freeze your charms, freeze juju.. the rest really depends on the number of people who're maintaining and need access/control
<jcastro> negronjl: m_3: hazmat: we should talk velocity/oscon demo
<m_3> yup
<m_3> jcastro: now?  irc meeting in #ubuntu-meeting atm that I could hang "over"
<jcastro> I think today sometime would be awesome
<m_3> jcastro: see what's good for negronjl... I'm just charmpiloting the rest of the day
 * jcastro nods
<jcastro> whoa, holy docs queue batman
<james_w> http://puppetlabs.com/blog/module-of-the-week-puppetlabs-openstack/
<m_3> jcastro: I was running off of charm-tools' review-queue
<m_3> now I see the docs queue
<jcastro> I don't think juan has fixed the bug in the cli tool
<jcastro> we only found it yesterday
 * m_3 face palm
<negronjl> jcastro, m_3: I'm here
<negronjl> jcastro:  what bug in which tool ... I can fix it but I've been working on the maas demo
<jcastro> your CLI review checker
<m_3> negronjl: no biggie... jorge was mentioning the docs queue and I realized I was looking at the charm review-queue which didn't show them
<negronjl> jcastro:  what's the issue ( or bug number )
<m_3> I can fix it
<jcastro> https://bugs.launchpad.net/charmworld/+bug/1002977
<negronjl> m_3: thx
<_mup_> Bug #1002977: Review queue should list lp:/juju/docs <Juju Charm Tools:Confirmed for negronjl> <charmworld:Fix Released by hazmat> < https://launchpad.net/bugs/1002977 >
<hazmat> negronjl, i put some notes in the bug about what i had to change
<m_3> negronjl: when's a good time to hang on node/mongo stuff?
<jcastro> kapil left notes there
<hazmat> negronjl, its slow..
<hazmat> for interactive usage
<negronjl> hazmat: m_3 is going to fix the cli tool
<m_3> I see the comment on the fix
<negronjl> m_3: i can hangout
<m_3> jcastro: ?
<jcastro> count me in!
<imbrandon> nodejs ? /me wants on
<jcastro> I'll start the hangout
<m_3> spinning up the other machine (cam's still borked on this one)
<jcastro> imbrandon: invited, come on in
<imbrandon> had my headset off
<imbrandon> bah
<imbrandon> my connection keeps resetting
<imbrandon> i'm just gonna not hang this time :( but yea m_3 hit me up on the node monitoring stuff after
<imbrandon> if ya like
<m_3> imbrandon: cool
<jcastro> negronjl: martin's in china so I guess that means "sure, keep it."
<negronjl> jcastro: lol
<japage> hi - is anyone else having trouble finding cs:precise/nova-cloud-controller  in the ubuntu store?
<hazmat> m_3, you going to strange loop?
<m_3> hazmat: not planned
<jimbaker> hazmat, i submitted a talk for strangeloop but it was not accepted
<hazmat> jimbaker, bummer
<jcastro> dang
<jcastro> that sounded like a shoo-in
<m_3> jcastro: we just need to keep trying... juju'll gradually get more notice
<m_3> jcastro: gluecon rejected my proposed talks even though the conference is right on target
<jcastro> yes, we will go until mims is like superdiamond on all airlines
<m_3> jcastro: the cool part was that there was enough mention of juju during the conference that I think we'll get in next year
<imbrandon> lol
<m_3> that pattern's not surprising with new stuff
<m_3> unless it's huge commercial things with big marketing budgets (i.e., not juju)
<SpamapS> japage: http://jujucharms.com/tools/store-missing
<SpamapS> japage: it is, indeed, missing
<japage> thanks for the link SpamapS!
<SpamapS> japage: we are working on getting more visibility into the charm store import process
<SpamapS> when I say working, I mean, we are aware of the problem
<SpamapS> I don't think anybody is actually working on it directly. :-P
<hazmat> i've asked relevant folks about it
<SpamapS> japage: the charm store isn't really useful for production anyway.. since you can't fix any charm bugs
<hazmat> i'll see if i can get a more proactive tooling around the issue
<hazmat> SpamapS, you can publish a fix with a new version at the store
<SpamapS> what we really need isn't 'switch-charm' but 'fork-charm'
<SpamapS> hazmat: *I* can, but Users* cannot
<hazmat> switch-charm still seems like the right alt solution
<japage> SpamapS : hazmat : no worries, i was giving it a test run doing evaluations of all of the different cloud stuffs.
<hazmat> SpamapS, what's fork-charm?
<SpamapS> hazmat: production doesn't really involve letting third parties control your destiny at 3am :)
<hazmat> SpamapS, they're not upgrading your systems at 3am ;-)
<imbrandon> what's charmload expecting anyhow, i couldn't get it to populate mongod
<hazmat> a deployed charm is cached within the env, the store isn't consulted again unless your upgrading.
<SpamapS> hazmat: fork-charm would download the charm, unpack it, bump the revision, and switch the charm
<hazmat> imbrandon, charmload?
<imbrandon> from charmstore
<SpamapS> hazmat: In the case where your site is broken at 3am, and the fault is found to be the charm, you must be able to fix the charm, and upgrade then
<imbrandon> charmload and charmd
<m_3> SpamapS: isn't that what destroy-environment is for? :)
<SpamapS> hazmat: fault isn't always found at the time changes are made :)
<japage> SpamapS : what kind of refresh rate does that tool tend to see?
<SpamapS> japage: 15min
<SpamapS> I think
<japage> cool
<japage> juju is a lot of magic
<SpamapS> nah, just smoke, mirrors, and python
<jcastro> the charms are the magic!
<japage> lots of python
<japage> thanks guys, super appreciate the help
<imbrandon> hazmat: there is like -0- documents and the only info is "run charmload <mongo addr>" after 3 hours nothing in the db still
<imbrandon> and charmd seems to run a store, but empty without i'm guessing charmload
<imbrandon>  /charm-info?... etc all "work" but are empty
<hazmat> imbrandon, oh.. this is the store impl
<SpamapS> something definitely wrong with this xen box + tmux + irssi .. keyboard/buffer/something lags out after a few weeks
<imbrandon> screen ftw :)
<hazmat> imbrandon, i set it up once.. just to experiment, it was pretty hands off afaicr
 * hazmat digs it up again
<hazmat> except it picked the same db collection that i was using for the charm browser
<imbrandon> yea like it shows a ton of stuff trying to get from getBranchTips, but never fills in the mongodb
<hazmat> imbrandon, it populates the juju db / charms collection in mongodb
<imbrandon> do i need to have created that db first ? i wouldn't think so
<japage> so, when something is MIA in the charms store, is there another way to go get that code and package it similarly for use? or should i just be patient?
<SpamapS> perhaps we should be running our own parallel charm store to see if it has the same problems and results as the production one :)
<imbrandon> SpamapS: heh i was just trying that
<SpamapS> japage: bzr branch lp:charms/precise/foo
<imbrandon> or was getting there
<imbrandon> heh
<kees> SpamapS: uuuh, why does this show a 0 line delta? :P https://code.launchpad.net/~charmers/charms/precise/sbuild/trunk/+merge/110934
<japage> awesome
<kees> comparing the two trees clearly shows my changes. :P
<SpamapS> japage: put that in a dir named 'precise' under another dir which you can name anything (mine is /home/clint/charms) and then set JUJU_REPOSITORY=/home/clint/charms ... it will now be local:charms/precise/foo (assuming metadata.yaml has name: foo)
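The local-repository layout SpamapS describes can be sketched as follows. The charm name and repository path are examples; in practice you'd `bzr branch lp:charms/precise/foo` into the series directory rather than writing a stub metadata.yaml as this helper does.

```shell
#!/bin/sh
# Sketch of a local charm repository layout: <repo>/precise/<charm>/
setup_local_repo() {
    repo=$1; charm=$2
    mkdir -p "${repo}/precise/${charm}"
    # juju resolves the local charm by the `name:` field in metadata.yaml
    printf 'name: %s\nsummary: stub\ndescription: stub\n' "$charm" \
        > "${repo}/precise/${charm}/metadata.yaml"
}

# usage (paths are illustrative):
#   setup_local_repo ~/charms foo
#   export JUJU_REPOSITORY=~/charms
#   juju deploy local:foo
```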
<hazmat> imbrandon, charmload localhost:27017 is all i do
<hazmat> it starts loading and spewing stuff on the console
<japage> SpamapS: you rock. thanks!
<SpamapS> kees: well, thats backwards for one (lp:charms/sbuild into your branch) .. but.. perhaps they already got merged?
<imbrandon> hrm i didn't add the port , let me try that, but yea i get a ton on console
<hazmat> imbrandon, if its writing the console its writing to mongo
<hazmat> its got a hardcoded db name of 'juju'
<imbrandon> http 27018 on the web shows nothing in the db tho
<imbrandon> one sec
<hazmat> imbrandon, the data is there, try the cli
<imbrandon> http://15.185.100.228:28017/
<kees> SpamapS: ah, you're right. okay. fixed: https://code.launchpad.net/~kees/charms/precise/sbuild/trunk/+merge/111080
<hazmat> imbrandon, http://15.185.100.228:28017/listDatabases?text=1
<imbrandon> http://paste.ubuntu.com/1049617/
<hazmat> imbrandon, use the mongo cli
<hazmat> or a real front end, the data is there
<imbrandon> k
<imbrandon> hazmat: this is why i said it isn't
<imbrandon> http://15.185.100.228:9090/charm-info?charms=cs:~charmers/precise/memcached
<imbrandon> but i know it loaded that key already
<hazmat> imbrandon, ic.. it failed processing it because you have the ssh accepting all clients and it doesn't like the output
<hazmat> er. all server fingerprints
<imbrandon> erm?
<hazmat> imbrandon, ie you have something in your ~/.ssh/config like UserKnownHostsFile=/dev/null
<imbrandon> yes
<hazmat> it doesn't like that
<hazmat> because it pollutes the cli output its parsing
<imbrandon> heh wow, ok
<imbrandon> ahh
 * hazmat just figured that one out as well
<imbrandon> wow, it parses the cli output ?
<hazmat> imbrandon, yeah.. bzr is in python.. its executing the cli
<hazmat> and then it processes it, and zips it up into mongodb gridfs
<hazmat> and deletes the checkout
<imbrandon> heh you would think it would just use the lp api and get json
<hazmat> imbrandon, json for the file contents?
<imbrandon> well, a pointer to the revision
<hazmat> it needs the file contents
<SpamapS> thats why it is failing on the ubuntu charm then
<hazmat> its assembling a zip
<imbrandon> ohh ok
<SpamapS> hazmat: probably should be rewritten to ignore unknown lines, rather than fail on them :-P
<imbrandon> still seems awkward
<hazmat> SpamapS, rewritten.. interesting idea
<hazmat> SpamapS, so i think i hack up a version of this to try and figure out exactly why the store isn't loading them correctly
<hazmat> the alternative is grepping through stdout logs of the charmload processes
<hazmat> or having the store actually hand out that info.
<SpamapS> hazmat: we need those exposed publicly anyway
<SpamapS> otherwise we'll never be able to maintain the store
<hamptonpaulk> can anyone point me to some docs on debugging relation-errors?
<imbrandon> hazmat: ahh that worked
<imbrandon> well     LogLevel ERROR
<imbrandon> worked :)
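The fix imbrandon landed on can be sketched as an ~/.ssh/config fragment; his exact stanza isn't shown, so this is an assumption based on the UserKnownHostsFile and LogLevel lines mentioned above:

```
# ~/.ssh/config (sketch): keep the host-key conveniences, but drop the
# warnings that a UserKnownHostsFile=/dev/null setup injects into the
# output that the store's bzr-CLI parser reads
Host *
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    LogLevel ERROR
```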
<SpamapS> hamptonpaulk: hm. I don't think there's a good definitive doc on that specific issue.
<SpamapS> We should actually produce an error handling document
<SpamapS> hamptonpaulk: basically you need to find the charm log
<SpamapS> hamptonpaulk: with the local provider its in your datadir, under 'user-envname/units/unit-##/unit.log'
<SpamapS> hamptonpaulk: in ec2/maas/orchestra, it will be in /var/lib/juju/units/unit-#/charm.log
 * hamptonpaulk checking local
<SpamapS> hamptonpaulk: Another way to go is to run 'juju debug-hooks unit/#' and then on another terminal 'juju resolved --retry unit/# relationname'
<SpamapS> hamptonpaulk: that will pop up a window where you can re-run the hook script and see the failure
 * SpamapS suddenly has an image of the kid from "The Sixth Sense" saying "....  I see fail"
<imbrandon> lol
<hamptonpaulk> SpamapS: ok, thanks. I will give it a go.
<tedg> I'm a bit confused.  I've got a setup where I can run juju commands like status, but terminate-machine doesn't work.
<tedg> What does terminate-machine do differently?
<SpamapS> tedg: nothing really
<SpamapS> tedg: it just pokes some nodes in zookeeper so the provisioning agent will terminate the machine
<hazmat> tedg, it kills a machine by id that is not in use
<tedg> Hmm, odd.  I'm getting an "ERROR SSH authorized/public key not found"
<tedg> It's like it's using a different account / setup than others.
<hazmat> hmm
<tedg> Do the other commands use zookeeper?
<SpamapS> almost all of them
<SpamapS> certainly status
<hazmat> every command does except destroy-environment and bootstrap
<hazmat> tedg, that's odd its not materially different from any of the other commands
<tedg> Here's the stack trace if that's helpful: http://paste.ubuntu.com/1049668/
<tedg> Huh, adding "authorized-keys-path: ~/.ssh/id_rsa" to ~/.juju/environments.yaml fixes it.
<SpamapS> tedg: you might have a ~/.ssh/config rule interfering, tho that wouldn't make much sense given that status works ;)
<tedg> Yeah, my config is basically empty except for allowing unknown keys.
<imbrandon> hazmat: yup thats all it was, db filling in nicely now
<imbrandon> ty
<hazmat> imbrandon, np
<imbrandon> btw i just added the loglevel to my ssh.conf too
<imbrandon> so it squelches warnings
<imbrandon> i hope 2nd and 3rd runs are faster :|
<imbrandon> SpamapS: until we start using semver or whatever when we make official builds, can the bzr build number be part of the version, not just +bzrXXX? e.g. when 0.5.1 rolls out, 0.5.1.560 assuming bzr rev 560. it would make my life with rpm's and osx brew much easier
<imbrandon> ( that fits into semver too iirc )
<SpamapS> imbrandon: sure we can change that.. what official builds are you pulling into rpms and osx brew tho?
<SpamapS> imbrandon: I'd have thought you'd just pull from lp:juju
<imbrandon> well i am for the osx brew for now, but i'd like to use the tarball
<imbrandon> so that its a known version
<imbrandon> e.g. brew allows for a --HEAD too
<SpamapS> oh
<imbrandon> so i can make "brew install juju" install from the tar and "brew install juju --HEAD" from lp:bzr
<SpamapS> I hadn't considered making official tarballs
<imbrandon> LP makes the tarballs
<imbrandon> no biggie
<SpamapS> oh for the ppa
<imbrandon> yea
<SpamapS> thats a bit of a happy accident
<imbrandon> lol yea
<imbrandon> works tho
<SpamapS> I've been thinking about splitting the PPA's
<SpamapS> or limiting the PPA to building from a stable series
<imbrandon> and have like a juju/crack ppa
<imbrandon> heh
<imbrandon> but yea that makes "brew install juju" install from the tar and "brew install juju --HEAD" from lp:bzr possible
<imbrandon> and then i know exactly which version is installed too
<SpamapS> really juju should have a --version for that
<SpamapS> I'm still surprised that was never introduced :p
<SpamapS> anyway, lunchtime
<imbrandon> heh ttyl
 * imbrandon looks into how hard --version will be to add
<imbrandon> uht ohh look out i'm doing python
<imbrandon> heh
<imbrandon> SpamapS: ping
<imbrandon> SpamapS: well i poked a bit into the python just cuz i was a little bored :) got the --version added but not sure how to pull it dynamically from setup.py
<imbrandon> SpamapS: http://bazaar.launchpad.net/~imbrandon/juju/version-cli-option/revision/543?start_revid=543
<SpamapS> imbrandon: I think we should flip it around, and have a juju.version
<SpamapS> imbrandon: so  from juju.version import JUJU_VERSION
<SpamapS> imbrandon: and have setup.py pull that in
<imbrandon> ahh ok
<SpamapS> hazmat: ^^ is that a common python idiom?
<SpamapS> jimbaker: ^^
<jimbaker> SpamapS, i don't know about common, but it seems reasonable enough
<hazmat> SpamapS, i was thinking  text file
<jimbaker> SpamapS, you are still centralizing it from a metadata perspective in setup.py, so that's good
<hazmat> SpamapS, the upgrade branch already has that
<hazmat> a juju/version.txt
<hazmat> that drives a python api and can be used by tooling
<hazmat> i can merge that solo, its tiny
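hazmat's version.txt scheme can be sketched in a few lines of shell; the file name and layout are assumptions based on the discussion (a `juju/version.txt` holding a bare version string that both the python package and packaging tools read):

```shell
# read juju/version.txt so tooling (rpm specs, brew formulas, setup.py
# via a thin wrapper) can learn the version from one file (sketch)
read_version() {
    tr -d '[:space:]' < "$1"
}

# a packaging script might then do:
#   ver=$(read_version juju/version.txt)
```

Keeping the version in a plain text file rather than a python constant means non-python consumers (like the rpm and brew recipes imbrandon mentions) don't need to import anything to find it.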
<imbrandon> k
<SpamapS> hazmat: sahweet
<SpamapS> hazmat: https://bugs.launchpad.net/juju/+bug/938899
<_mup_> Bug #938899: juju needs a '--version' option <juju:New> < https://launchpad.net/bugs/938899 >
<SpamapS> hazmat: assigning to you
<hazmat> sounds good
<SpamapS> hazmat: thats a honolulu feature tho, right?
<imbrandon> darn i almost killed my first juju bug :) j/k
<imbrandon> heh
<hazmat> SpamapS, it was just because other things were priority, but it's basically done, just need to copy it over, so feel free to rearrange
<hazmat> i'll switch out else when i close it
<hazmat> also fixes up the python packaging (includes test data, etc)
<SpamapS> hazmat: done, and tested, are two different things :)
<SpamapS> hazmat: lets land all the stuff that is "done" in the first few days of honolulu
<hazmat> SpamapS, the upgrade stuff uses and installs this..
<hazmat> k
<SpamapS> hazmat: Its not that meaningful as a milestone.. so its no big deal. I just want to "release" something.
<imbrandon> http://api.websitedevops.com/out.charmload
<imbrandon> got an error that may help
<hazmat> imbrandon, thanks
<hazmat> that should tell us where the errors are in the MIA charms
<imbrandon> hopefully :)
<SpamapS> 2012/06/19 21:40:25 JUJU ----- lp:~charmers/charms/precise/nova-cloud-controller/trunk
<SpamapS> 2012/06/19 21:40:25 JUJU Trying to add charms [cs:~charmers/precise/nova-cloud-controller] with key "adamg@canonical.com-20120511184310-u1m6sledzutq9otb"...
<SpamapS> 2012/06/19 21:40:25 JUJU All charms have revision key "adamg@canonical.com-20120511184310-u1m6sledzutq9otb". Nothing to update.
<imbrandon> SpamapS: this is on a secondary run
<nathwill> working on trying to finish the storage backend for a revamped owncloud charm.. y'all think it'd be best to use s3fs, or require an nfs instance?
<SpamapS> imbrandon: is cs:precise/nova-cloud-controller in there?
<SpamapS> nathwill: *both* :)
<imbrandon> one sec , lemme check
<SpamapS> nathwill: give s3 as one option, and nfs as another. :)
<imbrandon> SpamapS: nope
<imbrandon> http://15.185.100.228:9090/charm-info?charms=cs:precise/nova-cloud-controller
<nathwill> SpamapS: lol. i see... :P
<imbrandon> SpamapS: charmd running on 9090 feel free to poke away at it
<SpamapS> imbrandon: ok, so where's the initial import log?
<imbrandon> that i didn't redirect to the log :(
<imbrandon> i can redo it tho
<imbrandon> if it will help
<SpamapS> actually
<SpamapS> it seems that one has never been fixed
<SpamapS> still points at 'precise'
<imbrandon> some kinda 64bit int error
<SpamapS> no this is a trunk name issue
<SpamapS> fixing
<imbrandon> panic: interface conversion: interface is int, not int64
<imbrandon> is what i was getting at
<SpamapS> weird
<hazmat> imbrandon, you have to kill mongodb and start over for the initial logs, it will skip things it's already seen that are unchanged.
<hazmat> cool
<hazmat> imbrandon, nice catch
<imbrandon> woot :)
<hazmat> hmm. the terracotta charm has some other oddities
<hazmat> its one of two charms using a short cut for interface declaration that's supported in pyjuju but not by charm proof, or gojuju
<imbrandon> ohh pizzahut just showed up, charmd is running on 9090 , i'll be back after foood :)
<imbrandon> actually , one sec
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~# ssh-import-id hazmat
<imbrandon> INFO: Successfully authorized [hazmat]
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~# ssh-import-id clint-fewbar
<imbrandon> INFO: Successfully authorized [clint-fewbar]
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~#
<imbrandon> if you need , ssh as root and byobu is running, i'm headed to eat
<imbrandon> box has nothing meaningful on it
<imbrandon> its a charm playground
 * imbrandon heads to eat
<hazmat> SpamapS, actually that interface declaration short cut causes a runtime  exception in proof
<SpamapS> Ok, nova-cloud-controller should come off the MIA list now
<SpamapS> kees: so, your changes to sbuild change the config.yaml options...
<SpamapS> kees: the problem with that is if you have an existing deployment, your configurations are lost.
<SpamapS> kees: we probably need a way to mark config options as deprecated, and then convert their values into the new ones (yet another use case for config-set)
<SpamapS> perhaps somebody else could +1 this MP so we can land it in galapagos? Great change.. .really needed too.
<SpamapS> https://code.launchpad.net/~therve/juju/validate-config-values/+merge/104808
<SpamapS> fixes bug 979859
<_mup_> Bug #979859: Unable to set boolean config value via command line <juju:In Progress by therve> < https://launchpad.net/bugs/979859 >
<kees> SpamapS: do you want me to figure out a way to deal with that, or should users of that charm just feel pain?
<SpamapS> kees: I think an update to the README is ok
<SpamapS> IMO we should ditch the 'revision' file and go to a 'changelog' file so each revision can actually have info with it, so upgrade-charm can display it.. but thats neither here nor there.
<SpamapS> kees: also the package-builder relation seems poorly defined. Were you thinking of just using private-address/public-address  (so no need for hooks) ?
<SpamapS> kees: but I think you should also keep the old values, so users can use 'charm get servicename' to get the old values and plug them into the new ones.
<kees> SpamapS: "charm proof" yelled at me, so I figured I'd at least provide a "here I am" report
<SpamapS> I:'s are just info.. you can safely ignore I's
 * kees nods
<imbrandon> can we relation-set on juju-info ? heh
<SpamapS> imbrandon: probably
<SpamapS> That would be.. misguided.. but probably :)
<imbrandon> heh
<imbrandon> it would be a hack round config-set
<imbrandon> hrm *thinks of ways to twist that into submission*
#juju 2012-06-20
<EvilMog> btw I definitely like the idea of a dedicated mpich2 charm, and then just setup an alternate charm for jtr
<jml> am now getting this error on local deploy, bootstrap and destroy-environment:
<jml> Unable to create file storage for environment
<jml> 2012-06-20 12:43:47,632 ERROR Unable to create file storage for environment
<jml> hmm. IRC logs that google found indicate reboot might do the trick.
<kaaloo> Usually if you del /tmp/juju-local or run using sudo it will go through
<jml> Yeah, I just deleted juju-local
<jml> I wonder why it has that error.
 * jml wishes he had a blocking deploy
<kaaloo> Ha!  May the juju gods help us.  Seems to go back and forth depending on the version you're using.
<jml> Well, I don't think it's a good idea all the time
<jml> What's the deal with all of the blank lines in debug-log during hook.output?
<jml> also, I'm guessing debug-log flags stderr output as ERROR. Is that right?
<imbrandon> yea
<jml> is there any way to look at debug-log retrospectively? i.e. is it stored anywhere on disk?
 * jml deploys again, this time w/ tee
<imbrandon> hazmat: ping ( when you get in this morning ) need a bit of help debuggin a build issue, should be trivial i hope
<imbrandon> jml: yes it should be stored on disk, i think /var/lib/juju or something, i totally forget, but i know it is
<hazmat> imbrandon, pong
<imbrandon> heya
<imbrandon> i did a docs push an hour or so ago and it should look like http://www.assets-online.com/docs/juju/index.html
<imbrandon> but no go :(
<imbrandon> is there a build log or anything ?
<imbrandon> e.g i put the tree nav back, and added some responsive goodies like we talked about :)
<hazmat> imbrandon, looks quite nice
<imbrandon> ty
<hazmat> imbrandon, i don't have any insight into j.u.c/docs runs.. i did just update the one at jc.com/docs and it worked fine
<hazmat> imbrandon, it might be a daily cron job
<imbrandon> hrm kk, was 15 min before
<imbrandon> no biggie tho i can just keep an eye out tomorrow worst case
<imbrandon> thought there might be some logs etc , not a huge deal tho :)
<jml> imbrandon: thanks.
<imbrandon> should look good on tablet and phone too now, i only checked on my ipad tho
<imbrandon> hazmat: btw i fully married the "traditional" canonical core css and bootstrap css with this push too, so like it would be easy to update jc.com to use the web-teams header but still get all the bootstrap/bootswatch goodies
<hazmat> sweet
<jml> 'charm help' gives me '/usr/share/charm-tools/scripts/help: 11: /usr/share/charm-tools/scripts/help: /usr/share/charm-tools/scripts/: Permission denied'
<jml> Version: 0.3+bzr148-2~precise1
<imbrandon> hrm how can i charm-upgrade to just a portion of the service ? e.g. a rolling upgrade
<imbrandon> jml: that's an unhelpful error but what it really means is help expects an argument
<imbrandon> charm help <some_command>
<jml> oh. the irony.
<imbrandon> heh
<jcastro> imbrandon: ~charmers now has edit rights to the wiki
<imbrandon> jcastro: ROCK!
<imbrandon> see my latest push ? all prettyfied
<imbrandon> jcastro: http://www.assets-online.com/docs/juju/index.html
<imbrandon> so whenever /docs builds next thats what it will look like
<jml>  INFO: Setting up initscripts (2.88dsf-13.10ubuntu11) ...
<jml>  ERROR: mount: block device /dev/shm is write-protected, mounting read-only
<jml>  mount: cannot mount block device /dev/shm read-only
<jml> Am getting that now when I try running dist-upgrade in my install hook (lxc environment)
<jml> should I be filing bugs about all of these things?
<imbrandon> likely, sorry i cant be much help, not used the local provider much
<imbrandon> jcastro: no workie http://cl.ly/HVxl
<m_3> imbrandon: docs merge this morning... I'm charmpilot today
<jcastro> imbrandon: can you send a mail to rt@ubuntu.com with that screenshot?
<imbrandon> jcastro: sure thing
<imbrandon> m_3: E:parse
<jml> is there a standard thing that hooks do to silence debconf warnings about backends?
<jml> I have to bump revision every time I make a change to a charm, even while under development, right?
<mars> jml, upgrade-charm will bump the number in the revision file for you
<jml> mars: thanks.
<jml> (although changing '1' to '2' is probably the least of the things I want automated :))
<mars> hehe
<jml> debconf :( upgrade questions :(
<m_3> jml: maybe you mean DEBIAN_FRONTEND='noninteractive'?
<jml> where do I mean it?
<m_3> jml: asking about how hooks silence debconf warnings
<jml> so just export that at the start?
<m_3> juju sets up the hook exec environment with that set already
<SpamapS> jml: you can use deploy --upgrade
<m_3> oh, now I think I understand your question... sorry.  you can do something like `apt-get update || true`
<m_3> doesn't _silence_ them, but it doesn't barf on them either
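m_3's point about the hook environment can be made explicit; juju already exports this, but restating it keeps an install hook behaving the same when a human runs it by hand while debugging (a sketch; the apt invocations are illustrative):

```shell
#!/bin/sh
# juju's hook execution environment already sets DEBIAN_FRONTEND, but
# exporting it explicitly documents the dependency and keeps the hook
# non-interactive when run outside juju (e.g. via debug-hooks)
export DEBIAN_FRONTEND=noninteractive

# illustrative only; a real install hook would then run something like:
#   apt-get -qy update
#   apt-get -qy dist-upgrade
```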
<jml> SpamapS: thanks.
<jml> http://paste.ubuntu.com/1050895/ - that's where my install hook blocks at the moment.
<jml> SpamapS: should I destroy between deploy --upgrade?
<SpamapS> jml: I encourage everyone to make the upgrade-charm hook call 'stop;install;start;config-changed' so it basically redeploys
<jml> SpamapS: oh right. there's a hook. :)
<jml> SpamapS: if you encourage everyone to do that, why doesn't the charm created by 'charm create' do that?
<SpamapS> Its also not a bad idea to "refresh" all your relations by calling them all over again but that requires careful refactoring.
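SpamapS's suggestion can be sketched as a hooks/upgrade-charm that re-runs the charm's own lifecycle hooks; the hook names come from the discussion, and the helper function is illustrative:

```shell
#!/bin/sh
# re-run the lifecycle hooks so upgrade-charm behaves like a redeploy,
# per SpamapS's stop;install;start;config-changed suggestion
run_lifecycle() {
    hookdir=$1
    for hook in stop install start config-changed; do
        if [ -x "$hookdir/$hook" ]; then
            "$hookdir/$hook" || return 1
        fi
    done
}

# a real hooks/upgrade-charm would end with:
#   run_lifecycle "$(dirname "$0")"
```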
<m_3> jml: so it looks like you might need to echo some config params to debconf-set-selections for that
<SpamapS> jml: *GOOD IDEA*
<SpamapS> honestly.. IMO juju core should do it
<jml> SpamapS: I abstain from that category of discussion :)
<m_3> SpamapS: jitsu-core does though :)
<SpamapS> should just call upgrade-charm; ... all of that
<SpamapS> m_3: huh? :)
<m_3> juju-do will refire a relation right?
<jml> I'm working up my first serious charm. Will probably just have to grep my logs here for r'<jml>.*\?'
<SpamapS> m_3: jitsu do was renamed 'jitsu run-as-hook', because no, it does not do that
<jml> first tip: wait until the US is awake
<jml> ok, what's jitsu?
<SpamapS> jml: juju-jitsu
<SpamapS> jml: its in the PPA.. little helpers
<m_3> SpamapS: ah, nevermind... that hasn't been merged in yet
<jml> SpamapS: I'll take a look.
<jml> jitsu has no way of discovering subcommands?
<newz2000> Hi, how do we know which we should used, db-relation-joined vs db-relation-changed?
<SpamapS> jml: oops, regression.. that used to happen on bare 'jitsu'
<newz2000> The tutorial (demoing drupal) uses db-relation-changed but we're not getting the user/pw/db details there, but they do seem to show in relation-joined
<m_3> jml: hey, so `debconf-get-selections | grep console-setup` shows lots of options... perhaps you can echo one of these to debconf-set-selections for that package?  not sure which option from your pastebin, but start there
<james_w> I find it surprising that console-setup is prompting
<m_3> newz2000: joined is fired once, changed is continuously fired on both sides until no more relation-sets are called
<james_w> I would have thought it would already be set up in the container
<m_3> james_w: yes, me too
<newz2000> m_3: if we're configuring our app to talk to the db, which is best?
<james_w> as in, I haven't seen it, but maybe jml is installing some obscure packages in the install hook
<jml> at this stage, I'm just dist-upgrading precise.
<jml> tbh, am a little surprised that there's not a standard recipe for doing so.
<m_3> newz2000: often best in changed
<m_3> newz2000: make sure you `exit 0` if the other side (db) isn't up yet
<newz2000> m_3: ok. We're having a prob where our script is firing one time and has no login credentials
<newz2000> m_3: ok. We're still trying to wrap our heads around that part.
<james_w> newz2000, you don't always get all the credentials the first time it is called
<pindonga> SpamapS, hi there again :) conceptual question about subordinate charms... would it make sense to write an apache-wsgi-app subordinate charm that you can configure with options (wsgi file, apache config, etc) (so that it can be reused by any "web appserver" charm without having to rewrite the apache config stuff?
<james_w> pindonga, it would
<m_3> newz2000: it's up to the particular interface, but the usual story is the db gets the other side's hostname, then creates databases and creds, then does a relation-set on that
<pindonga> SpamapS, basically what I'm looking for is something similar to the chef recipes concept
<pindonga> or james_w ^ :)
<m_3> newz2000: so the first time the app gets a relation-get from the db is in changed
<newz2000> james_w, m_3, interesting. We seem to only be running one time. Maybe that's because we're not doing exit 0 right
<jml> https://bugs.launchpad.net/charm-tools/+bug/1015575 filed also
<_mup_> Bug #1015575: Error running 'charm help' without arguments <Juju Charm Tools:New> < https://launchpad.net/bugs/1015575 >
<m_3> newz2000: gets a _non-empty_ relation-get :)
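The handshake m_3 describes reduces to a guard at the top of db-relation-changed: bail with exit 0 until every credential comes back non-empty from relation-get. A sketch (the key names are typical of a db interface, not verified against the pgsql charm):

```shell
#!/bin/sh
# exit 0 (not an error) until the db side has relation-set credentials;
# juju re-fires relation-changed when the other side does its relation-set
db_ready() {
    # db_ready USER PASSWORD HOST -> succeeds iff all are non-empty
    [ -n "$1" ] && [ -n "$2" ] && [ -n "$3" ]
}

# a real hook would do:
#   if ! db_ready "$(relation-get user)" "$(relation-get password)" \
#                 "$(relation-get host)"; then
#       exit 0
#   fi
```

Exiting non-zero here would instead error the relation and require a human `juju resolved --retry`, which is the misunderstanding newz2000 runs into below.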
<james_w> pindonga, I don't think there's all that much benefit to it if it is apache-specific though
<jml> grr. 'nother meeting.
<newz2000> m_3: ok. We don't understand it yet but we may be about there. Thanks, will pester you again in a bit
<m_3> newz2000: not sure...  have to see the code
<m_3> newz2000: sure
<james_w> pindonga, but it's certainly a valid use of subordinates IMO
<pindonga> james_w, well, the benefit would be write once read many
<pindonga> :)
<pindonga> don't see how you can do this in a webserver agnostic way
<SpamapS> pindonga: wsgi is pretty webserver agnostic isn't it?
<m_3> james_w: I've thought of this the other way around... the django app being subordinate to an apache-wsgi app... but it's probably the same diff
<james_w> m_3, right, that's what I thought pindonga meant, but either way would work
<SpamapS> another reason it would still be useful if it is apache specific is that many of them can be written, and then as apache improves, the users see those improvements
<pindonga> james_w, wsgi is, but you still need an apache-modwsgi one
<hazmat> pindonga, i tend to think things like wsgi containers are best left to the charm not a subordinate.. the charm has to pick one to be functional anyways.
<pindonga> so apache-modwsgi is probably the right combination for a subordinate
<pindonga> hazmat, the idea was to avoid repeating commonly performed stuff
<james_w> pindonga, you could have a "wsgi" interface, which your app would provide, and then the apache charm could require that interface to serve your app via wsgi
<pindonga> every app server that you deploy using apache+modwsgi follows the same steps
<hazmat> pindonga, anyways.. that's my op.. there's also a gunicorn subordinate
<pindonga> except for the config itself
<m_3> james_w: like that
<james_w> or nginx could require the interface
<pindonga> james_w, interesting
<hazmat> pindonga, http://jujucharms.com/~patrick-hetu/precise/gunicorn
<SpamapS> hazmat: but without inheritance, we have no way to improve general apache+wsgi charms other than subordinate/primary relationships
<m_3> similarly for a 'rack' interface
<pindonga> james_w, so apache would be a subordinate charm to my app, in that case right?
<james_w> pindonga, other way around
<pindonga> as I need it to be deployed in the same container
<SpamapS> I think we will eventually see charm inheritance, and thats how this will work
<pindonga> james_w, if it's the other way, would I then juju deploy apache? (/me is at a loss)
<pindonga> SpamapS,+1
<SpamapS> you'll just write  'my-sexy-django-app' which extends: django-app which extends: apache-wsgi-app
<james_w> the basics would be "wsgi script, working dir, user, processes, threads"
<m_3> can't wait for that
<SpamapS> funny juju might be the first language to implement interfaces *before* inheritance
<james_w> which is pretty webserver-agnostic, and any extras would require something else
 * SpamapS hits the rimshot button
<hazmat> SpamapS, inheritance doesn't quite feel write either, ie. i don't want anything using apache mod wsgi, so perhaps subordinates give the choice..
<hazmat> s/right
<hazmat> at least for my personal env deploys
<james_w> pindonga, yeah, juju deploy apache, then juju deploy your app as a subordinate of apache, related using the wsgi interface
<SpamapS> hazmat: we need to *end* this idea that personal preference is a good idea in ops
<SpamapS> hazmat: there's a best way. Thats the way the charm works.
<SpamapS> Measure it
<m_3> pindonga: it'd be `juju deploy --config=wsgi.yaml apache; juju deploy --config=myapp.yaml django; juju add-relation apache:wsgi django`
<SpamapS> implement it
<SpamapS> and stop this nonsense that op A and op B can both be "right"
<hazmat> SpamapS, and patch security ;-)
<hazmat> ie maintain it
<SpamapS> I'm just saying.. the idea with the charm store is that we all can actually agree on the best way to do WSGI apps
<hazmat> SpamapS, i agree, but unfortunately my best way is not the same as everyone elses ;-)
<hazmat> SpamapS, gunicorn + nginx
<hazmat> or gunicorn + varnish
<SpamapS> right, the charm should just pick one
<SpamapS> what matter is it how my wsgi app works.. somebody else who cares more about that figured it out. :)
 * SpamapS said pantomiming a hypothetical wsgi app developer
<m_3> with a beret
 * SpamapS cues the bass and snare
<newz2000> m_3: (or anyone) we could use some help with this /cc fugue88
<newz2000> http://paste.ubuntu.com/1050957/
 * hazmat turns up the django 
<newz2000> it's not successfully running syncdb, (line 45)
<newz2000> and it's also not re-running
<james_w> newz2000, I'd check -n $password too
<m_3> newz2000: so that's `db-relation-changed` right?
<newz2000> m_3: yes
<newz2000> and we did actually get it working so that it gets to line 46
<newz2000> 45
<hazmat> newz2000, are you installing django via packages or virtualenv?
<newz2000> and then it stops and doesn't retry
<Beret> m_3, hey now, leave berets out of it!
<m_3> perhaps around line 6 you should bail if
<hazmat> :-)
<m_3> haha
<newz2000> hazmat: package
<m_3> Beret: sorry :)
<Beret> :)
<james_w> newz2000, do you know what it errors with?
<newz2000> james_w: yes...
<james_w> newz2000, also, are you adding this to the existing graphite charm?
<m_3> newz2000: perhaps around line 6 you should put an other-side's-not-up guard?... lemme finish reading
<newz2000> psycopg2.OperationalError: could not connect to server: Connection refused
<newz2000>         Is the server running on host "192.168.122.75" and accepting
<newz2000>         TCP/IP connections on port 5432?
<fugue88> m_3: What would we do in that guard?  Busy-wait?
<m_3> newz2000: nevermind
<newz2000> james_w: we're doing an exercise to learn charming (ISD team srpint)
<newz2000> we made this charm yesterday, now adding db relationship to it
<newz2000> (it works fine when using local sqlite)
<james_w> I wouldn't have started with graphite :-)
<newz2000> james_w: +1
<james_w> newz2000, why are you writing the credentials to /local_settings.py too?
<newz2000> debugging
<james_w> ok
<newz2000> Interestingly, if we do juju ssh … then run syncdb without changing anything, it works fine.
<fugue88> Under what circumstances, exactly, will juju retry a hook?  One part of the docs seems to indicate that if the hook exits non-0, it will be retried.  We don't see that.  Another part states that updating relation settings and exiting 0 will cause a retry.  We could do that, but we don't really have any settings we *need* to communicate to the other side of the relation.
<newz2000> btw, fugue88 and I are pair programming this
<fugue88> Also, the docs aren't clear about *what* will be retried.  Only the *other* side, or both sides?
<m_3> fugue88: the relation hooks keep re-firing as long as either side keeps relation-set'ing
<fugue88> m_3: If side A relation-set's, both A's hook and B's hook will be retried?
<m_3> fugue88: any non-zero exit from a relation at any time will just error out the relation (I think)
<james_w> fugue88, if they exit non-zero they won't be retried until a human intervenes
<fugue88> Good to know about the non-0.
<james_w> fugue88, if a relation-set is done then any relation-changed hooks will be fired for any relations that aren't in an error state
<fugue88> Great!
<james_w> fugue88, I don't believe anything just gets "retried"
<m_3> fugue88: A relation-sets, then B fires and relation-gets... if B doesn't relation-set, then A will not be fired again
<fugue88> Oh.
<fugue88> Not great.
<james_w> fugue88, as in, you exit 0 early if the data you need isn't there yet, on the assumption that the data will be set later
<newz2000> is there a convention for a relation-set that indicates "I'm not connecting, tell me when to try again"?
<fugue88> So, if the relation data is there, but the other side isn't actually ready, we'll need to busy-wait in the hook.
<james_w> so if you relation-get user and it is "" then you exit 0 on the assumption that the other side hasn't done relation-set user yet
<james_w> then when it does that relation-changed will be called again
<james_w> fugue88, the other side shouldn't set all the data until it is up
<m_3> if the other side takes an hour.. and _then_ relation-sets... you'll get run again
<james_w> or it should make sure to do a relation-set when it is up
<fugue88> james_w: Okay, we might have to take a look at the postgresql charm.
<fugue88> Thanks!
<james_w> but there's nothing that will cope well if there is a network hiccup during relation-changed
<m_3> this dance is different with different "interfaces"... pgsql creates the db and creds on the first join... then just hands it back
<m_3> newz2000: one thing to look at is the pg_hba.conf on the pg side
<newz2000> m_3: we have no prob connecting after it fails
<fugue88> hba is fine in this case.
<fugue88> Or, I should say...
<fugue88> becomes fine.
 * SpamapS wishes he could focus fully on this discussion right now
<m_3> ok cool... we've had problems setting that correctly in the past
<fugue88> Maybe there's a delay where the pg charm sets the relation stuff before its HUP of postgres has finished.
<fugue88> We'll look.
<SpamapS> fugue88: you can do relation-set's and relation-get's for other unrelated relationships, you don't have to busy wait
<SpamapS> fugue88: so if you're in a relationship that gets a change, and you don't have some other relationship established yet that you need to send that data down or use it for something.. you can just defer
<SpamapS> fugue88: right now that deferring is manual.. but its fairly straight forward how to do it. In the future I think we'll have a way to tell juju "defer this hook" and it will just retry it again
<m_3> fugue88 newz2000: the pg charm is basic... please help improve it!  i.e., great project post-sprint would be to add in pg9 replication :)
<newz2000> m_3: you're speaking our language
<newz2000> we're testing now, if we get it working as we think it will then we'll look at that charm next
 * SpamapS goes afk bbiab
<SpamapS> please lets continue this discussion later though
<SpamapS> killing me to go right now!
<newz2000> ok, we have it working
<newz2000> we put an until syncdb sleep 1 in there
<newz2000> and it works, so that implies the charm needs to be a bit smarter
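newz2000's "until syncdb sleep 1" workaround can be written with a cap, so a genuine failure still errors the hook instead of spinning forever (a sketch; the syncdb invocation is illustrative):

```shell
#!/bin/sh
# retry a command until it succeeds, up to a limit; useful when
# postgres is still restarting as the relation hook fires
retry() {
    # retry MAX_TRIES CMD... -> fails if CMD never succeeds
    tries=$1; shift
    i=0
    until "$@"; do
        i=$((i + 1))
        if [ "$i" -ge "$tries" ]; then
            return 1
        fi
        sleep 1
    done
}

# a real db-relation-changed might do:
#   retry 30 python manage.py syncdb --noinput
```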
<m_3> hmmmm
<m_3> newz2000: hey, so why do you exit 0 on syncdb fail?  that seems like a real error condition and should fail the relation
<newz2000> m_3: that was a misunderstanding of the docs on our part. We thought exit 0 meant it would get tried again soon.
<m_3> ah gotcha
<newz2000> we think that could be a bit clear. :-)
<newz2000> clearer
<m_3> yup... merging docs today as a matter of fact
<newz2000> m_3: what is the best way to get the postgres charm in order to propose a patch (if we're able to improve it)?
<m_3> newz2000 fugue88: btw, please take meta notes about stuff like that... how we can improve the learning process.
<newz2000> ok
<m_3> newz2000: just branch it to a personal branch... then propose for merging.  the source is lp:charms/postgresql and your personal branch should be in the format of lp:~<lp-id>/charms/precise/postgresql/<some-branch-name>
<newz2000> m_3: cool
<m_3> newz2000: you might check to see if we did anything special for syncdb timing in lp:~mark-mims/charms/oneiric/summit/trunk ( derived from lp:~michael.nelson/charms/oneiric/apache-django-wsgi/trunk ).  It might've just been that running other manage.py commands added the necessary delay accidentally
<newz2000> m_3: ok
<newz2000> we've found at least one potential optimization for this charm
<m_3> newz2000: that's definitely a best practice... get it talking then iterate
<jcastro> m_3: if you could find time to review this today I can totally remove that wiki page: https://code.launchpad.net/~jorge/juju/add-charm-store/+merge/110854
<m_3> jcastro: sure thing... I'll bump that up to do it next
<m_3> SpamapS: when you get a chance, let's talk about what to do with charms that could be either a primary or a sub... seems silly to fork if the only change is `subordinate: true`
<m_3> or at least let's put it on the policy todo list
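For context, the fork m_3 wants to avoid differs only in metadata.yaml. A hedged sketch of that era's charm format (the name, summary, and juju-info relation are illustrative):

```yaml
# metadata.yaml -- the only delta between the primary and subordinate
# variants of the same charm (field values here are illustrative)
name: juju
summary: deploys a juju client
subordinate: true        # omit this (and the container-scoped
                         # relation below) for the primary variant
requires:
  host:
    interface: juju-info
    scope: container     # subordinates attach via a container-scoped relation
```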
 * imbrandon returns
<imbrandon> wow leave for an hour and yall start a party :)
<imbrandon> hazmat: ~juju/juju/docs is not right, it should be lp:juju/docs
<imbrandon> that's where you keep moving merge proposals, but that branch is very old, owned by ~juju-hackers, and not what the docs are built from
<_mup_> Bug #1015637 was filed: Allow one charm to be either a primary or a subordinate service <juju:New> < https://launchpad.net/bugs/1015637 >
<newz2000> m_3: improvement to the postgresql charm: https://code.launchpad.net/~dsowen/charms/precise/postgresql/reload/+merge/111247
<newz2000> works for us
<newz2000> m_3: also, I think the reason you didn't experience our issue with the summit charm is because you're doing enough work in your hook to not notice the delay for postgresql to restart
<imbrandon> mornin newz2000
<newz2000> howdy imbrandon
<hazmat> imbrandon, those are the same
<hazmat> imbrandon, the former is the actual target, the latter an alias to it
<imbrandon> hazmat: nah
<imbrandon> hazmat: Diff against target:3147 lines (+2411/-174) 48 files modified
<imbrandon> is the former
<imbrandon> and actually its a 1 file change with 100ish loc
<imbrandon> maybe it should be, but it's not
<imbrandon> but if it should be then it needs to be owned by ~charm-contributors too not ~juju-hackers
<m_3> newz2000: cool thanks!  processing the queue today, so I should get it into the store version
<imbrandon> hazmat: @see https://code.launchpad.net/~jorge/juju/add-charm-store/+merge/111250
<_mup_> Bug #1015644 was filed: Users told when deploy actually completes <juju:New> < https://launchpad.net/bugs/1015644 >
<_mup_> Bug #1015645 was filed: juju set not firing config-changed when passed a yaml file <juju:New> < https://launchpad.net/bugs/1015645 >
<newz2000> m_3: cool
<imbrandon> Diff against target:156 lines (+140/-0) 2 files modified
<m_3> can somebody verify Bug #1015645 when you get a chance pls?
<_mup_> Bug #1015645: juju set not firing config-changed when passed a yaml file <juju:New> < https://launchpad.net/bugs/1015645 >
<m_3> make sure I'm not going nuts
<surgemcgee> Sure am getting this a lot -->
<surgemcgee> operation timeout 2012-06-20 11:53:01,900 ERROR operation timeout
<surgemcgee> juju debug-log
<_mup_> Bug #1015649 was filed: "Unable to create file storage for environment" on 'juju bootstrap' in LXC environment <juju:New> < https://launchpad.net/bugs/1015649 >
<imbrandon> hazmat or jcastro can you delete the first merge proposals ? i dont have access to
<m_3> imbrandon: are these for docs?
<imbrandon> m_3: yea i was fixing a merge proposal for jorge and going over the target with hazmat
<SpamapS> m_3: back
<m_3> imbrandon: i.e., please let me know if any MPs in the queue need to be ignored
<SpamapS> m_3: whats an example of a charm that is primary or sub?
<m_3> SpamapS: juju for one
<SpamapS> err?
<imbrandon> SpamapS: mysql
<SpamapS> imbrandon: that is a placement issue, not primary/sub
<m_3> I use it all the time and I've copied it out to juju and juju-standalone
<SpamapS> m_3: can you be more clear?
<m_3> SpamapS: juju is a lxc container installer
<m_3> lp:charms/juju
<m_3> sorry for the "who's on first" aspect of that :)
<m_3> SpamapS: I test lxc by spinning up the 'juju' charm in ec2
<m_3> SpamapS:  but I also do classrooms by subbing juju to byobu-classroom
<m_3> SpamapS: and charmtesting is migrating to be just some jenkins stuff with a juju sub for testing
<_mup_> Bug #1015654 was filed: debug-log over-rates severity of stderr output <juju:New> < https://launchpad.net/bugs/1015654 >
<imbrandon> m_3: what about juju on ubuntu
<m_3> imbrandon: right
<m_3> meta meta
<imbrandon> heh , kinda funny to say but yea
 * m_3 is thinking of "Duuuude"... "Sweeeet"... but what does it say?
<imbrandon> heh
<SpamapS> m_3: right, so I think the answer here is "runtime subordination"
<SpamapS> m_3: we need to add a simple  --subordinate to deploy
<_mup_> Bug #1015655 was filed: Repeated blank lines during debug-log output <juju:New> < https://launchpad.net/bugs/1015655 >
<m_3> SpamapS: actually to be more precise, juju isn't an lxc container installer... it's a juju client.. env is a config param
<m_3> SpamapS: yeah, I'd _love_ that
<SpamapS> m_3: we can of course fake that w/ jitsu, but this seems simple and straight forward and worth discussing w/ the juju core dev team.
<m_3> SpamapS: yeah, that's why I filed it as a real bug... it's really worth it for users going forward
<imbrandon> m_3: re-ignoring one, 110854
<imbrandon> i just can't change the status, it's superseded
<SpamapS> Of course.. the original --with and --in ideas would have handled this wonderfully. :-P
<m_3> imbrandon: looking
<m_3> SpamapS: true
<_mup_> Bug #1015657 was filed: write-charm doesn't mention the need to bump version numbers when iterating charms <juju:New> < https://launchpad.net/bugs/1015657 >
<imbrandon> brb nephew showed up, but it's superseded by 111250
<imbrandon> m_3: ^^
<m_3> imbrandon: whoah... that's the one jorge was just asking me to fasttrack
<imbrandon> yea, its target was wrong
<imbrandon> 111250 is correct
<imbrandon> and the same merge
<imbrandon> see how it's 156 loc, that's what it should be
<imbrandon> :)
<imbrandon> brb
<jml> oh hey, there's a bug announce bot.
<jml> just filed bugs based on my IRC backlog in case any of you wanted to keep track of what was causing me to stumble.
 * jml has to go now
<jml> looking forward to getting this darn thing deploying soon.
<m_3> imbrandon: ok, I see that one
<m_3> imbrandon: thanks
<SpamapS> damn
<SpamapS> so I missed all the good stuff?
<SpamapS> :(
<m_3> SpamapS: sounds like newz2000 and fugue88 got django working... and have some updates for pgsql!
<SpamapS> great!
 * m_3 is hoping somebody who knows something about pgsql can be the maintainer at some point
<newz2000> SpamapS: yes, we've got graphite, a Django app running
<SpamapS> I'm working on writing up the charm store policy in a .rst document today
<newz2000> pgsql obviously hasn't gotten the same level of love that mysql has
<SpamapS> newz2000: heh, are you saying mysql looks a little.. used? ;)
<newz2000> well, it is on rev 121 vs 17 for pg. :-)
<m_3> newz2000: yeah, I wrote it by default... never used pgsql except for playing around
<newz2000> we're moving to use juju more and more in ISD and we deploy with pg wherever possible.
<newz2000> So I suspect it will mature
<m_3> newz2000: I was always mysql (and sqlserver, but shhhhhh)
<SpamapS> newz2000: I imagine the most important addition you guys can make would be a pgsql-shared interface so django apps can share a db instead of having the db named after the service
<m_3> and then replication :)
<m_3> that's huge and missing
<newz2000> ok, I'll keep that in mind.
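SpamapS's pgsql-shared suggestion would surface in the postgresql charm's metadata as a second relation. A hypothetical sketch, naming the relation by analogy with the mysql charm's shared-db:

```yaml
# Hypothetical metadata.yaml fragment for the postgresql charm
provides:
  db:
    interface: pgsql          # existing: one database named after the service
  shared-db:
    interface: pgsql-shared   # proposed: several apps share one database
```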
<newz2000> Tomorrow is SSO charm writing and it will use pg as well.
<m_3> then externalizing performance tweaks into the charm config
<m_3> (that might be an easy change to start with)
<SpamapS> isn't there a new scale-out pgsql that was published recently?
<SpamapS> m_3: indeed, it needs similar tuning capabilities to the mysql charm
<m_3> SpamapS: afaik there were several solutions in pg pre-9 but now a standard one in 9
<SpamapS> I really wish add-unit on the mysql charm did something useful... like throw up galera replication automatically
<SpamapS> m_3: No, something better than even that
<SpamapS> hazmat: whats the new pgsql cluster thing?
<hazmat> SpamapS, postgres-xc
<hazmat> SpamapS, its not ha though
<hazmat> its linear scaling
<hazmat> but since its shared nothing, you need backups of each member of the cluster
<SpamapS> Postgres-XC (eXtensible Cluster) is a multi-master write-scalable PostgreSQL cluster based on shared-nothing architecture
<SpamapS> sounds HA to me
<hazmat> SpamapS, shared nothing, no replication
<SpamapS> so it doesn't do the HA..
<SpamapS> but it would be trivial to make it HA
<SpamapS> its RAID0 ..
<SpamapS> replication is RAID1
<SpamapS> so.. we need a postgresql-raid10 :)
<imbrandon> heh
<hazmat> SpamapS, re pg we should focus on streaming replication options
<hazmat> postgres-xc is a different beast
<SpamapS> hazmat: I want both
<SpamapS> We should be able to do XC as a peer relation, and streaming replication as a provides/requires
<imbrandon> just use mysql and be done :)
<hazmat> SpamapS, me too.. but since we have neither.. we should get pg doing something decent first.. since its actually used by many.
<SpamapS> its used by nobody right now
<SpamapS> in the context of juju :)
<SpamapS> (unless people are adopting juju in secret)
<hazmat> SpamapS, who uses that ? ;-)
<imbrandon> heh actually there are, i see tweets about internal juju deployments all the time
<hazmat> anyways.. i'd vote for spiffing up the pg charm with replication first.
<hazmat> but the do-ers have vetoes
 * hazmat lunches
<imbrandon> hazmat: so how should we fix the docs targets, looking at the review queue, they are all targeted wrong
<hazmat> imbrandon, i'm not clear that they're broken
<hazmat> imbrandon, this one is correct.. https://code.launchpad.net/~jorge/juju/add-charm-store/+merge/111250
<hazmat> but i guess you fixed it
<hazmat> imbrandon, so in general on those we should repropose
<imbrandon> yea that's the only one
<hazmat> that's what i've been doing in the past
<imbrandon> ok cool, just wasn't sure
<imbrandon> i can do that for those in the queue now
<hazmat> it sometimes takes too long to ask the original author to do so, just note it in the merge proposal.. the review queue still shows the correct origin person/branch
<hazmat> imbrandon, if its in the review queue it should be correct..
<imbrandon> yea
<imbrandon> no
<hazmat> hmm..
<hazmat> so this one https://code.launchpad.net/~jorge/juju/add-charm-store/+merge/110854
<imbrandon> there is a lot in the queue wrong, in fact the one you showed me that i fixed is the only right one
<imbrandon> yea, see the huge diff
<hazmat> imbrandon, but its correctly done
<hazmat> imbrandon, its a branch of docs targeting docs
<imbrandon> ~juju/juju/docs isn't right, it's a failed branch and needs to be removed
<imbrandon> it doesn't point anywhere
<hazmat> oh..
 * hazmat gets it now
<hazmat> imbrandon, right
<imbrandon> yea :)
<hazmat> the lp:juju/docs doesn't point there anymore it goes to ~charm-contributors/juju/docs
<_mup_> Bug #1015682 was filed: docs... 'make text' fails <juju:New> < https://launchpad.net/bugs/1015682 >
<imbrandon> ahh ok, that makes sense
<imbrandon> ok sooo ...
<imbrandon> heh
<hazmat> so we should correct ones that show up like that hopefully not too many.. they won't show  up in the review queue unless they target 'docs'.. i need to grab my takeout, bbiab
<imbrandon> kk
<imbrandon> m_3: make text works here
<imbrandon> toctree is part of sphinx, do you have a really old version ?
<imbrandon> 1.1.3 i see , hrm
<imbrandon> oh that's ~juju/juju/docs
<imbrandon> m_3: wrong branch
<m_3> imbrandon: huh
<m_3> imbrandon: that bug was against trunk bzr10... before any merges
<imbrandon> yea thats a really old bzr branch
<imbrandon> we're on like bzr39
<imbrandon> or something
<imbrandon> ~charm-contributors/juju/docs
<imbrandon> m_3: all these merge proposals are targeting the wrong bzr branch by mistake, they should target ~charm-contributors/juju/docs or lp:juju/docs , not ~juju/juju/docs
<m_3> imbrandon: ah, ok... I was starting from the oldest MPs and going forward
<m_3> imbrandon: thanks
<imbrandon> yea no matter what the MP says do it against lp:juju/docs
<m_3> gotcha
<imbrandon> and it should be correct
 * imbrandon is trying to get it all straight :)
<SpamapS> we should fix all the warnings on the juju docs
<_mup_> Bug #1015704 was filed: docs:  empty the drafts folder on production <juju:New> < https://launchpad.net/bugs/1015704 >
<SpamapS> :make html in vim has quite a lot to say ;)
<imbrandon> never tried it in vim, but yea that was one of my goals SpamapS , over time :)
<imbrandon> 146ish of them or something
<SpamapS> imbrandon: hey, the new theme.. it doesn't have any way to navigate sections
<SpamapS> imbrandon: thats pretty vital, you adding it back in?
<imbrandon> i thought of that, and am looking at it
<imbrandon> was actually what i was doing now
<imbrandon> didn't seem to be missing that much tho honestly, after looking it over more, but yea i'm testing different ways now
<imbrandon> SpamapS: you are talking about this http://www.assets-online.com/docs/juju/index.html
<imbrandon> right ?
<SpamapS> no
<SpamapS> imbrandon: the thing thats being built right now in lp:juju/docs
<imbrandon> oh then yea, i added the tree back in
<SpamapS> unless thats been updated
<imbrandon> its been updated
<SpamapS> I haven't pulled in a day or two
<SpamapS> well then YAY
<imbrandon> check that link, thats what it will look like next build
<SpamapS> imbrandon: awesome!
<SpamapS> imbrandon: wait, still no sections
<imbrandon> i thought it was building every 15 minutes but something is broken or it isn't
<SpamapS> thats the toc
<SpamapS> I need the sections of the document
<imbrandon> yea and that's what i was referring to, what i'm working on right now to add in somehow
<imbrandon> but really there are one or two toc items that really need it, and i'm wondering if they don't need breaking up instead
<m_3> SpamapS: fixing some warnings now
<imbrandon> but yea working on it now
<m_3> imbrandon: if you're playing with layout you probably should be doing that on a separate feature branch and not trunk
<imbrandon> SpamapS: see https://juju.ubuntu.com/docs/faq.html sections there
<SpamapS> imbrandon: yeah, thats pretty awful. ;)
<SpamapS> and agreed
<imbrandon> m_3: layout is done, just a one char change to add sections but yea
<SpamapS> playing on trunk is a no-no
<m_3> so what do we do right now... I'm halfway through a stack of merges
<m_3> should I stop and let trunk settle?
<imbrandon> no
<imbrandon> i am not pushing anything
<m_3> or keep going and let imbrandon deal with conflicts
<imbrandon> yea go go, i'm off in my world
<m_3> imbrandon: ok
<imbrandon> i'll fix it if you step on my toes accidentally
<imbrandon> :)
<imbrandon> but yea its just a one char change to add sections back in, but i'm working locally etc etc, thus the assets-online domain to visualize
<imbrandon> etc
<imbrandon> so yea dont pay me no mind m_3 go go
<imbrandon> SpamapS: btw uploading nginx+spdy to ppa here in a few
<imbrandon> have it all working
<koolhead17> bkerensa_, around
<SpamapS> imbrandon: nice, so then we can just add an 'nginx-website' interface that lets subs drop a file in /etc/nginx/sites-enabled, and away we go?
<imbrandon> yup basically
<SpamapS> imbrandon: I love that. That should make scaling *php* quite easy. :)
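A rough sketch of what an nginx-website relation hook could do: the subordinate writes a site file and triggers a reload. `NGINX_DIR` and `RELOAD` default to safe stand-ins so the sketch runs anywhere; on a real unit they would be `/etc/nginx/sites-enabled` and `service nginx reload`, and the server_name/root values below are made up:

```shell
#!/bin/sh
# Sketch of an "nginx-website" hook: drop a site file, then reload nginx.
# Defaults are harmless stand-ins so this can run outside a real nginx box.
NGINX_DIR=${NGINX_DIR:-./sites-enabled}
RELOAD=${RELOAD:-"echo would reload nginx"}

mkdir -p "$NGINX_DIR"
cat > "$NGINX_DIR/mysite" <<'EOF'
server {
    listen 80;
    server_name mysite.example.com;
    root /srv/mysite;
}
EOF
$RELOAD
```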
<bkerensa_> koolhead17: sup?
<imbrandon> SpamapS: how do you squash a commit in bzr ?
<SpamapS> imbrandon: bzr uncommit
<SpamapS> imbrandon: though that will make your branch diverge
<imbrandon> erm
<imbrandon> not good
<imbrandon> i just really want to append to the last commit
<SpamapS> imbrandon: if you just want something like git revert, I believe you just do it like svn and reverse merge
<lifeless> imbrandon: SpamapS: squash makes git branches diverge too :)
<lifeless> imbrandon: uncommit is the right thing, long as its a local branch vs e.g. trunk.
<SpamapS> oh is squash something in git ??
<imbrandon> yea
<SpamapS> Ok, never used it
<lifeless> SpamapS: http://365git.tumblr.com/post/4364212086/git-merge-squash
<SpamapS> still fighting tooth and nail to not have to learn git until the rest of the world sucks me in. :-P
<imbrandon> heh, hey now i'm _trying_ with bzr :)
<imbrandon> lol
<imbrandon> ok so i want my working copy's uncommitted changes added to my last commit
<imbrandon> uncommit is the right thing ?
<hazmat> just add a new commit
<imbrandon> heh yea, sounds like it
<lifeless> imbrandon: yes, uncommit will pop off the commit and not change your working copy at all.
<lifeless> imbrandon: then you can commit again, and everything gets saved
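lifeless's uncommit-then-recommit sequence, as a dry-run sketch (`bzr` is shadowed with an echo stub so the sequence can be previewed without a real branch):

```shell
#!/bin/sh
# Fold working-tree edits into the previous commit: uncommit pops the last
# commit but leaves the working tree untouched, then a fresh commit captures
# old and new edits together. Only safe on a local branch nobody pulls from.
bzr() { echo "+ bzr $*"; }       # echo stub for the dry run

bzr uncommit                           # pop the last commit, keep the tree
bzr commit -m "change plus follow-up"  # re-commit everything in one go
```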
<imbrandon> k
<hazmat> bzr doesn't want to support the whole rewrite commit history workflow..
<lifeless> hazmat: well
<hazmat> for good reason..
<hazmat> lifeless, i know.. it can.
<lifeless> hazmat: thats not strictly true.
<hazmat> but that's not the intended usage model
<imbrandon> yea ... with great power ... bah , adding commit
<imbrandon> ;)
<hazmat> lifeless, are you saying it does want to support it.. or just noting that it can?
<lifeless> hazmat: https://lists.ubuntu.com/archives/bazaar/2009q2/059263.html
<hazmat> lifeless, incidentally do you know if the multi-branch-in-a-single-dir support work in bzr is basically on hold?
<lifeless> hazmat: 3-mumble years ago I analysed the tasks users need to do with their VCS w.r.t. history management and came to the conclusion that history editing is a core facility for a VCS
<lifeless> hazmat: the team are currently holding the fort for maintenance across the LP portfolio, so yes. Patches would be loved - it works but needs some UI polish basically.
<lifeless> hazmat: I would, if doing the analysis now, create and reference some personas, to make it more concrete.
<lifeless> but I think the conclusions would have been the same. I wish I'd pushed harder subsequent to that doc, now.
<hazmat> lifeless, thanks for the link, digesting
<SpamapS> http://spamaps.org/files/charm-store-policy/policy.html
<SpamapS> A rough draft, for your reading pleasure
<hazmat> ideally we expand the toc tree out on the current section
<hazmat> and that goes under charms
<imbrandon> hazmat: just proposed that merge
<imbrandon> https://code.launchpad.net/~imbrandon/juju/docs/+merge/111281
<imbrandon> :)
<hazmat> imbrandon, nice!
<SpamapS> imbrandon: hah, that charm-store.rst ... I just took all the 'musts' from that and put them in the policy.rst that I'm working on
<imbrandon> heh
<imbrandon> and in fact m_3 just merged it, he's on it :)
<SpamapS> Yeah thats fine, I'll pull them out of it before I commit to trunk
<hazmat> doh.. here i was bothering to approve it ;-)
 * SpamapS grabs lunch
<imbrandon> :)
<imbrandon> SpamapS: thats still a good idea i think
 * imbrandon gets food as well
 * m_3 too
<jcastro> m_3: I didn't even notice the drafts folder, heh
<lifeless> hazmat: checking you got my reply; maybe I'm not signed in to freenode or something
<hazmat> lifeless, just saw it now
<lifeless> cool
<m_3> bcsaller: yo
<bcsaller> m_3: hey
<m_3> bcsaller: hey, so in lp:juju/docs
<m_3> there's a drafts/subordinate-internals.rst
<m_3> that's referencing a relative path subordinate-services.rst
<m_3> but the latter got moved to toplevel
<m_3> can you please tell me what should go where?  and/or give me a branch?
<m_3> happy to file a bug if you'd rather... just trying to clean docs up today
<bcsaller> m_3: internals are just impl details, the other sub-serv is the user facing doc
<m_3> should the draft/sub-internals be moved to internals/?
<bcsaller> m_3: yeah, now that its in trunk it should
<m_3> i.e., is any of that still WIP or is it all released?
<m_3> ok cool
<m_3> gracias!
<bcsaller> :)
<robbiew> jcastro: yo...dumb/out-of-touch question...where are we hosting the juju client rpms?  Is the plan still to leverage the OpenSUSE Build Service?
<jcastro> https://github.com/jujutools/rpm-juju
<imbrandon> robbiew: github atm, in the same group as the osx formula, and yes
<imbrandon> :)
<robbiew> imbrandon: heh...thanks jcastro2.0
<imbrandon> i'm happy to shift things around if a better spot is deemed, just figured i'd keep the "ports" together
<robbiew> imbrandon: nah..just curious
<robbiew> I know the OBS can build for other distros, so just an "easy" way to support multiple RPM-based clients
<imbrandon> yup, even builds right from git, someone suggested it like 5 min after i posted em
<imbrandon> i was like erm , nice
<imbrandon> in fact i should put the OSX and RPM docs in /docs today
<jcastro> looks like a nice thing for the next set of builds
<jcastro> hmm I dig that new style you guys linked to too
<jcastro> also, I'm going to take a crack at that About page in the docs.
<imbrandon> :)
<jcastro> it's basically the most boring three paragraphs I've ever read in my life.
<imbrandon> jcastro: see how i did the front page and you can jazz it up
<imbrandon> with plain html
<imbrandon> for pictograms and such
<m_3> imbrandon: got some other additions to the toc in a sec
<imbrandon> m_3: cool, i only had that one merge
<m_3> relation stuff
<imbrandon> for now
<imbrandon> so i'm back hands off for a few, was thinking about combining the provider config pages and adding the OSX/RPM pages
<imbrandon> but i'll do them as merge req's
<jcastro> hmm yeah
<jcastro> a separate page for OSX and RPMs would be useful
<m_3> we had some stuff in drafts that needed to go out toplevel
<imbrandon> yea basically the info i have on the github pages
<m_3> then it needs just overall love
<imbrandon> :)
<m_3> jcastro: yeah, that's a good idea
<jcastro> heya hazmat
<imbrandon> i'll start a draft for it now, "ports" unless someone has a better name
<jcastro> do you have powers to kick off the doc generating cron on demand?
<jcastro> like say ... nowish?
<jcastro> robbiew: http://www.hanselman.com/blog/ManagingTheCloudFromTheCommandLine.aspx
<jcastro> there's a link to the cool node tools for azure btw.
<imbrandon> yea their account mgmt on the cli is pretty sweet, i was thinking about how to add that to juju-jitsu
<robbiew> jcastro: ack, thx
<imbrandon> azure import account.json --> jitsu import environments.json :)
<imbrandon> SpamapS: ^
<jcastro> I want that kind of syntax for AWS'es tools
<jcastro> not "oh in order to make that easy you should grab smoser's scripts from git."
<m_3> jcastro: give the doc gen a couple of minutes
<jcastro> ok
<jcastro> m_3: man, everyone is excited about docs today
<jcastro> people must be bored
<imbrandon> LOL
<imbrandon> its my new sexy theme, well about to be new sexy theme when it builds :)
 * imbrandon ducks
<hazmat> jcastro, no..
<hazmat> jcastro, that's a prod machine run by IS
<m_3> jcastro: docs were just big on the review queue
<m_3> jcastro: yeah, I don't see an obvious way to kick it off from lp
<jcastro> it's no biggie I think
<m_3> jcastro: well all the queued items are in... please pull and take a look locally when you get a chance.  it still needs love across the board as far as content and organization is concerned
<hazmat> jcastro, for a preview, there is a cron job i can/have poked at http://jujucharms.com/docs
<m_3> hazmat: just updated like two minutes ago... poke again after a bit pls
 * m_3 back to charms
<jcastro> oh cool
<hazmat> m_3, just switched out to 10m pulls
<jcastro> m_3: I'm hoping for elastic search to make it!
<jcastro> though the queue says 11 months
<jcastro> I think it's measuring from the bug being filed
<jcastro> not the branch being attached
 * jcastro goes to file a bug
<hazmat> jcastro, not all of the bugs have branches attached..
<m_3> jcastro: yeah, it's age of bug
<hazmat> some just drop it in the description or a comment
<jcastro> right
<jcastro> but we don't care to review it unless there's a branch attached
<hazmat> imbrandon, one more suggestion for the navtree, make it perm re screen pos, else you can click on it, and lose it if you subnavigate something
<jcastro> like if I filed a wishlist a year ago, and someone attaches a branch today, the age should be one day, not one year and one day.
<hazmat> jcastro, there is a branch noted on the bug just not attached.. its fairly common
<imbrandon> ahh yea, i actually have that in the css and turned it off, wasn't sure
<imbrandon> hazmat: ^
<hazmat> imbrandon, cool, i think it makes sense, that whole area isn't being used, and navtree-follows-focus makes sense i think
<imbrandon> yea, it made some funky flicker tho, but i can fix that i'm sure, i'll toy with it tonight
<hazmat> thanks
<m_3> jcastro: yeah, if there's a bug last modified time
<m_3> jcastro: otherwise we can make up some sort of time since picked up into the queue
<m_3> then that'd catch when it's removed from the queue b/c of status change
<m_3> and the clock starts over when the status puts it _back_ in the queue...
<jcastro> ok I'll file that
<imbrandon> see we could easily make this the front page of juju.u.c tho :)
<imbrandon> it really has all the info etc
<hazmat> m_3, yeah.. last modified works a bit better
<hazmat> m_3, the queue doesn't keep state
<hazmat> although modified has its own problems
<m_3> right
<m_3> it's good enough
<imbrandon> hazmat: ohh i see what you mean about the subnav
<imbrandon> hrm, thats a bit more tricky
<SpamapS> jimbaker: hey, how is jitsu watch supposed to work? I want to wait for agent-state: started ...
<imbrandon> because it cant be fixed if you scroll, but needs to be fixed if you click
<SpamapS> ahh wait
<SpamapS> --state
<imbrandon> hazmat: hrm, actually bootstrap scrollspy.js would work i think
<jimbaker> SpamapS, correct
<jimbaker> which also will apply  --num-units
<jimbaker> =1 if you're watching a service
<jimbaker> if you're using the latest proposed branch
<m_3> SpamapS: --help is pretty verbose
<m_3> jitsu watch -h
 * m_3 is excited about the test possibilities that opens up
<SpamapS> yeah
<m_3> jimbaker: I'm curious to see what you did to make it play so nicely with shell signals
<SpamapS> we need to be able to wildcard the units
<m_3> jimbaker: it responds so nicely to timeout
<m_3> jimbaker: I wish we could do the same with some other juju commands (I mess up juju ssh all the time and try to ctrl-c it to no avail)
<imbrandon> hahah me too
<SpamapS> Isn't that just because the KeyboardInterrupt isn't being handled properly?
<SpamapS> need to shutdown the reactor or something
<m_3> SpamapS: dunno... thought it was trapping something it shouldn't
<SpamapS> imbrandon: is there a way to get sphinx to display arbitrary doc fields in the HTML?
<SpamapS> :Version: 0.1
<SpamapS> I want to show that
<SpamapS> actually I can just dup them into visible text
<imbrandon> yea
<imbrandon> i think i know what ya mean
<imbrandon> gimme an example, but yea, sphinx is like god, i seriously have fallen in love with it, its got some rough spots but yea
<imbrandon> and the template syntax is like Twig :)
<imbrandon> SpamapS: oh yea for a version
<imbrandon> just add it to the conf.py ( look in the html section ) as an option
<imbrandon> then use like {{ option_name }}
<imbrandon> in the footer or something
<imbrandon> you can even do python etc with like .. code:: python
<imbrandon> print "blah"
<imbrandon> etc
<SpamapS> imbrandon: easier to just make it visible as text.. :-P
<imbrandon> :)
<SpamapS> actually hm, its available as {{ meta['...'] }}
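The conf.py route imbrandon describes maps onto Sphinx's `html_context`, which injects extra variables into the theme templates; a minimal sketch (the variable name is arbitrary):

```python
# conf.py -- expose an extra value to the HTML templates via html_context;
# a template (e.g. the footer) can then render it as {{ version_str }}
html_context = {
    "version_str": "0.1",   # arbitrary name; pick anything not already used
}
```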
<imbrandon> sudo reboot
<imbrandon> bah
<hazmat> SpamapS, its the ssh subprocess that makes KeyboardInterrupt a bit nasty afaicr
<SpamapS> hazmat: twisted should handle it in the reactor shutdown, shouldn't it?
<SpamapS> or are we just subprocess'ing it directly?
<hazmat> no, it's twistedified
<imbrandon> okie, i got way too early of a start today, taken a cue from mexico and going for a mid-day nap, back after bit yall
<jaustinpage> does anyone know any good resources for trying to debug problems with juju charms deploying?
<jaustinpage> im trying to figure out why the nova-compute charm is giving me trouble
<SpamapS> jaustinpage: this channel is the best way to debug things. :)
<SpamapS> jaustinpage: I mean, to find info how to debug. :)
<SpamapS> jaustinpage: also askubuntu.com is good as we have a bot in here which alerts us to new questions. :)
<SpamapS> jaustinpage: what seems to be the problem?
<SpamapS> wow, argparse's subparser formatting is ridiculous
<SpamapS> jml: FYI, fix for bug 1015574 in juju-jitsu trunk. Thanks for playing!
<_mup_> Bug #1015574: No way of discovering subcommands <juju-jitsu:In Progress by clint-fewbar> < https://launchpad.net/bugs/1015574 >
 * SpamapS goes to the dentist
<jaustinpage> Spamaps: figured out what i did wrong. Apparently the space in between the : and "option" in yaml is important, i was leaving it out like this :"option", and it was unhappy with me :-)
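The pitfall in miniature: YAML only treats a colon as a key/value separator when a space (or end of line) follows it, so the no-space form parses as a single plain scalar rather than a mapping:

```yaml
# Broken: no space after the colon -- YAML reads the whole line as
# one plain scalar, not a key/value pair
option:"value"
---
# Correct: the space after the colon makes it a mapping entry
option: "value"
```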
<SpamapS> ugh, argparse.. you steaming pile.. why can't you just do what I want?
<SpamapS> subparsers are basically a joke. Argh.
#juju 2012-06-21
<jimbaker> SpamapS, subparsers can be useful, but they usually need some programmatic help on the parse. one extreme example is what i did with jitsu watch
<jimbaker> still better than trying to write that parser from scratch however
<SpamapS> perhaps
<SpamapS> I think I've wrangled it almost
<SpamapS> but still can't suppress --help in subparsers
<jimbaker> SpamapS, add_help=False doesn't help?
<jimbaker> so to speak ;)
<surgemcgee> Anyone still around? Is the only way to get the charm revision number a --> cat hooks/revision <-- ?
<_mup_> Bug #1016003 was filed: "juju debug-hooks -h" doesn't say what it does <juju:New> < https://launchpad.net/bugs/1016003 >
<_mup_> txzookeeper/trunk r48 committed by kapil.foss@gmail.com
<_mup_> correct unit tests minors [thanks to ben bangert for spotting]
<jml> I'm working on a charm that deploys code from a bzr branch. Currently, I'm fetching the branch into $PWD, which is /var/lib/juju/units/$UNIT_NAME/charm/. Is this sensible?
<jml> Is there a better practice?
<hazmat> jml, its sensible, but you can pull it anywhere
<jml> what user is 'install' run as?
<hazmat> jml all hooks run as root
<jml> huh
<hazmat> jcastro, ping
<jml> is everything in the charm directory copied up to the instance indiscriminately?
<jcastro> hazmat: pong
<james_w> jml, yes
<james_w> jml, or at least you can add arbitrary stuff, I don't know that e.g. metadata.yaml is copied exactly
<marrusl> hey folks..  does juju on openstack require swift?
<james_w> marrusl, IIUC yes, but you can point it to S3 and it works fine
<james_w> as in, it needs object store, but it can use openstack for compute and s3 for object store
<marrusl> james_w, aha, ok.  great, that makes sense.
<james_w> but I don't know how auth works there
<marrusl> I imagine it doesn't store much, but I wonder if that will be a security issue for some.  they might feel safer keeping it all inside.
<hazmat> marrusl, there's also a standalone s3-compatible impl in nova
<hazmat> if they're not using swift they can just stand that up by itself on a nova api server
<marrusl> hazmat, oooh?  i.e. nova-objectstore?
<hazmat> marrusl, its just a dumb s3 impl in nova for compatibility &testing, no replication etc, just stores files in a dir.
<hazmat> but functional for juju's needs
<marrusl> hazmat, indeed.  we will check it out.  thanks!
<SpamapS> jml: re your question about whether its good practice to store in the charm dir or not.. I think its actually the best practice, because the charm dir gets completely deleted when the service is destroyed.
<jml> SpamapS: I can't make the connection
<jml> SpamapS: If you're installing a package, you don't care that it's in the system directories
<jml> SpamapS: so why is auto-deletion from the charm dir a win if you're installing/running from a branch?
<SpamapS> jml: I mean, if you are, at runtime, storing flag files or downloaded data or something, the charm dir is a good place to do that.
<jml> SpamapS: ah right.
<jml> SpamapS: in this context, I'm bzr branching at install time
<SpamapS> hm
<SpamapS> for that I might put it somewhere else in case I re-deploy onto the box
<SpamapS> since its basically an immutable cache
<jml> well, it's just apt by another means, no?
<SpamapS> right, and apt is going to cache your debs in /var/cache :)
<hazmat> SpamapS, i had to do an increment on txzookeeper, latest is 0.9.6.. it looks like the build doesn't like that though
<SpamapS> hazmat: looks like you still are using the debian dir from trunk instead of distro
<SpamapS> hazmat: so you will need to dch -i in trunk, since the recipe uses debupstream
<hazmat> SpamapS, ack, will check it out post meeting
<mars> Question for the room: I saw a note about augtool in the wordpress charm.  Has anyone tried it out?
<mars> http://augeas.net/tour.html
<jimbaker> mars, re augtool, that's an old note on my part as a todo. but sure, it would be cool to try
<mars> jimbaker, it has potential, a standard interface for config files is a nice idea.
<mars> jimbaker, otherwise every charm writer will use their own way of hacking config files
<SpamapS> mars: My feeling, after using augtool/augeas a few times, is that it is useful when you absolutely must only *edit* a complicated config file.
<SpamapS> mars: its far simpler to use templating and just build the whole file.
<jml> is there an idiom for 'juju-log if I can but otherwise skip'?
<mars> SpamapS, makes sense
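SpamapS's templating approach, render the whole file rather than edit it in place, needs nothing beyond the stdlib. A minimal sketch; the template text and option names (`port`, `max_clients`) are invented for illustration, not from any charm:

```python
from string import Template

# Sketch of "build the whole file" config management: the charm owns the
# file completely and re-renders it from service config on every change,
# instead of editing an existing file in place with augtool.
CONFIG_TEMPLATE = Template("""\
# This file is managed by the charm; local edits will be overwritten.
listen_port = $port
max_clients = $max_clients
""")

def render_config(options):
    """Render the full config file contents from a dict of options."""
    return CONFIG_TEMPLATE.substitute(options)
```

Calling `render_config({"port": 8080, "max_clients": 64})` yields the complete file, so a config-changed hook can just write it out and restart the service.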
<jml> (my subtly hidden question is, why isn't that an option or even the default behaviour of juju-log?)
<SpamapS> jml: || : ?
<imbrandon> mornin
<mars> jimbaker, SpamapS, thanks
<jml> SpamapS: ok.
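The "log if I can, otherwise skip" idiom above is, in a bash hook, just `juju-log "msg" || :`. A Python sketch of the same fallback (the `charm_log` name is ours, not part of juju):

```python
import os
import subprocess
import sys

# Sketch of the idiom jml asks about: inside a hook, JUJU_AGENT_SOCKET is
# set and juju-log can reach the agent; run by hand for debugging, fall
# back to stderr instead of failing with "No JUJU_AGENT_SOCKET/-s option
# found". charm_log is an invented helper name.
def charm_log(message):
    if os.environ.get("JUJU_AGENT_SOCKET"):
        subprocess.call(["juju-log", message])
    else:
        sys.stderr.write(message + "\n")
```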
<SpamapS> jml: when are you failing to log?
<jml> SpamapS: when I'm running scripts manually on the instance to debug stuff
<jml> No JUJU_AGENT_SOCKET/-s option found
<SpamapS> jml: I find it better to use debug-hooks for that
<mars> jml, fwiw, we saw that exact error on the first day of our sprint
<SpamapS> jml: as then you're running it in the appropriate context
<jml> SpamapS: sorry, I figured debug-hooks wasn't ready for folk to use
<SpamapS> no way, its a pretty awesome toy :)
<mars> jml, we resolved it by making sure we used the correct procedure to run debug-hooks
<jml> as debug-hooks --help doesn't actually say what it is, and the documentation on the website says it can't be used for install
<SpamapS> It has proven hard to describe how to use debug-hooks.. I think we need screenshots in the documentation
<SpamapS> jml: OH thats a bug in the docs.. that was fixed
<mars> +1 for debug-hooks docs with screenshots.  The text description of the procedure we have written down is difficult to follow without a live byobu terminal in front of you.
<jml> mars: OK, I'll bite. What's the correct procedure? How did you make sure you used it?
<mars> jml, just a sec, I'll pastebin it
<SpamapS> I do think we need to add a --debug flag to deploy which deploys and immediately fires up debug-hooks so you don't miss the install hook, as that is possible if the machine is already running.
<mars> jml, http://pastebin.ubuntu.com/1052882/
<jml> mars: ta
<japage> hi
<jml> so I have this neat bash hack that pops up an inotify thingy when long running commands finish
<jml> I wish, I wish, I wish I could have a command that ran for as long as the deploy process took.
<jml> fwiw, notes I've made on today's work so far: http://paste.ubuntu.com/1052894/
<SpamapS> jml: jitsu watch
<jml> SpamapS: oh that's right. I forgot to play with that.
<SpamapS> jml: and when I release juju-jitsu 0.13 later today or tomorrow, it will actually have a --help :)
<SpamapS> http://paste.ubuntu.com/1052901/
<SpamapS> jml: still needs a lot of work.. some of the commands don't have their own --help .. but its a nice step forward. :)
<jimbaker> jml, jitsu watch could definitely be nice for starting your notification
<jml> SpamapS: will 'jitsu watch mysql' also stop watching if the service has an error in its deployment?
<SpamapS> jimbaker: btw we need to be able to wildcard unit ids
<jml> SpamapS: looks good :)
<jimbaker> SpamapS, i think this really is covered by --num-units
<SpamapS> jimbaker: how so?
<jml> I really need to figure out how I can make my bash hack more readily usable by others.
<SpamapS> jimbaker: I want to deploy, then immediately wait for a state of started. But id is not guaranteed to be 0, because the service name may have been used before.
<jimbaker> SpamapS, let me dig out the example
<japage> SpamapS: do you know how juju is retrieving the node's public ip when using maas? I seem to be getting <nodeName>.localdomain . In MAAS, i am using a blank domain, because I didnt feel like setting up a real domain. <nodeName> resolves in my environment, <nodeName>.localdomain does not. I think this is causing my relations to not work.
 * jml is on libdep-service/29
<jimbaker> SpamapS, you can do stuff like this, once the watch-ports branch is approved & merged:
<jimbaker> timeout 600s ./sub-commands/watch \
<jimbaker>   mysql --state=started -r "mysql wordpress" --setting=database \
<jimbaker>   wordpress --state=started --open-port=80
<jimbaker> so that's saying, wait until at least one unit of mysql (--num-units=1 is implied with the branch) is in the started state, and it has a database setting
<jimbaker> likewise also wait until at least one unit of wordpress is in the started state and it has an open port of 80/tcp
<jimbaker> SpamapS, i think that works better than wildcards on service units, really care about services here
<SpamapS> japage: IIRC, its just 'hostname -f' .. but I could be wrong
<SpamapS> jimbaker: ah ok thats good, I didn't realize I could use --state without a unit id
<jimbaker> SpamapS, yeah, it's pretty nice in that way
<jimbaker> SpamapS, you do need to specify --num-units until watch-ports branch lands however
<japage> Spamaps: yep, that seems to be it... hmmm, i wish that didnt happen. Thanks for pointing me in the right direction
<jml> what am I doing wrong? http://paste.ubuntu.com/1052914/
<jml> docs say "$ jitsu watch mysql                                  # service is deployed"
<jimbaker> SpamapS, here's another nice example, drawn from your unit test spec
<jimbaker> timeout 600s jitsu watch \
<jimbaker>     mysql --state=started -r "mediawiki:db mysql:db" --setting=database \
<jimbaker>     memcached --state=started -r "mediawiki memcached" --setting=host \
<jimbaker>     mediawiki --state=started --open-port=80
<jml> but status says service is not deployed.
<jimbaker> so it waits until the full stack is deployed and in a steady state, because the appropriate settings have been made and in particular mediawiki has reached the end of its -relation-changed hook and has opened a port; note in this case it ignores exposed or not
<_mup_> Bug #1016138 was filed: The juju manpage should mention the JUJU_REPOSITORY environment variable <juju:New> < https://launchpad.net/bugs/1016138 >
<SpamapS> japage: I think there's an assumption in MaaS that you will have working DNS
<japage> SpamapS: It provides you with the option of using blank for the dns, which is supposed to use the maas server's dhcp to provide the dns, however i could be wrong about that.
<imbrandon> well your problem is there is no proper domain setup, not that there isnt dns, localdomain is in the hostfile
<imbrandon> not dns
<imbrandon> thus the node will report it but others will not reach it. break down and setup a domain if you need one :)
<japage> imbrandon: yea
<imbrandon> japage: in other words it's working as intended, you just need to set up a domain if you're going to use one
<jml> still having a bit of trouble with 'jitsu watch'
<jml> is 'jitsu watch <service>' supposed to wait until <service> is deployed? i.e. until at least until after 'install' completes successfully?
<japage> juju is working as intended, im just not sure why hostname -f is returning maas-1.localdomain
<m_3> jimbaker: watch-ports is in trunk... I'll wait a day or so before another release unless anybody needs this now
<SpamapS> jml: no, you need --state=started and --num-units=1
<imbrandon> japage: because localdomain is set properly in the hostfile and there is no other domain setup for the box
<jml> SpamapS: thanks.
<SpamapS> Anybody familiar with argparse want to help me fix 'jitsu sub-command --help' ? I want to override it to pass --help to the subcommand instead of intercepting and printing lame sub-command help
<imbrandon> japage: i was saying dns is working as intended
<SpamapS> (only an issue in trunk.. 0.12 has no help for jitsu)
<jimbaker> jml, when m_3 releases the next version of jitsu, you won't need to specify --num-units if using service unit specifications
<jimbaker> unless you want to do --num-units=2 or whatever
<jimbaker> m_3, thanks for that merge
<imbrandon> SpamapS: what about juju itself, doesn't it do that?
<jml> jimbaker: 'if using service unit specifications'?
<jimbaker> jml, correct
<jml> jimbaker: sorry, what I meant was I don't understand that clause
<jimbaker> i quickly realized it was an oversight
<jimbaker> jml, not certain what you mean by which *clause*
<SpamapS> imbrandon: juju's sub-commands are all just python modules
<jml> jimbaker: what's a service unit specification?
<SpamapS> imbrandon: so they all just add their sub-parser to the main parser
<imbrandon> ahh
<SpamapS> imbrandon: but I need to essentially say "don't print --help for sub-commands"
<imbrandon> optparse :) heh
<SpamapS> which I'm pretty convinced argparse just won't allow
<SpamapS> perhaps
<jimbaker> SpamapS, i can take a look at --help support
<jimbaker> SpamapS, i assume you just want what we see with juju --help, right, a synopsis for each available subcommand based on the description
<m_3> SpamapS: looks like there's a `parser = argparse.ArgumentParser(prog='PROG', add_help=False)`
<jimbaker> from the subparser
<imbrandon> japage: you see what i'm getting at? it returns that (rightfully) because that is in its hostfile as it should be, but that does not guarantee it's reachable
<m_3> then maybe explicitly add it to the subcommands
<SpamapS> m_3: so unfortunately, all the sub-parsers will share --help with the main command.
<SpamapS> m_3: as in, if you don't add_help .. you get no help.
<japage> why would, for a maas with no domain set (blank, not local, which tells it to use mDNS), cloud-init write an /etc/hosts file with 127.0.1.1 <hostname>.localdomain <hostname> ; instead of 127.0.1.1 <hostname> <hostname>.localdomain ?
<SpamapS> and once you add it to one command
<SpamapS> you can't add it to any other
<m_3> it's like we need a help delegator
<japage> nvmd, i just figured it out, /me stupid question previously...
<SpamapS> m_3: yeah I think we just need to override the --help action with a function that is smarter than me :)
<imbrandon> japage: becase thats how dns is designed, you need to setup a proper domain if you want to use one
<SpamapS> m_3: but one tricky part is, I want it to use the usual --help action if there's no sub-command specified
<m_3> right... doesn't juju do this?
<jimbaker> m_3, indeed, that's what it does. looking at jitsu, it's roughly doing something similar
<jimbaker> but clearly not quite there
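The delegation SpamapS wants can be done by giving the top-level parser `add_help=False` so it never intercepts `--help`, peeling off just the sub-command name with `parse_known_args()`, and forwarding everything else, `--help` included, to that command's own parser. A sketch in our own code, not jitsu's actual implementation (the `watch` options shown are taken from the discussion above):

```python
import argparse

def build_watch_parser():
    # Each sub-command builds a standalone parser that owns its own help.
    p = argparse.ArgumentParser(prog="jitsu watch",
                                description="Wait for service state changes.")
    p.add_argument("service")
    p.add_argument("--state")
    p.add_argument("--num-units", type=int, default=1)
    return p

SUBCOMMANDS = {"watch": build_watch_parser}

def dispatch(argv):
    # add_help=False: the top parser never swallows --help itself.
    top = argparse.ArgumentParser(prog="jitsu", add_help=False)
    top.add_argument("command", nargs="?")
    args, rest = top.parse_known_args(argv)
    if args.command is None:
        return top.format_help()              # bare `jitsu --help` case
    if args.command not in SUBCOMMANDS:
        raise SystemExit("jitsu: unknown command %r" % args.command)
    sub = SUBCOMMANDS[args.command]()
    if "-h" in rest or "--help" in rest:
        return sub.format_help()              # sub-command owns its help
    return vars(sub.parse_args(rest))
```

With this shape, `jitsu watch --help` prints the watch parser's help rather than a generic sub-command listing, while plain `jitsu --help` still shows the top-level synopsis.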
<jcastro> SpamapS: imbrandon: Which one of you "owns" the sexy column-on-the-side layout for the docs?
<imbrandon> me
<jcastro> hey can we get it reviewed and landed by say Monday PST?
<jcastro> it's too sexy not to show off at Velocity
<imbrandon> it's supposed to be landed now, but it's not building
<imbrandon> need to get in touch with IS
<jcastro> oh, heh
<SpamapS> probably on a lucid box or something :-P
<imbrandon> as in the cron is broken or something
<SpamapS> imbrandon: try in a lucid chroot, probably some missing sphinx feature
<imbrandon> k
<imbrandon> yea its using sphinx 0.6.3 OLD, we're all on 1.1.3
<imbrandon> heh
<imbrandon> jcastro: but yea as soon as we figure that out, its landed
 * imbrandon goes to build a chroot
<SpamapS> imbrandon: mk-sbuild ftw :)
<imbrandon> heh
<SpamapS> Last Generated on Jun 20, 2012. Created using Sphinx 0.6.4.
<imbrandon> sudo debootstrap --variant=buildd --arch i386 lucid /mnt/lucid/
<imbrandon> :)
<SpamapS> imbrandon: schroot is your friend
<SpamapS> one-off chroots are just a waste of time
<imbrandon> likely but i'll keep this one until they upgrade the docs
<imbrandon> box
<SpamapS> Probably won't happen until 12.04.1 is released in August
<imbrandon> yea, so we may run into this again
<imbrandon> hopefully not, but you know
<imbrandon> heh
<imbrandon> brb more mt dew while that builds
<japage> imbrandon: i set up a real domain, called localdomain. (cheating ftw) :-)
<imbrandon> lol
<SpamapS> imbrandon: where does that branch live btw?
<SpamapS> since I have lucid as of 'schroot -c lucid-amd64 -u root' ;)
<imbrandon> SpamapS: what one ? docs ? lp:juju/docs
<SpamapS> imbrandon: oh so docs just isn't building at all right now?
<imbrandon> right
<imbrandon> look its still old from before all our changes
<SpamapS> Exception occurred:
<SpamapS>   File "/usr/lib/pymodules/python2.6/sphinx/builders/html.py", line 653, in <lambda>
<SpamapS>     ctx['toctree'] = lambda **kw: self._get_local_toctree(pagename, **kw)
<SpamapS> TypeError: _get_local_toctree() got an unexpected keyword argument 'maxdepth'
<imbrandon> nice
<imbrandon> ok one sec
<SpamapS> heh, we really should make juju.ubuntu.com a charm
<imbrandon> :)
<SpamapS> like.. seriously
<imbrandon> how its on lucid :)
<imbrandon> unless we chroot it on the box, but yea that would be awesome, dogfood it
<SpamapS> mouth.where().put(money)
<imbrandon> ok let me fix the maxdepth issue
<SpamapS> imbrandon: it shouldn't be on lucid forever though. :)
<imbrandon> one sec
<SpamapS> ./source/_templates/ubuntu1204/layout.html:{{ toctree(maxdepth=2) }}
<SpamapS> so thats f'ing it up?
<imbrandon> yea
<m_3> imbrandon: redirect :)
<imbrandon> SpamapS: just make toctree(maxdepth=2) == toctree
<imbrandon> 2 is the default anyhow
<japage> hmmm, appears my kludgy dns was a red herring, still seem to be having mysql issues creating relations
<imbrandon> m_3: hehe :)
<m_3> imbrandon: seriously... I do agree we should be dogfooding that one
<m_3> just have the dang thing redirect to ec2
<m_3> SpamapS: fido
<imbrandon> yea i think i;m gonna do that today
<imbrandon> we dont have a sphinx charm anyhow
<imbrandon> that i know of
<SpamapS> You know, the more I think about our need to cryptographically verify upstream software.. the more I think we should just require embedding anything not in the Ubuntu archive.
<SpamapS> Or, an archive that is sufficiently highly available like that one.
<SpamapS> would simplify a lot of charms to just toss tarballs into them
<SpamapS> and make them more robust
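The "trust the charmer's SHA" model being discussed reduces, at install time, to a single digest comparison over the embedded tarball. A minimal sketch; the function and argument names are ours, not from any existing charm:

```python
import hashlib

# Sketch: the charm ships both the vendored tarball and its sha256 digest,
# so verifying the payload at install time is one streaming hash compare.
def verify_tarball(path, expected_sha256):
    """Return True iff the file at path matches the expected digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

An install hook would refuse to unpack when `verify_tarball()` returns False, which is the same guarantee a charmer-provided SHA over a remote download gives, minus the unreliable network fetch.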
<imbrandon> ... the more i hear stuff like that the more i think about jcastro saying "you can do anything in a charm"
<imbrandon> heh
<SpamapS> YOU can do anything in a charm
<SpamapS> But I'm not going to inflict all the crazy brandon stuff on everyone. ;)
<imbrandon> heh well i can do anything in a deb too for that matter :)
<SpamapS> right!
<imbrandon> lol
<jcastro> SpamapS: right, so like right now the mod_spdy one is worthless because the google archive times out all the time
<jcastro> and so on
<SpamapS> deb can do anything charms can do better.. debs can do anything charms can do...
 * SpamapS sings a little song
<SpamapS> there's an offline apt thing that would work for that too
<imbrandon> SpamapS: http://s3.assets-online.com.s3.amazonaws.com/files/nginx/nginx-1.3.1.spdy.tar.gz
<imbrandon> :)
<SpamapS> more and more I think charms might benefit from a build step too.. where you could just run a "rebuild" that downloads and unpacks and stuff.. so the install hook isn't doing so much work
<imbrandon> patched and ready, just not had the time to build it into a charm yet
<SpamapS> imbrandon: yeah thats plenty available
<SpamapS> even from outside Amazon :)
<imbrandon> right, none pre-patched tho, thats why i made that tarball
<m_3> SpamapS: yeah, install feels like a beast sometimes
<m_3> I try to move as much out of it as possible
<m_3> can do the "build" step in config-changed really
<jimbaker> SpamapS, re jitsu --help on trunk, it's just not introspecting things properly. i can fix that
<SpamapS> m_3: did you not see my objections to kees's sbuild merge in the bug?
<SpamapS> m_3: actually I didn't make them in the bug, whoops
<SpamapS> m_3: anyway, we can't be dropping config options
<SpamapS> m_3: will likely break deployed services
<m_3> SpamapS: whoops... sorry, no didn't see that
<SpamapS> m_3: It wasn't well communicated
<imbrandon> well we can, we just need a way to get at the old ones
<imbrandon> keeping the old ones promotes crift
<imbrandon> cruft
<SpamapS> m_3: lets just leave this one be, but the policy doc I put out described this as a 'de-facto' rule.. where if something has been around for 30+ days, it can't be dropped
<m_3> SpamapS: roger
<SpamapS> imbrandon: cruft can be removed in the next series
<imbrandon> ugh that sounds bad
<SpamapS> Until juju gives us config-set .. cruft must remain
<imbrandon> next series 5+ years
<SpamapS> imbrandon: no, 2 years
<SpamapS> tho we can fix it in quantal
<SpamapS> just that nobody will care ;)
<imbrandon> server where most of these target
<imbrandon> 5
<SpamapS> imbrandon: *2*
<SpamapS> every 2 years theres a new LTS
<imbrandon> so you expect that everyone will upgrade ? we still have stuff in IS on lucid
<SpamapS> I don't mind keeping the cruft around for the full LTS lifetime as long as we don't keep it forever in the current LTS
<SpamapS> imbrandon: this isn't about everyone upgrading, its about easing development burden
<SpamapS> existing users won't care about the cruft
<SpamapS> I would like to see juju grow a 'deprecated' tag for options
<SpamapS> so deploy will yell loudly
<SpamapS> and set will warn
<imbrandon> yea thus i say we need a way to get at historical config options not promote cruft
<imbrandon> we are making policy based on bugs
<SpamapS> yes we are
<SpamapS> thats life
<SpamapS> if we made policy on perfection, we'd have no policy :)
<imbrandon> well that's like me adding to the metadata.yaml cuz i could
<SpamapS> the bug being.. ?
<imbrandon> only approved items should be there
<SpamapS> Thats not a bug, thats a fact. :)
<SpamapS> One agreed upon by a pretty short discussion.
<imbrandon> but nothing enforces that, i am just saying instead of a short lived policy lets fix the bug
<SpamapS> its on the TODO
<imbrandon> ok then no need for policy
<SpamapS> Yes there's a need for a policy so we can ease the unwanted effects of the bug being fixed
<imbrandon> and how is it on the todo i just sugested it ?
<SpamapS> Basically the policy is saying "Don't do that, because its not going to work at some point"
<imbrandon> no it will work at some point when the bug is fixed, it doesn't work NOW and could break things
<SpamapS> There's already a bug somewhere to start warning on unknown fields.
<imbrandon> see the diff
<imbrandon> no no
<imbrandon> i'm on the original thing, not metadata
<SpamapS> <-> see that, we just talked right past eachother
<imbrandon> i just used that for example
 * SpamapS capitulates
<imbrandon> hehe
<imbrandon> yea i think we're on the same page just
<imbrandon> was on diff subjects
<imbrandon> ok /me goes back to docs
<imbrandon> btw did you change that, can you change that toctree and rebuild to make sure i'm right before i commit
<SpamapS> imbrandon: push to some other branch and I can try it
<imbrandon> k
<SpamapS> I tried removing the maxdepth and got something else
<imbrandon> oh
<imbrandon> fun
<SpamapS>   File "/usr/lib/pymodules/python2.6/docutils/nodes.py", line 92, in setup_child
<SpamapS>     child.parent = self
<SpamapS> AttributeError: 'NoneType' object has no attribute 'parent'
<SpamapS> which looks way nastier
<imbrandon> yrs
<SpamapS> Perhaps we can request an upgrade to 12.04 :)
<imbrandon> bah
<imbrandon> please
<imbrandon> heh
<imbrandon> but will they do it before monday ? heh
<imbrandon> jcastro: jujucharms.com/docs has the current build too btw in a pinch, but thats on hazmat's $$ sooooooo
<hazmat> SpamapS, yeah. i suspect its on lucid given how old the sphinx they're attempting to use is
<SpamapS> definitely lucid
<SpamapS> such is life, unless we embed sphinx in the branch ;)
<jml> so, I've finally got my thing deploying from my charm
<jml> which is great
<SpamapS> but, that would be eeeevil right ;)
<jcastro> hey so I have been thinking SpamapS
<SpamapS> jcastro: dangerous that
<jcastro> what does bundling the tarball in the charm accomplish
<jml> https://code.launchpad.net/~jml/libdep-service/juju/+merge/111396 has the MP. I would *really*, *really* appreciate a review from an experienced charmer
<jcastro> I still have to trust you.
<jml> by which I mean you, SpamapS
<SpamapS> jcastro: makes the deploy/add-unit more predictable
<jcastro> I can't confirm that you didn't check the sha either
<jcastro> oh ok, so you mean purely for "it will work every time"
<SpamapS> jcastro: presumably we will add some crypto verification to the charm store beyond what we have now (https for launchpad). Either way, we're trusting the charmer to provide a valid SHA.. so providing the actual file is the same thing
<SpamapS> jml: I'll look right now, since you asked so nice
 * jcastro nods
<jml> SpamapS: thanks.
<SpamapS> I *do* think we need to build a Packages.gz type of file for the charm store
<SpamapS> which is signed
<jml> SpamapS: I have to leave in the next couple of minutes, so please put your comments on the MP.
<SpamapS> and have the commits to the bzr branches signed too probably
<SpamapS> jml: will do
<jml> SpamapS: thanks!
<surgemcgee> The *config-changed* hook will trigger the first time the charm is deployed. Is this functional to anyone?
<imbrandon> yes, thats where most of my install actions come in
<imbrandon> very little is done in the install hook for me
<SpamapS> surgemcgee: aye, its guaranteed to run, unless install or start fails
<jml> incidentally, the juju tests take a while to run.
<jml> speaking from personal experience, you want to get on that sucker now, or you'll become Launchpad.
<SpamapS> jml: 7 minutes?
<SpamapS> jml: for 98% coverage..
<SpamapS> jml: also remember that the python code base is done growing. :)
<SpamapS> jml: on an SSD they only take 4 minutes.
<jml> SpamapS: I just ran ./test on my machine w/ an SSD and it's still going
<jml> SpamapS: maybe I'm supposed to run the tests differently
<SpamapS> ./test
<SpamapS> thats all I do
<jml> SpamapS: way more than 7m
<SpamapS> takes at most 7 minutes
<SpamapS> jml: bad java maybe?
<jml> SpamapS: possible
<SpamapS> it taxes zookeeper quite a bit
<SpamapS> jml: also try 'eatmydata ./test'
<jml> SpamapS: I just have whatever data is on the system
<SpamapS> that at least disables all the syncing that zookeeper wants to do
<jml> SpamapS: so, hang on....
<jml> if the python code base is "done growing" as you say
<jml> then there's zero point in me contributing patches for the bugs I've filed.
<SpamapS> jml: Its not done living, its just not going to get any more feature dev.
<SpamapS> making it easier to use, clarifying stuff with online help, those will all help users while we transition to go
<jml> and when that's done, we can do it all over again
<jml> 738 seconds for the test run.
<SpamapS> (and also the go team will be expected to not regress any of the bugs that are fixed before we declare it "complete"
<jml> well, I wish them all the best with that.
<imbrandon> SpamapS: can we just get IS to turn on -backports and -updates and such
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~/docs# cat /etc/issue.net
<imbrandon> Ubuntu 10.04.4 LTS
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~/docs# dpkg -l|grep sphinx
<imbrandon> ii  python-sphinx                   1.0.1-1~lucid1             tool for producing documentation for Python
<imbrandon> root@server-1339205906-az-1-region-a-geo-1:~/docs#
<imbrandon> and it builds perfect with no changes
<SpamapS> jml: review posted to the MP
<jml> SpamapS: thanks!
<imbrandon> how do we make that happen sooner than later ? heh
<SpamapS> imbrandon: backports might be a good way to go
<SpamapS> imbrandon: I'll open up an RT
<imbrandon> ok ty, mention that its broken now hehe
<SpamapS> imbrandon: trying with lucid-backports enabled
<imbrandon> but yea i enabled -backports and -updates and it worked
<imbrandon> deb http://archive.ubuntu.com/ubuntu lucid main restricted universe multiverse
<imbrandon> deb http://archive.ubuntu.com/ubuntu lucid-updates main restricted universe multiverse
<imbrandon> deb http://archive.ubuntu.com/ubuntu lucid-backports main restricted universe multiverse
<imbrandon> deb http://archive.ubuntu.com/ubuntu lucid-security main restricted universe multiverse
<SpamapS> easy pastemonkey
<imbrandon> surprised the bot didn't kick me
<SpamapS> I don't think we have that bot
<imbrandon> ahh ubottu normally in #ubuntu-* chans guess not #juju
<imbrandon> anyhow yea, i bet they are much more likely to do that than update to 12.04 by monday
<imbrandon> and it requires no docs work arounds that way
<imbrandon> still agree with the charm tho
<imbrandon> man i wish there was 3 of me
<imbrandon> jcastro: awe, not using my button :(
<jcastro> I have to use the official blurry button
<imbrandon> heh k
<jcastro> burned by the design team
<imbrandon> wonder how i can make mine "official"
<imbrandon> joey used it once or twice now on posts ( made a wordpress shortcode for him ) and they seemed to love it :)
<imbrandon> infact i should release that plugin for the wp shortcode
<SpamapS> imbrandon: ok, IS ticket submitted
<SpamapS> jcastro: whats significant about Monday ?
<imbrandon> SpamapS: rockin ty
<jcastro> SpamapS: velocity, not like during a talk or anything
<jcastro> but it would be nice to know when we mention stuff that it'll be pretty
<SpamapS> jcastro: Indeed
<imbrandon> now if i could just edit the wiki
<imbrandon> lol
<jcastro> lol, let's not get crazy
<SpamapS> The IS guys said it should be no big deal to pull in the backport, but they might prefer to just upgrade to precise.
<imbrandon> rt is on it, they emailed me back
<jcastro> rock and rolll
<imbrandon> jcastro: yes was more tongue in cheek
<jcastro> SpamapS: yeah, I would think they'd prefer to just go all 12.04.
<imbrandon> either way works for me
<jcastro> SpamapS: backports, a sure way to know you're the only guy running that configuration on your production box. :)
<imbrandon> :)
<SpamapS> hazmat: bug 984484 .. galapagos? really? Its not even In Progress.. and I'm releasing *tomorrow*
<_mup_> Bug #984484: subordinate charms should be able to open ports <juju:Confirmed for bcsaller> < https://launchpad.net/bugs/984484 >
<jcastro> SpamapS: also, where do I find the juju codename/release/date mapping?
<SpamapS> jcastro: https://launchpad.net/juju
 * imbrandon starts prepping to do new rpm and osx builds tomorrow
<SpamapS> we're 2 weeks late on galapagos
<jcastro> ah, got it
<m_3> SpamapS: can you do notes on the juju release process pls :)
<SpamapS> expected: 2012-06-06
<m_3> SpamapS: assume it's similar to jitsu?
<SpamapS> m_3: Yeah I think that probably deserves something in internals
<SpamapS> m_3: its not going to be as smooth as jitsu's ;)
<SpamapS> since we've never done a "release"
<m_3> oh, gotcha
<imbrandon> i think i'm going to reload my mini tonight, i need the extra space that OSX is using
<imbrandon> heh
<jcastro> what? no, you have to test the OSX releases!
<m_3> imbrandon: especially if you're able to do the osx in a vm thing
<imbrandon> jcastro: osx in a VM
<imbrandon> :)
<hazmat> SpamapS, is galapagos closed?
<imbrandon> lunchtime bbiab
<m_3> SpamapS: we have real (lp) milestones though... I'm curious to see how this differs
 * m_3 is interested in learning lp for real after the branch-distro fiasco :)
<hazmat> on two weeks late
<m_3> hazmat: he mentioned releasing it tomorrow... don't really know what closed means here tho
<imbrando1> mmm i need enough money to buy an island, or at least 98% of one ...
<imbrandon> that has a solid internet connection too :(
<SpamapS> hazmat: Closed, not sure. I did say I wanted to release tomorrow. :)
<SpamapS> m_3: branch-distro is about the weirdest part of launchpad I've seen.
<SpamapS> m_3: nothing else does things like it does.
<jcastro> SpamapS: sadness is watching you pilot not here.
<jcastro> j/k
<SpamapS> jcastro: I'm on deck next week. :)
<SpamapS> 7 items in the queue.. we're pretty healthy anyway :)
<imbrandon> mmm this cant be good, whole bowl of instant pudding to myself :)
<SpamapS> imbrandon: http://www.mtv.com/videos/misc/173418/240-dollars-worth-of-pudding.jhtml
<SpamapS> awww yeah
<imbrandon> SpamapS: zomg
<imbrandon> that is classic
<imbrandon> "... now kids, the rumor says that the `M` in `MTV` used to stand for `Music`, *children all giggling* ..."
<robbiew> jcastro: m_3: SpamapS: any one available for a charm school at the Texas Linux Fest Aug 3-4...in San Antonio?
<robbiew> I'm not sure we need one, but just asking
<jcastro> I can go if you want
<robbiew> I just wonder how many folks there would be interested
<robbiew> last year had good following
<robbiew> and I suspect, given rackspace is diamond...maybe more cloud folks this year
<robbiew> gonna be f*cking hot
<robbiew> lol
<jcastro> heh
<imbrandon> heh
<imbrandon> thats one thing i dont miss about TX, 70's on my b-day , in mid december :)
<robbiew> I actually don't mind 70s in december...it's the freezing rain and snow the week after that always messes with me
<robbiew> and of course the hell on earth heat...peaking in August
<imbrandon> hahahah yea, i lived in galveston so the gulf squelched that a lil
<m_3> robbiew: lemme look
<imbrandon> i do miss the beach parties tho, mmm nothing like watching the sun rise on east beach :)
<SpamapS> robbiew: could possibly make it
<SpamapS> Need to find my wife a nanny before serious travel commences, but we might have one by then.
<robbiew> imbrandon: ugh...galveston, literally the armpit of the USA
<robbiew> in every sense of the word....hot..humid...wet
<imbrandon> hahahah , this was pre-katrina
<robbiew> ..and occasionally stinky
<imbrandon> :)
<SpamapS> robbiew: don't forget the hairy part
<imbrandon> LOL
<robbiew> SpamapS: meh...I wouldn't spend a travel voucher with the wife for this one
<m_3> robbiew: yeah, I can go
<robbiew> m_3: cool, well I'll let you know by next week if we end up doing one
<m_3> robbiew: ok, thanks
<robbiew> no...thank YOU ;)
 * m_3 will think of it as an extended sauna
<imbrandon> heh
<imbrandon> SpamapS / m_3 : you see the new MB with retina displays and paper thin ? WOW
<m_3> imbrandon: yup... I'm still waiting for the little 11" air to fit into the family budget
<imbrandon> yea, thats the next one i am getting , the 11inch
<m_3> it was a tradeoff... house or computer?
<imbrandon> heh, good call
<tooth> not as upgradeable though. :-(
<tooth> soldered on ram.
<tooth> and a proprietary flash disk thing.
<imbrandon> i rarely if ever upgrade machines, i buy new ones. so no biggie
<SpamapS> imbrandon: Yes I've seen them. No I don't really understand why I need to buy one. ;)
<SpamapS> I am looking for a new machine..
<SpamapS> but I want to see if I can actually buy a non apple machine
<imbrandon> good luck :)
<imbrandon> heh j/k
<m_3> imbrandon: please ignore comments on bug #1000088
<_mup_> Bug #1000088: charm needed: newrelic sysmond <Juju Charms Collection:Fix Committed by imbrandon> < https://launchpad.net/bugs/1000088 >
<imbrandon> this openbuildservice is .... interesting
<imbrandon> m_3: okies :)
<imbrandon> m_3: hahahah should i ask ?
<bkerensa> jcastro: Up for membership today -> https://wiki.ubuntu.com/lynxman
<SpamapS> what about negronjl?
<SpamapS> since the DMB decided to pass the buck
<bkerensa> =/
<lynxman> SpamapS: I applied for the membership meeting 3 weeks ago, if you could give me a testimonial I'd be very grateful
<lynxman> SpamapS:  https://wiki.ubuntu.com/lynxman
<SpamapS> lynxman: done
<lynxman> SpamapS: thanks
<lynxman> SpamapS: !! :)
<bkerensa> lynxman: is this go two for you?
<lynxman> bkerensa: yes, first one was for UCD and got declined
<bkerensa> ahh
<jcastro> same thing happened to juan
<jcastro> we need to get him to apply for normal membership
<jcastro> m_3: did you apply for membership yet?
<SpamapS> Yeah seriously, its time
<SpamapS> jcastro: I do like the idea, more and more, of us having a charm store council and being able to grant membership.
<jcastro> I specifically didn't ask for that council to be able to grant membership
<jcastro> because I didn't think it was necessary
<SpamapS> Its weird to have people who don't really understand how juju helps Ubuntu saying no to people like Juan though
<imbrandon> for those that contribute charms but not much else i can totally see it, same thinking for the kubuntu council granting memberships etc
<imbrandon> yea
<jcastro> SpamapS: I think that's what we should fix though
<jcastro> not working around it by making another member-granting council
<jcastro> (IMO)
<bkerensa> jcastro: lynxman in #ubuntu-meeting for Ubuntu Membership :)
 * imbrandon goes to lurk
<bkerensa> he is up now ;)
 * SpamapS can't watch
<bkerensa> quiet debate is occurring
<lynxman> bkerensa: quite silent
<bkerensa> ikr
<bkerensa> maybe they found a nice video on youtube
<jcastro> imbrandon: heh, what the heck is mims doing to your newrelic charm bug
<imbrandon> lol no idea :)
<imbrandon> using it for a guinea pig i think
<lynxman> wohooo!
<imbrandon> grats
<negronjl> SpamapS: what do I need to do to apply for membership
<lynxman> imbrandon: thanks :)
<SpamapS> negronjl: -> ask lynxman he just got it ;)
<negronjl> lynxman: you still around ?
<lynxman> negronjl: I am!
<negronjl> lynxman: what did you do to apply for membership ?
<imbrandon> negronjl: put up a nice wiki page saying what all you do in/for ubuntu , ask for peeps to vouch for you, then attend a cc meeting :)
<lynxman> negronjl: let me tell you in #siteam
<jcastro> negronjl: mira, we can reuse your application
<jcastro> it'll be the exact same
<negronjl> jcastro:  that's what I was thinking of using
<negronjl> jcastro: I also have a wiki page ( wiki.ubuntu.com/JuanNegron )
<jcastro> looks like you applied to the wrong board.
<jcastro> yeah
<negronjl> jcastro:  sure .. we'll go with that :/
<jcastro> negronjl: 27 june is the next one, we'll be at velocity
<jcastro> we can just prep it together while we are there
<negronjl> jcastro: k
<negronjl> jcastro: thx
<jcastro> don't worry man, we're on it.
<jcastro> like white on rice
<imbrandon> heh
<lynxman> negronjl: I'll root for you :)
<jcastro> imbrandon: this newrelic charm is exciting
<imbrandon> me 3 :) would have for lynxman as well had i known prior :)
<jcastro> I think a bunch of people can find use for it
<negronjl> lynxman: thx man ... it looks like I'll need it :)
<imbrandon> jcastro: there are 2 now, that one and the php one, and i was just thinking aobout doing the ruby one too since the app i've been working on is rails
<imbrandon> :)
<jcastro> hey
<imbrandon> but yea they rock
<jcastro> do they have any node.js graphing stuff?
<lynxman> imbrandon: aww thanks :)
<imbrandon> yea, i can whip up the node one in a few minutes
<imbrandon> they are all basically the same, just a few minor changes
<imbrandon> to each one for the runtime
 * imbrandon goes to grab the ruby and node ones
 * SpamapS would like to see just one good graphing solution.. tho ganglia and munin are at least "traditionally" good
<imbrandon> jcastro: for node stuff tho you want some sexy meteor
<jcastro> we do
<imbrandon> meteor is frackin bad ass
<imbrandon> http://meteor.com/screencast
<imbrandon> watch that, like 3 minutes
<jcastro> yay, another platform!
<imbrandon> it will change your life as a webdev
<imbrandon> nah
<jcastro> I need pretty graphs yo
<imbrandon> its node
<imbrandon> not new
<imbrandon> but a new way to code, but its all nodejs , will run on any node server anywhere
<imbrandon> no special sauce
<jcastro> oh it certainly looks cool
<jcastro> I just need an app, not a framework to write an app
<imbrandon> they have a few examples, like wordplay would be sweet
<jcastro> yeah
<imbrandon> to show off realtime node client<->server
<negronjl> jcastro: ping
<imbrandon> jcastro: kinda nice too since it will be a sub to anything
<jcastro> yuuup
<imbrandon> HA! good thing i logged into newrelic, looks like OMG has been minus one webhead a day or so
<imbrandon> but since the setup is so sweet no notice :)
<imbrandon> SpamapS / jcastro: check this out too http://uptime.omgubuntu.co.uk/585467
<imbrandon> one minute increments
<jcastro> heh
<jcastro> nice!
<SpamapS> imbrandon: what is that?
<SpamapS> ah pingdom
<imbrandon> pingdom
<imbrandon> but like i can tell for certain there has been no downtime in the last 7 days , not even for a minute
<imbrandon> :)
<SpamapS> imbrandon: indeed
<jcastro> that's pretty sexy
<imbrandon> :)
<imbrandon> there is an API, wonder if i could wrangle that into a reporting charm
<imbrandon> get the endpoint from the relation and then setup the reporting for that endpoint
<imbrandon> hrm
<imbrandon> jcastro: erm i guess there isnt any node newrelic, only ruby php python and .net
<imbrandon> and sysmon
<imbrandon> there is also a generic restapi so i wonder if there is a 3rd party one
 * imbrandon looks
<imbrandon> HAHA rock, Joyent ( one of the big companies behind nodejs ) suggests using the php agent for monitoring and gives an example on how to use it from node
<imbrandon> :)
<imbrandon> dammit i wish gimp was as good as photoshop
<SpamapS> wow.. how messed up is my head that I think I want this https://github.com/alevchuk/vim-clutch
<imbrandon> hahha
<imbrandon> nice
<imbrandon> do it!
#juju 2012-06-22
<m_3> SpamapS: omg that's funny
<m_3> like a piano pedal
<m_3> imbrandon: ok, I think I'm done polluting your bugspace with comments
<imbrandon> np :) can i prom it too ? hehe
<imbrandon> actually i need dinner, bbiab
<m_3> imbrandon: just test running charm-tools 'review' from vim
<imbrandon> m_3: btw what were ya testing ?
<imbrandon> ahh
<imbrandon> if someone wants to +1 it too would be cool to get into the store :)
<imbrandon> but for now, off to get fooooood
<m_3> imbrandon: me too
<SpamapS> so, am I evil because I created a charm to scale out john the ripper?
<SpamapS> EvilMog: ^^
<SpamapS> EvilMog: its.. alive
<SpamapS> EvilMog: lp:~clint-fewbar/charms/precise/john/trunk
<SpamapS> needs a README I suppose
<SpamapS> and now for something completely different...
<SpamapS> sleep
<EvilMog> awesome
<EvilMog> I'll give that a shout in the morning :)
<EvilMog> shot
<EvilMog> also I'll get the corelan and metasploit guys to try it out
<_mup_> txzookeeper/trunk r49 committed by kapil.foss@gmail.com
<_mup_> update changelog
<jml> is there standard copyright for Canonical written charms?
<jml> this is for an AGPL service
<jml> SpamapS: why did you lie to me?
<jml> SpamapS: never mind.
<lifeless> how do you tell juju to use a proxy server for apt ?
<lifeless> And
<lifeless> how can you tell juju to use the ip address of started instances, rather than the hostname
<hazmat> lifeless, we don't have support for it yet re proxy.
<hazmat> lifeless, the ip inspection is specific to the provider being used. typically its using hostname -f output
<hazmat> re apt proxy support its bug 897645
<_mup_> Bug #897645: juju should support an apt proxy or alternate mirror for private clouds <cloud-init:Fix Released> <juju:Confirmed for hazmat> < https://launchpad.net/bugs/897645 >
<jml> http://code.mumak.net/2012/06/unfiltered-reflections-on-my-first-juju.html
<m_3> jml: great info in the write-up thanks!
<jml> m_3: my pleasure
<m_3> we need to update the django-based charms to check status of applied puppet manifests... I missed that
<m_3> debug-hooks might be worth your time checking out... only problem is you can't catch the install hook.  That often makes me wanna put more logic in config-changed so I can catch it
<m_3> jml: which branches are you referring to when saying you should prefer http://code.launchpad.net/ over lp:?
<m_3> jml: also, your typical dev iteration might include terminate-machine on ec2... as it's sometime nice to try again with a pristine instance.  destroy-service does this in lxc, but not ec2
<m_3> anyways... awesome... I'm excited to see how pkgme gets structured
<jml> m_3: any branch on Launchpad. e.g. lp:libdep-service
<jml> m_3: I don't want to say http://bazaar.launchpad.net/~libdep-service-committers/libdep-service/trunk because it's not what I mean.
<m_3> jml: ah, ok... thought you meant when checking out the charms into a local repo.  was wondering where juju was complaining about that
<jml> m_3: it's not juju there, it's bzr moaning about not having launchpad-login set, which is always going to be the case in any automated deploy
<jml> m_3: it's really important that all displayed errors actually be errors, and that all actual errors are displayed as such
<hazmat> m_3, that's fixed incidentally but it timing dependent
<hazmat> its
<hazmat> ie debug-log and debug-hooks both can catch install
<hazmat> actually that shouldn't be timing dependent
 * hazmat verifies
<hazmat> jml, hooks already run with dpkg env vars to set frontend to noninteractive and list changes frontend to none.
<jml> hazmat: yeah, I thought my list mentioned that, and that 'sudo' masks that env var
<jml> because sudo does that
<hazmat> jml but why sudo when one is already root
<hazmat> but good to note agreed
<jml> hazmat: because I didn't know hooks were run as root when I started
<hazmat> gotcha
<jml> and also, it's nice to note what bits I'm doing in the install that _have_ to be root
<hazmat> jml, alternatively sudo -E for env preservation
<hazmat> m_3, just verified you can run debug hooks on install hooks now
<m_3> hazmat: whoohoo!!
<SpamapS> Yeah I've been using debug-hooks to *write* install hooks for a while now
<SpamapS> i.e., start with empty charm, deploy, debug-hooks (quickly!) and then iterate, then juju scp the hook back into the bzr branch :)
<SpamapS> I'd love an option to make juju use bzr rather than the zip file to grab the charm, so it would be easy to push changes back
<marcoceppi> +1
<SpamapS> jml: great brain dump. These definitely help to focus our efforts.
<jml> SpamapS: thanks. I might follow up with something that's a bit more reflective
 * jml submits a super-naïve patch
<bloodearnest> hey all - so I'm attempting to write a hook in python, but calls to subprocess seem to blow up
<bloodearnest> is there a better way to do relation/config-set/get in a python hook? If not how can I interact with the shell relation-get et al?
<SpamapS> bloodearnest: there are lots of cases of python hooks which use subprocess to call relation-* and such
<SpamapS> hostname = subprocess.check_output(['unit-get','private-address']).strip()
<SpamapS> bloodearnest: for instance
<bloodearnest> SpamapS, right - so I'm doing something wrong then
<SpamapS> bloodearnest: are you perhaps trying to call these hooks outside of the actual juju agent?
<bloodearnest> SpamapS, nah - my bad missed the /usr of  /bin/env :(
<hazmat> bcsaller, jimbaker ping
<bcsaller> hazmat: whats up?
<hazmat> need to have a trivial vetted
<bcsaller> link?
<hazmat> see msg
<sidnei> uhm, trying to juju bootstrap on precise and seems to be hanging on virsh net-list --all, known issue?
<m_3> sidnei: not sure... wow, actually _hanging_ on virsh net-list... you might make sure you either rebooted since installing libvirt-bin and/or are in the libvirtd group
<sidnei> checked both yes. also, i had lxc working before doing this, with cgroups-lite and such, lxcbr0 already present, and juju prompted me to install libvirtd-bin
<sidnei> maybe this is one for hazmat per https://blueprints.launchpad.net/ubuntu/+spec/servercloud-q-juju-charm-best-practices (hi!)
<m_3> sidnei: yeah, lxcbr0 isn't used unfortunately... juju local provider uses lxc containers but libvirt networking
<m_3> sidnei: sidnei sometimes you can clear out lxc networking hangs by a `juju destroy-environment`
<sidnei> maybe they are stomping on each other
<m_3> sidnei: also look in `lxc-ls` and /var/lib/lxc
<m_3> sidnei: possible... but they default to totally different nets... you can see with `ps auwx | grep dnsmasq` which nets are really active
<sidnei> as i said above, i was already using lxc, there's a couple envs under /var/lib/lxc all from before juju
<m_3> sidnei: gotcha
<jcastro> negronjl: ping me when you're home
<jcastro> I'd like to find out how your demo went
<negronjl> 'morning all
<negronjl> jcastro: I'm here
<jcastro> mind if we G+? I just need to grab lunch real quick
<negronjl> jcastro: sure .. invite me when ready
<jml> I'm writing up documentation for my project for contributors and potential users
<jml> I'm saying, "The easiest way to run libdep-service is with Juju"
<jml> what landing page should I point them at for getting the initial EC2 and/or LXC stuff set up?
<jcastro> jml: https://juju.ubuntu.com/docs/getting-started.html
<jcastro> negronjl: hangout started!
<newz2000> I have some very nice improvements to the Varnish charm that I'm about to push.
<newz2000> Improvement #1: It works now. :-)
<newz2000> (on LXC, it might have worked on EC2 before)
<SpamapS> newz2000: \o/
<_mup_> juju/trunk r543 committed by kapil@canonical.com
<_mup_> [trivial] local provider disables password auth on containers [r=bcsaller]
<newz2000> Hi, got my first charm mp posted: https://code.launchpad.net/~newz/charms/precise/varnish/fix-lxc-deployment/+merge/111648 Would love feedback, going to use this to try and make a squid charm.
<SpamapS> newz2000: heh.. squid.. because.. varnish is just too awesome for you? ;)
<newz2000> SpamapS: well, Varnish is awesome but we use Squid in ISD, the team I work on.
<SpamapS> newz2000: actually squid should be good.. you can then run a test and see just how much better varnish is. :)
<newz2000> You're talking to the wrong guy on this. I just write the code. Someone else deploys it.
<SpamapS> newz2000: thats not the devops way! ;)
<SpamapS> we all write it
<SpamapS> we all deploy it
<newz2000> I would *love* to do it the devops way
<SpamapS> and we all wear a pager from time to time if things are really being fair
<newz2000> we don't do it that way though. :-)
<SpamapS> newz2000: well anyway, I believe Dustin Kirkland took a stab at squid at some point
<newz2000> oh
<SpamapS> newz2000: so maybe dig in https://code.launchpad.net/~kirkland
<newz2000> Apparently there is more than one place to look for charms
<SpamapS> its not official tho.. so just do a fork of varnish if thats easier :)
<newz2000> k
<jml> jcastro: thanks.
<jml> does this read right? http://paste.ubuntu.com/1054642/
<jml> anything I can do to make life easier for my contributors?
<bloodearnest> the docs say that config-get with no args will output json, but when calling it from python I get python a dictionary repr rather than json
<bloodearnest> *a python dictionary repr
<SpamapS> bloodearnest: indeed, its a known bug
<SpamapS> bloodearnest: --format json
<bloodearnest> SpamapS, I tried that, but got empty string#
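One hedged way to cope with the quirk bloodearnest hit: parse the output tolerantly, trying JSON first and falling back to `ast.literal_eval` for the Python dict repr that affected juju versions emit. The parsing helper is pure; `run_config_get` assumes a hook context.

```python
# Workaround sketch for the config-get output bug discussed above:
# some juju versions print a Python dict repr instead of JSON, and
# --format json has been seen to return an empty string.
import ast
import json
import subprocess

def parse_config(text):
    """Parse config-get output that may be JSON or a Python dict repr."""
    text = text.strip()
    if not text:
        return {}  # the empty-output case bloodearnest saw
    try:
        return json.loads(text)
    except ValueError:
        return ast.literal_eval(text)

def run_config_get():
    # Only valid inside a running juju hook.
    out = subprocess.check_output(["config-get", "--format", "json"])
    return parse_config(out.decode("utf-8"))
```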
<SpamapS> marcoceppi: hey, how's wordpress coming along?
<pindonga> hi there... so, in my charm I need to setup access to a private ppa so I can install some custom packages... I was thinking of somehow passing the credentials in so I can customize a template for the apt config, any suggestions?
<m_3> pindonga: pass it in config
<imbrandon> SpamapS: any word from isd ?
<imbrandon> err is
<pindonga> m_3, got any examples around?
<SpamapS> imbrandon: they've ACK'd the ticket
<imbrandon> :(
<SpamapS> imbrandon: but not sure when it will be ready
<imbrandon> SpamapS: i had wordpress running on rails last night :) phrake , kinda amusing ... for 5 minutes
<m_3> pindonga: lemme look.  thinking pass the key and/or key_id in yaml and then pass the $(config-get key) to apt-key
<pindonga> m_3, ah
<pindonga> k, could work
 * pindonga tries
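m_3's suggestion, sketched as an install-hook fragment: ship the PPA credentials and signing key as charm config and wire them into apt. The config names and file path here are hypothetical, not a juju convention; `private-ppa.launchpad.net` is the usual Launchpad private-PPA host, but verify for your archive.

```python
# Illustrative sketch of passing private-PPA credentials via charm
# config (as discussed above). sources_line is a pure helper;
# install_ppa assumes it runs as root in an install hook.
import subprocess

def sources_line(user, passwd, ppa_path):
    """Build an apt sources.list entry for a private PPA (assumed URL shape)."""
    return "deb https://%s:%s@private-ppa.launchpad.net/%s precise main" % (
        user, passwd, ppa_path)

def install_ppa(user, passwd, ppa_path, key_text):
    # Write the sources entry, import the signing key, refresh indexes.
    with open("/etc/apt/sources.list.d/private-ppa.list", "w") as f:
        f.write(sources_line(user, passwd, ppa_path) + "\n")
    p = subprocess.Popen(["apt-key", "add", "-"], stdin=subprocess.PIPE)
    p.communicate(key_text.encode("utf-8"))
    subprocess.check_call(["apt-get", "update"])
```

In the hook you would feed it `config-get` values, e.g. `install_ppa(cfg["ppa-user"], cfg["ppa-pass"], cfg["ppa-path"], cfg["ppa-key"])` (option names hypothetical).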
<jml> ping again re patch: https://code.launchpad.net/~jml/juju/stderr-as-info/+merge/111609
<jml> also, feedback requested on short doc snippet for users of a project that recommends using juju to deploy it: http://paste.ubuntu.com/1054642/
<marcoceppi> jml: the readme should assume the charm is in the store, IIRC
<SpamapS> marcoceppi: not necessarily
<jml> marcoceppi: I'm not sure this project belongs in the store, tbh.
<SpamapS> there are plenty of cases where a charm will be highly useful, but never be in the store
<jml> marcoceppi: and atm, the charm is included in trunk
<SpamapS> just like some things don't make any sense being in a distro's archive
<jml> (does that make it the Juju equivalent of "native package"?)
<m_3> jml: I'd make sure it's clear that your --repository is a path... `juju deploy --repository ~/charms local:libdep-service`
<jml> m_3: good point (although it's "./charms" in this case)
<m_3> jml: and then also don't forget `juju expose`... it won't be available over the web until after that's done
<m_3> jml: lxc is an exception (no security)
<jml> ahhh
<jml> m_3: I was about to disagree with you :)
<m_3> jml: I like to point people to entire deployment/spinup scripts... like https://gist.github.com/2050525 or https://gist.github.com/1406018
<SpamapS> Can people please try the latest version of the PPA with the local provider? I'd like to tag r543 as 0.5.1
<jml> m_3: yeah, I think I'll do that in a future iteration.
<m_3> SpamapS: sure... trying now
<m_3> jml: awesome README example is lp:charms/hadoop
<jml> m_3: I guess I'm slightly disappointed that writing wrapper scripts seems appealing
<m_3> ha
<m_3> yes
<m_3> jml: it's only for spinup... note that juju maintains the "model" going forward... you make changes directly using the juju cli
<m_3> jml: but it's often handy to start things off with a script
<m_3> jml: keep your eyes peeled for "stacks" going forward... (juju can serialize/deserialize its model)
<jml> Yeah, I remember talking about stacks in Capetown.
<sidnei> uhm, seems like virsh is stuck polling /var/run/libvirt/libvirt-sock
<jml> And, you know what, two blog posts in one day: http://code.mumak.net/2012/06/further-reflections-on-my-first-juju.html
<jml> don't ever say I don't care about you guys
<SpamapS> jml: I don't care about you guys
<jml> I don't want to be harshing on you guys. It's great work.
<jml> I really do hope these posts help.
<SpamapS> jml: hey, have you tried this, I do it sometimes and its surprisingly nice and interactive: watch 'juju status | ccze -A'
<SpamapS> wait thats not right
<jml> SpamapS: I didn't know about ccze.
<jml> cool.
<SpamapS> yeah its fairly smart
 * jml has to go. I've got to pack & head out to Canterbury.
<SpamapS> at one point I had it showing without flickering.. can't recall how now tho
<SpamapS> jml: anyway, debug-log | ccze -A helps with the colorizing a lot
<jml> SpamapS: yeah, I'll bet.
<SpamapS> (using -A just makes it scrollable)
<SpamapS> jml: I too worry about the feedback loop a lot.. its just hard to correct that one because apt-get installing stuff will never be "zippy"
<SpamapS> jml: but anywa, go away, enjoy Canterbury, and bring us their Tales when you're done
<jml> SpamapS: right. perhaps it could be cleaner though.
<jml> SpamapS: I think I'm obliged to tell tales on my way there. Not 100% sure though.
 * jml is gone
<SpamapS> "Oh, I should mention that I got stuck for a while at the very outset, redeploying the same version of my charm because I forgot to update the revision file and didn't realize that I actually wanted to run deploy with the --upgrade option."
<SpamapS> And yet.. we still require this nonsense. :-/
<SpamapS> I don't think a single new user has been able to avoid that problem.
<SpamapS> niemeyer: ^^ we need to think long and hard about the revision file.. again.
<jcastro> "#juju is heaps better when America is awake"
<jcastro> heh
<imbrandon> lol, who said that
<SpamapS> do we have EU charmers? James Page, but he's like, too busy holding up the pillars of ubuntu server and QA...
<sidnei> SpamapS, would you have any clue about virsh just plain hanging?
<imbrandon> there are a few that are "unknowns" that are regulars overnight
<imbrandon> but i couldn't tell ya their names
<jcastro> it'd be jim baker and lynxman
<SpamapS> sidnei: hanging? no.
<imbrandon> i see them night after night tho
<jcastro> those would be the other two .eu people I can think of
<SpamapS> jcastro: Jim is in Colorado
<jcastro> oh, right
<jcastro> sorry, I meant will
<jcastro> but he's Golanging isn't he?
 * m_3 _hates_ lxc
<imbrandon> ^5 m_3
<sidnei> SpamapS, turning the question around, is it possible to use juju in lxc mode without libvirt-bin installed?
<SpamapS> sidnei: no
<m_3> sidnei: not without some code munging on your own
<SpamapS> sidnei: it uses libvirt's network management
 * sidnei < between a rock and a hard place
<SpamapS> sidnei: is it possible that your libvirt 'default' network is broken?
<sidnei> SpamapS, well, even just running 'virsh' hangs, and i tried purging and rm -rf 'ing my way around a couple times already
<SpamapS> sidnei: oh weird
<m_3> sidnei: oh, one other thing... does your default use virbr0 as well as 122.0/24?
<SpamapS> sidnei: sudo service libvirt-bin restart maybe?
<m_3> virbr0 is explicitly required (at least was)
<sidnei> restart no luck, virbr0 is up on 122.0/24
<m_3> crap
<m_3> with dnsmasq bound there as well?
<sidnei> yup
<sidnei> /usr/sbin/dnsmasq -u libvirt-dnsmasq --strict-order --bind-interfaces --pid-file=/var/run/libvirt/network/default.pid --conf-file= --except-interface lo --listen-address 192.168.122.1 --dhcp-range 192.168.122.2,192.168.122.254 --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases --dhcp-lease-max=253 --dhcp-no-override
<m_3> and `virsh net-list --all` still hangs?
<sidnei> yup
<jcastro> anyone have ideas for fixing our US-centric responses? Other than "hey if you are not in the US pay more attention"?
<m_3> so that's sounding like a more fundamental problem
<m_3> jcastro: send me and taylor to paris for part of the year :)
<sidnei> uhm
<sidnei> http://askubuntu.com/questions/141720/why-is-sudo-virsh-hanging-in-the-console
<sidnei> and then killing dmidecode makes virsh work
<sidnei> SpamapS, m_3 ^
<m_3> nice...
<m_3> resource contention
<m_3> actually net-list should be any easy one to debug that on
<jimbaker> SpamapS, i plan to merge in the charm-format-2 shortly, now that the issues around that are worked out (whether or not _JUJU_CHARM_FORMAT was the sticking point, it remains as an impl detail). the point of it is that it's fully backwards compatible, and there are a substantial number of tests verifying that, so it should be good for galapagos
<SpamapS> jimbaker: you're late man.. VERY late
<SpamapS> I sent a message over a week ago
<sidnei> Error processing 'cs:precise/ubuntu': entry not found
<sidnei> 2012-06-22 15:55:07,739 ERROR Error processing 'cs:precise/ubuntu': entry not found
<sidnei> shouldn't this work? ^
<SpamapS> sidnei: no, its broken
<SpamapS> sidnei: sadly. :-/
<SpamapS> sidnei: the importer can't import that particular charm for some reason
<sidnei> looks like i picked a bad day to start :)
<jimbaker> SpamapS, sorry, i misunderstood that. well it goes in the next release then
<imbrandon> SpamapS: i submitted the int64 bug to the golang guys
<pindonga> is there any wiki page with tricks/idioms for writing juju charms?
<pindonga> I'd like to share a way to setup private ppa access in a charm
<SpamapS> imbrandon: I don't believe thats the issue with the ubuntu charm
<SpamapS> imbrandon: I think its because bzr prints extra stuff
<imbrandon> SpamapS: so hopefully it will be fixed soonish
<imbrandon> ahh
<SpamapS> pindonga: we've been discussing the best way to collect best practices. No consensus yet.
<SpamapS> pindonga: a wiki is as good a place as any. Unfortunately the juju wiki is locked down... :-/
<SpamapS> jimbaker: yeah, no big deal
<SpamapS> jimbaker: we just release whatever is merged, we still have the PPA building the trunk. :)
<imbrandon> SpamapS: also i have a new tool i want to unveil monday "hiabu" possibly as part of jitsu ( but its in node so i dunno if you want it in there _
<m_3> pindonga: also charm-tools if it's something that makes sense in a helper script
<imbrandon> )
<SpamapS> imbrandon: I want it
<imbrandon> k
<SpamapS> imbrandon: juju-jitsu is for all things crazy
<jimbaker> SpamapS, sounds like a plan, this is really to help as a bridge for the go port, as well as fix bugs like this one, bug 979859
<_mup_> Bug #979859: Unable to set boolean config value via command line <juju:In Progress by therve> < https://launchpad.net/bugs/979859 >
<SpamapS> imbrandon: which is why I'm surprised you haven't dominated it ;)
<m_3> ouch
<imbrandon> SpamapS: its an external dependency manager
<SpamapS> jimbaker: sure. I'm thinking honolulu should actually be shorter. Maybe 4 weeks, so we catch up
<imbrandon> SpamapS: hahaha i've been busy with hiabu secretly
<imbrandon> is why
<imbrandon> :)
<imbrandon> but yea hiabu == Japanese for hive, manage "groups" of juju services as one hive, eg. an external dependency manager
<imbrandon> SpamapS: ^
<SpamapS> imbrandon: nice
<shazzner> hiabu sounds like romanji
<imbrandon> :0
<m_3> SpamapS: 543 spun up fine using ec2 to host a lp:charms/juju local container... still not up on my laptop, but that's not really expected :(
<imbrandon> still needs lots of work but i can "hiabu deploy juju.com" and it figures out that wordpress and mysql etc are needed and spins them all up and scales etc, basically its just calling out to the correct juju commands tho
<_mup_> Bug #776426 was filed: Add debug hook cli flag for deploy and and add-unit. <juju:Triaged> < https://launchpad.net/bugs/776426 >
<imbrandon> i want to work that jitsu watch into it and make it smarter tho
<imbrandon> too
<shazzner> imbrandon: sounds pretty rad :)
<shazzner> my stupid point in mentioning Romanji is that, if true, hiabu isn't strictly 'japanese' :v
<imbrandon> ok need to go afk a few, back in ~45 min
<imbrandon> ahhh :)
<imbrandon> i just couldn't think of another name to play off the jitsu name :)
<imbrandon> and mean group
<shazzner> no it's cool, I like it :)
<SpamapS> m_3: sweet thanks. I'll give it another 2 hours to settle and then tag.
<shazzner> just cause I got curious, Jisho.org tells me Su is the japanese word for hive
<shazzner> Hachisu means beehive
<shazzner> well, the kanji for hive, is pronounced Su but Su can have different meanings
<shazzner> sorry I'll shutup
<lifeless> hazmat: is there a hack for the proxy?
<lifeless> hazmat: on the host side, I have a local openstack install, which isn't integrated with dns, so lookups for e.g. server-45 fail
<lifeless> hazmat: is there a config option ?
<lifeless> hazmat: Using the ec2 provider.
<SpamapS> lifeless: /etc/hosts ?
<hazmat> lifeless, re hack, i've got a wip branch but it probably needs some finishing.. lp:~hazmat/juju/apt-proxy-support
<hazmat> outside of that going for a dns setup is probably the best option..
<hazmat> lifeless, actually the simplest hack.. is pretty straightforward
<imbrandon> shazzner: i used google translate, hehe , likely wrong
<hazmat> modify juju/providers/common/utils.py  format_cloud_init and cloud_config dict to include  an apt-proxy-url key and value
<hazmat> er. cloud_config dict in that method
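The hack hazmat describes can be sketched as a small patch helper: before juju renders its cloud-init user-data, add an apt proxy entry to the `cloud_config` dict built in `format_cloud_init`. The `apt_proxy` key name is an assumption based on cloud-init's config of that era; check your cloud-init version.

```python
# Sketch of the apt-proxy hack for juju/providers/common/utils.py
# discussed above: inject a proxy setting into the cloud-config dict
# before it is serialized into user-data. "apt_proxy" is the cloud-init
# key assumed here; the patch point is per the conversation, not a
# supported juju API.
def add_apt_proxy(cloud_config, proxy_url):
    """Return a copy of the cloud-config dict with an apt proxy set."""
    patched = dict(cloud_config)  # avoid mutating the caller's dict
    patched["apt_proxy"] = proxy_url  # e.g. "http://10.0.0.1:8000"
    return patched
```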
<shazzner> imbrandon: check out jisho.org
<imbrandon> kk
<shazzner> best online japanese dictionary
<imbrandon> rockin, good thing before i released it too :)
<imbrandon> ty
<shazzner> oh and be sure to tick the Kana as romaji
<shazzner> imbrandon: np :) :)
<lifeless> hazmat: SpamapS: thanks
<lifeless> SpamapS: /etc/hosts doesn't set http_proxy :)
<lifeless> hazmat: is there a similar code tweak I could do to get ip usage?
<hazmat> lifeless, yes, the relevant bit is in juju/unit/address.py
<hazmat> the implementation class used varies by provider
<lifeless> hazmat: ah, I wasn't clear
<lifeless> hazmat: *bootstrap* is the thing that is using an unresolvable address
<lifeless> hazmat: the client node code can't talk to the metadata service.
<hazmat> ah
<lifeless> I have no idea whether the metadata service will be handing out crapola or not
<hazmat> lifeless, so the client resolves the instance id to the ip address
<lifeless> hazmat: *how* ?
<lifeless> like, does it call the ec2 API to get node details ?
<hazmat> lifeless, it queries out the server via the api using the instance id
<hazmat> lifeless, yes
<lifeless> right, its ending up with 'Server-NNN' atm, not an ip.
<lifeless> point me at the code :>
<hazmat> lifeless, if you do euca-describe-instances at your endpoint do you see the same?
<lifeless> I can't answer that right now
<lifeless> server is in weekend mode :P
<imbrandon> SpamapS / hazmat : HAHA i wasn't going to let it get the best of me, check out http://api.websitedevops.com/juju-docs/operating-systems.html ( then look at the footer for the sphinx version )
<hazmat> lifeless, fair enough.. but the client can't really correct for provider errors about addresses. for the client we need an accessible/routable public address for the machine to reach it.
<hazmat> lifeless, have a good weekend (in case you follow your server :-)
<lifeless> hazmat: we do, but it has one
<lifeless> hazmat: I mean, I'll cross check the api results
<lifeless> hazmat: but if thats ok, where in the code is this called ?
<hazmat> lifeless, that part is a little more.. twisted let's say ;-)  juju/providers/common/findzookeepers.py
<imbrandon> SpamapS / hazmat : only one more little quirk i need to work out before putting up a MP that should work on 0.6.4 ( all pages TOC is "correct" ) except for the landing page , so there is hope
<hazmat> lifeless, basically for ec2 it gets an instance map from provider storage (s3), and then does describe instances on it and returns the public address associated to the instance as the place to tunnel an ssh connection to for zk access.
<imbrandon> i got the major issue worked out tho, it didn't like :hidden: in the old version
<lifeless> hazmat: there is a 1:M mismatch in that code too, room for optimising it looks like
<hazmat> imbrandon, nice
<hazmat> lifeless, definitely.. although i'm not sure exactly which part you're referencing
<lifeless> for instance_id in instance_ids: <- synchronous, one loop per instance id.
<lifeless> then collects multi-machine errors from the get_machine call, which took one id
 * hazmat nods
<hazmat> that was my next target on the scaling branch we used for the 2k nodes
<lifeless> I would have written it as
<lifeless>     machines, missing_instance_ids = yield provider.get_machines(instance_ids)
<hazmat> we do something similarly dense in status when querying out info
 * hazmat checks
<lifeless> ah
<hazmat> lifeless, right now the provider.get_machines call errors out on missing ids
<lifeless> the contract for def get_machines(self, instance_ids=()): isn't conducive to that approach
<lifeless> I'd fix that first :)
<lifeless> in set based processing, exceptions need to be super rare
<lifeless> because otherwise - well, you know - the successes get crushed.
<hazmat> yeah.. in eventually consistent distributed systems, some inconsistencies around the state need to be handled with more grace
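The set-based contract lifeless is arguing for can be sketched as a single bulk lookup that returns both the machines it found and the ids it could not resolve, instead of raising on the first miss. This is an illustrative stand-in, not juju's actual provider API; `known` represents a provider's describe-instances result.

```python
# Sketch of the bulk get_machines contract from the discussion above:
# partition the requested ids so missing instances are reported
# alongside the successes rather than aborting the whole call.
def get_machines(instance_ids, known):
    """Partition instance_ids into (found_machines, missing_ids)."""
    machines = [known[i] for i in instance_ids if i in known]
    missing = [i for i in instance_ids if i not in known]
    return machines, missing
```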
<imbrandon> lifeless: mind if i pick your brain for a sec on bzr, you work with its internals a bit right ?
<lifeless> I played a developer on TV once.
<imbrandon> heh
<lifeless> (some stupidly large % of bzr is my code, yes I know its internals)
<imbrandon> well i'm totally ignorant about the internals of vcs's but really just was wondering the feasibility of a git<->bzr bridge thats like the git<->svn bridge on github, where either client can work with the same repo, not a copy of it, at the same time
<lifeless> hazmat: hahaha
<lifeless>         instance.private_dns_name,
<lifeless> hazmat: ^ that is whats used, *not* the ip address.
 * lifeless checks to see if there is a more useful field
<hazmat> lifeless, ? for what
<lifeless> hazmat: using private-ip-address would be better I think
<lifeless> hazmat: rather than private-dns-address
<imbrandon> ( or even a bzr<->svn bridge etc ) basically working with multi vcs natively at the same time and keeping the commits/logs in sync transparently
<lifeless> imbrandon: you mean like 'bzr-git' ?
<hazmat> lifeless, econtext used for what
<imbrandon> well kinda, that works on a copy
<hazmat> imbrandon, maybe checkout tailor
<lifeless> 07:52 < lifeless> hazmat: ah, I wasn't clear
<lifeless> 07:53 < lifeless> hazmat: *bootstrap* is the thing that is using an unresolvable address
<lifeless> 07:53 < lifeless> hazmat: the client node code can't talk to the metadata service.
<imbrandon> hazmat: yea, i have been looking into that
<lifeless> hazmat: I followed the pointers you gave me, get_machine -> ec2's get_machines -> machine_from_instance
<imbrandon> lifeless: i mean more server side tho, like i can use `git clone https://github.com/myname/myrepo.git` or `svn co https://github.com/myname/myrepo` both r/w transparently
<lifeless> hazmat: jujumachine wants private_dns_name, but that depends on resolvability, whereas taking the ip address in the front end wouldn't
<imbrandon> something more like that
<hazmat> lifeless, when you say the client node can't talk to the metadata service.. that's unclear.
<hazmat> lifeless, the private dns name is used for inter unit internal communication
<hazmat> ie. relations
<lifeless> hazmat: its also used to talk to the bootstrap node
<lifeless> from the juju CLI
<lifeless> hazmat: or something; when I run 'juju status', its grabbing the private dns name, and trying to resolve it from outside the cloud.
<imbrandon> hazmat: i was wondering why we do that too since split dns takes care of it if we use the public dns name
<imbrandon> and then its uniform
<lifeless> imbrandon: there might not be public ip or public dns names.
<lifeless> imbrandon: private* is the only guaranteed thing
<imbrandon> hrm true, split dns doesn't help there
<hazmat> lifeless, the client isn't using the private address to talk to zk, its using the public dns name given by the api server.
<lifeless> hazmat: I beg to differ :)
<hazmat> lifeless, i'm confused where you think you see it using the private dns name to facilitate that
<lifeless> juju bootstrap-> stuff happens
<lifeless> juju status ->
<lifeless> 'ssh: Server-32 is not a valid somethingorother to forward via\n\n'
<lifeless> loops
<hazmat> lifeless, juju/providers/common/connect.py      .. using dns_name not private_dns_name
<hazmat> lifeless, we'd never be able to connect to a machine in ec2 otherwise
<hazmat> or any other public cloud
<lifeless> hazmat: ok, so s/private/public and my whinge still applies; we're depending on dns resolvability, which is lacking for developer installs of e.g. openstack
<lifeless> I have no sane way of making the public names resolvable
<lifeless> or the private names for that matter
<lifeless> I'm on a different machine, having my machine poke via dnsmasq on the cloud server will break my network when I roam, for instance.
<lifeless> hazmat: added to that, juju bootstrap isn't requesting a floating ip, so we're only getting flat-network addresses allocated the machines *anyway* (10.0.0.2 for instance)
<hazmat> its reasonable to use ips, but there's a tension to have names for hosts for users and for charms that want to configure against names (although some also want to use ips). we should probably capture all.
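the "capture all" idea could look something like this, a sketch with hypothetical field names (juju's real machine objects differ): keep every address, prefer names for users and charms, and fall back to raw ips when the names aren't resolvable from where you're connecting (e.g. a developer openstack install with no working dns).

```python
from collections import namedtuple

# hypothetical record of everything the provider knows about a machine
Address = namedtuple("Address", "public_dns private_dns public_ip private_ip")

def pick_endpoint(addr):
    """Prefer dns names, fall back to ip addresses when names are empty."""
    return addr.public_dns or addr.public_ip or addr.private_dns or addr.private_ip
```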
<lifeless> hazmat: I'll have a poke around mondayish
<hazmat> lifeless, floating by default can be enabled
<lifeless> this has been very helpful, thanks.
<lifeless> hazmat: its one more thing for users to get right
<hazmat> lifeless, mgz's ostack provider also does the assignment
<lifeless> hazmat: and consider how hard canonicloud is to work with ;)
<lifeless> (for juju, not in general)
<hazmat> lifeless, its only gotten worse since i hacked out the support for it..
<hazmat> i did it all without a chinstrap account previously..
<imbrandon> i was just about to try the ostack provider today on hpcloud
<imbrandon> is it not working ?
<lifeless> hazmat: so, perhaps it would be a great exercise for you devs to do all your smoke-and-above testing using canonicloud; without canonicloud-specific hacks.
<lifeless> hazmat: :)
<lifeless> have a good weekend
<hazmat> lifeless, cheers, have a good one.. and no thanks re fool's errand... i'd be willing to talk to the canonicloud folks though if they're willing to fix some of the pain points, we can accommodate some change as well (not hacks)
<hazmat> but its felt like a very slow moving progression, that's been in the wrong direction as far as usability, and afaik still lacking swift
<hazmat> openstack isn't a product, its a toolkit for building your own cloud.. i mean snowflake
<imbrandon> hehe
<imbrandon> hazmat: so should i grab your branch for the ostack provider or ... ? i'm a little confused on which one to use for testing ( and has the highest probability to work on hpcloud )
<imbrandon> i'm starting fresh today with it
<imbrandon> and / or rs next gen ostack cloud too, got that activated yesterday
<imbrandon> but i think they have ec2 and s3 compat on
<hazmat> imbrandon, its a sharp edge.. and not my branch, but if you want to play with it.. lp:~gz/juju/openstack_provider
<hazmat> imbrandon, they don't have s3 compat on
<imbrandon> kk, yea nothing production but i wanna put those 3 months to use a little and try some things
<imbrandon> i know there will be lots of breakage probably
<hazmat> imbrandon, you have to use that for your client and specify juju-origin: lp:~gz/juju/openstack_provider  for the env
<imbrandon> k , is there an example env.y in the code ? i hope
<hazmat> there aren't any docs for it yet.. but the config keys are at juju/environment/config.py ...
<imbrandon> rockin that will work
<imbrandon> yea i figured docs were out of the question for now
<imbrandon> btw sometime this evening i'll have the normal docs building again whether they upgrade to 12.04 or not
<imbrandon> i found a few workarounds that aren't too hackish , just one last bit to work out
<imbrandon> well they aren't hackish at all, there are just better ways in the newer version :)
<imbrandon> and really only like a 5 line delta so far, so not a lot of code either, just learning all the quirks
<pindonga> hi, I've read I can just include a repositories: section in my environments.yaml to configure my preferred repos
<pindonga> however I cannot get it to work, so I assume I edited the wrong place
<pindonga> is this actually supposed to be working with the precise version of juju?
<imbrandon> pindonga: that was never implemented, its in the draft docs
<pindonga> imbrandon, ok, tough luck then
<pindonga> thx
<imbrandon> pindonga: unless you see it somewhere else , please tell so i can correct it
<pindonga> no, I saw it in the drafts
<pindonga> some drafts include stuff that works right now
<imbrandon> pindonga: you can use the JUJU_REPOSITORY env variable to set the local path though
<pindonga> so that's why i tried: )
<pindonga> yep
<pindonga> this was nicer though
<pindonga> don't have to do it every time
<imbrandon> that isn't implemented , yea i wish it was personally, i love that idea, but sadly its not
<imbrandon> set the env variable in your bash_profile
<pindonga> yep
<pindonga> no worries
<pindonga> thx
<imbrandon> like "export JUJU_REPOSITORY=/var/lib/charms"
<imbrandon> kk
<imbrandon> SpamapS / hazmat : do you all know if that feature got axed or just not done yet, i wanna put a note on it in the docs, cuz i ran into that exact same thing
<imbrandon> if its axed i'll just remove it, if not done yet i'll make a note: saying so
<hazmat> imbrandon, its implemented
<hazmat> you still have to prefix with local:charm_name
<imbrandon> no i mean the environments.yaml
<imbrandon> part
<imbrandon> https://juju.ubuntu.com/docs/drafts/charm-namespaces.html
<hazmat> no that's very old
<imbrandon> right, i was gonna brush it up a little
<imbrandon> but yea i like that idea but i think its axed
<imbrandon> the env.y part
<hazmat> i wrote that like the first week of juju dev..
<hazmat> we can probably just yank the doc
<imbrandon> actually m_3 did a lot of moving drafts out and cleaning up yesterday, let me check if he did this too
<imbrandon> kk
<hazmat> i guess it has some useful info..
<hazmat> i think he killed dups and moved stuff that was done out
<imbrandon> i'll see if he did, if not i will see about adding the few bits that are relevant to other docs and yank that one
<imbrandon> all as a MP obviously since i'm not intimately familiar etc etc
<_mup_> Bug #1016740 was filed: local provider bootstrap should create an actual working environment if none exists <juju:New> < https://launchpad.net/bugs/1016740 >
<m_3> imbrandon: no, I didn't take the time to go through all the drafts... just did the obvious ones at the time
<imbrandon> m_3: sweet, kk, i'll snag this one then
<m_3> imbrandon: cool
<imbrandon> m_3: also re: our quest for bzr-git harmony :) checkout the second paragraph here http://developer.github.com/v3/git/ , just more to add to the $someday list if no one else does it
<hazmat> have a good weekend folks
<m_3> hazmat: u2 man
<imbrandon> l8tr hazmat
<imbrandon> m_3 / SpamapS : so many subordinates , i think of the few in the repo i'm running all but one http://api.websitedevops.com/juju-status.txt
<imbrandon> :)
<hazmat> imbrandon, nice
<SpamapS> imbrandon: haha cool!
<SpamapS> imbrandon: so is omg using ELB now?
<imbrandon> yea
<SpamapS> sweet
<imbrandon> has been for like a month or so
<imbrandon> ever since it went down the last time cuz joey rebooted the bootstrap node and the one with the eip
<imbrandon> i put it on elb , and the charm just added instances that are up to the elb with a special check that wordpress is actually running ( manually configured )
<imbrandon> so literally just juju add-unit and then the elb associates with the new omgweb that comes up and adds it to the elb, elb checks every minute to see if its serving wp and if so it adds it to the rotation
<SpamapS> imbrandon: well done on getting things working w/ 0.6.4.. I have not seen any activity on the IS ticket yet
<imbrandon> yea, i got it all working except the front page
<imbrandon> took a little break then was gonna crack at that
<imbrandon> the front page only lists the local toc
<imbrandon> with the other fix
<SpamapS> imbrandon: we should add an optional 'check_url' and 'check_string' to the http interface.
<imbrandon> i'm sure there is a way around that tho
<imbrandon> SpamapS: yea, i have a couple of suggestions for it, that being one
<imbrandon> and the other being that the name of the elb isnt tied to the relation
<imbrandon> but is an option
<SpamapS> actually I really think its time we ditch 'http' for 'http2' and start doing things with the Host header that make sense.
<imbrandon> like i had the elb running before i used your charm, so i had to do some finagling
<imbrandon> yea, hold on ,i'll see what the check is
<imbrandon> i have in the elb
<imbrandon> HTTP:80/fpm.www.ping
<imbrandon> but yea, i basically wanted to get it to a place i could hand it to joey and say "go" and it not be a big deal
<imbrandon> and its at that place now, has been a few weeks, he does his own code deploys etc etc etc
<imbrandon> i havent HAD to touch anything in almost a month
<imbrandon> now i
<imbrandon> still have done a few things like the shortcode plugin etc, but thats purely fun etc
<imbrandon> and nothing to do with juju or preformance etc ( the shortcode plugin where he can do [download4ubuntu url="http://apps.ubuntun.com/some/app"] and it puts my pretty css button in place :)
<imbrandon> but yea, he now handles all the code deploys and juju updates and such, i just kinda keep an eye on things from afar, and answer any questions he has but its not many, he picked it up quickly and has a solid grasp of it, may not be a coder but definitely isn't as hands-off as he's made out to be sometimes
<imbrandon> he is also using a small aws_rds too, if you notice there was no mysql charm
<imbrandon> but its not charmed, it is just set as a config option for now , omgweb charm has a config option of db_user db_pass db_host db_name , and sets those on config change
<imbrandon> but it would be nice to charm that too
<imbrandon> but separate like that not only is it cheaper ( only by like 1c an hour ), but nightly backups are done without extra scripting that could possibly fail
<imbrandon> but also allows the whole env to be rebuilt on the fly without thought of the db
<imbrandon> great for those "oh shit" moments. now just juju bootstrap a new env or juju deploy omgweb some-other-service-name and then once up and config set to the db, destroy the old service without a second thought
<imbrandon> perfect for handing off to less tech ppl
<imbrandon> :)
<SpamapS> imbrandon: so basically, omg has become a custom thing that is nothing like anything in the charm store.
<SpamapS> fail
<SpamapS> but its using juju for great stuff
<SpamapS> WIN
<imbrandon> so its beautifully handled by juju still,  but all he needs to handle is the scaling of webhead units
<SpamapS> imbrandon: its also EC2 specific
<SpamapS> he can't move to HP
<SpamapS> or RAX
<SpamapS> bummer
<imbrandon> SpamapS: well kinda, nothing from the store directly but parts from it are in a lot of other charms
<imbrandon> yea he can
<imbrandon> infact he was just looking at moving to rack
<imbrandon> maybe
<SpamapS> imbrandon: this is like the early days when people built servers from Slackware... install slack, cd /usr/local/src .. and start building the box.
<imbrandon> now HP dont have everything, but rack does
<SpamapS> RAX doesn't have *RDS*
<imbrandon> yea
<SpamapS> you'd have to manually do that
<imbrandon> it does
<SpamapS> and ELB
<imbrandon> nah
<imbrandon> it has rds
<SpamapS> which you'd have to do manually
<imbrandon> and elb i'm not certain but it has a lb of some kind
<SpamapS> because there's no charm is what I'm saying
<SpamapS> a big part of juju's promise is cloud agnosticism
<SpamapS> Now, OMG doesn't really *need* that
<imbrandon> well the rds bit is manual right now anyhow, i want to charm it but its not yet, but yea they would need charmed
<imbrandon> but the aws ones needed charmed too
<SpamapS> but its sad to see how fast it blew off the rails into EC2 lock in
<imbrandon> to begin with :)
<imbrandon> well thats what i'm saying its not as locked in as it looks
<imbrandon> because i'm not using any of the special features and its all charmed cept db backup
<imbrandon> so it can be abstracted out into a rackspace_elb solution or a rackspace_rds one
<imbrandon> now hp it would need to use a mysql one, but it still has those interfaces ( optional now ) so it could without change
<imbrandon> if needed
<SpamapS> Well I'd hope you'd charm RDS before attempting that, so you can charm RAX's db as a service, so you could then actually do the migration just that way
<imbrandon> soh for sure
<imbrandon> oh*
<SpamapS> We need some more sites playing with juju
<SpamapS> I'll be interested to see how the next one goes
<imbrandon> yea, that was another reason to get it into joeys hands so i could possibly do "the next one"
<imbrandon> etc :)
<imbrandon> but yea, it LOOKS like its tied to aws but really its not, or only very very very slightly until rds is charmed
<imbrandon> even then not really cuz there is still sql dumps to the bootstrap node and it has the mysql relations in place
<imbrandon> but yea
<imbrandon> tried to make it as hands off as possible but still use juju for everything that it possibly can
<imbrandon> to make it easy on him, me or anyone else
<imbrandon> and it may not use stuff from the store, but its a guinea pig and large chunks of what we learn/learned with it are going to many charms, like the elb suggestions
<imbrandon> from a bit ago
<imbrandon> etc
<imbrandon> that and when shit breaks the only "custom" code really is the webheads we designed, the rest can be pointed to aws ( or other cloud provider ) for outages
<imbrandon> if needed, but honestly, its been more stable the last month since that last big breakdown that made me take these measures
<imbrandon> than ever
<imbrandon> ( and still 98% juju hehehe )
<imbrandon> and the webheads still rev proxy with a microcache just the same , all to each other etc, they just have an elb infront of them now as well
<imbrandon> preventing one webhead die'ing and taking the whole site
<imbrandon> ( or getting rebooted )
<SpamapS> imbrandon: Its making me question my vision of juju users actually contributing back the stuff that makes their websites go though.
<imbrandon> and things like the uptime.u.co.uk and newrelic etc etc all are great cya now instead of hearsay :)
<SpamapS> imbrandon: I had thought it would be simpler than this. :-/
<imbrandon> SpamapS: heh, thats what i've been trying to say from the start
<imbrandon> SpamapS: there will be contributors but "custom" will be very common too,
<imbrandon> esp when legacy sites are migrated
<imbrandon> and not some new service
<imbrandon> but yea, really there is very very little, if any that isnt contributed back
<SpamapS> but for our first known live site to be custom, is sad.
<imbrandon> just not in the clean fashion of a single commit etc
<SpamapS> imbrandon: you're not using mysql, you're not using haproxy, or even an nginx lb charm, and wordpress seems to have stymied you *and* marco to get it into production shape.
<imbrandon> SpamapS: nah, dont think of it like that, think of it like its still 100% juju even after the dust settles and not because it needs to be, and #2
<imbrandon> that it IS the first, there are bound to be hiccups
<imbrandon> :)
<SpamapS> imbrandon: I knew that production would bring radical change to the charm store. I just haven't seen *any* of that change in lp:charms yet
<imbrandon> its using nginx lb
<SpamapS> not from the charm store
<SpamapS> and what its using has not been submitted
<imbrandon> well only because i haven't cleaned it up enough to push
<SpamapS> thats what I'm lamenting
<imbrandon> i've been lazy in that respance
<SpamapS> that it takes that much effort from you to clean it up
<imbrandon> respect, but thats me
<SpamapS> lazy is what I want
<imbrandon> no not really, like i said its more of me lazy on that front
<SpamapS> I want you to be *so* lazy that you never want to repeat this again
<imbrandon> of just not doing it and doing other things
<SpamapS> perhaps on the second site, you guys will get off your butts and submit things
<imbrandon> SpamapS: sure, i totally understand and agree 10000% , but i mean a diff kind of lazy
<imbrandon> nah
<imbrandon> i need to do it now
<SpamapS> and actually you *are* using your newrelic stuff, right?
<SpamapS> so, WIN there.
<imbrandon> in fact i'm going to finish up this doc stuff to get it working and not work on other projects until i have it at least up for review
<imbrandon> haha yea
<imbrandon> newrelic , elb ( from you )
<imbrandon> cs:~clint-fewbar/precise/elb
<SpamapS> haha cs?
<SpamapS> brave man
<imbrandon> heh, when i deployed it dns wasn't pointing to the elb yet :)
<imbrandon> heh
<SpamapS> yeah but you can't fix that charm
<SpamapS> you can't do anything with it actually
<SpamapS> because subordinates can't be removed
<SpamapS> so if you find a bug, or want to change it.. oops.. you're stuck, nothing you can do
<imbrandon> yea , i didn't think of that till you brought it up the other dat
<imbrandon> day*
<imbrandon> but i could have sworn i did "juju deploy mysql" then "juju upgrade-charm --repository /var/lib/charms mysql" and it grabbed the upgrade from local
<imbrandon> wasn't mysql, but still, a "charm"
<imbrandon> as long as the revision was higher, but maybe not, probably mistaken
<SpamapS> imbrandon: that won't work no, because the charm is cs: not local:
#juju 2012-06-23
<imbrandon> SpamapS: btw , you have to really stretch it to say omg isnt using juju anymore :) heh i posted that showing the fact it was using 3 subordinates, only one other exists in the store :)
<imbrandon> SpamapS: yea but didn't it just use the service name not the proto
<imbrandon> e.g. it sees mysql and local:mysql and cs:precise/mysql all as the mysql service
<SpamapS> I did not say they're not using juju
<SpamapS> they're using juju BEAUTIFULLY
<SpamapS> I'm saying they're not using the charm store
<imbrandon> SpamapS: oh for sure, but its the FIRST, and also still very much done in my spare time
<SpamapS> imbrandon: the service name? no. It uses the charm name to figure out if there is a new charm to upgrade to
<imbrandon> so there are things to improve etc etc for sure, but also try to not cowboy stuff either :)
<SpamapS> imbrandon: yeah don't sweat it. I'm frustrated with us for not making this easier, not you for not contributing more. ;)
<imbrandon> oh yea, i didn't mean it like that either
<imbrandon> kinda came out wrong :)
<imbrandon> but ya know what i mean, and yea juju is still very very young too, as well as this is the first one, trust me it could be A LOT worse
<imbrandon> i think considering those two things its fantastic
<imbrandon> personally :)
<imbrandon> SpamapS: omg has picked up quite a bit of traffic too since we all started
<imbrandon> a few months ago on it, like almost doubled actually
<imbrandon> sustained
<SpamapS> imbrandon: I'm not surprised. Its *WAY* faster now.
<SpamapS> that matters to people
<SpamapS> as google has shown
<imbrandon> like the surge from 12.04 never backed back off
<imbrandon> and its even increased a bit
<imbrandon> :)
<imbrandon> SpamapS: got him on with that new advertiser too
<imbrandon> http://bsa.ly/k2s
<imbrandon> only a few days ago though so not much inventory yet, and the impressions is aoff
<imbrandon> off*
<imbrandon> but still impressive, and may make him some more money :)
<EvilMog> I like the whole MaaS but I keep running into nodes getting stuck in commissioning
<EvilMog> even when their clocks are sync'd
<EvilMog> so I'm trying to run initial trials with a bunch of small desktops before I go and rebuild my cluster
<EvilMog> but once I get that ironed out and the commissioning rock solid, this will definitely be handy
<imbrandon> :)
<imbrandon> EvilMog: thats the plan
<imbrandon> woot, blitz.io t-shirt showed up today
<imbrandon> :)
<imbrandon> more swag heh
<m_3> ok, really appreciating that I can run mixed series environments right now
<m_3> byobu-classroom internals work a little better with the default settings from oneiric
<m_3> guess maybe that was the switchover from byobuscreen to byobumux
<m_3> really would like to just have tmux-classroom at this point
 * m_3 fingers relearning split adjustments for the irc talk
<imbrandon> oh cruft, i got that class today
 * imbrandon goes to polish the presentation
<jkyle> morning
<jkyle> I have juju set up on osx and I've configured the ec2 variables for my openstack install (type: ec2), but I'm getting connection errors. (euca2ools works with the same configuration)
<jkyle> I used this as a reference for the environments.yml file: http://www.w3.org/TR/html4/strict.dtd
<jkyle> oops
<jkyle> http://askubuntu.com/questions/65364/how-can-i-configure-multiple-deployment-environments-for-juju
<m_3> imbrandon: so `juju deploy cs:oneiric/byobu-classroom` works pretty well as long as there're only 20 or so users
<m_3> imbrandon: it sets up an ajaxterm in ec2 with user/pass of guest/guest... precise's doesn't work though so you have to subsequently ssh from there to a precise instance if you're showing precise-specific cli stuff
 * m_3 adds a note to fix byobu-classroom to work with byobumux instead of byobuscreen
<m_3> or actually just frickin write tmux-classroom
<jkyle> how do I enable debug logging in juju?
<m_3> jkyle: `juju debug-log`?
<imbrandon> nice, yea i was gonna ask you if that was charmed
<imbrandon> brb afk
<m_3> jkyle: did you get your multiple environments straightened out?
<jkyle> debug-log doesn't seem to do it
<jkyle> oddly
<imbrandon> debug-log runs in a separate term while you do the commands
<imbrandon> keep it running
<jkyle> didn't know they were crooked
<jkyle> the issue is a failure to connect
<m_3> jkyle: there're some bugs with what it logs too
<jkyle> I opened the connect.py at the error point and noticed it has decent logging to the debug facility
<jkyle> e.g. log.debug("Failed using url: {0}".format(url))
<m_3> jkyle: hmmm dunno
<jkyle> and I'd like to enable that
<m_3> does your dns work on your openstack install?  I know there've been issues with that in the past
<jkyle> this is a running install
<jkyle> has dozens of users and vm's on it
<jkyle> my endpoints are ip's, e.g. http://12.50.28.2:8773/services/Cloud
<m_3> I've got bacon, egg_white_s, and raw spinach upstairs... the latter two are bacon-offset(TM)
<m_3> jkyle: is your bootstrap node directly accessible from your client?  (i.e., with a public address)
<imbrandon> :)
<m_3> jkyle: nothing works from the client if it can't talk directly to the bootstrap node... iirc, haven't used juju against openstack in a bit
 * m_3 gonna go eat.. ttyl
<jkyle> juju bootstrap                                                                                                                                                                                            1 ↵
<jkyle> 2012-06-23 10:19:34,212 INFO Bootstrapping environment 'pao1' (type: ec2)...
<jkyle> Connection was refused by other side: 61: Connection refused.
<jkyle> can't create bootstrap node
<jkyle> like I said, failing to connect to the endpoints ;)
<jkyle> maybe it's a version thing, does juju only work with essex or something?
<m_3> jkyle: hmmm it sounds like it's not reading your environment
<m_3> `juju bootstrap -e<environment_name>`
<m_3> or
<jkyle> ah, my other openstack install seems to behave more
<jkyle> nope, timed out. but not connection refused
<m_3> ~/.juju/environments.yaml should have a top-level 'default: <environment_name>' if you've got multiple environments
<m_3> don't know where things are stored on the mac client
<jkyle> is it required that you have an s3 endpoint?
<m_3> dunno for openstack... lemme see what I have
<jkyle> right, I have default:pao1 and it's using that definition
<jkyle> too bad juju doesn't have a nova provider yet.
<m_3> ec2-uri, s3-uri, default-image-id are all there
<m_3> pretty sure the first two are required
<m_3> jkyle: it's there just in review... not in trunk yet
<jkyle> yeah, looks like juju requires s3
<jkyle> I'm not running an s3 service
<m_3> there'll be a 'honolulu' release in a couple of weeks with that provider
<m_3> oh
<m_3> yeah, that's a problem
<jkyle> big problem, I don't think we're offering that on any of our clouds
<m_3> there may be workarounds to get an s3-like service up (rados and stuff like that)
<m_3> you can run rados from euca-tools, then use _that_ endpoint url for juju
<m_3> haven't seen that actually working before though... just ideas thrown around
<m_3> trick is to configure to accept the same creds as nova
<jkyle> why is s3 needed?
<m_3> caching charms mostly
<m_3> you want the set of charms you've deployed to be sort of frozen
<m_3> then you can deploy additional units of a service using the _same_ version as the already-deployed ones
<m_3> two months later
 * m_3 gonna get rid of the classroom setup... brb
<jkyle> well, this is disappointing.
<EvilMog> so heres my question, is there a way to force a username and password onto newly commissioned nodes so that I can log into them during their commissioning phase and sync their times and otherwise force them to complete commissioning?
<m_3> EvilMog: would keys work?  it's easy to have juju inject keys into every instance's startup... environments.yaml there's a per-environment key:value for authorized-keys (http://paste.ubuntu.com/1056340/ indent is important in yaml heredocs)
<EvilMog> as long as I can sudo, my big problem is I have 5 nodes stuck in commissioning and only 1 that made it to ready during the Ubuntu MaaS setup
<EvilMog> so this is prior to juju being even installed
<m_3> ah, dunno the maas ramifications sorry
<jkyle> EvilMog: maas is just a fancy frontend to pxe
<EvilMog> yeah, but the image it pushes never fully syncs its nodes
<EvilMog> syncs to its master
<EvilMog> and since the image doesn't have a username and password attached to it I can't go in and fix it
<jkyle> EvilMog: right, but you can modify the pxe preseed file to install pubkeys if you wish
<EvilMog> ahh
<jkyle> or do any other kind of preconfiguration
<jkyle> ntp, packages, network config, etc
<jkyle> m_3: I think I'll give puppet a try, just looking for something other than chef
<imbrandon> capify :)
 * imbrandon hugs ruby
<jkyle> they serve different purposes, eh
<jkyle> I'm lukewarm about ruby hehe
<imbrandon> well kinda, so do puppet and juju in that sense :)
<jkyle> language is cool, working in the environment is painful
<imbrandon> i personally like to use juju for orchestration and fabric or capistrano for code deployment and config mgmt
<imbrandon> together
<jkyle> kinda, but I'd say puppet and juju are much closer. cap is just a app deploy tool pretty dialed in for ruby apps
<jkyle> imbrandon: I'd say that's typical
<imbrandon> nah, i use it for php apps mostly
<imbrandon> there are a lot of plugins for many langs, even drupal and wordpress specific ones
<imbrandon> etc
<jkyle> my condolences ^_^
<imbrandon> ( in cap )
<imbrandon> haha , i actually _like_ php :)
<jkyle> color me perplexed :P
<imbrandon> good php is a beautiful thing, there is a lot of bad out there, but there is bad js and python too
<imbrandon> :)
<imbrandon> php is easy, good php is hard, but good php is rockin, and it powers 77.9 % of the net :)
<jkyle> php drives me mad, it's like they just took a bunch of ideas and threw them in a bag and shook them up
<jkyle> no consistency in language design
<jkyle> docs are great though
<imbrandon> sure but same thing can be said about perl too
<imbrandon> :)
<jkyle> perl? what's that?
<jkyle> ./ducks
<imbrandon> and perl is the foundations of ubuntu and debian :)
<imbrandon> as much as people want to think python is
<imbrandon> hehe
<imbrandon> i hear ya
<jkyle> perl is an old gray beard for sure...not sure I'd say it's as bad as php though
<imbrandon> nope, you're right, its worse :)
 * m_3 goes to watch tv now that the discussion has turned to religion :)
<jkyle> hehe
<imbrandon> it just doesn't get as much press since its not powering 77% or more of the net
<jkyle> I see perl as much closer to ruby than php
<jkyle> or ruby close to perl, rather
<imbrandon> php was built on perl :P all the way till version 3
<imbrandon> jej
<imbrandon> heh*
<imbrandon> version 3 introduced the compiled code / zend engine , before that php was a ton of perl functions
<imbrandon> ( and a bit of C, nothing like today tho, its come a LONG way, even just in the last 3 years with 5.3+ )
<imbrandon> it got its big boy pants around 5 or 5.1 when they started doing releases right, and ppl like me could stand behind it with a straight face :)
<imbrandon> heh
<imbrandon> ruby is awesome for one thing, and one thing only, but its a HUGE thing
<imbrandon> ruby EVERYTHING is an object , EVERYTHING
<imbrandon> and THAT is awesome, it has a TON of implications most dont even consider
<imbrandon> like ... class String; def repeat(n); self * n; end; end; puts "hi".repeat(4)  >> "hihihihi"
<imbrandon> thats pseudo code, but very very close to exact
<imbrandon> :)
<jkyle> imbrandon: in python as well
<jkyle> everything is an object in python
<imbrandon> nah, i mean EVERYTHING is an object, not so in python
<imbrandon> alot is
<imbrandon> and its very OO
<jkyle> what's not an object in python
<imbrandon> but not EVERY SINGLE THING
<imbrandon> a string
<imbrandon> a literal in ruby is a object
<imbrandon> no extra code
<imbrandon> etc, right off the bat
<imbrandon> like
<imbrandon> 5.*(10)
<imbrandon> == 50
<imbrandon> the literal 5 has a method
<imbrandon> jkyle: check this
<imbrandon> http://paste.ubuntu.com/1056377/
<jkyle> strings are objects in python
<imbrandon> i'm not saying that cant be done in python but its a diff beast
<imbrandon> i was pretty sure python literals and strings and such weren't in the same way
 * imbrandon will dig into it tho
<jkyle> I like that functions are first class objects in python
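both points check out in a quick python snippet (plain stdlib, nothing hypothetical): literals respond to methods much like ruby's 5.*(10), and functions can be assigned to ordinary variables and passed around.

```python
# integer and string literals are full objects with methods,
# the python spelling of ruby's 5.*(10)
assert (5).__mul__(10) == 50
assert "hi" * 4 == "hihihihi"

# functions and unbound methods are first-class: bind them to
# plain names and pass them around like any other value
repeat = str.__mul__
assert repeat("hi", 4) == "hihihihi"

def apply_to(func, value):
    # func is just another object here
    return func(value)

assert apply_to(len, "hihihihi") == 8
```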
<imbrandon> python has the whitespace downfall tho :)
<jkyle> I like the whitespace :)
<imbrandon> yea same in ruby
<jkyle> imbrandon: no, in ruby they are not
<imbrandon> sure they are, and you even can mask them as builtins
<imbrandon> or they seem like it when used
<jkyle> imbrandon: no, methods in ruby cannot be assigned or passed as normal variables
<imbrandon> sure they can, ruby def has lambdas and the like
<jkyle> you have to use a proxy object, this is the purpose of Proc
<imbrandon> def something do |func|
<imbrandon> func blah blah
<jkyle> imbrandon: it's a side effect of ruby's being designed around messaging
<imbrandon> def weightedknn(data, vec1, k = 5, weightf = :gaussian) ... weight = self.send(weightf)
<jkyle> imbrandon: pastie an example of directly assigning a function object to a variable
<imbrandon> just use .send
<jkyle> send is a proc object
<jkyle> like I said, you can't assign a function to a variable
<imbrandon> its just diff syntax, not any less able to pass functions
<jkyle> no, it's the definition of a first class object.
<jkyle> it must support the assignment operator
<imbrandon> the variable itself is an object with functions too, its just syntax sugar
<imbrandon> at that point
<jkyle> no, it's not syntactic sugar. it's a core language feature >.< it has significant implications in how the language can be used and how binding works
<imbrandon> ohh crap, 30 minutes until my talk, gotta start up VMs and such, i'd love to finish this later with ya tho, but i do disagree that they arent first class, kinda have to be as objects
<jkyle> http://stackoverflow.com/questions/2602340/methods-in-ruby-objects-or-not
<imbrandon> give me an hour to give this user days cli thing :)
 * imbrandon will open the tab for later
<jkyle> that's not 100% accurate as they _are_ objects, just not first class objects
<jkyle> that's what's implied by "not objects like strings and numbers"
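jkyle's definition of first class ("it must support the assignment operator") is easy to demonstrate in Python, where a bare function name can be assigned and passed with no wrapper object; in Ruby you would reach for `method(:name)` or a Proc instead. A minimal sketch (`shout` and `apply_twice` are made-up names):

```python
def shout(s):
    return s.upper() + "!"

f = shout                      # direct assignment: no Proc/Method wrapper needed
assert f("hi") == "HI!"

def apply_twice(fn, x):        # functions pass as ordinary arguments too
    return fn(fn(x))

assert apply_twice(shout, "hi") == "HI!!"
```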
<imbrandon> thats not entirely true either, look at :name
<imbrandon> etc
<imbrandon> but i really need to get started, sorry :(
<imbrandon> heh
<imbrandon> otherwise i'll be mobbed in #ubuntu-classroom lol
<imbrandon> m_3 is another ruby person i think, i dont claim to know ruby well, but i do like it quite a bit more than python ( but thats mostly due to my hatred of whitespace delimited things like yaml heh )
<imbrandon> but yea they are very very similar , in utility and placed to use them wisely
<jkyle> imbrandon: btw, you can monkey patch python like your example too. though not builtins, this is by design. affecting core classes at the global scope is the cause of many problems with ruby
<imbrandon> yea class String undef_method :blah
<imbrandon> is fun :)
<imbrandon> i'm thinking more like    a = "blah"   [1,2].each do |func|  puts func.*a
<imbrandon> kinda bastardized but yea
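jkyle's point about Python monkey patching — user-defined classes yes, builtins no — can be checked in a couple of lines (a sketch; `Text` and `repeat` are made-up names):

```python
class Text:
    def __init__(self, s):
        self.s = s

def repeat(self, n):
    return self.s * n

Text.repeat = repeat                      # patching a user-defined class works
assert Text("hi").repeat(4) == "hihihihi"

try:
    str.repeat = repeat                   # builtins are sealed by design
    patched = True
except TypeError:
    patched = False
assert not patched
```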
<imbrandon> btw on the s3 thing, s3ql works great
<imbrandon> for that
<imbrandon> oh wow, i fail at reading the clock
<imbrandon> i have one more hour :)
<imbrandon> heh
<imbrandon> jkyle: yea i used s3ql the first time the other day as a single machine s3 replacement, seems to work ok for limited use things like that
<imbrandon> its designed for like use on a dev machine to practice on the api etc
<imbrandon> but works well for non-critical data like that
<imbrandon> or seems to, not used it a ton yet
<JoseeAntonioR> imbrandon: ping
<imbrandon> yup
<imbrandon> just about ready
<imbrandon> running about 30 seconds late
<imbrandon> m_3: heh , left a lil toss out to ya at the very end of my cli session ( after explaining aliases a lil bit )
<imbrandon> 15:59:27 <+imbrandon> OK well with that, i'll leave you with this last thought
<imbrandon> :)
<imbrandon> 15:59:33 <+imbrandon> alias pushit='git push && afplay ~/Music/saltnpepa-pushit.mp3'
<imbrandon> 15:59:50 <+imbrandon> ^ m_3 that ones just for you brother :)
 * imbrandon goes to work on the nginx charm a little more, spent the first half of today on it, and will spend the last half too, it _IS_ getting to the review queue today ... $sometime
<imbrandon> jkyle: i dug a little more into what you were saying about the ruby stuff, and yea you are exactly right, but really i dont see that as a drawback as the alternatives or "work arounds" however ya wanna look at them are solid and such, e.g. it dont seem like a phpish hack
<imbrandon> seems more like a coding style type thing, like the difference between Pascal and Borland Turbo Pascal :) hahahaha
<imbrandon> i mean hell, lambda functions are awesome and a staple in JS, server and client, and as close as syntax wise JS and PHP are ( scary really ) php also has the same lambda and closure function ability ( its very awesome actually, dont know why its not used more ) but rarely used, esp in "production" code with the exception of Symfony2 and one other small framework i cant think of the name right off
<imbrandon> but definitely not the norm
<imbrandon> 2012-06-23 20:06:52,707 ERROR Cannot connect to environment: DNS lookup failed: address 'ec2.eu-west-1.amazonaws.com' not found: [Errno -2] Name or service not known.
<imbrandon> fun
<imbrandon> hazmat: can you reach your aws stuff ?
<imbrandon> ( or anyone around )
#juju 2012-06-24
<hazmat> checking
<hazmat> imbrandon, jujucharms.com is in ec2 fwiw
<hazmat> imbrandon, i can hit omg as well
<imbrandon> kk ty, kk ty
<imbrandon> yea i could hit omg the whole time, but i cant use juju
<imbrandon> for status or anything
<imbrandon> but external monitors show fine, and i can hit the website as well, but i cant hit the bootstrap node with juju, wondering if its because of dns errors or not
<imbrandon> net ones, well least not on the surface
<imbrandon> i'll track it tomorrow if its still happening
<imbrandon> dont feel like messin with it tonight and its not an emergency blah blah
<imbrandon> ty for checking tho hazmat
<hazmat> imbrandon, ping
<hazmat> imbrandon, fair enough.. not tonight
<hazmat> imbrandon, i'll be around a bit in the am tomorrow, then off on a plane, but i'd be happy to help debug
<imbrandon> sounds good, yea my brain is about fried as far as anything productive
<imbrandon> for the evening :)
<imbrandon> i've been in my OSX boot playing in photoshop the last hour heh, bought the new cs6 version today
<imbrandon> they drastically dropped the price on the creative suite , so it was a no brainer
<imbrandon> its like a $30 a month subscription now ( optional ) instead of $1200 up front
<imbrandon> and includes ALL the CS on and offline apps
<imbrandon> smart move imho
<imbrandon> if they would just port at least the online versions to linux ( the online apps are 98% as powerful as the offline ones )
<imbrandon> i would be set
<arand> I'm getting juju FTBFS on current quantal, due to a bunch of failed test cases, is that something anyone would be able to confirm? https://bugs.launchpad.net/ubuntu/+source/juju/+bug/1017113
<_mup_> Bug #1017113: FTBFS: Sveral tests fails <ftbfs> <quantal> <juju (Ubuntu):New> < https://launchpad.net/bugs/1017113 >
<imbrandon> errr
<imbrandon> bholtsclaw@ares:~/Projects/local/charms/precise/nginx$ bzr push lp:~imbrandon/charms/nginx/trunk
<imbrandon> bzr: ERROR: Permission denied: "~imbrandon/charms/nginx/trunk/": : Cannot create branch at '/~imbrandon/charms/nginx/trunk'
<imbrandon> :(
<SpamapS> imbrandon: you need a series
<imbrandon> ahh crap
<imbrandon> ty
<imbrandon> real fast tho, yea if nothing else at all lets steal that version showing trick, kinda slick AND solves the problem of parsing it possibly later
<imbrandon> k , i'm out for a bit ... SpamapS ^^
 * imbrandon just put the place holder site/design up on behalf of the php-fig group 
 * imbrandon does a little dance , sings a little song 
<imbrandon> heh
<imbrandon> http://www.php-fig.org :) they are the group that made/makes like PSR-0 etc
<imbrandon> e.g. the peps in python
<SpamapS> oh
<SpamapS> cool
<SpamapS> wait, PEP?
<SpamapS> I thought PHP had RFC's
<imbrandon> like pep-8
<imbrandon> rfc's and PSR's now there is only PSR-0 and -1 so far
<imbrandon> thus me getting the chance to do the initial web :)
<imbrandon> its a young group but has some heavy heavyweight backing like drupal, wordpress, zend zf2, symfony, well hell just about any php group or company has a voting member
<imbrandon> fig is "framework interop group" and psr's are php standard recommendations or something, but makes sure all frameworks work together and do shit the same way
<imbrandon> and even if there is 5 ways to do something there is now a "right" way , or will be :)
<imbrandon> heh
<SpamapS> Ok, so PSR's are just about php code
<imbrandon> yea
<SpamapS> where as the PHP RFC's are about php dev
<SpamapS> cool
<imbrandon> LIKE pep :)
<imbrandon> well
<imbrandon> its about interop
<imbrandon> like psr-0 says how autoloaders need to work
<imbrandon> and the bare minimum they have to support etc
<imbrandon> so i can use bits from zf framework with symfony now, like for real
<imbrandon> and it autoloads
<imbrandon> an extreme case of this will be when i can use a wordpress plugin on drupal
<imbrandon> thats a goal, far far out but that level of crap from the ground up
<imbrandon> and -1 talks about tabs and spaces
<imbrandon> and 2 spaces versus 4 etc
<imbrandon> so its a mix
<imbrandon> i think -3 thats in the works is covering caching and standardizing some Cache interfaces
<imbrandon> i *think*
<imbrandon> but yea in like less than 6 months its got every major player behind it and like 300+ members
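The PSR-0 autoloading rule described above is essentially a class-name-to-file-path mapping; a simplified sketch of that mapping in Python (namespace separators become directories, and underscores in the class name itself also become directory separators):

```python
import os

def psr0_path(fully_qualified_class, base=""):
    """Rough PSR-0 mapping: each namespace separator becomes a directory,
    and underscores in the final class name also become directory separators."""
    parts = fully_qualified_class.lstrip("\\").split("\\")
    class_file = parts.pop().replace("_", os.sep)
    return os.path.join(base, *parts, class_file) + ".php"

assert psr0_path("Zend\\Log\\Writer") == os.path.join("Zend", "Log", "Writer.php")
assert psr0_path("Twig_Extension_Core") == os.path.join("Twig", "Extension", "Core.php")
```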
<SpamapS> thats great
<SpamapS> PHP is growing up
<SpamapS> despite all the past predictions that it would eat itself
<SpamapS> (I never subscribed to those predictions btw ;)
<_mup_> Bug #1017113 was filed: Juju test suite fails sporadically due to low timeouts <ftbfs> <quantal> <juju:Confirmed> <juju (Ubuntu):Triaged> < https://launchpad.net/bugs/1017113 >
<imbrandon> FRAK
<imbrandon> i just did this whole function in JS
 * imbrandon is working in a php file
<imbrandon> bah, i need a cigarette and a mt dew, heads not right :)
<imbrandon> SpamapS: there we go
<imbrandon> http://cl.ly/HcuA
<imbrandon> fully filled out with temp data and everything
<SpamapS> so .. much.. white(black)space
<imbrandon> till they decide what content they want heh :) or they dont like the look of it
<imbrandon> thats a screenshot
<imbrandon> the black is the website holding the screenshot
<SpamapS> l
<SpamapS> o
<SpamapS> l
<SpamapS> got it
<imbrandon> and on your smaller monitor there wont be so much white
<SpamapS> I'm also just having my first coffee even tho I've been up since 5am
<imbrandon> i'm on a 24inch :)
<imbrandon> heh
<SpamapS> I'm on the 11"
<imbrandon> heh
<SpamapS> thats what s...
<imbrandon> s ?
<imbrandon> res ?
<imbrandon> 1920x1080 x 3
<imbrandon> no retina for me yet :)
<imbrandon> okies, back in a bit, skipped the shower for breakfast instead this morning, not gonna put it off too long or i'll be all icky ... ewww yea, ok back in a bit
<hazmat> SpamapS, about 3m40s on an x220 with ssd (samsung 830) with fsync off on zk
<SpamapS> hazmat: yeah I'm about the same using eatmydata
<SpamapS> hazmat: on the MBA
<SpamapS> with "whatever the heck they use for SSD"
<SpamapS> hazmat: hey, 'make check' doesn't seem to work for me
<SpamapS> I have JUJU_TRUNK set...
<SpamapS> make: *** [check] Error 1
<hazmat> SpamapS, apple uses samsung and toshiba, pretty slow till this year afaik
<hazmat> SpamapS, make check needs modified files committed
<hazmat> hmm.. actually not committed
<hazmat> make review checks committed changes on a branch
<hazmat> most of those are ben's additions, i normally just use make coverage
<imbrandon> yea the ssd's until this last batch from apple sucked
<imbrandon> they are samsung mostly
<imbrandon> but now they got some intel ones
<imbrandon> but yea before these you want to buy third party ram/ssd
<imbrandon> ram is that damn hynix
<imbrandon> bleh
<hazmat> imbrandon, they're still using samsung from what i've read, but its the new 830 series controller, which is pretty fast (also what i use for my thinkpad)
<hazmat> it doesn't do the write deamplification like the sandforce ones, and its not quite as good as the intels as background cleanup, but with trim OS support it works fairly well for me
<hazmat> imbrandon, did your ec2 issues get resolved?
<imbrandon> yea , well kinda
<imbrandon> its a bug but i figured it out
<imbrandon> now i forgot what it was but i made a note cuz i wanted to report it and let you know
<imbrandon> ahh, the only 2 i've owned have been intels so far
<imbrandon> i used 2 others but quickly took them back for write errors in the first week
<imbrandon> ohh i know what it was now, the nodes got a new juju
<imbrandon> but i had an old local one
<imbrandon> and it kept looking like a dns error
<imbrandon> but as soon as i updated my local juju it was all good
<hazmat> cool
<hazmat> time to pack up and head to velocity then
<imbrandon> cool cool, dont have too much fun, i plan to fix the docs this afternoon
<imbrandon> so they should be all new and ready for yall
<imbrandon> i wanted to get my nginx pushed first tho ( i did get the first round pushed, so its at least on LP now heh )
<imbrandon> but its still missing a few hooks i need to complete
<imbrandon> SpamapS: and the majority of it is in PHP heh :)
<imbrandon> next one i think i'm gonna use ruby on but not chef/puppet, just ruby
<imbrandon> just to kinda feel out the diffs
<imbrandon> http://bazaar.launchpad.net/~imbrandon/charms/precise/nginx/trunk/view/head:/hooks/install
<imbrandon> LP/logger head makes it all pink :(
<imbrandon> you know what i realized just now, and now i see it i cant believe no one else has
<imbrandon> why do we do all these config-get's and put them into variables in every hook
<imbrandon> why does juju not just put them in the env when the hook fires
<imbrandon> its pretty much guaranteed config-get will be called once, if not many times in a row, when they could either config-get once at the top and split the json out
<imbrandon> or none at all and juju could do it for us, repetitive boilerplate is not good, i started to make a "common.php" and was like wtf, why dont juju do this
<imbrandon> SpamapS: ^^^ ( and m_3 / hazmat if yall arent on the road yet )
<hazmat> its a little implicit
<imbrandon> well env vars, it happens all the time for websites
<hazmat> say you get a variable mismatch
<imbrandon> the db conn info in the server env
<hazmat> where's the error line
<imbrandon> well i'm not saying that the config-get should die, just that it should prepopulate
<imbrandon> the env
<imbrandon> it almost has to anyhow, i'm sure it loads it prior to the config-get call
<imbrandon> like when the hook fires
<imbrandon> anyhow
<hazmat> imbrandon, worth filing a bug/feature request for
<imbrandon> anyhow, not saying its well thought out either, it just struck me a few minutes ago :)
<imbrandon> yea, i'll see where it goes ;)
<imbrandon> only takes 5 min to fill out a LP page
<imbrandon> its worth that
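The idea floated above — call config-get once, split the JSON, and hand hooks env-style variables — can be sketched like this (the `JUJU_CFG_` prefix and helper name are made up for illustration; a real hook would feed it the output of `config-get --format=json`):

```python
import json

def config_to_env(config_json, environ):
    """Split one `config-get --format=json` payload into env-style variables,
    instead of shelling out to config-get once per option in every hook."""
    for key, value in json.loads(config_json).items():
        environ["JUJU_CFG_" + key.upper().replace("-", "_")] = str(value)

env = {}
config_to_env('{"port": 80, "server-name": "example.com"}', env)
assert env["JUJU_CFG_PORT"] == "80"
assert env["JUJU_CFG_SERVER_NAME"] == "example.com"
```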
<imbrandon> ZOMG !!! the new photoshop cs6 went gimp :( frak
<imbrandon> its all multi windowed, man i knew i should have fired it up first
<imbrandon> damn the artwork, window chrome , icons , etc all look gtk3/gimp
<imbrandon> SpamapS: check this out, this is default like 5 seconds after it finished installing ( e.g. not a custom theme or something )
<imbrandon> http://cl.ly/HbID
<imbrandon> watch out tho, there might be a lot of black round that one too hahaha j/k
<imbrandon> but tell me that dont look gtk/gimpified
<imbrandon> ps has always done its own unique UI , well until now i guess
<lifeless> is there bug open for getting secret key etc from the environment (e.g. AWS_ACCESS_KEY_ID) ?
<imbrandon> lifeless not that i'm aware, but i havent had problems with that
<lifeless> just thinking it would make 'get up and go' easier.
<imbrandon> like you set it and then it cant get it ?
<lifeless> sophisticated installs need partitioned credentials
<imbrandon> ohh
<imbrandon> ok i see what ya mean
<imbrandon> i was thinking something different
<imbrandon> yea, that would make sense and would fall into line with the config-get i was talking about
<lifeless> all the euca and aws tools, for instance, read from the environment. (The actual environment, not environments.yaml)
<imbrandon> yea
<lifeless> I'll check for a bug
<imbrandon> yea a couple of my charms export those just for that
<lifeless> so, looks like openstack just does noddy dns names
<lifeless> rather than the mass assigned reverse-ip-prefix that ec2 does.
<lifeless> which is why juju + internal openstack is so tedious, I suspect.
<imbrandon> really ? i've been meaning to give openstack a try on hpcloud the last few days just not made it that far yet
<lifeless> production instances probably have some workaround
<lifeless> but the code I'm reading so far depends on e.g. ldap DNS server integration
<imbrandon> kinda was hoping to give it a go and then talk omgubuntu into moving maybe if it was solid
<lifeless> which is way more complex
<imbrandon> ahh
<imbrandon> i run open directory local on my lan anyhow ( i have a osx box that acts as a pdc and uses open directory )
<lifeless> see ./nova/network/ldapdns.py for instance
<imbrandon> funny tho, no windows clients so really i could just use nis or whatever but bah. i followed the gui setup when i set it up a few months ago
<imbrandon> wasnt really intended to be a "real server" when i set it up, was just testing osx server 10.8 and then i came to rely on it more and more, now i got to get a migration plan done at some point
<imbrandon> heh
<imbrandon> thats sad for my home network
<imbrandon> lol
<imbrandon> pretty sure i have it doing primary logins, home directories , perms control on the other 2 linux servers
<imbrandon> and some other minor osx specific stuff like time machine and iphoto/itunes shares
<imbrandon> all on something that wasnt intended to be used in "production" at all
<imbrandon> lol
<lifeless> ah, they are supported I think
<imbrandon> is kinda neat how much ubuntu picks up on the osx services now tho, used to be a pain, but now banshee/rhythmbox see the itunes shares, all of them use mdns native
<imbrandon> etc
<lifeless> I didn't realise that the openstack default is to use different environment variables *for the same values*.
 * lifeless headdesks
<imbrandon> heh
<imbrandon> yea i think it looks in the cli then env, then config home and config server
<imbrandon> err scratch that last one, thats the aws tools
<lifeless> yup, they are supported.
<imbrandon> cool
<lifeless> openstack's scripts set EC2_ACCESS_KEY, not AWS_ACCESS_KEY_ID, for instance.
<lifeless> I'll file a bug on txaws for supporting the openstack variables
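The mismatch lifeless filed the bug about — OpenStack exporting EC2_ACCESS_KEY where the AWS tools read AWS_ACCESS_KEY_ID — is the kind of thing a tool can paper over by checking both spellings. A hypothetical helper, not actual txaws code:

```python
import os

def find_ec2_credentials(environ=None):
    """Return (access_key, secret_key) from whichever naming convention is
    present: AWS-style variables first, then the OpenStack-style EC2_* ones."""
    environ = os.environ if environ is None else environ
    for key_var, secret_var in (("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"),
                                ("EC2_ACCESS_KEY", "EC2_SECRET_KEY")):
        if key_var in environ and secret_var in environ:
            return environ[key_var], environ[secret_var]
    return None

assert find_ec2_credentials({"EC2_ACCESS_KEY": "k", "EC2_SECRET_KEY": "s"}) == ("k", "s")
assert find_ec2_credentials({}) is None
```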
<imbrandon> hrm, maybe i'll take a break from the nginx charm a minute and get an OS env rolling, then i can use it to finish testing the nginx charm up
<imbrandon> and work some kinks out
<imbrandon> nice, yea i'm sure hazmat will toss it in, seen him do a few patches quickly on it the last week or so
<imbrandon> but he is en route to a conf today iirc
<imbrandon> and there most of the week next week i *think*
<imbrandon> velocity, not sure its a week long tho
<imbrandon> few days at least i'd imagine
<imbrandon> oh btw, i did get my credentials working on the juju wiki, just needed to log out like ya said
<lifeless> there we go: https://bugs.launchpad.net/txaws/+bug/1017239
<_mup_> Bug #1017239: openstack ec2 credentials not picked up from environment <txAWS:Triaged> < https://launchpad.net/bugs/1017239 >
<imbrandon> ( that was you that handled the ticket right ? heh sorry if not )
<imbrandon> my thing with all these cli tools is they all want the same keys but diff env names
<imbrandon> irks me
<lifeless> mmm, no, I didn't
<lifeless> I get cc'd on some rt stuff, part of my job to be looking for trends and issues
<imbrandon> whoops sorry then , mixed the name up, i had just glanced at it
<imbrandon> yea you might have been cc'd, i just glanced at the name that handled it but it was so late friday when i got it working i put off emailing back
<imbrandon> heh
<imbrandon> or it might have just been a similar name :)
 * imbrandon is curious now, goes to look
<imbrandon> ahh yea, you were just cc'd on it for some reason :)
<imbrandon> chris stratford did it, not sure who that is on irc if i even know them /me replies so they will see it monday and close it hopefully , /me should not be so terrible about the responses
<lifeless> imbrandon: this is what <dnsName>ec2-75-101-245-65.compute-1.amazonaws.com</dnsName> - ec2 returns, vs <dnsName>server-2</dnsName> in my local openstack install
<lifeless> you can see the former is trivially bulk-provisionable
<lifeless> or even just dynamically answerable by a hacked dns server
<imbrandon> hrm
<imbrandon> yea thats nasty
<lifeless> however, these days we get ip addresses straight back
<lifeless> so we can use that
<imbrandon> right
<lifeless> needs a txaws patch tho
<imbrandon> yea dns is a tricky thing even for experienced programmers / devops / ops , one little variance and it can toss a whole range of things out of whack
<imbrandon> so that dont help
<lifeless> https://bugs.launchpad.net/txaws/+bug/1017245
<_mup_> Bug #1017245: ipAddress and privateIpAddress are missing from describeInstances <txAWS:Triaged> < https://launchpad.net/bugs/1017245 >
 * lifeless is on a yak shaving mission
<imbrandon> haha
<lifeless> argh
<lifeless> now
<lifeless> 2012-06-25 08:10:14,921 ERROR Invalid SSH key
<lifeless> ^ $curses
<imbrandon> haha
<imbrandon> bad env.y most of the time
<imbrandon> when i get that
<lifeless> pasted you my env for this
<lifeless> Am I using the ssh option wrong or something ?
<lifeless> does it upload it via cloud-init ?
<lifeless> does the key need to be registered with the cloud
<lifeless> or is cloud-init used to insert it ?
<imbrandon> yea
<imbrandon> it gets passed as user-data
<imbrandon> to cloud init
<imbrandon> along with some pkgs to be installed like bzr
<imbrandon> and juju zk
<lifeless> yah
<lifeless> log shows zk etc installing ok
<imbrandon> and then it also passes a base64 encoded shell script that makes /var/lib/juju and the like
<imbrandon> can you "curl -iS http://instance-data/latest/user-data"
<imbrandon> on maas ?
<imbrandon> EC2 API has it, and i know rackspace and hp both do but not sure if its the compatibility layer or OS that also does it
<imbrandon> but that should show you the whole cloud init user-data script including keys
<imbrandon> if they were passed
<imbrandon> or "curl http://169.254.169.254/1.0/user-data" may work too
<lifeless> well, I can't ssh into the machine
<imbrandon> the other is just an alias on the local nets that implement it
<imbrandon> oh frack
<lifeless> because the ssh key is whats failing :>
<imbrandon> thats right
<imbrandon> hrm
<lifeless> otherwise, I'd be like 'woo yeah. lets see'
<imbrandon> this is MAAs right ?
<lifeless> no
<lifeless> openstack
<imbrandon> or just OS
<lifeless> seen EC2 error when attempting to delete group juju-devstack-0: Error Message: An unknown error has occurred. Please try your request again.
<imbrandon> ok there is a way you should be able to get at the user data from the outside
<imbrandon> and change it if needed, i just cant remember right off
 * imbrandon thinks
<lifeless> So, I want to figure out whats wrong
<lifeless> I have root on the openstack server
<lifeless> so I'll fiddle around there
<imbrandon> ohh ok local too
<imbrandon> yea look in /var/lib/cloud
<imbrandon> it will have where it dumped the keys
<imbrandon> before it added them or should have
<imbrandon> as well as logs
<imbrandon> that juju might not have
<lifeless> where are the juju logs ?
<imbrandon> /var/lib/juju/charm/somewhere
<imbrandon> if your on the box run ps ax
<imbrandon> it will show where the log is getting piped
<_mup_> Bug #1017248 was filed: EC2 error when attempting to delete group juju-devstack-0 <juju:New> < https://launchpad.net/bugs/1017248 >
<lifeless> not the charm stuff; the error coming back from ssh
<imbrandon> yea
<imbrandon> there should be a more general zk log too
<lifeless> ok, this is weird
<imbrandon> in there
<lifeless> *I* can ssh to the instance
<lifeless> juju can't
<imbrandon> ahh
<lifeless> from the same shell
<imbrandon> that def means the env.y syntax is fubar then
<lifeless> ssh ubuntu@10.0.0.3 works
<imbrandon> it wont try other keys
<lifeless> ok, so lets see that url
<imbrandon> curl http://169.254.169.254/1.0/user-data
<imbrandon> let me pastbin you my working one privately
<lifeless>     ', /sbin/start juju-provision-agent]
<lifeless> ssh_authorized_keys: [ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA728T0gI08lJFqZQo8lDUgKiE860aWTQz+QSeYAFg2T5TrYbGHKt2GHZy+OHYkAhUiSCjZXogFyh1+TRkQIYCcZTNQdOoMtLVesOk9/jRh6ZIcrQvTzbK2KpLXBMhNX9J+HZ5MiAYTZRX9uJSmvDAxrsof2qcVyYBs67hPdE3s5I0Zg5uNm93M9/ciEr+UWTWiIxounHhiEbdW1LIszBlAtvLpsw9bgtB6rRjygiSvoiXMTt00YhWip9PpxBBa6OqtETF/Qu+Uf+guujTnwO9Ue77kNDoocMrZfDsBxlSG6gsByGO/ue7YlRI1w96W68xaGLFl5cgt60SUK1BIVJW9w==
<imbrandon> so you can compare
<lifeless>     /home/robertc/.ssh/id_rsa]
<lifeless> thats three lines
<lifeless> but it looks quoted
<imbrandon> that looks right
<imbrandon> hrm
<imbrandon> wow
<lifeless> I mean, it worked right? I can ssh in...
<imbrandon> ok one sec, let me still get mine, i bet its that damned authorized-keys vs authorized-key-path i came across the other day
<imbrandon> some things use one and some the other i am willing to bet
<lifeless> imbrandon: I don't understand though: how does this affect 'juju status' - the ssh environment is *working*
<lifeless> I just did 'ssh ubuntu@10.0.0.3' and et voila
<imbrandon> right, that was their issue too, like two days ago, i just unfortunately did not pay enough attn
<lifeless> hmm, status -v
<lifeless> -> its connecting to 10.0.0.2. *bong*
<imbrandon> nice
<lifeless> fixed, I think
<lifeless> lets try that again
<lifeless> \o/
<lifeless> ok.
<lifeless> time to do patches, bugs, writeup and then I can do what I actually wanted to be testing.
<lifeless> but first, a quick break
<imbrandon> http://paste.ubuntu.com/1058118/
<imbrandon> thats how my keys are laid out fwiw, just for ref later
<imbrandon> if you add more
<imbrandon> obviously thats only the middle of the file, but you get the context
<imbrandon> i *think* you can have it read a file as well like authorized-key-path: ~/.ssh/authorized_keys
<imbrandon> but i have not actually tried that yet
<imbrandon> btw for anyone around , or that reads this in the backlog, i started a full mirror on a box at hpcloud i set aside just for that reason, apt-mirror should be done here in less than an hour and i'll have it sync very very regularly ( like every hour or so if not more ) as a "local mirror" on the hp cloud
<imbrandon> figured i needed to play upstream and dust off apt-mirror for a little workout and fix some of the mirror bugs in bts anyhow and that gives me a good excuse
<imbrandon> point is i'll post the priv IP for it etc etc somewhere we can all reference it like the mailing list
<imbrandon> to use, and should be no problem with trust as i'll put a real ssl cert on it ( got one i'm not using ) and the keys are all canonicals etc cuz i'm not repacking stuff, real mirror
<imbrandon> i'll make sure it runs at minimum the 3 months HP gave us, if its not too much i may keep it past then too
<imbrandon> or see if they will just sponsor that one node etc if i put the ops time in keeping it up
<imbrandon> we'll hit that bridge when we need to but for now should greatly speed up apt-get and such
<imbrandon> hrm, and actually it just hit me that apt-mirror would make an excellent charm too
<imbrandon> juju deploy apt-mirror; wait 2 hours and you have a private mirror :)
<imbrandon> could even make it smart and use s3 on aws and such :)
<lifeless> well
<lifeless> otoh yes, otoh we have canonical run mirrors on the major clouds, in each region, that have free traffic for instances...
<lifeless> ok, it was fixed in txaws rev 134
<lifeless> which we're not running yet
<lifeless> SpamapS: what do you think the chances of updating txaws in precise are? to get ip address support in Instances (rev 134)
<imbrandon> if we could isolate it to fixes only i'm sure we could make a case and get it tested good enough
<imbrandon> i would think
<imbrandon> ajmitch: ping pong :) here is good too
<imbrandon> :)
<imbrandon> not sure if i'm even in uwire right now
<imbrandon> lol
<ajmitch> so what are you having issues with? it's good timing as I had the fabric script open in front of me right now
<imbrandon> like was having problems grokking even a working file at all
<lifeless> imbrandon: this is what I need: http://bazaar.launchpad.net/~txaws-dev/txaws/trunk/revision/134
 * ajmitch uses it in a really basic way, checking out the branches
<imbrandon> like hello world stuff, i've only ever done capfiles like that
<imbrandon> yea thats what i have the charm doing now
<imbrandon> its checking out the git repo
<imbrandon> and then dumping a config
<imbrandon> into place
<imbrandon> as well as a cron that looks for git updates to the prod branch
<imbrandon> but i wanna change that to a web post hook on next iteration
<imbrandon> no need to be looking 24/7 running crons like that every 10 min
<imbrandon> webhook that takes a post-receive from a git hook commit or bitbucket or github webhook etc etc
<imbrandon> and pulls and swaps dir
<ajmitch> ok, what do you have so far? dump in a pastebin or something
<imbrandon> but yea, i like didnt even get hello world functioning, i'm guessing because i was trying to do it like a cap file and use config/deploy etc
<imbrandon> yea , let me find which branch i was on
<imbrandon> give me like 2 min
<imbrandon> lifeless: yea i'm thinking that LOOKS like it could be sane enough to land in an -update but i often overlook other parts so SpamapS could give ya a MUCH better idea
<imbrandon> but we already had juju itself in -updates
<imbrandon> so the team isnt shy of it
<ajmitch> fabric is really quite simplistic, it's just "run this stuff on these hosts"
<ajmitch> doesn't do any of the provisioning magic of juju :)
<imbrandon> yea i was hoping it was a python version of capistrano
<imbrandon> since all these damn hooks are in python it will make sense to use fab, and the ones i write in ruby or php i'll use cap :)
<imbrandon> or say screw it and convince them to build it into juju :)
 * imbrandon looks for the code
<ajmitch> you can probably do similar things with it
<imbrandon> yea cap is very deploy centric, not like chef/puppet config mgmt
<imbrandon> it CAN but thats a whole nother story
<imbrandon> much better suited for the install and update hooks
<imbrandon> and custom elsewhere
 * ajmitch will have to head off in a few minutes probably
<imbrandon> ok not found my branch yet, you would die at my ~Projects folder
<imbrandon> but i did find where i was copy/pasting from
<imbrandon> http://www.saintsjd.com/2011/01/continuous-deployment-for-wordpress-using-git-and-fabric/
<imbrandon> ^^my history file still had it
<imbrandon> heh
<imbrandon> i got all the way down to "writing the deploy scripts"
<imbrandon> and like nothing seemed to work at all
<imbrandon> i had all kinds of problems but it was also trying to do a lot more than i wanted; i just wanted it to grab the proper git branch and then symlink atomic update the deploy, putting a config into place if it was the first run and not there yet
<imbrandon> seemed simple but yea
<imbrandon> like -0- examples of that i could find
<imbrandon> or close
<imbrandon> i mean i have the charm doing all this already but its kinda to be able to use python with python hooks when the charm uses those ( like the old wordpress )
<imbrandon> and ruby when it uses those ( i know how to use capfiles )
<imbrandon> etc
<imbrandon> seems like passing vars and such might get some novel use if it can do stuff natively
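The symlink atomic update described above doesn't actually need fabric at all; the core trick is building the new link under a temporary name and rename()-ing it over the old one, which is atomic on POSIX. A sketch with made-up paths (on a real box the release dirs would be git checkouts):

```python
import os, tempfile

def activate_release(release_dir, current_link):
    """Atomically repoint `current` at a new release directory: create the
    symlink under a temp name, then rename() it into place (atomic on POSIX)."""
    tmp = current_link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(release_dir, tmp)
    os.rename(tmp, current_link)   # replaces the old link with no gap

base = tempfile.mkdtemp()
r1 = os.path.join(base, "releases", "r1")
r2 = os.path.join(base, "releases", "r2")
os.makedirs(r1); os.makedirs(r2)
current = os.path.join(base, "current")
activate_release(r1, current)
activate_release(r2, current)      # swap: `current` never dangles or disappears
assert os.readlink(current) == r2
```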
 * ajmitch will have to talk to you about this later
<imbrandon> kk
<imbrandon> np
<imbrandon> i know your working :)
 * ajmitch is at work, has meeting soonish
<imbrandon> yup yup totally understood, just next day or two as you have time toss me a bone :)
<imbrandon> ohhh lifeless your turn ( promise its an easy q, lol ) are the scripts/jobs that build the OSX dmg/pkgs for bzr on LP public, that you know of ?
<imbrandon> just curious if you knew before i started digging, i'm hoping i can glean some of them for use with osx juju, stand on the shoulders of giants and all that
<imbrandon> since it has a dmg installer and python i'm hoping there is a good chance :) heh
<lifeless> imbrandon: I don't know.
<lifeless> imbrandon: it should be linked from the macosx stuff on the bzr wiki
<lifeless> bazaar.canonical.com
<lifeless> how does one run the test suite ?
<imbrandon> well
<imbrandon> the packages are but i meant the build part
<lifeless> imbrandon: yes, there should be docs on building it on the wiki
<lifeless> its changed hands a few times
<imbrandon> ahh cool
<imbrandon> okies, i'll poke at it here in a few, getting aws alerts again about the EU stuff
<imbrandon> needing to check it, afkish
<SpamapS> imbrandon: I have *never* used photoshop, so I can't judge that screenshot. To me, photoshop is the one that feels wrong. ;)
<imbrandon> yea but look how GTK/GIMP that looks now
<lifeless> SpamapS: oh hi
<imbrandon> def not OSXish
<SpamapS> lifeless: you do know that the credentials are only ever used for bootstrap and destroy-environment, right? (reading backscroll)
<imbrandon> oh man its like half the ram of CS5 too and very fast, i wonder if they didn't just rip gimp off :) hahah. not only that, they did the one thing that will keep adobe around for a very long time: subscriptions. they did what apple did to the music industry with this release and no one realizes it yet. no one will pirate this when they can have all 16 offline apps and 6 online apps for 30 bux a month, except script kiddies, but they dont matter, young pros it will
<imbrandon> ever*
<SpamapS> lifeless: bootstrap shoves them into ZK and the provisioning agent uses them henceforth. :p
<imbrandon> whatever they did to the UI , its very very snappy now, and much much less ram ( all the CS apps so far i installed from the pack are )
<SpamapS> lifeless: reading r134 now
<imbrandon> SpamapS: yea didn't wanna put words in the team's mouth there but it is a big bug
<lifeless> hahaha
<lifeless> so juju doesn't like running its tests in parallel :)
<lifeless> SpamapS: yes, but friction is friction
<SpamapS> argh
<SpamapS> why isn't txaws tagging their trunk? :-(
<lifeless> ENOIDEA
<SpamapS> argh, nor are they using launchpad releases
<SpamapS> so its pretty hard to see if/when that fix is already released
<SpamapS> other than the bug being Fix Released
<SpamapS> as somebody who has been a committer on txaws for a while, it feels like a ship w/o a rudder. :-/
<imbrandon> btw SpamapS the feel i can genuinely get past , i know a lot of ppl get hung up on it, but its cuz thats all they really use and they like to whine , but really i tend to miss all the "little things" that arent really PS at all but part of the CS, like one button ( even a mouse macro ) to move the current image between PS and illustrator ( inkscape ) without closing / opening an app or save/open a file, it just does it and you're editing the vector aspects of the project
<SpamapS> I think we'd have been better off with boto at this point. :-P
 * imbrandon is done on that subject :)
<lifeless> SpamapS: erm, no ;). boto inside twisted is just awful. Please pleaseplease no.
<SpamapS> lifeless: looks like a workaround for OpenStack not making it easy enough to setup DNS... hrm
<imbrandon> lol lifeless
<lifeless> SpamapS: not really.
<SpamapS> lifeless: libcloud then.. ;)
<lifeless> SpamapS: SpamapS or rather, if you want to call it that, but - openstack doesn't have a batteries included mass-provision dns
<lifeless> SpamapS: also not twisted.
<lifeless> SpamapS: AFAIK.
<imbrandon> SpamapS: i thought it looked like a real bug of it returning the wrong value on a correct but uncommon setup
<lifeless> SpamapS: synchronous network code within twisted is generally a disaster waiting to happen.
<SpamapS> lifeless: I've been told a few times that deferToThread works fine for I/O bound code.
<lifeless> SpamapS: if you set your thread pool to the needed concurrency.
<lifeless> SpamapS: and if the other library is concurrency safe.
<SpamapS> Whih, IMO, is about 5. :-P
<lifeless> SpamapS: everyone forgets the first point (which for juju would need to be, oh, 10K or something)
<SpamapS> I am not convinced juju needs the concurrency that twisted affords. :P
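lifeless's caveat — deferToThread only buys as much concurrency as the thread pool has threads, and everyone forgets to size the pool — is easy to demonstrate with the stdlib's `ThreadPoolExecutor` standing in for Twisted's reactor pool. A sketch, not juju code; the task count and pool size are arbitrary:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def measure_peak_concurrency(n_tasks, pool_size):
    """Run n_tasks blocking jobs on a pool and report how many ever ran
    at once: the pool size, not the task count, caps real concurrency."""
    lock = threading.Lock()
    active = 0
    peak = 0

    def blocking_io():
        nonlocal active, peak
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)  # stand-in for a blocking network call
        with lock:
            active -= 1

    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        futures = [pool.submit(blocking_io) for _ in range(n_tasks)]
        for f in futures:
            f.result()
    return peak
```

With a pool of 4 and 20 queued jobs, the peak never exceeds 4 — the rest of the work just waits, which is the "friction" being discussed: offloading to threads looks async but silently serializes past the pool size.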
 * imbrandon says screw twisted AND go, wrap it all in JS functions that are talking to a rails app on top of a redis cache and mongodb, which puts its json api out using handlebars and backbone templates
 * imbrandon runs
<SpamapS> Thus far, everything except the provisioning agent is single threaded in nature.
<SpamapS> imbrandon: that sounds webscale
<imbrandon> lol
<lifeless> SpamapS: status isn't single threaded
<SpamapS> lifeless: one can argue that anything missing from txaws's implementation of the EC2 API's is a serious bug.. so I could at least present a case for it in an SRU.
<lifeless> SpamapS: also there is a modelling issue: everything can be *made* single threaded, the question is whether it will perform well enough as such.
<imbrandon> i am wondering why we are the only one ( not counting aws hodge podge of community apps ) that doesn't use NODE cli &/or server side for service orchestration
<SpamapS> lifeless: status just polls the crap out of a *single* zookeeper node.
<imbrandon> of the ones popping up
<imbrandon> i mean are they ALL doing it because its new and hot, or did we miss something fundamental
<SpamapS> lifeless: which is dumb anyway, there should be a daemon keeping a materialized view of status and feeding it back to the clients.
<imbrandon> SpamapS: there isn't ? i haven't looked into what zk does actually
<SpamapS> imbrandon: um, because node is crazy crack and we're interested in things that developers exist for now. ;)
 * imbrandon is being serious about the zk thing
<imbrandon> SpamapS: well azure , vmware cf, jitsu, appfog, and a few others in production now all use node apps. i was half ass joking but there is a bit of truth to it somewhere, just not sure where yet
<imbrandon> i'm guessing they are all just hipster cept MS, and MS just accidentally picked wrong anyhow
<imbrandon> :)
<SpamapS> I wouldn't call node a wrong choice at this point..
<SpamapS> just that its getting more play than it should because it is the new concurrency shiny
<imbrandon> hahah yea, that was purely meant funny :)
<SpamapS> Apparently we prefer the slightly less popular and less new concurrency shiny of Go :)
<imbrandon> you know, i have been slow and skeptical too on node, believe it or not, but i'm thinking that there is not only a lot of truth to it but there will be a lot of crap code like php 3 years ago cuz everyone writes js and its not shown yet, but really its the #2 of 3 next to nginx magic
<imbrandon> sure apache can perform the same as nginx on pure http when both are tuned but you know
<SpamapS> lifeless: anyway, re the txaws thing missing ipAddress.. my answer is yes, I think we can SRU that.
<SpamapS> had I known such a nice little change had made it into trunk before 12.04's release, I'd have made an effort to update it
<imbrandon> that apache cant do what nginx can at the same speed just out of design, tuned or not, http purely, yea likely
<SpamapS> imbrandon: the concept of a language w/ 1st class concurrency that also directly serves its network requests is fantastic. JS is widely "known".. but the way to write good client side javascript is vastly different than the way to write good server side code.
<imbrandon> now what i'm still trying to figure out is if its REDIS persistent keystore or MONGO thats the #3 in the trifecta of the next tech age ( you know, about 2.5 years )
<lifeless> SpamapS: status talks to all the zk nodes
<imbrandon> but nginx and node are 2 of 3, and i'm thinking its gonna be redis but its just not shown itself yet
<lifeless> SpamapS: I know there is only one today, but check the code.
<imbrandon> SpamapS: i whole heartly agree with you
<imbrandon> there
<SpamapS> lifeless: I understand its micro-optimized for that, but at what cost when we could actually have a perfectly up to date single materialized view and never poll zk?
<imbrandon> in fact very very much so and you actually just proved my point tho, look at python and php, same thing python is the better lang by far, php powers 77% of the internet because its what ppl know tho
<lifeless> -lol-
<lifeless> Ran 3459 tests in 1113.097s
<lifeless> FAILED (id=0, failures=1377, skips=20)
<imbrandon> ouch
<imbrandon> very leet tho
<imbrandon> err nah, lett
<imbrandon> heh
<lifeless> I ran the tests parallelised, through testrepository. Wham-bang.
<SpamapS> lifeless: NICE
<lifeless> SpamapS: looks like they don't use a randomised zk instance, but instead all use a common one.
<imbrandon> SpamapS: i was actually thinking that the API would silently solve that by doing just that , as long as it can be kept out of the go core then it could
<lifeless> So, you can't run two copies of the test suite at once either.
<imbrandon> and sit keeping a view materialized, just lazyloading data
<lifeless> SpamapS: I'm not sure where the overheads are in status, so I won't comment :). I will say, being able to answer and slice and dice quickly is important in large environments
<lifeless> SpamapS: and if we can't, we should be able to.
<SpamapS> imbrandon: yeah, status should only be done via the REST API, and the REST API daemon should keep a constant materialized view of status.
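The materialized-view idea SpamapS is describing — a daemon applies change events as they arrive, so a status query becomes a cheap read of a cached view rather than polling every ZooKeeper node — could look like this toy sketch. All names here are hypothetical; nothing is taken from juju's actual code:

```python
class MaterializedStatus:
    """Toy materialized view: watch callbacks push deltas as they
    happen, so 'status' is an O(1) read of the cached view instead of
    a fresh poll of every node in the backing store (ZooKeeper, in
    juju's case)."""

    def __init__(self):
        self._view = {}  # unit name -> last known state

    def on_change(self, unit, state):
        # Called from a watch/event callback whenever a unit changes.
        self._view[unit] = state

    def status(self):
        # Clients get the up-to-date view with no polling round-trips.
        return dict(self._view)
```

The trade-off being debated: the polling client is simple and always consistent at read time, while the materialized view shifts work to write time and needs a daemon to host it — but it scales to large environments and many status clients.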
<lifeless> SpamapS: are you a reviewer ?
<lifeless> SpamapS: of juju itself?
<imbrandon> one is ssh spinup / connection, imho it should be one .1ms https call
<SpamapS> lifeless: I am, of both juju and txaws actually. :)
<SpamapS> lifeless: tho one needs two +1's for juju
<imbrandon> SpamapS: exactly :)
<lifeless> SpamapS: https://code.launchpad.net/~lifeless/juju/trivial/+merge/111752
<lifeless> SpamapS: do you need anything from me to get such an SRU of txaws done?
<imbrandon> SpamapS: then we dont care wtf they screwed up in the design, trust me, a few years with drupal you learn how to maneuver around things but not break the rules so upgrades and all that is still pure :)
<imbrandon> plays well into this too
<SpamapS> lifeless: and I agree that concurrency will at some point be necessary for pieces of juju. Twisted just makes it so damn hard to read sometimes, I get irrational about it. ;)
<lifeless> SpamapS: I'm working on the juju tests for using it now.
<lifeless> SpamapS: I agree that twisted can be awkward, I actually think inlinecallbacks makes it worse.
<SpamapS> lifeless: nah I'll open up the bug tasks appropriate
<lifeless> SpamapS: https://bugs.launchpad.net/juju/+bug/945505 is the bug that triggers the need, tho its not about the change itself.
<_mup_> Bug #945505: Use ipAddress instead of dnsName now that txaws supports it <juju:New> < https://launchpad.net/bugs/945505 >
<lifeless> right, thats more like it:
<lifeless> Ran 1979 (-1480) tests in 333.841s (-781.528s)
<lifeless> FAILED (id=1, failures=27 (-1350), skips=12)
<lifeless> 27 failures ;)
<SpamapS> lifeless: agreed, inlineCallbacks takes it from a foreign accent to a whole new dialect hundreds of years removed.. like Afrikaans vs. Dutch
<imbrandon> SpamapS: as it makes sense and doesn't hold ppl / process up , ping me for some of the more core-ish stuff i can help with from the outside without ruffling too many feathers or the like, i wouldn't think so just getting it into the open. anyhow i would like to start dusting off those skills again and put them to use a little, and jujuish things seem like the best place to drum up tasks as its the other part of what i'm doing lately :)
<_mup_> Bug #1017273 was filed: running the test suite in parallel fails <juju:New> < https://launchpad.net/bugs/1017273 >
<SpamapS> lifeless: probably easier to just track under bug #945176
<_mup_> Bug #945176: Support privateIpAddress and ipAddress <txAWS:Fix Released by rye> < https://launchpad.net/bugs/945176 >
<lifeless> SpamapS: think adding a .testr.conf would be acceptable?
 * imbrandon loves having inlinecallbacks and lambdas and closure funcs etc in php now, finally after 100 years we close the gap with JS for parity
<SpamapS> lifeless: I don't know what .testr.conf is
<lifeless> SpamapS: oh, I haven't shown you testrepository ?
<SpamapS> no
 * imbrandon looks up
<lifeless> SpamapS: doh!
<imbrandon> publicish where i could learn abit too ?
<lifeless> SpamapS: ok, uhm, probably want the ppa version - apt-add-repository ppa:testing-cabal
<imbrandon> or on that side of the IS wall ?
<lifeless> in a juju source tree, add a .testr.conf like in this bug: https://bugs.launchpad.net/juju/+bug/1017273
<_mup_> Bug #1017273: running the test suite in parallel fails <juju:New> < https://launchpad.net/bugs/1017273 >
<lifeless> apt-get install testrepository
<lifeless> then run
<lifeless> testr init; testr run
<lifeless> that will seed your repository
<SpamapS> lifeless: btw, running w/ eatmydata, and on an SSD, its far closer to 2 minutes than 5
<lifeless> from there you can do useful things like:
<lifeless> testr failing
<lifeless> testr slowest
<lifeless> testr last
<lifeless> testr run --failing
<lifeless> :!testr slowest
<lifeless> Test id                                                                                      Runtime (s)
<SpamapS> its not even really CPU bound half the time, just waiting on ZK
<lifeless> -------------------------------------------------------------------------------------------  -----------
<lifeless> juju.control.tests.test_status.StatusTest.test_subordinate_status_output_no_container        4.901
<lifeless> juju.control.tests.test_status.StatusTest.test_subordinate_status_output                     4.841
<lifeless> juju.control.tests.test_status.StatusTest.test_collect_filtering                             4.370
<lifeless> for instance
<lifeless> SpamapS: I'm on an SSD :>
<lifeless> haven't got eatmydata configured
<SpamapS> it tends to take about 200s or so for me
<SpamapS> there is no "configured" for eatmydata
<SpamapS> 'eatmydata ./test'
<SpamapS> disables fsync
<SpamapS> so ZK no longer blocks on I/O
<SpamapS> still it probably won't get much below 3-4 minutes so parallel is a great plan
<lifeless> SpamapS: yes, thats configuring it; need to remember to do it etc etc.
<lifeless> and then I can do things like this:
<lifeless> testr run --failing
<lifeless> ...
<lifeless> Ran 27 (-1952) tests in 0.949s (-334.093s)
<lifeless> FAILED (id=2, failures=12 (-15))
<imbrandon> whats the ppa ? its missing the other bit
<lifeless> oh, archive or whatever the default is
<imbrandon> kk
<lifeless> https://code.launchpad.net/~testing-cabal/+archive/archive
 * SpamapS tries poor man's parallelization by running ./test for each dir in juju/*
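SpamapS's poor man's parallelization — one test process per directory, several at a time — can be sketched with a thread pool driving subprocesses (threads are fine here since each one just blocks on its child). The command strings below are illustrative stand-ins for the per-directory `./test` invocations:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_suites_in_parallel(commands, max_workers=4):
    """Run each test command in its own subprocess, up to max_workers
    at a time, and return a {command: exit status} map."""
    def run(cmd):
        return subprocess.call(cmd, shell=True)

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(run, commands)
    return dict(zip(commands, results))
```

This gives the speedup without testrepository, but inherits the problem lifeless hit: suites sharing one ZooKeeper instance will trample each other unless each worker gets its own.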
<lifeless> testrepository builds on subunit
<imbrandon> cool cool
<imbrandon> heh
<lifeless> so anything that can talk subunit (like twisted trial, testtools, zope.testing,... sambas testrunner) can run under it.
<imbrandon> lazr
<imbrandon> heh
<imbrandon> i'll poke at this a bit more, i'm terrible about not knowing the testing procedure for most langs/frameworks with the exception of phpunit and others i use in php daily and jsunit ( not done much js functional testing specifically , mostly covered with selenium testing the front end php css html stuff )
<SpamapS> lifeless: I've marked bug #945176 for SRU to precise ... I'll look at preparing it tomorrow or later tonight (4pm Sunday here for me)
<_mup_> Bug #945176: Support privateIpAddress and ipAddress <txAWS:Fix Released by rye> <txaws (Ubuntu):Fix Released> <txaws (Ubuntu Precise):New for clint-fewbar> < https://launchpad.net/bugs/945176 >
<imbrandon> i really should at least get the basics down for other areas i touch
 * SpamapS goes afk to do some real life stuff
<imbrandon> ttyl
<imbrandon> look at my nginx hooks when you get back ( if at all tonight )
<imbrandon> i'd love a pre-review early opinion as i'm finishing and can easily make changes :)
<imbrandon> like a 5 min job, and i'll add another beer to the tab i owe ya :)
<lifeless> SpamapS: hazmat: and this - https://code.launchpad.net/~lifeless/juju/bug-945505/+merge/111754 - addresses my issue with openstack I was whinging about the other day.
<lifeless> I wonder, what does the openstack + maas combo do for dns integration
<lifeless> do we run ldapdns, or point folks' laptop dns at the dnsmasq instance for maas ?
<lifeless> http://rbtcollins.wordpress.com/2012/06/25/running-juju-against-a-private-openstack-instance/ for posterity
<lifeless> and https://code.launchpad.net/~lifeless/juju/testrsupport/+merge/111755, and with that, I'm context switching
#juju 2013-06-17
<arystar> will someone help me my pc wont boot I have an amd Athlon ii x4 630
<_mup_> Bug #1191651 was filed: Juju logs don't rotate. <juju:New> <https://launchpad.net/bugs/1191651>
<dpb1> marcoceppi: jcastro: nice work on the discourse beta.  I'm already loving it.
<jcastro> I dig it too
<jcastro> there's a charm!
<dpb1> yay!
<jcastro> hey jamespage
<jamespage> jcastro, hey
<jcastro> you're on review duty this week btw
<jamespage> jcastro, thanks for the reminder
<jcastro> no worries, I am reasonably certain no one reads the topic
<jcastro> https://code.launchpad.net/~patrick-hetu/charms/precise/gunicorn/python-rewrite/+merge/167088
<jcastro> also this isn't in the queue but it should be, if you could do that one first it would be <3
<jamespage> jcastro, won't get to it today
<jcastro> sinzui: ^^ that branch is ready for another review but didn't make it into the queue, no idea why
<jamespage> jcastro, doing some ceph/nova-compute benching marking stuff
<jcastro> jamespage: yeah that's fine I just wanted it on top
<sinzui> jcastro, I see, and it has been 2 weeks.
<sinzui> I am reporting a bug
<jcastro> sinzui: want to talk Queue 2.0 today?
<sinzui> interesting, staging and m.jc.com agree, so there is something about that MP that confuses the review queue proc.
<sinzui> jcastro, I can in a few hours
 * jcastro nods
<jcastro> m_3: you've reviewed that charm before so if you want to steal it from jamespage I am sure he will not complain
<FunnyLookinHat> Ok guys - do any of you know if there's a way to dump the requests being made with juju-core?  I _really_ need to see what the distance is between JuJu and Rackspace... they're bought in to getting things working, but they need debug information to explain the 411 Content Length errors keeping me from auth'ing
<FunnyLookinHat> I'm not seeing anything useful in goose unfortunately... and things _seem_ to be setup correctly
<noodles785> wedgwood_: when you've time, I made the changes you requested, let me know if there's anything else: https://code.launchpad.net/~michael.nelson/charm-helpers/add-declarative-support/+merge/168961
<m_3> jcastro: ack on gunicorn
<MarkDude> Where can I go on LP to also request the Unicorn fix?
<MarkDude> :D
 * MarkDude has a few questions on Juju and some of the cli functionality
<MarkDude> Mostly in regards to getting some work with it in Fedora
 * MarkDude has not forgotten or stopped making plans. We have an idea with my local group of trying to use Juju in combination with Kickstart ( the way to "script" with the new improved Anaconda installer )
<MarkDude> Between having kickstart make installs easier, I want to see about having it install Juju at start. After that - most likely using some cli charms to make the system close to out of the box usable
 * MarkDude will do a formal letter with my title and crap later, but was hoping to get an idea of how much work might be needed to use the cli of Juju. This is pretty much all work that will be done by Fedora folks.
 * MarkDude did not have a package sponsor to make the work *effective* on my side. I now have access to package folks that like the idea of making "bridges with other projects". Yay, Penguin family \o
<MarkDude> So far most of the licenses look compatible with the legal restrictions: Jenkins, Go language, salt-minion etc
<Campbell> Anyone had any success with the OpenStack quantum-gateway charm?
<Campbell> I'm having problems getting it to deploy
<Mage_Dude> When doing a MAAS install, you need two users correct? One for MAAS admin, and then other 'users', but at least one to use JuJu? Or does the MAAS admin also require a single node to run a 'JuJu server' (not even sure what _I_ mean by that)? I'm a little unclear why the default MAAS install takes up a node and who needs to 'own' it.
<jcastro> marcoceppi: do you have a link to the charm school video you guys did on Fri?
<marcoceppi> jcastro: http://www.youtube.com/watch?v=nLYisPONBDc
<jcastro> negronjl: for your next mongodb charm idea: http://www.kchodorow.com/blog/2013/03/06/databases-dragons/
<negronjl> jcastro: interesting .. I'll look into it.
<sigmaone> hi
<marcoceppi> sigmaone: hello o/
<sigmaone> Where can I buy a phone ubuntu in Sudan
<sigmaone> ^~^
<jcastro> hi, this channel is for Juju, which is a server technology
<jcastro> you want #ubuntu-touch for phone stuff
<sigmaone> ya
<Mage_Dude> Juju doesn't really work with virtualized environments does it?
<sarnold> o_O
<sarnold> that's the whole point :)
<Mage_Dude> So there's no way to trial it?
<jcastro> I think he means a virtualization provider
<jcastro> if you use juju .7 you can use containers to try it
<sarnold> Mage_Dude: charms I developed for lxc on my laptop deployed fine to amazon ec2 instances; I didn't use the free t1.micro instances at amazon because those take forever to even deploy an OS, but you could...
<jcastro> but for example I want a vagrant provider and so on.
<Mage_Dude> It seems that it doesn't even deploy a single node. I've tried 13.04, 12.04 and neither of them, if you follow the instructions online, actually finishes provisioning the node and allows Juju to work.
<jcastro> which instructions are you following?
<jcastro> out of curiosity, as we currently are in-progress for a doc transition
<Mage_Dude> https://juju.ubuntu.com/docs/getting-started.html
<Mage_Dude> I've tried the simplest command possible, 'juju --version'. And even that doesn't run...
<jcastro> sigh
<jcastro> those instructions are incorrect
<jcastro> https://juju.ubuntu.com/get-started/
<jcastro> Mage_Dude: if you remove/purge "juju" and install juju-core
<sarnold> jcastro: what's wrong with them? :) those were the ones I used..
<jcastro> "juju version" should work
<jcastro> wrong PPA
<jcastro> thumper: please, hurry. This two-juju nightmare is punching users in the face. :-/
<thumper> jcastro: working on it dude
<jcastro> Mage_Dude: so basically we're in the middle of a transition between major version of juju and you're seeing the worst possible combination right now
<Mage_Dude> jcastro: I wish I'd known but, thanks for at least working on it. It just sucks to have lost a week to fiddling with it.
<jcastro> oh man dude, say something waaaay earlier
<Mage_Dude> I certainly don't want to be the lmgtfy guy. (But it is funny to see every single link purple for searches related to maas/juju)
<sarnold> hehe
<jcastro> no, you may bother me at any time
<jcastro> all the time to get you on your feet
<jcastro> we can at least help you get some time back
<jcastro> no one in this channel will ever tell you to LMGTFY
<jcastro> we're here to help save you time, not steal it
<jcastro> but at least getting you on juju 1.11.x will get you on stuff that is improving very quickly
<Mage_Dude> Well, I'll at least see you at OSCON and hope to be not sucky at this by that time to really learn some tricks and stuff
<jcastro> we certainly owe you some consolation beers.
<Mage_Dude> Portland is a good place for that.
<Mage_Dude> In the install it says use the juju-1.10.0 alternative, should I use 1.11.0?
<marcoceppi> Mage_Dude: 1.11.0, the installation instructions should say 1.10 or above
 * marcoceppi goes to update that doc
<Mage_Dude> marcoceppi: Just in the getting started part, it mentions the 1.10.0 vs the 0.7
<marcoceppi> Mage_Dude: yeah, it really should be 1.X vs 0.7, 0.7 is probably going to be the last 0.x release (old code version)
<Mage_Dude> sudo apt-get install juju-core
<Mage_Dude> Ha ha!
<marcoceppi> Mage_Dude: exactly!
<marcoceppi> ;)
<jcastro> make sure to check the update-alternatives too
<Mage_Dude> No alternatives for juju?
<Mage_Dude> But juju version shows 1.11
<jcastro> if you have both installed it'll give you the option to switch back and forth
<jcastro> ok, you're good to go then!
<Mage_Dude> No tools available.
<marcoceppi> Mage_Dude: are you deploying on to ARM?
<Mage_Dude> marcoceppi: Nope, nor to AWS.
<marcoceppi> Mage_Dude: just (i386|amd_64) with MaaS?
<mwhudson> Mage_Dude: could be https://bugs.launchpad.net/juju-core/+bug/1172973
<Mage_Dude> Yeah reading through it now, but still don't understand a fix. Do I need to fake AWS keys?
<mwhudson> i think you need actual AWS keys
<mwhudson> (which is why the bug is critical i guess...)
<marcoceppi> Looks like it's being actively worked on too
<Mage_Dude> Well, I did get the command failed: access-key: expected nothing, got nothing (not sure why that's bad...) I'll try again in the morning.
#juju 2013-06-18
<AskUbuntu> Setting up Juju on Ubuntu Cloud | http://askubuntu.com/q/309508
<pavel> guys
<pavel> does anybody have working lucid servers with juju?
<jcastro> marcoceppi: did you have a branch for old docs to unscrew up that getting started page from yesterday? Or do you want me to bust it out?
<jcastro> I can do it, just making sure you hadn't done it already
<marcoceppi> I didn't get around to hacking on it yet, it's all yours
 * jcastro nods and gets to work
<jcastro> man, new-docs can't get here fast enough
<pavel> hi there
<pavel> I'm really stuck with JuJu on Lucid. Is it supported at the moment?
<jcastro> I don't think we've backported all the way to lucid
<jcastro> Juju first appeared in oneiric!
<pavel> Though there is a build for lucid in official PPA
<jcastro> marcoceppi: nuts, I forgot old docs was RST, I see what you did there.
<marcoceppi> :D
<jcastro> pavel: huh really? I didn't know that.
<jcastro> what version of juju is it?
<pavel> The latest I think :D
<marcoceppi> pavel: run juju --version
<pavel> let me check
<marcoceppi> jcastro: I think this is pyjuju
<jcastro> oh ok
<pavel> I don't think I would be able to do this, since it crashes on start
<marcoceppi> pavel: you can run `dpkg -l juju` to get the package version
<pavel> one sec, booting vagrant
<pavel> oh, it's 0.5.99+bzr604-0juju1~lucid1
<pavel> so I don't have to suppose it would ever work on Lucid?
<marcoceppi> pavel: yeah, the latest version is 0.7 for the 0.x series, the most recent version of juju is up to 1.11 but that hasn't been backported to lucid (or precise) yet
<pavel> yeah, yeah
<pavel> I just forget to look for the version
<jcastro> marcoceppi: pushed, a onceover of my diff would be <3
<koolhead17> jcastro, marcoceppi hello guys
<jcastro> hazmat: do we have plans to bring .7 back to lucid in the PPA?
<marcoceppi> pavel: yeah, so the juju package in the ppa hasn't been built for lucid in quite a while
<pavel> marcoceppi, yep, now it's clear
<marcoceppi> jcastro: we should be able to update the ppa to use the 0.7 build recipe
<marcoceppi> unless there's a dep issue
<jcastro> maybe it's an easy fix
<jcastro> let's see what kapilt has to say
 * marcoceppi nods
<pavel> it would be great if you update lucid version
<pavel> I have software which is really hard to install on something other then lucid
<jcastro> sure, we can investigate
<pavel> thanks a lot
<hazmat> jcastro, lucid really?
<hazmat> jcastro, i don't know that we've done any testing on lucid, in a long time.. i think dpb1 tried it out last (within the past few months)
<jcastro> ok
<jcastro> dpb1: did it even build?
<dpb1> jcastro: so, there was a change to the packaging to get around a missing dependency or something, and I think that broke lucid.  Here is the last build log I found for it: https://launchpadlibrarian.net/139956755/buildlog.txt.gz  -- it needs someone with python deb knowledge to look and fix it.  At that time, that was the only blocker to getting lucid going
<noodles775> wedgwood_away, adam_g_ Do you know if there's anything else needing to be done on this MP for it to be approved/landed in charmhelpers? https://code.launchpad.net/~michael.nelson/charm-helpers/add-declarative-support/+merge/168961
<marcoceppi> hazmat: in deployer can you specify relations via service:relation-name?
<hazmat> marcoceppi, yes
<marcoceppi> hazmat: thanks
#juju 2013-06-19
<stub> Can anyone confirm that if I run relation-list in a peer relation hook, that it might be listing dying units?
<noodles775> mthaddon: Not sure if you review charmhelpers, but as per our conversation yesterday, here's an update for the salt support which doesn't install a daemon: https://code.launchpad.net/~michael.nelson/charm-helpers/depend-on-salt-common-only/+merge/170298
<mthaddon> noodles775: I saw the comment in the RT, thanks. I'll leave the review to wedgwood and others for charm-helpers though
<noodles775> Cool.
<MarkDude> Is all off Juju's main code under Affero v1?
<marcoceppi> MarkDude: yes, juju is licensed agpl
<MarkDude> v1? not v3? marcoceppi ?
<marcoceppi> Oh, I'm not sure the version exactly
<MarkDude> Was that the license from the start
<marcoceppi> juju-core is AGPLv3
<marcoceppi> so is juju0.7
 * MarkDude can email on it. I have been wanting to put juju in Fedora- and have seen v1 be an issue
<MarkDude> Yay,
<MarkDude> much easier, I can proceed now, I *might* need a clarification letter at some point so I could put it in the regular repos
<MarkDude> A few of the other parts are not compatible with all the legal restrictions on my side.
<MarkDude> But as far as the main part- I feel juju- used in concert with Kickstart (scripts for the Anaconda installer) has awesome possibilities
<maxired> Hi everybody !
<marcoceppi> Hi maxired
<maxired> I'm trying to use "juju-gui", from cs:~juju-gui/precise/juju-gui-68 , and i'm stuck on "Connecting to the Juju environment"
<maxired> I'm not sure how to check that the juju API is running, any idea ?
<jcastro> MarkDude: you don't want that
<jcastro> you want to use cloud init
<jcastro> it's in fedora
<MarkDude> Agreed, the cli client looks easy to integrate
<MarkDude> the rest of the licenses appear to work, and I have a list of Juju items ALREADY available for Fedora
<jcastro> how's the golang stack in fedora?
<MarkDude> +1
 * MarkDude expects no issues there
<MarkDude> Well, no major ones; possibly small fixes needed
 * MarkDude started notes- and mostly saw Affero v1 as an issue- v3 means a bit of work on my side- but it should be fine
<marcoceppi> maxired: what version of juju are you using?
 * MarkDude was reviewing the code. Looks like there is a really nice base to build off of
 * MarkDude has a not so secret plan of linking (in idea only) juju charms and Kickstart
<maxired> marcoceppi : I was precisly looking at this.
<maxired> i'm on 0.7 from ppa:juju/pkgs on the master node
<marcoceppi> MarkDude: charms are licensed separately from juju, you'd need to check each charm's license if that was the case ;)
<maxired> but looks like juju 0.7 from the default repository has been installed by MAAS
<maxired> I'll try to change this
<marcoceppi> Admittedly it's been a while since i've deployed the gui with 0.7
<ehg> hey - is there an OS X build of go-juju anywhere?
<marcoceppi> ehg: not for go-juju, not yet at least
<ehg> marcoceppi: ah, would it compile?
<MarkDude> The charms themselves are not as interesting as what Juju could do for helping make a dev environment, or a few other things more technical
<MarkDude> Bonus being we get to keep the awesome part of getting folks to dev- on the easy route
<marcoceppi> ehg: I don't see why not. I don't think there's anything platform dependent, but I'm not a juju-core developer nor do I own a mac to verify
<ehg> cool, i'll get our CEO to try :)
<marcoceppi> ehg: Let us know how it works out!
<MarkDude> Juju is full of win. Charms made elsewhere may not work for various reasons - technical , license etc
<MarkDude> Roll out on installs of servers is a possibility at this point
 * MarkDude is hoping to get a kickstart to create Juju- and go from there
<MarkDude> Ty jcastro , marcoceppi , everyone
<MarkDude> Back to study, do I send my "formal letter of intent" (yay Juju rocks) to jcastro  or someone else ?
<jcastro> a letter for what?
<jcastro> I don't know where you hang out man, but we're ubuntu, you don't need a letter, just go do awesome stuff. :p
<maxired> marcoceppi : I was wrong, i'm also running juju from ppa:juju/pkgs on the execution machine
<AskUbuntu> Ubuntu 12.04 MaaS and JUJU's proxy issues | http://askubuntu.com/q/310153
<marcoceppi> maxired: Well, that all _should_ work. There was a rudimentary api added to 0.7 for the gui
 * marcoceppi checks
<MarkDude> I know jcastro but despite the awesomeness of Fedora, every so often I need to do formal shit
<jcastro> ok let me know if you need anything
<MarkDude> So I can forward to legal, and such- CYA on my side
 * MarkDude hates having to use his title, but has to in the formal letter
 * MarkDude really really wants to have Distros focus on our common themes- and juju has great possibilities
<MarkDude> Ubuntu helps all of FOSS imho. and providing some concrete examples can help stop some sniping from at least a few troll types
<MarkDude> Oh I should have an agreeable packager to help with this, so I can see about getting some momentum. Keep kickin ass Juju folks :D
<jcastro> hey we do have some RPM packaging somewhere, but it's old and for pyju
<jcastro> so not useful probably, just thought I'd mention it
<marcoceppi> jcastro: I'd rather pyjuju not end up in some rpm distro somewhere :P
<jcastro> hah yeah, we can't even get rid of it in ubuntu
<jcastro> it's gone from the saucy though!
<maxired> I got a process with " /usr/bin/python -m juju.agents.api --nodaemon --port 8080 --logfile /var/log/juju/api-agent.log --session-file /var/run/juju/api-agent.zksession --secure --keys /etc/ssl/juju-gui" , but not listening on 8080
<gary_poster> maxired, hi.  I'm one of the GUI people.  We test against pyJuju 0.7 at least a couple times a day, but not against MAAS.  We've had similar reports to this, but we have resolved everything we know about previously.
<gary_poster> ok that sounds promising/interesting.  this is on the GUI charm, I assume?
<MarkDude> Where are they located?
 * MarkDude is aware the pyju is the OLD method
<MarkDude> Well we would look at pyju for reference, not to include :D
<gary_poster> maxired, first question I'd have, if you've verified that nothing is listening on the gui charm on port 8080, is what you see in that logfile (/var/log/juju/api-agent.log)
<maxired> gary_poster : thanks for helping. yep, nothing actually listening on that port.
<maxired> I'm currently trying again, because there was previously an nginx on it
<maarten__> When I do juju bootstrap, the bootstrap server does not have my openstack keypair and I cannot log into it. What do I have to put to include the canonistack keypair?
<gary_poster> ok maxired.  there's a (closed) bug for this, which has had some conversation.  I don't think it has anything interesting but will go find it.
<gary_poster> maxired, https://bugs.launchpad.net/juju-gui/+bug/1180095 <shrug>
<_mup_> Bug #1180095: GUI charm may have difficulties working with Juju on MAAS <juju-gui:Invalid> <https://launchpad.net/bugs/1180095>
<maxired> gary_poster : thanks
<marcoceppi> maarten__: You can add SSH keys to your environments.yaml file: http://askubuntu.com/q/205170/41
<maarten__> thanks
<marcoceppi> WIth that format, you'd just put one public key per line
<marcoceppi> (if you had multiples)
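For reference, a sketch of what marcoceppi describes: an `authorized-keys` block in environments.yaml with one public key per line. The environment name, provider details, and key material below are placeholders, not taken from maarten__'s setup.

```yaml
# Hypothetical snippet of ~/.juju/environments.yaml -- only the
# authorized-keys stanza is the point being illustrated; everything
# else here is placeholder.
environments:
  canonistack:
    type: openstack
    control-bucket: juju-example-bucket
    admin-secret: example-secret
    authorized-keys: |
      ssh-rsa AAAA...one user-one@example
      ssh-rsa AAAA...two user-two@example
```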
<maxired> gary_poster  : after a new deployment, the agent is listening, but I'm still stuck on "Connecting to the Juju environment"
<maxired> do you know how I can get logs from haproxy ?
<gary_poster> maxired, I don't.  looking.
<gary_poster> maxired, benji has graciously offered to help
<gary_poster> he'll be around presently
<benji> hi, maxired; I'm reading the backlog to get caught up
<maxired> hi benji ;) thanks for helping
<maxired> benji : looks like it's working right now
<maxired> the first connection to the websocket has been really long
<maxired> but I could log in to the web ui ;)
<benji> my work here is done ;)
<maxired> benji , gary_poster and marcoceppi, thanks for helping
<marcoceppi> maxired: glad you got it working!
<gary_poster> maxired, benji, lol, excellent
<maxired> I'll check in the bug tracker, but looks like my problem was a destroyed instance of 'wordpress' which didn't remove the nginx
<gary_poster> maxired, on the same box to which you deployed the GUI later, I'm guessing?
<maxired> yep
<gary_poster> ok, yeah.  FWIW, effectively fixed in Juju Core AIUI, maxired.
<maxired> Good news: I don't have to reproduce it to be sure about that in order to file a bug ;)
<gary_poster> :-)
<maxired> any link to the commit/bug ?
<gary_poster> looking
<gary_poster> maxired, https://bugs.launchpad.net/juju/+bug/872264
<_mup_> Bug #872264: stop hook does not fire when units removed from service <goju-resolved> <juju:Confirmed> <juju-core:Fix Released by fwereade> <https://launchpad.net/bugs/872264>
<gary_poster> (I believe you destroyed the service, but I also am 90% sure that the issue is the same)
<fwereade> maxired, gary_poster: we do not currently implement an uninstall hook, which is what I'd expect would actually remove such things
<fwereade> maxired, gary_poster: we are currently working on containerization, which would let you *really* remove things, rather than just trusting to the effect of the putative uninstall hook
<maxired> gary_poster : thanks, i was looking for something more specific
<gary_poster> fwereade, when you destroy a service, don't you do a lot more drastic clean up than pyJuju did?  Or am I just thinking of containerization discussions?
<gary_poster> maxired, ok, that's all I've got.  :-)
<fwereade> gary_poster, we can do a lot more stuff in the DB; and we can actually guarantee that stop hooks will be run; but we don't go any further than that
<maxired> fwereade : I guess you are right. Stopping could be not enough if things, for example, start at reboot ;)
<fwereade> maxired, I would hope that a well-implemented stop hook would prevent the service from autostarting ;)
<gary_poster> fwereade, gotcha, thanks.
<jcastro> jamespage: m_3: have either of you checked out that merge proposal from patrick hetu yet?
<jamespage> jcastro, remind me again - which one is that?
<jcastro> https://code.launchpad.net/~patrick-hetu/charms/precise/gunicorn/python-rewrite/+merge/167088
<jamespage> stub, you are terrifying me with all of the postgresql MP's you have in flight
<jcastro> rick_h: hey so sinzui isn't around so you're the closest, tldr, some merge proposals aren't getting in the charm review queue
<rick_h> jcastro: yea, there's a bug for that. Sec let me find it.
<rick_h> https://bugs.launchpad.net/charmworld/+bug/1191823 jcastro
<_mup_> Bug #1191823: Merge proposals are missing from the review queue <charmers> <charmworld:Triaged> <https://launchpad.net/bugs/1191823>
<rick_h> "But ~charmers isn't requested to review this proposal, only ~mark-mims."
<jcastro> OOOHHHHH
<rick_h> jcastro: so not a lot we can do atm about it.
<jcastro> so this is a workflow bug
<rick_h> jcastro: yea, at least for now until we can get time to check out a better way to generate the list
<jcastro> ok so you know how we can "reset" a merge proposal to a non assigned to mims state?
<rick_h> jcastro: moved to #juju-gui since aaron is there and he looked into it.
<jamespage> jcastro, just fixed that
<jcastro> right so basically after the first reviewer reviews it, and they ping pong, we need a way to stick it back into the pool instead of assigning it to the first guy
<jcastro> jamespage: oh awesome, so we just need to tell charmers when you're done your first pass to do what you just did
<jcastro> jamespage: what did you do? :)
<jamespage> jcastro, requested 'charmers' review the MP
<jamespage> 'Request another review'
<jcastro> ack, found the docs, thanks, I'll send out a reminder and probably put this in newdocs
<stub> jamespage: Big one coming up now I understand -broken/-departed hooks and can rewrite the horrible replication stuff
<jamespage> stub, do you want to consolidate into a single merge proposal?
<stub> jamespage: The ones in the queue will get the charm in the store actually working again
<stub> jamespage: You actually prefer big MPs to smaller bite sized ones?
<jamespage> stub, no - I just wanted to understand if what's in the queue is all valid
<jamespage> time order right?
<stub> yeah - I use a bzr pipeline so they each depends on the previous
<stub> https://code.launchpad.net/~stub/charms/precise/postgresql/charm-helpers/+merge/169487 is likely rubber stamp, a separate MP to avoid noise in actual work
<stub> https://code.launchpad.net/~stub/charms/precise/postgresql-psql/devel/+merge/169851 is teensy
<stub> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1190141-client-access-fail/+merge/169928 is also teensy and fixes the worst bug atm
<stub> yeah, nothing to be scared of yet ;)
<fwereade> rvba, ping
<jamespage> stub, there are two others in the queue
<jamespage> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1084263-fix-broken-hooks/+merge/169486
<jamespage> no - just that one actually
<stub> https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1084263-fix-broken-hooks/+merge/169486 may give you a wtf moment. I think I based my implementation on a misunderstanding. But the approach works.
<stub> There is a test infrastructure one too
<stub> https://code.launchpad.net/~stub/charms/precise/postgresql/tests/+merge/169866
<stub> But nothing scary in there
<jamespage> stub, OK - I'm completely confused
<stub> :)
<jamespage> stub, which one should I be reviewing first? charm-helpers?
<stub> I'm not helping? ;)
<stub> Yes, charm-helpers
<jamespage> Ok - lemme start there then
<stub> That is just upstream code I've copied in from lp:charm-helpers
<jamespage> stub, yeah - I see - you sure you want to include the entire project? and not just the python package?
<stub> jamespage: I don't know if it matters. Whatever best practice is considered.
<jamespage> stub, not sure we have one just yet
<stub> Pulling the whole thing gets the license etc.
<jamespage> I know for the openstack charms we have been taking the approach of pulling in the bits we want
<jamespage> stub,  see http://bazaar.launchpad.net/~gandelman-a/+junk/charm-helpers-sync/files
<stub> I worry that will drift... people making their own little hacks. The PG charm already contains a hacked-upon copy of charm-helper's ancestor.
<stub> How does that helper help with charm-store charms?
<stub> oh, I think I see
<jamespage> all charm-helpers-sync does is provide a standardized way to describe which bits you want from lp:charm-helpers
<jamespage> and syncs them into your charm
<jamespage> stub, I'll push adam_g to get that included in charm-helpers itself
<jamespage> adam_g_, nudge ^^
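The "standardized way to describe which bits you want" that jamespage mentions is presumably a small config file the sync tool reads. A hedged sketch follows; the field names are an assumption based on this discussion, not a verified schema of adam_g's tool.

```yaml
# Hypothetical charm-helpers.yaml for the charm-helpers-sync tool
# linked above -- the keys shown are an assumption, not a verified
# schema. The idea: name the source branch, the destination inside
# your charm, and only the pieces you want pulled in.
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
  - core
  - fetch
```

The design trade-off stub raises is real: syncing only selected modules avoids shipping the whole upstream tree, but a whole-tree copy makes it easier to merge local fixes back upstream.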
<stub> Right. So I don't see any reason not to switch to that. I'll go with whatever mechanism is preferred.
<jamespage> stub, I'd prefer to see that rather than a sync of the entire branch
<jamespage> wedgwood, any opinions on the above?
<stub> A point for the whole tree is it is easier to merge fixes back, perhaps.
<stub> Nah, ignore that
 * wedgwood reads backscroll
<wedgwood> jamespage: I've been doing my best to structure charm-helpers in a way that it can be included in pieces. +1
<wedgwood> I envision it being packaged that way too. The cli, for example, might be a separate deb
<stub> jamespage: Pushed that update anyway. Just running the test again (soon that will be tests, plural)
<_mup_> Bug #1192598 was filed: pyjuju provision-agent hangs and fails to provision instances <juju:New> <https://launchpad.net/bugs/1192598>
<jamespage> wedgwood, ack
<jamespage> stub, guess I should review lp:~stub/charm-helpers/bug-1191002-local-unit-relation-data first as that is where you are pulling charm-helpers from
<stub> jamespage: turtles, all the way down
<jamespage> stub, coolio
<jamespage> stub, that branch also adds the data that the local unit has set on a specific relation right?
<jamespage> (thats the bit I did not even know you could do until last week)
<stub> jamespage: Correct
<jamespage> stub, OK - that works for me - do you want to rebase using lp:charm-helpers now?
<jamespage> I think that would be good hygiene
<stub> It has landed? Sure.
<stub> jamespage: And done
<jcastro> charm call in 5 minutes!
<marcoceppi> jcastro: do you have the URL?
<jcastro> setting it up now
<jcastro> hey can one of you prep the etherpad for this week while I fire this up?
<jcastro> https://plus.google.com/hangouts/_/9d5f2933f12c69245f97d157a4c372d4d61318bd?authuser=0&hl=en
<jcastro> if you want to participate
<jcastro> http://ubuntuonair.com if you want to just listen in
<arosales> jcastro, I can work on the etherpad
<marcoceppi> http://pad.ubuntu.com/7mf2jvKXNa
<jamespage> stub, how worried are you about backwards compat with py-juju?
<stub> jamespage: I will be worried, at least to the extent of migrating our existing systems
<jamespage> stub, destroy-machines in the tests is a juju-core-ism - pyjuju uses terminate-machine
<stub> right, ta. Might as well change that.
<stub> When I have a standard require/provides relation between client_service and server_service, is it possible for a unit in server_service to retrieve relation data set by other server_service units? From the server side, relation-list only appears to list client_service units and relation-get with the other unit listed explicitly seems to be failing.
<marcoceppi> stub: I believe that's what peer relations are for
<stub> This is relation state... it doesn't fit the peer relation
<marcoceppi> stub: Units of the same service don't talk to each other (normally) when it's service -> client relation
<stub> One of the server_service units is a master, and creates a user and password for the clients and publishes it to the relation with the client
<stub> The other server_service units ideally will republish the same information
<stub> I'm confused as I think this worked before. It might be racy in some way though.
 * stub ponders about using the peer relation as a communication channel
<wedgwood> stub: you want relation-ids
<wedgwood> you'll need the right ID to go along with the unit
<wedgwood> e.g. (service a) <- myrel:0 <- database -> myrel:1 -> (service b)
<stub> hmm... maybe I'm using the wrong relation id. That would explain why it used to work.
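Wedgwood's point can be sketched as a tiny hook fragment: pair each remote unit with the relation id it actually belongs to before calling relation-get. The three hook tools are stubbed below (with made-up ids, units, and a made-up `password` key) so the sketch runs outside a real Juju hook; in a charm they are provided on PATH by Juju.

```bash
#!/bin/bash
# Sketch: reading relation data set on a *specific* relation id.
# Stubs stand in for the real hook tools so this runs anywhere;
# the values they return are illustrative only.
relation-ids()  { echo "db:0"; }               # ids for the named interface
relation-list() { echo "client/0 client/1"; }  # remote units on that id
relation-get()  { echo "s3kr1t"; }             # value set on that id/unit

# Walk every relation id for the 'db' interface, then every unit on
# that id, and read the key with -r so the id and unit stay paired.
for rid in $(relation-ids db); do
    for unit in $(relation-list -r "$rid"); do
        echo "$rid $unit password=$(relation-get -r "$rid" password "$unit")"
    done
done
```

Using the wrong relation id with relation-get is exactly the failure mode stub suspects above.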
<MarkDude> Where were the RPM charms? I have someone reviewing now
 * MarkDude knows they were done the old way with Pyju
<jcastro> https://github.com/jujutools/rpm-juju
<MarkDude> Ty
#juju 2013-06-20
<mectors> A quick question. Canonistack is not launching instances reliably. Is there any way I can kill a machine and tell juju to try to start another node. juju destroy-unit/machine/service do not work reliably.
<pavel> you mean juju terminate-machine ?
<mectors> It marks it as dying but it does not really terminate it because it never started up
<mectors> so now I am stuck with an inconsistent state
<AskUbuntu> E: Unable to locate package juju-core | http://askubuntu.com/q/310604
<jcastro> hey jam, silly question
<jcastro> but do you know how we ended up with packages in the PPA being built differently than in distro?
<jcastro> I am wondering if we did that on purpose or if it's just a consequence of something
<jam> jcastro: because the PPA was done first, but isn't "legitimate" for how they wanted the build to be done in Ubuntu
<jcastro> ah ok
<jam> jcastro: so it got fixed for official builds, and the changes need to be applied to the PPA
<jam> but there is some reason it isn't trivial to do it
<jam> mgz knows a bit more.
<jcastro> I was just curious
<pavel> Guys, can you explain one very basic thing to me? Is a service instance supposed to survive being stopped/started or rebooted?
<mgz> jcastro: the ppa generation wasn't updated to use the new packaging, I'm going to try to sync up with dave on fixing that
<marcoceppi> pavel: could you clarify that a bit? I'm not sure I understand what you're saying
<pavel> I mean, what if I stop and start instance with service deployed by juju in aws console
<pavel> marcoceppi, what will happen then?
<marcoceppi> pavel: Depends on which version of juju you use
<pavel> 0.7
<marcoceppi> If you're using juju <= 0.7, Juju will provision a new unit to replace it when you stop an instance. In juju >= 1.0, Juju won't try to replace the stopped unit, but I'm not 100% certain things will "come back up" properly. In that the juju service on the unit will start, but the start hook won't be explicitly executed, so your service will need to install itself as an upstart or init script properly configured to start on machine up.
<marcoceppi> Also, relations might go haywire as well
<pavel> I see
<pavel> so there is no predictable behavior for this case
<pavel> and I should suppose that instance always run
<marcoceppi> pavel: not at the moment, no real predictability. It's best to remove and re-add units instead of turning them on and off via the console
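To make marcoceppi's point concrete: a charm's install hook would need to write something like the upstart job below for the service to come back after a stop/start of the instance. The job name, description, and exec path are placeholders, not anything from pavel's deployment.

```
# Hypothetical /etc/init/myapp.conf -- an upstart job a charm's install
# hook might write so the workload restarts on boot without Juju
# re-running the start hook. "myapp" and the paths are placeholders.
description "myapp worker"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/myapp --config /etc/myapp.conf
```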
<pavel> because my problem is that I didn't find a way to increase root partition size on AWS with Juju, so I use ephemeral storage mounted to /mnt
<dpb1> Hi all -- is there a way in juju-core to associate a public IP through juju with just one service or unit?
<mgz> nope, that would be handy though
<mgz> I'm working on the addressing code currently, so if you have use-cases I'd really like it if you could send a message to the list or something so they get recorded
#juju 2013-06-21
<FunnyLookinHat> Can juju-core handle LXC local deployments?  I'm getting an error: environment "sample" has an unknown provider type "local" when doing juju bootstrap
<pavel> which version of juju?
<FunnyLookinHat> launchpad trunk for juju-core :D
<pavel> I think here is your answer http://askubuntu.com/questions/309508/sample-lxc-has-an-unknown-provider-type-local-error-when-bootstrapping
<pavel>  In order to use LXC as a provider you will need to use Juju 0.7. The way we're currently doing this is you can install juju along with juju-core and use alternatives to switch back and forth:
<FunnyLookinHat> Ah
<marcoceppi> FunnyLookinHat: Local provider in juju-core is scheduled to land this cycle
<FunnyLookinHat> marcoceppi, right on - I can test with 0.7 in the meantime  :)
<FunnyLookinHat> Curious - when does this current cycle go "stable" ?
<marcoceppi> FunnyLookinHat: I haven't the slightest idea, I know they've got another release ready to go soon and there's talk of publishing a juju-core release every 2 weeks, but as for when juju-core goes 2.0 (stable) that's still dependent on when features land, etc
<FunnyLookinHat> marcoceppi, Ah ok thanks :)
<jcastro> hey evilnickveitch
<jcastro> http://blog.smartbear.com/careers/13-things-people-hate-about-your-open-source-docs/
<jcastro> thankfully we already are avoiding most of them
<evilnickveitch> cool, thanks!
<evilnickveitch> jcastro, i think we are avoiding all of them, apart from not shipping the docs with the software
<evilnickveitch> which we could do if we wanted
<marcoceppi> evilnickveitch: which we should do to make thumper happy*
 * marcoceppi squints at juju help
<evilnickveitch> marcoceppi, I think we should. There are various other bits that need tidying up
<evilnickveitch> man pages for example
<marcoceppi> yeah, but it should be pretty easy to go html -> whatever-format-they-want
<evilnickveitch> marcoceppi, indeed - that is the genius of whoever it was that decided the docs should be written in HTML
<marcoceppi> Easy, you might pull a muscle patting yourself on the back that hard ;)
<evilnickveitch> :)
<marcoceppi> really <3 the new docs. evilnickveitch I'll have a merge request for the redirects against the juju-core branch by my eod as it looks like most of the pages are there now (and I'll be out next week)
<evilnickveitch> marcoceppi, okay, that's great, I'll try and pick it up later
<arosales> evilnickveitch, I have an RT open to get the docs deployed as you may have seen. But with that I listed no deps. Thus, could you work to get the navigation settled and then drop the jade ?
<evilnickveitch> arosales - sure, I have been working on that, should be sorted by Monday :)
<arosales> evilnickveitch, thanks.  Could you update the RT with that, you should be on cc.
<evilnickveitch> yeah, i see it, will do
<arosales> evilnickveitch, thanks.
<arosales> marcoceppi, I am also checking where the redirects should live (you are also on cc).
<marcoceppi> arosales: ack, I'm going to stick them in the repo for now, in case that becomes a blocker next week
<arosales> marcoceppi, ok sounds good. IS can also move them where appropriate if they are in the repo too
<pavel> jcastro, hello
<jcastro> hi!
<pavel> ./msh jcastro we have a kickoff on a tuesday only
<pavel> but can we schedule another one about requirements on a monday?
<arosales> pavel, hello
<pavel> sry, I fucked up with those irc commands )
<jcastro> yeah the problem is m_3's availability
<jcastro> pavel: but if you guys are around now maybe we can set something up?
<arosales> pavel, no worries.  what jcastro said
<pavel> yes, we can
<jcastro> I was concerned that by the time m_3 got off the call he's on now, you guys would be in bed.
<arosales> m_3, and I are in meeting for another hour
<pavel> I see
<arosales> pavel, is 17:00 UTC too late for you?
<pavel> it's ok for me
<pavel> Jorge, can we have a small hangout just for me to clarify some points?
<arosales> pavel, I'll set up a quick google hangout for 17:00 with jcastro and m_3
<pavel> today?
<arosales> correct
<arosales> in 1 hour
<arosales> given it is not too late for you
<pavel> it will work perfectly for me
 * arosales doesn't want it to disrupt you friday evening though :-)
<pavel> friday night is long  :)
<jcastro> I like how you roll
<FunnyLookinHat> What's the command to see the bash as I'm deploying a charm ?
<marcoceppi> FunnyLookinHat: debug-log ?
<FunnyLookinHat> marcoceppi, duh.  thx.  :D
<FunnyLookinHat> I was looking at juju-debug and just realized that's how you SEND something
<marcoceppi> Personally, I prefer to juju ssh <service>/<unit>; tail -f /var/lib/juju/units/*/charm.log; but I'm just old fashioned
<paraglade> has anyone tried to get the following to work using just charms and no manual config: kibana + elasticserach + logstash indexer + redis + logstash agent
<paraglade> I seem to be hitting a wall with the redis part
<FunnyLookinHat> Ah geez - how do I send a message to the log from within a hook again? ( bash hook, that is )  juju-log message ?
<FunnyLookinHat> Doesn't seem to want to play nice...  :-P
<FunnyLookinHat> Oh I just missed it...
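For the record, the command FunnyLookinHat was after is juju-log. A minimal sketch follows; juju-log is stubbed so the fragment runs outside a hook context, and in a real charm hook the actual tool is provided on PATH by Juju and writes to the unit's log.

```bash
#!/bin/bash
# Sketch: logging from a bash hook. The stub stands in for the real
# juju-log so this runs anywhere; the messages are illustrative.
juju-log() { echo "juju-log: $*"; }

juju-log "install hook starting"
juju-log "config-changed: falling back to default port"
```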
#juju 2013-06-22
<adam_g> jamespage, that issue andres was hitting looks like an issue with the Serializable() voodoo. the type of a single config value depends on how it was obtained: http://paste.ubuntu.com/5788369/
<adam_g> jamespage, and stub is already on it: https://code.launchpad.net/~stub/charm-helpers/bug-1192845-fix-serializable/+merge/170555
<AskUbuntu> MaaS minimum requirements with juju-jitsu? | http://askubuntu.com/q/311410
<virusuy> hi guys
<virusuy> it's just me, or flag --help in charm-tools don't work  ?
<AskUbuntu> Juju + openstack. Bootstrap is successful, But cant create any other services | http://askubuntu.com/q/311583
#juju 2014-06-16
<gnuoy> wallyworld, wow, thanks for the speedy fix to Bug#1329805
<_mup_> Bug #1329805: juju search for image does not find item if endpoint and region are inherited from the top level <juju-core:Fix Committed by wallyworld> <simplestreams:Fix Released> <https://launchpad.net/bugs/1329805>
<wallyworld> gnuoy: no problem. we didn't expect that the very top level would contain region/endpoint so didn't cater for that. we do now :-)
* mbruzek changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: mbruzek / tvansteenburgh || News and stuff: http://reddit.com/r/juju
<jamespage> hazmat, is there a reason that juju-deployer does not support "to: " with arbitrary machine numbers other than 0?
<rick_h_> jamespage: because bundles can be deployed over an existing environment and the numbers are not promised
<rick_h_> jamespage: is my understanding
<jamespage> rick_h_, right - makes sense
<jamespage> rick_h_, I figured it out with my manual provider usage - deploy the two services and then target everything else at those.
<ali1234> does anyone know why my "juju status" has stopped working: http://paste.ubuntu.com/7643749/
<ali1234> it worked for a bit, then it stopped
<ali1234> the container is still running
<ali1234> one of them anyway, the other one never finished setting up
<ali1234> it tries to open a socket to the orchestration container and fails
<hazmat> jamespage, its not reproducible
<hazmat> jamespage, per what rick_h_ said.. I've come around, though; there should be an unsafe mode for it, as it's useful for a lot of folks
<jamespage> hazmat, ack
<jamespage> I worked around it
<jamespage> gnuoy, so I brought up a fresh, two node cinder cluster, and then added another unit to it OK
<gnuoy> jamespage, let me see if I can reproduce with cinder
<gnuoy> jamespage, I seem to see the problem with cinder as well. Deploy cinder on trusty with HA. Kick pacemaker and corosync and everything is fine. Add a new unrelated service and cinder pacemaker refuses to stop.
<jamespage> gnuoy, I don't understand why you are adding a new unrelated service?
<gnuoy> jamespage, because that's what triggers the breakage
<jamespage> gnuoy, oh - one second
<jamespage> it might be crapping itself out
<jamespage> the new units will get the existing configuration - however they won't have any of the right bits installed to actually run them
<jamespage> so maybe that's what's causing the problem?
<gnuoy> jamespage, the service I add is truly unrelated
<jamespage> gnuoy, if you add another neutron-api unit does that work OK?
<jamespage> gnuoy, one second - do you have just one instance of the hacluster charm deployed?
<jamespage> so you are 'add-relation' to two different services?
<gnuoy> I don't do anything to the unrelated service. I don't add the ha charm at all
<jamespage> gnuoy, you just deploy it?
<jamespage> so its not running hacluster or anything?
<gnuoy> yep, let me work up an example with cinder
<jamespage> gnuoy, please do
<gnuoy> jamespage, http://paste.ubuntu.com/7653811/
<jamespage> gnuoy, I really don't understand this - how can turning on three arbitrary new servers foobar your cluster?
<gnuoy> I have no inkling of a clue
<jamespage> gnuoy, can you check your serverstack hosts file on your bastion pls
<jamespage> make sure there are no dupes
<gnuoy> jamespage, http://paste.ubuntu.com/7653847/ dupefree
<jamespage> gnuoy, /etc/serverstack-dns/tenant_hosts
<jamespage> gnuoy, I've just pretty much tried your steps and I'm not seeing the same problem
<gnuoy> http://paste.ubuntu.com/7653850/
<gnuoy> jamespage, that is interesting
<jamespage> gnuoy, one second
<jamespage> gnuoy, sorry - what is interesting?
<gnuoy> jamespage, that you don't see the issue. I can reproduce it every time
<lazyPower> dpb1: ping
<jamespage> gnuoy, this really has me scratching my head
<kentb> Hi juju folks.  Would this be the proper way to include the open-source components for my charm as well as a EULA for the Dell-specific ones?:  http://bazaar.launchpad.net/~kentb/charms/trusty/openmanage/trunk/view/head:/copyright
<kentb> OMSA = OpenManage Server Administrator
<jamespage> gnuoy, your other servers are not using the same multicast address are they?
<gnuoy> jamespage, I don't believe so I'm just redeploying to double check
<jamespage> gnuoy, following your guide 100%
<AskUbuntu> Machines required Juju bootstrap | http://askubuntu.com/q/484166
<lazyPower> dpb1: i canceled that sync we had. I'm going to retarget @ the charm maintainer
<gnuoy> jamespage, having terminated the other instances the problem has gone. I'm sorry to have messed you about but I'm not convinced my corosync woes are completely fixed. I'll try and work up another test case in the next few days
<gnuoy> jamespage, I have a theory. When I was doing the ha testing before, it was when other clusters were present (cinder and nova-cc). I wonder if that's the problem.
<jamespage> gnuoy, if you configured HA with the same multicast address it probably would be
<gnuoy> different multi cast addresses but maybe they're clashing somehow
<jamespage> gnuoy, pick a non-conflicting default :-)
<gnuoy> I did
<jamespage> gnuoy, hmm - now I see the same issue
<gnuoy> jamespage, how have you reproduced ?
<jamespage> gnuoy, I walked through your reproducer step-by-step
<gnuoy> jamespage, do you have another cluster running in the same env ?
<jamespage> gnuoy, noodles775
<jamespage> gnuoy, no
<jamespage> noodles775, sorry
<gnuoy> jamespage, I need to EOD, thanks for the additional eyes
<jamespage> gnuoy, ditto
<ali1234> http://paste.ubuntu.com/7654841/ <- what does this mean?
<ali1234> this is the point where it switched from saying "no tools available" to "too many open files"
<ali1234> it was printing that "no tools available" message every 10 seconds for about 10 hours
<mbruzek> Hello ali1234
<ali1234> hi
<mbruzek> Has juju worked before or is this a new environment
<ali1234> new environment
<ali1234> remember the other day, when i crashed it?
<mbruzek> yes
<ali1234> well we fixed that one. the problem was i ran bootstrap with sudo
<ali1234> that created root owned files in ~/.juju
<mbruzek> You seem to have the touch when it comes to breaking Juju
<ali1234> so i cleaned all that stuff out
<mbruzek> how?
<mbruzek> apt-get remove -p ?
<ali1234> juju destroy-environment --force
<ali1234> rm -rf ~/.juju
<mbruzek> Ok
<ali1234> then i bootstrapped again without root
<ali1234> that worked okay
<ali1234> then i deployed an elasticsearch unit and it worked
<mbruzek> But you said this was a new environment.
<ali1234> yes
<ali1234> in fact that unit is still working now, i can reach the container
<mbruzek> So this is on a different machine ?
<ali1234> no, this is on the same machine
<mbruzek> So Juju was working and now it is broken?
<ali1234> i guess you could say that
<ali1234> when i tried to deploy a second machine it broke
<ali1234> that machine never finished deploying
<mbruzek> OK sorry for the problems lets troubleshoot what you are seeing now.
<mbruzek> ali1234, would you mind destroying everything and starting "fresh" ?
<ali1234> that's fine
<ali1234> however currently juju commands do not work
<ali1234> because i left it in an error state overnight and now it's exceeded the maximum open files somehow
<mbruzek> sudo lxc-ls --fancy | pastebinit
<ali1234> http://paste.ubuntu.com/7654902/
<ali1234> okay i recognise one of those - machine-1 is the elasticsearch i deployed, which currently works correctly
<mbruzek> sudo lxc-stop -n al-local-machine-1
<ali1234> okay, it stopped
<mbruzek> sudo lxc-destroy -n al-local-machine-1
<ali1234> okay, it's gone
<mbruzek> juju destroy-environment -y local --force
<ali1234> okay
<mbruzek> ps -ef | grep mongo
<ali1234> not running
<mbruzek> excellent
<ali1234> should i always use the juju ppa?
<ali1234> (seems like now would be a good time to install it if so)
<mbruzek> Yes, there is a stable and a devel branch if I am not mistaken
<mbruzek> stable would be the one I would suggest
<mbruzek> lets do this first
<mbruzek> sudo apt-get purge juju-local
<pmatulis> on ubuntu 14.04 i deployed wordpress/mysql and the charm for wordpress is from precise (charm: cs:precise/wordpress-22).  normal?
<ali1234> that just means the container machine will be running precise, doesn't it?
<ali1234> purged
<mbruzek> pmatulis, I get wordpress-22 when I deploy as well
<mbruzek> sudo add-apt-repository -y ppa:juju/stable
<mbruzek> ali1234, ^
<ali1234> juju and juju-core are being updated...
<mbruzek> ali1234, just in case something else is broken, also purge juju-core
<mbruzek> before installing?
<mbruzek> ali1234, then sudo apt-get install juju-core juju-local
<ali1234> okay, done
<mbruzek> juju init
<ali1234> ERROR A juju environment configuration already exists.
<mbruzek> hrmm...
<ali1234> delete ~/.juju?
<mbruzek> back up your .juju/environments.yaml file and then delete the .juju directory.
<ali1234> any reason to back it up?
<mbruzek> ali1234, Only if you have other clouds defined other than local
<mbruzek> Otherwise rm-rf
<ali1234> okay, done
<mbruzek> juju init
<ali1234> done
<pmatulis> mbruzek: so normal?
<mbruzek> pmatulis, From what I saw it looks normal?  What are you concerned about?
<mbruzek> ali1234, can you pastebin your environments.yaml file?
<pmatulis> mbruzek: i just expected everything to be on trusty is all
<ali1234> the old one or the new one?
<mbruzek> ali1234, new one
<ali1234> http://paste.ubuntu.com/7654978/
<mbruzek> pmatulis, Oh.  No we are not auto promulgating charms.  They must have tests and be tested on Trusty before they advance.
<mbruzek> so pmatulis most of the charms are still on precise.
<pmatulis> mbruzek: ok, fair enough, cheers
<mbruzek> ali1234, you only need http://paste.ubuntu.com/7654994/
<mbruzek> ali1234, you can keep the other stuff in there commented out
<ali1234> i'm supposed to add that stuff?
<ali1234> last time i didn't edit the file at all
<mbruzek> default-series must be set
<ali1234> what happens if it isn't?
<mbruzek> https://juju.ubuntu.com/docs/config-LXC.html
<mbruzek> problems
<ali1234> that page doesn't say anything about default-series...
<mbruzek> it should
<mbruzek> Sorry I will fix that
<ali1234> admin-secret is any string?
<mbruzek> it is, that is just any string to log into juju-gui
<ali1234> okay i used your paste and i get ERROR couldn't read the environment when i try to juju switch
<mbruzek> can you give me uname -a ?
<ali1234> Linux al-desktop 3.13.0-8-generic #28-Ubuntu SMP Tue Feb 11 17:55:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
<mbruzek> ali1234, let me see error
<ali1234> "ERROR couldn't read the environment" is the full output
<ali1234> it works if i just add those lines into the generated config
<mbruzek> ok
<ali1234> if i do juju generate-config -f while your version is in place it says: "ERROR cannot parse "/home/al/.juju/environments.yaml": YAML error: line 1: mapping values are not allowed in this context"
<ali1234> okay the problem is environments: should not be indented
<mbruzek> OK don't use my paste bin then just edit your own file
<ali1234> so now it works
<ali1234> right, fixed
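For reference, a minimal local-provider environments.yaml along the lines of what was worked out above might look like this (an illustrative sketch, not ali1234's exact file; the key point is that `environments:` must start in column one, or the parser fails with the YAML error quoted earlier):

```yaml
# environments.yaml -- the top-level keys must not be indented
default: local
environments:
  local:
    type: local
    default-series: trusty    # must be set explicitly, per the discussion above
    admin-secret: any-string  # just a password, used to log into juju-gui
```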
<mbruzek> ok
<mbruzek> juju deploy ubuntu
<ali1234> ERROR environment is not bootstrapped
<mbruzek> juju bootstrap -e local
<ali1234> uploading tools for series [trusty precise]
<ali1234> that's different
<ali1234> last time i did this it only said trusty
<ali1234> and then it failed hard when i tried to deploy something on precise
<ali1234> machine-1 was a trusty instance, machine-2 was precise and gave that "no tools" error
<ali1234> okay it's bootstrapped
<ali1234> deployed and pending
<mbruzek> ok
<mbruzek> ali1234, let me know if that works
<ali1234> i expect it will work now
<ali1234> tail: inotify resources exhausted
<ali1234> tail: inotify cannot be used, reverting to polling
<ali1234> but that's because of the previous "fun" i expect
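The "exceeded the maximum open files" and inotify messages above come from per-user kernel limits that a busy local/LXC provider can exhaust. A quick way to inspect the relevant knobs (generic Linux sysctls and ulimits, not a juju-specific fix):

```shell
# Inspect the limits a local-provider host can run into.
ulimit -n                                    # per-process open-file limit
cat /proc/sys/fs/inotify/max_user_instances  # inotify instances per user
cat /proc/sys/fs/inotify/max_user_watches    # inotify watches per user
```

Raising them is a matter of `sudo sysctl fs.inotify.max_user_instances=<n>` (value depends on your workload).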
<ali1234> i think the AU answer needs updating again
<jose> kentb: I *think* it's good now in terms of the license, up to the ~charmers now :)
<ali1234> because it doesn't specify to set default-series
<jose> mbruzek: taking a look at chamilo and how can I approach it now!
<kentb> jose, ok thanks!
<mbruzek> ali1234, did you get any errors
<ali1234> just that one about inotify on the all-machines.log
<ali1234> the instance hasn't come up yet
<ali1234> the inotify error is the last thing on the log too
<ali1234> and there is no cpu usage or network usage... it doesn't appear to be doing anything
<ali1234> oh hang on
<ali1234> http://paste.ubuntu.com/7655115/ <- machine-1.log
<mbruzek> ali1234, I also have the kvm support message
<ali1234> okay it's running now
<ali1234> i'm ssh'd into the machine-1
<ali1234> lxc-ls --fancy doesn't list any machines though
<mbruzek> you will not see any lxc on machine-1
<ali1234> no i mean on the host
<mbruzek> ali1234, Are you running or not?  I see that you can ssh to machine 1
<ali1234> the machine is running
<ali1234> it just doesn't show on lxc-ls
<ali1234> http://paste.ubuntu.com/7655165/
<ali1234> oh okay, it's cos i didn't sudo it
<ali1234> so that appears to be working fine
<ali1234> so now i'm going to attempt to "juju deploy solr" which is what broke it all last time
<ali1234> and it failed
<ali1234> agent-state-info: '(error: hook failed: "install")'
<ali1234> but at least it didn't completely ruin the whole environment this time
<ali1234> oh, it just failed to download the right source tarball (404 error)
<mbruzek> Right ali1234 it looks like your juju environment is "normal" now
<ali1234> yeah seems that way
<ali1234> unfortunately there's no working charm for solr
<ali1234> they all point to a 404 URL
<mbruzek> ali1234, your lxc-ls did not return any containers because you need sudo before it
<ali1234> yeah i figured that out already :)
<mbruzek> ali1234, you can open a bug on solr if the urls are incorrect
<ali1234> already has an open bug
<ali1234> https://bugs.launchpad.net/charms/+source/solr/+bug/1324641
<_mup_> Bug #1324641: install hook fails (download link inexistant) <solr (Juju Charms Collection):New> <https://launchpad.net/bugs/1324641>
<mbruzek> ali1234, OK
<ali1234> confirmed it :)
<ali1234> i did "juju destroy-service solr" and now it says it is dying... will it go away eventually?
<ali1234> i've bug-reported this experience: https://bugs.launchpad.net/juju-core/+bug/1330719
<_mup_> Bug #1330719: juju-local exceeded open file ulimit <juju-core:New> <https://launchpad.net/bugs/1330719>
<ali1234> mbruzek: thanks for the help
<mbruzek> ali1234, You are welcome, glad to get you working.
<mbruzek> ali1234, Sorry for all the trouble.
<AskUbuntu> Openstack Neutron - Cannot Access Tenant Router Gateway | http://askubuntu.com/q/484293
#juju 2014-06-17
<vila> Hi there !
<vila> I get the following error during a bootstrap:
<vila>     > ERROR bootstrap failed: cannot start bootstrap instance: cannot set up groups: failed to create a rule for the security group with id: <nil>
<vila> This is part of a job that ensures that the juju related security groups are *deleted* before calling 'juju bootstrap'
<vila>  http://162.213.34.117:8080/job/uci-engine-integration-test/34/
<vila> jam: Any idea about that 'security group with id: <nil>' during juju boostrap above ?
<vila> bootstrap even
<jam> vila: sinzui has been observing HP not actually giving us the number of security groups that we expect. (horizon says that we have 200, but starts giving us "no more security groups" after 10)
<jam> vila: so it at least sounds like an HP problem.
<jam> with id: <nil> at least sounds like we weren't able to create a security group
<jam> I don't know why you wouldn't get a "failed to create security group" rather than "failed to set attribute"
<vila> jam: hmm, HP limit... but 'nova secgroup-list' tells us we only have the 'default' group defined... Is there a place to check for some "hidden" ones ?
<Mosibi> Hi all.
<Mosibi> When doing a 'juju bootstrap -e openstack', the bootstrapped instance is trying to connect (apt) to internet sources.
<Mosibi> How/where can i configure that our own internal mirror is used for that?
<Mosibi> Okay, possible my question got lost in the middle of the netsplit :)
<Mosibi> again...
<Mosibi> Hi all.
<Mosibi> When doing a 'juju bootstrap -e openstack', the bootstrapped instance is trying to connect (apt) to internet sources.
<Mosibi> How/where can i configure that our own internal mirror is used for that?
<jamespage> gnuoy, stable updates for https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1330906
<_mup_> Bug #1330906: NSX support is broken in icehouse <nova-cloud-controller (Juju Charms Collection):In Progress by james-page> <nova-compute (Juju Charms Collection):In Progress by james-page> <quantum-gateway (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1330906>
<vila> jam: did I miss your answer to "hmm, HP limit... but 'nova secgroup-list' tells us we only have the 'default' group defined... Is there a place to check for some "hidden" ones ?" from the netsplit ?
<jam> vila: I don't *know* that it is the problem. But it does seem like more than coincidence that you're getting an error and we have a "nil" security group from HP around the same time that Curtis is saying something changed in HP and he's unable to get enough security groups.
<jam> It might be that HP APIs changed and we're now using them wrong.
<gnuoy> jamespage, the mps look good to me. Do you want me to approve and merge ?
<jamespage> gnuoy, just regression testing them against the neutron plugin
<jamespage> gnuoy, happy for you just to approve - I'll merge once tested
<gnuoy> ack
<vila> jam: ha, right. We've seen that for quite some time but it occurs only rarely (best kind of bug you know :-/), if Curtis starts seeing related issues, I'll try to get in touch with him (he's offline, vacations ?)
<jam> vila: I thought he was around, though US times so he wouldn't be awake right now
<vila> jam: ack and thanks
<jamespage> gnuoy, time for https://bugs.launchpad.net/charms/+source/nova-compute/+bug/1329251 ?
<_mup_> Bug #1329251: ssh-keyscan of hosts in MAAS environments is incomplete <nova-cloud-controller (Juju Charms Collection):In Progress> <nova-compute (Juju Charms Collection):In Progress> <https://launchpad.net/bugs/1329251>
<Mosibi> When doing a 'juju bootstrap -e openstack', the bootstrapped instance is trying to connect (apt) to internet sources.
<Mosibi> How/where can i configure that our own internal mirror is used for that?
<marcoceppi> tvansteenburgh1: hey
<tvansteenburgh> marcoceppi: hey
<marcoceppi> so, I'm not sure if making Python the default charm template is a good idea
<marcoceppi> I mean, it's a GREAT idea
<marcoceppi> but I want to expose more of the "oh, here's a Python template, you can use -t to do stuff"
<tvansteenburgh> yeah i just thought we wanted to encourage people towards making python charms instead of bash
<tvansteenburgh> besides, there will be plenty of chances for people to use -t once we get some more templates in there
<tvansteenburgh> e.g. ansible - i was going to add an ansible template once the initial release was done
<DaWoop> guys, the IRC link on the website redirects to the wrong channell
<DaWoop> #juj instead of #juju
<marcoceppi> DaWoop: really? what page?
<DaWoop> http://ubuntuonair.com/
<marcoceppi> DaWoop: oh, that's something else
<DaWoop> the "Join from yout IRS Client!"
<marcoceppi> jose: ^^
<DaWoop> IRC*
<jose> marcoceppi: still, it's going to be changed now, community team Q&A
<jose> no moar juju :(
<marcoceppi> DaWoop: thanks for the info
<DaWoop> np marcoceppi
<marcoceppi> tvansteenburgh: what I'm getting at is it would be great to say "Using default charm template: python, use -t to switch" when no -t flag is given
<jose> mbruzek: thanks for your review
<marcoceppi> tvansteenburgh: I've got a few other nitpicks, will post in merge
<marcoceppi> otherwise this is badass
<jose> marcoceppi: sent you an email
<tvansteenburgh> marcoceppi: sounds good, thanks for the review :)
<DaWoop> watcha guys up to?
<DaWoop> nothing, I see
<DaWoop> gotta love monospace fonts
<lazyPower> DaWoop: Charming up big data charms, rifling through the rev queue, working on charm tooling, presentations. You know - being a charmer. What are you up to?
<ZooZ> Question, i need to define Sound Fusion CD46xx soundcard , no voice in Ubuntu 12.10 , can anyone advise me from where to get installation package for sound driver ?
<ZooZ> It is CS46xx sound fusion driver, iam also new to ubuntu
<lazyPower> ZooZ: that would be better suited to #ubuntu
<DaWoop> lazyPower: I'm watching some Improv Everywhere stuff
<DaWoop> lazyPower: What are those Charming thing you're talking about? NEver heard of those!
<lazyPower> DaWoop: juju.ubuntu.com
<lazyPower> The most amazing data center orchestration tool on the planet
<nuclearbob> ahoy!  I'm having trouble with the local provider on trusty.  I get "agent-state-info: '(error: error executing "lxc-start": command get_state failed"
<DaWoop> alright, Juju is actually something, not a person
<DaWoop> good to know
<marcoceppi> tvansteenburgh: everything looks good, just looking to expose templates better. Discuss
<DaWoop> lazyPower: Linux and GUI-based application? What's this breakthrough?
<lazyPower> DaWoop: the future of devops
<DaWoop> lazyPower: first time I actually see a gui-based app in linux
<DaWoop> interesting stuff
<tvansteenburgh> marcoceppi: i like the log message approach
<marcoceppi> tvansteenburgh: cool, I figured the prompt might be a little annoying
<DaWoop> lazyPower: sorry for the retard question, but could we use Juju on a windows system?
<tvansteenburgh> marcoceppi: it seems annoying to me :)
<marcoceppi> DaWoop: there's work being done to do so, and we have a demo already of juju driving windows systems
<tvansteenburgh> marcoceppi: if you're okay with the log msg approach i'll add that real quick. no other nitpicks?
<marcoceppi> tvansteenburgh: that one about the messaging
<fpermana> what is juju?
<tvansteenburgh> marcoceppi: oh, in the diff comments, i missed that the first time
<DaWoop> fpermana: https://juju.ubuntu.com/
<DaWoop> lazyPower: Would it be possible to run it on, say, debian on a Raspberry Pi?
<DaWoop> and access the interface with the web app on windows?
<arosales> hatch: good morning
<hatch> morning arosales
<arosales> hatch: were you planning on putting http://manage.jujucharms.com/~hatch/precise/ghost in the charm store ?
<hatch> arosales yes there are just a few updates needed from the last review https://bugs.launchpad.net/charms/+bug/1229377
<_mup_> Bug #1229377: Charm needed: Ghost blogging platform <Juju Charms Collection:Incomplete by hatch> <https://launchpad.net/bugs/1229377>
<hatch> arosales here is the primary repo if you want to stay on the bleeding edge :) https://github.com/hatched/ghost-charm
 * arosales looks at bug
 * DaWoop can't stand the music they play at the radio
<arosales> ah you just updated the bug on june 11
<hatch> arosales yeah, it's only some pretty small changes from the last comment to get it into the store
<arosales> hatch: did you have any work planned in the near term?
<hatch> arosales my goal is to have it up for review again this week
<arosales> some tests would be a plus too
<arosales> hatch: good stuff :-)
<hatch> yeah tests aren't going to happen until someone other than me actually uses it :D
<arosales> hatch: I am looking to deploy my blog with it :-)
<hatch> haha darn
<hatch> well you can use the charm from GH right now - it deploys like a....charm....
<hatch> ;)
<hatch> arosales the only real change is the default port and some clarification in the documentation
<hatch> so you can easily do that yourself
<hatch> from the config options
<hatch> if you wanted to get it going sooner than it gets into the store
<arosales> hatch: good to know, and thanks for the work on it.
 * arosales also looks forward to seeing it in the store
<hatch> arosales np, I wanted to try and figure out how to do theming with the charm
<hatch> so we can use your site as a test bed :)
<marcoceppi> tvansteenburgh: let me know that change is pushed, I'll roll out 1.3.0 shortly after
<tvansteenburgh> marcoceppi: kk
<arosales> hatch: cool. I'll let you know how it goes.
<hatch> arosales basically I'm not entirely sure how theming should work with the charm....it obviously needs to apply the theme to every instance as it scales up - but the theme can't be stored in the charm because it will be different for everyone. We also don't want them to require an extra data store somewhere to store it
<hatch> arosales so I was hoping that I could use the juju storage somehow....?
<arosales> hatch: how about a config set from a github repo?
<hatch> arosales has this been solved with Discourse or Wordpress?
<arosales> hmmm, marcoceppi your thoughts on theming in wordpress ^
<hatch> well theming but also scaling a themed blog
<hatch> because each unit needs the same theme applied
<marcoceppi> hatch: wordpress lets you point to a wp-content repo
<marcoceppi> which is a git/bzr repo that has the contents of that directory (aka themes and plugins)
<hatch> marcoceppi right so that's an external url somewhere?
<marcoceppi> basically
<arosales> marcoceppi: does add-unit preserve the original/parent config?
<marcoceppi> arosales: it's a repo, so it just gets branched again
<hatch> that doesn't seem ideal - do charms not have access to Juju's storage instance?
<tvansteenburgh> marcoceppi: pushed
<marcoceppi> hatch: no
<arosales> marcoceppi: but I don't need to config-set again on the add-unit to keep the same theme
<marcoceppi> arosales: correct
<arosales> marcoceppi: thanks
<hatch> because we want to keep the charms fat (ghost is included in the charm) I feel that going out over the net again to request data from a source (which may change) is not the best possible story
<arosales> hatch: so I would just provide a config to pull a theme from a github repo.
<hatch> we should look into enabling storage support for charms
<hatch> arosales yeah that's going to be the only option now I guess....but not a very good one IMHO
<arosales> hatch:  I think the juju-core is working on that very feature this cycle.
<hatch> you could end up with two units with two different themes if you updated the repo then scaled heh
<hatch> arosales oh awesome
<arosales> hatch: ya if you updated the gitrepo post deployment you would need to config set on each service
<hatch> arosales the other story is that the user wants to customize the theme once deployed
<arosales> but on deployment with the same config the services should all have the same theme
<hatch> yeah I'm just saying that it's very likely that someone updates the repo then scales without realizing
<hatch> and now they get two themes depending on which unit they hit
<arosales> well the original one wouldn't update unless the user took a specific action
<hatch> exactly - so keeping the theme in juju would make it so that that could never happen
<hatch> because it would only pull down the theme once, or when you tell it to
<arosales> hatch: but an interesting point on passing down config to scaled services
<arosales> hatch: well I think config works the same way. Juju only pulls the theme on deploy or config set
<hatch> not if you scale
<hatch> if you scale it'll go out and pull whatever  is on the end of that url
<hatch> so if that content is changed you now have two units with different themes
<arosales> hatch: ah, I follow you
<hatch> yeah so atm a config option to a url will have to do, and I'm sure it'll be just fine for 80% of the use cases out there
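The config-option approach hatch settles on could look something like this in the charm's config.yaml (hypothetical: the option name `theme-repo` is made up here for illustration and is not part of the actual ghost charm):

```yaml
options:
  theme-repo:
    type: string
    default: ""
    description: >
      URL of a git repository containing the theme to install on every
      unit. Caveat from the discussion above: if the repo changes between
      deploy and add-unit, units can end up with different themes.
```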
<egoist> hi
<arosales> hatch: I think that would be helpful.
<arosales> hatch: thanks again for the work on ghost
<egoist> what hook is executed when you add unit to service
<egoist> ?
<hatch> no problem - and now that I have someone who's not me using it I will have to put some extra time on it :D
<arosales> jcastro: note http://www.oscon.com/oscon2014/public/schedule/detail/34243 at oscon
<jcastro> yep
<jcastro> arosales, talking to leslie now on irc actually
<arosales> jcastro: cool
<egoist> what hook is executed when you add unit to service?
<jose> egoist: all the hook sequence is executed in the new instance
<jose> that is install, config-changed, start
<egoist> jose: ok, but whgat with relation?
<egoist> what*
<jose> egoist: then it's install, config-changed, start, name-relation-joined, name-relation-changed
<egoist> jose: ok, but every instance have relation data about new unit?
<jose> can you give me an example? that way it'll be a bit easier to explain :)
<jose> yeah, all instances should have data about the new unit
<egoist> yeah
<egoist> today i deployed mongo to create a replica set, executing 'deploy ..... -n 3', and i want to add a new unit to this service. But this service was related to another instance. Problem is that the other instance doesn't have relation data about the new unit
<lazyPower> egoist: when you add a unit to service - there is a peer-relatioin hook that is fired on the units to do any peering
<lazyPower> egoist: so 'every unit having data about the new unit' is dependent on how the peering hook is built and what is sent over the wire via relation_set - in the case of mongodb, you get the full range of credentials you would expect
<lazyPower> IP:Port combo
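A rough sketch of the kind of hook lazyPower describes. The stub functions stand in for juju's real hook tools (`unit-get`, `relation-set`) so the sketch runs outside an agent; the address and port are made-up examples:

```shell
#!/bin/sh
# Stubs for the real juju hook tools, so this runs outside a charm.
# In a real hook, the juju agent provides unit-get and relation-set.
private_address() { echo 10.0.3.17; }       # stands in for: unit-get private-address
rset() { echo "relation-set $*"; }          # stands in for: relation-set

# A mongodb-style hook advertises its IP:port to the units on the
# other side of the relation:
rset hostname="$(private_address)" port=27017
```

Running it prints `relation-set hostname=10.0.3.17 port=27017`, i.e. the credentials the peer units would receive via relation-get.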
<egoist> oh ok i get it
<egoist> lazyPower: Thank you very much
<egoist> jose: Thank you very much
<egoist> for yours help
<lazyPower> egoist: there is a slight issue with deploying a  sharded / replicated server when relating to mongos
<lazyPower> i've got some WIP to fix it, but its far from complete, and is kind of a hack at the moment.
<lazyPower> just FYI if you're going to build a large production ready cluster.
<Egoist> lazyPower: but if i remove the relation and add a new unit to the replica, will this work, with the new unit as another secondary host?
<lazyPower> Egoist: so long as they share a replicaset key, they will be setup to replicate.
<Egoist> lazyPower: ok, get it thanks
<SIGILL> For my bachelor thesis I'd like to test 'virtual network embedding' algorithms in the wild, especially with latency in mind. Do you think there's room for improvement in juju regarding that? AFAIR juju just chooses "the first" machine when deploying a new service
<SIGILL> The goal would be to optimize the latency and/or amount of resources used by services. But I'm not sure if latency is really a problem in the real world, because all machines are in the same datacenter or something.
<lazyPower> SIGILL: sounds interesting. This would be better served in #juju-dev as thats where the core developers hang out.
<lazyPower> SIGILL: sorry i dont have a better answer for you - however if you're GO for hacking on the juju-core Go code to optimize a provider based on resources/latency, i'm positive someone would be happy to review the work
<SIGILL> lazyPower: ok, thanks - I asked the question a few days back in #juju-dev but got no answer
<lazyPower> SIGILL: Have you tried emailing the list?
<lazyPower> I'm pretty sure this is of interest to the core-developer base at large - considering its got implications of optimization on providers beyond what we have now. However they are pretty busy this sprint landing the roadmap we planned out a month ago. Sending it to the list will give the core team more time to respond than the catch/miss on IRC
<SIGILL> not yet, that'd be my next step. i prefer IRC though
<lazyPower> SIGILL: thing is, there's a pretty large overlap in timezones if you're US based. The project lead is a Kiwi and around when I'm headed to bed. So thats why I think you'll have better luck with the mailing list. Sorry you haven't gotten an answer yet though
<lazyPower> its frustrating when trying to gain traction on interest
 * lazyPower feels your frustration
<lazyPower> been there, done that.
<SIGILL> Nice, thanks for your help. I always hesitate to bother devs with my questions
<lazyPower> Nah, we're a friendly lot
<SIGILL> Heh, CEST it is...
<lazyPower> bother away
<lazyPower> SIGILL: juju@lists.ubuntu.com for reference.
<SIGILL> wonderful - I'd be so glad to combine thesis, Go and juju
<SIGILL> Got it
<lazyPower> also, let me know when you start work. I'll blog about your progress
<SIGILL> Oh nice - will do. If everything goes well that'll be in a few weeks
 * lazyPower doffs hat
<lazyPower> best of luck to you SIGILL, ping if you need anything
<tvansteenburgh> stub: any tips on getting `make test` to work on the postgresql charm?
<_mup_> Bug #1331151 was filed: 'juju destroy-environment' sometimes errors <pyjuju:New> <https://launchpad.net/bugs/1331151>
<lazyPower> jamespage: ping
<ahasenack> hi, I tried to deploy the hadoop cluster from the charm store, the bundle,
<ahasenack> and got a notification saying
<ahasenack> "Failed to load charm details.
<ahasenack> Charm API error of type: no_such_charm "
<ahasenack> and that's it
<ahasenack> it doesn't tell me which charm it was looking for
<ahasenack> oh, in juju-gui it's where I got that note
<ahasenack> any clues?
<achiang> what's the best practice for organizing source code + charm/yaml configs? specifically, a nodejs app. should i just stick the yaml into same dir as my node app?
<achiang> https://github.com/achiang/openmotion is the repo, if it helps...
<tvansteenburgh> achiang: one idea would be to make the charm deploy your app from github
<tvansteenburgh> achiang: i did something similar with this meteor charm: https://github.com/tvansteenburgh/meteor-charm
<achiang> tvansteenburgh: yeah, that's the plan. but the charm config still needs to live somewhere, so my question is where?
<tvansteenburgh> the config and metadata yaml files should live in the root of the charm
<achiang> tvansteenburgh: thanks, i'll take a look at your example
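For reference, the layout tvansteenburgh describes puts the yaml files at the charm root, with hooks alongside (a minimal sketch; the names under `hooks/` depend on what the charm implements):

```
mycharm/
├── metadata.yaml   # charm name, summary, provides/requires
├── config.yaml     # user-settable options
└── hooks/
    ├── install     # e.g. clone the app from github here
    └── start
```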
<breeze411> Hello newbie here, how do i set the proxy for juju ? juju set-environment http-proxy=http://10.0.44.15:80  gives me error ERROR environment is not bootstrapped
<tvansteenburgh> breeze411: `juju bootstrap` first
<lazyPower> ahasenack: actually yes
<lazyPower> ahasenack: i may have broken the bundle when i was resolving a related issue with hadoop yesterday
<lazyPower> the lp:charms/hadoop now points to the trusty version of the charm.
<breeze411> hmm the quick start doc says juju --sync-tools and juju bootstrap ... This is a totally new juju env
<ahasenack> lazyPower: debugged that error and it's an issue in juju-gui
<ahasenack> lazyPower: I'm hitting other errors, though
<lazyPower> ahasenack: Which bundle?
<ahasenack> one I can workaround, it's a hostname call that failed
<ahasenack> lazyPower: hadoop-cluster
<lazyPower> ack, let me run a deploy on it and see what happens.
<ahasenack> I don't have proper dns in this cloud, and the install hook at some point calls hostname -f
<ahasenack> and that fails
<ahasenack> so let me fix that by hacking /etc/hosts and then I can get to the second error about which I have no clue
<lazyPower> ahasenack: which version of the hadoop charm as well? the precise version deploying hadoop 1 or the trusty version deploying 2.2?
<ahasenack> lazyPower: precise
<ahasenack> whatever is in the store, I don't know what it is deploying precisely
<ahasenack> it deployed hive, hadoop-master, hadoop-slavecluster and mysql
<ahasenack> and only mysql is green
<breeze411> so to be clear first do juju bootstrap ? and then juju ync-tools ? sorry for the basic questions
<ahasenack> hive has that hostname error which I'm fixing and I'll shortly run a retry
<lazyPower> ahasenack: ok. i just fired off a quickstart against amazon
<tvansteenburgh> breeze411: sync-tools followed by bootstrap is correct. i was just pointing out that you must bootstrap before using set-env
<breeze411> Oh i see , thank you. So how do i set the proxy for sync-tools ? it doesnt seem to use the shell env set with http_proxy
<ahasenack> lazyPower: hive is green, it was just the hostname -f error
<ahasenack> previously I had this, but it didn't happen again:
<ahasenack> 2014-06-17 19:54:59 INFO config-changed cp: cannot stat `/etc/hive/conf.dist': No such file or directory
<ahasenack> now let me check the others, what's going on there
<ahasenack> looks like it's hostname too
<ahasenack> 2014-06-17 20:36:38 INFO install hostname: Name or service not known
<lazyPower> ahasenack: wrt /etc/hive/conf.dist - it appears that its not creating it on the first run so there is a race condition in the charm somewhere if it resolves on second run
<ahasenack> lazyPower: ok
<lazyPower> all of these issues are fair game for bugs against the charm however.
<ahasenack> lazyPower: got this on hadoop-master now, after a resolved --retry
<ahasenack> 2014-06-17 21:04:31 INFO config-changed cp: cannot stat `/etc/hadoop/conf.empty': No such file or directory
<ahasenack> should I just issue another resolved --retry?
<lazyPower> weird
<ahasenack> this is the whole unit log:
<ahasenack> http://pastebin.ubuntu.com/7660415/
<ahasenack> at 2014-06-17 20:35:50 was the hostname error
<ahasenack> which I fixed and ran a resolved --retry
<ahasenack> that explains the time gap
<ahasenack> between lines 19 and 20
<tvansteenburgh> breeze411: have you tried https_proxy? i think sync-tools uses https
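Besides the shell environment variables, juju-core of this era (1.17+) also grew proxy keys in environments.yaml, which cover the instances juju starts as well. A hedged sketch, reusing breeze411's proxy address; verify the key names against your juju release:

```yaml
environments:
  local:
    type: local
    # proxy settings propagated to started instances (juju-core 1.17+):
    http-proxy: http://10.0.44.15:80
    https-proxy: http://10.0.44.15:80
    apt-http-proxy: http://10.0.44.15:80  # point apt at an internal proxy
```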
<lazyPower> ahasenack: ok i'm still pending the bundle deployment
<lazyPower> let me see if mine run into the same issues so we have common ground for starting the debug process
<ahasenack> install_base_packages() didn't even run
<lazyPower> ahasenack: it was skipped?
<ahasenack> hostname -f is called in configure_hosts()
<ahasenack> that was the call that failed before
<ahasenack> I fixed /etc/hosts so hostname -f doesn't fail
<ahasenack> ran resolved --retry
<ahasenack> and somehow it didn't retry, it didn't run that again...
 * lazyPower blinks
<lazyPower> somethings not right
<lazyPower> if it skipped an entire hook or logic path in the charm
<ahasenack> the install hook runs configure_hosts, configure_sources, install_base_packages, install_optional_packages, configure_hadoop, configure_tmp_dir_perms
<ahasenack> hostname -f is in configure_hosts
<ahasenack> it failed
<ahasenack> see my previous pastebin
<ahasenack> then I ran resolved --retry
<ahasenack> and the first log entry in the pastebin after the failed hostname is "Installing optional packages..."
<lazyPower> ahasenack: everything here just greened on a public cloud. seems that the hostname woes are def. a root of the issue
<ahasenack> and that comes from install_optional_packages
<lazyPower> ahasenack:  i have a prelim DNS charm that may help some of this in your environment.
<lazyPower> ah wait, no - it wont do the relations until the service is installed. so its a moot point
<ahasenack> interesting that configure_hosts, the first thing it calls, tries to fixup /etc/hosts
<ahasenack> the whole hook is run with set -e
<ahasenack> I ran the install hook via juju run now
<ahasenack> I'll debug this some other time, introduce a failure in an install hook, let it fail, fix the failure, run resolved --retry and see where juju picks it off
<ahasenack> that was very weird that it didn't try the install hook again
<lazyPower> yeah
<lazyPower> resolved -r should re-run the failed hoo
<lazyPower> *hook
<lazyPower> i call shenanigans
<ahasenack> yep
<ahasenack> well, I'm on 1.19.x
<ahasenack> anything could happen :)
 * ahasenack runs terasort
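The `set -e` behaviour ahasenack relies on can be demonstrated standalone. In the sketch below, `false` stands in for the failing `hostname -f`: under `set -e` the first failing command aborts the script, so later steps never run, and a retried hook is expected to start again from the top (which is what made the skipped steps above so strange):

```shell
#!/bin/sh
# Run the "hook" in a subshell with -e; the step after the failure is
# never reached, and the subshell exits non-zero (hence the || true).
out=$( sh -e -c '
  echo configure_hosts
  false                        # stands in for "hostname -f" failing
  echo install_base_packages   # never reached
' ) || true
echo "ran:$out"
```

This prints `ran:configure_hosts` only, mirroring how the real install hook stopped at configure_hosts.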
<leftyfb> Any idea why the nagios -> wordpress relationship only monitors ssh? Shouldn't it be monitoring nginx as well as load, users, disk space, total processes, etc?
<leftyfb> same with monitoring a mysql charm, it should be monitoring the mysql daemon
<leftyfb> I'm going to guess it's because we don't have snmp or the agent installed on the individual charms ... but I would assume if you build a relationship between the charms, it should install and configure the necessary packages (snmp)
<lazyPower> leftyfb: it leverages NRPE to make those connections.
<lazyPower> i'm not positive about nginx monitoring, etc. but the NRPE charm does populate more than just SSH
<lazyPower> leftyfb: those are however excellent feature requests to file against the NAGIOS charm.
<leftyfb> lazyPower: I just deployed nagios connected to a wordpress bundle as well as monitoring the juju-gui charm itself. Besides localhost, the other charms are only monitoring SSH
<lazyPower> leftyfb: in its current incarnation, you use NRPE to get those metrics.
<leftyfb> lazyPower: so that's something I have to configure manually
<lazyPower> if you want to tweak it from the default setup, correct.
<leftyfb> "Usage
<leftyfb> This charm is designed to be used with other charms. In order to monitor anything in your juju environment for working PING and SSH, just relate the services to this service. In this example we deploy a central monitoring instance, mediawiki, a database, and then monitor them with Nagios"
<leftyfb> so the default is to only monitor ping and ssh for the charms in its relationships
<lazyPower> It appears that way. You can specify additional configuration through the config parameter on nagios - or by using NRPE + config.
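For anyone wiring this up by hand: the extra host-level checks being asked about usually mean an NRPE command definition on the monitored unit plus a matching Nagios service check. The snippet below is a generic illustration of that shape, not configuration shipped by these charms, and the paths and host name are examples:

```
# On the monitored unit, e.g. /etc/nagios/nrpe.d/extra.cfg
command[check_load]=/usr/lib/nagios/plugins/check_load -w 5,4,3 -c 10,8,6
command[check_root_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /

# On the nagios host, a service definition that calls the unit's NRPE agent
define service {
    use                   generic-service
    host_name             wordpress-0
    service_description   Load
    check_command         check_nrpe!check_load
}
```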
<jamespage> lazyPower, evening
<lazyPower> jamespage: greetings. Did you have a chance to read the email i sent you earlier?
<jamespage> lazyPower, yes - trusty ntp charm right?
<lazyPower> indeed
<lazyPower> it's half promulgated. looks like the branch alias was defined before the repository was fully promulgated. A symptom of not specifying -s when promulgating a charm that has an existing precise branch.
<jamespage> lazyPower, I think the ganglia* ones are in the same state
<lazyPower> jamespage: i ran into this when i botched the hadoop promulgation to trusty
<jamespage> lazyPower, I pushed the branch but did not promulgate intentionally as I'd not added tests
<jamespage> lazyPower, if that's not a hard requirement then all three of those charms work just fine on trusty :-)
<lazyPower> jamespage: yeah, we *really* need those tests
<jamespage> lazyPower, yeah - I know
<lazyPower> even if it's just stand up, validate services are running - with some follow-up. I don't want to set that precedent, but tests are a required mechanism to make it into trusty
<lazyPower> :|
<lazyPower> I don't like being the bearer of bad news
<lazyPower> ok, i just pinged marco on this - and our bare minimum is unit tests. if you can drop in some units on that puppy you're g2g
<lazyPower> jamespage: thanks for the follow up.
<jamespage> lazyPower, sure
<lazyPower> jamespage: for now i'm going to unpromulgate the trusty version of the NTP charm so it's not interfering with the API requests. It's causing all kinds of fun artifacting with the GUI - https://bugs.launchpad.net/juju-gui/+bug/1331202
<_mup_> Bug #1331202: Incomplete Charm data causes artifacting in the GUI <juju-gui:New> <https://launchpad.net/bugs/1331202>
<jamespage> lazyPower, please do
<jamespage> lazyPower, interestingly the charm store was ingesting that fine - you just had to address it like cs:~charmers/trusty/ntp
<jamespage> but that would appear no longer true
<lazyPower> jamespage: that's a workaround to a larger problem
<lazyPower> during the promulgation command, if you don't specify the series when you promulgate it - it moves the branch tip before it promulgates the series branch
<lazyPower> which in turn causes the owner to return as 'charms' - because promulgate fetches that branch tip instead of the lp:~charmers/charms/foo/bar
<marcoceppi> lazyPower: can you open a bug against charm-tools that promulgate will require a series going forward and not assume precise?
<lazyPower> it's a hairy thing that i only half understand.
<lazyPower> marcoceppi: https://bugs.launchpad.net/charm-tools/+bug/1331211
<_mup_> Bug #1331211: require a series flag on promulgate <papercut> <Juju Charm Tools:New> <https://launchpad.net/bugs/1331211>
<marcoceppi> lazyPower: thanks, I'll get that in the 1.3.0 release
#juju 2014-06-18
<Marek1211> Hi I was just wondering about scaling services. For example when we scale a wordpress service for load balancing by adding units...does it automatically manage replication of data like pictures from our main wordpress service to the others? or how does it work?
<jose> Marek1211: yes, as they scale horizontally and use the same database
<jose> all that data is stored in the db
<jose> I have to leave, but if you have questions you can also email juju@lists.ubuntu.com
<Mosibi> When doing a 'juju bootstrap -e openstack', the bootstrapped instance is trying to connect (apt) to internet sources.
<Mosibi> How/where can i configure that our own internal mirror is used for that?
<jam> Mosibi: there is a configuration item "apt-http-proxy" and "apt-https-proxy", but I don't know that there is a way to tell it to use a different mirror entirely
<Mosibi> jam: there are indeed several proxy config items, and setting http-proxy works, but we will end up with a totally isolated (private) cloud. So i must have the possibility to point at our own mirror.
<Mosibi> jam: but thx for looking and answering!
<lazyPower> Mosibi: you may have to stuff some of that info to rewrite the proxies in cloud-init. I would think we'd have an easier way to do it but that's all i'm coming up with
<lazyPower> and i'm far from an expert
<lazyPower> ..on that subject.
<Mosibi> lazyPower: how does the integration with juju and cloud-init work?
<jam> Tim Penhey (thumper) was the one who did the original proxy setting stuff. He would be the one to ask why there wasn't just an explicit "use this mirror" instead.
<lazyPower> Mosibi: cloud-init is isolated from juju. What you would be accomplishing with cloud-init is adding the proxy configuration logic so that info is written on the host's first boot.
<Mosibi> Can i put/include some cloud-init config file that i can include with juju bootstrap?
<Mosibi> Or do ik have to hack it myself in juju-core?
<Mosibi> ik=i
<lazyPower> https://help.ubuntu.com/community/CloudInit
<lazyPower> Mosibi: i'd do some light reading on cloud-init first to verify this is feasible.
<lazyPower> Mosibi: and here's the openstack docs on using user-data with cloud-init: http://docs.openstack.org/user-guide/content/user-data.html
<Mosibi> lazyPower: user-data can be used to solve my problem, but there are also specific apt hooks
<Mosibi> but how do i include that with a bootstrap?
<lazyPower> Mosibi: ah, that i'm not real familiar with and would redirect to Thumper, or the mailing list.
<Mosibi> ack
<Mosibi> thx!
<Mosibi> Thumper is not online on IRC?
<lazyPower> he's not around atm. he may be out today - i'm not positive.
<Mosibi> lazyPower: thx... i will hang around :)
<jam> lazyPower: thumper is in AU so it is after midnight for him right now.
<jam> Mosibi: I think the mailing list is probably your best bet
<lazyPower> jam: i thought he was a Kiwi
<lazyPower> i guess that's close enough to be considered AU
<jam> lazyPower: you're right
<jam> AU is my shorthand for way in the >+10 Timezones, but I do know he is in NZ
<lazyPower> I just remember that being important to distinguish in social situations... don't ask me why.
 * lazyPower is feeling kind of loopy being up this early
<Mosibi> lazyPower: jam: we have a 'go' freak/fan/expert in our team. I am going to ask him to look at the juju-core code. Maybe he can implement it..
<lazyPower> that would be pretty excellent. We love community contributions
<dpb1> hi lazyPower, sorry I missed our meeting on monday. I scheduled a last-minute vacation and forgot to decline my appointments.  :(
<hazmat> Mosibi, user data supports mirror config, it's a matter of wiring that support through core, as an env config option used when rendering cloud-init userdata.
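What Mosibi is after maps to cloud-init's own apt mirror keys in user-data. A sketch of that user-data follows, using stock cloud-init syntax of the era (juju itself did not expose a mirror option at this point, which is the wiring hazmat describes); the hostnames are examples:

```yaml
#cloud-config
# Point apt at an internal mirror instead of archive.ubuntu.com.
apt_mirror: http://mirror.internal.example/ubuntu/
# Or, if only a proxy is needed:
apt_proxy: http://squid.internal.example:3128
```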
<lazyPower> dpb1: no worries. I realized it was better suited to follow up with the charm maintainer than the landscape team.
<lazyPower> so i actually canceled it
<dpb1> lazyPower: cool.  good to hear
<lazyPower> dpb1: thanks for putting up with my overzealous templar charmer persona. He has been sacked and replaced with a more astute persona befitting a charmer.
<dpb1> lazyPower: give the old chap my regards if you see him again. :)
<lazyPower> Duly noted.
<jrwren> Mosibi: can you let me know what you find on that front? I've been curious about it for a while.
<allomov> hey, all! pardon me for ignorance, but what is this thing in juju-gui ? https://www.evernote.com/shard/s108/sh/bd43cfbf-7767-42b8-b1d1-9f874e8d882e/8582ddd01135755bd944cac368aef624
<lazyPower> allomov: greetings. That's an indicator for how many services are attached to the subordinate charm.
<Guest2308> Hi guys, I have a problem in bootstrapping juju environment with openstack. It creates the VM and install all software, but after apt-get installation the command returns "no instance found" error. Do you have any idea?
<allomov> lazyPower: thank you for the answer. still not sure I've got it right. is it a count of external services or a count of "green relations"? could you tell me where I can read about it ?
<lazyPower> allomov: https://juju.ubuntu.com/docs/authors-subordinate-services.html
<allomov> Guest2308: can you try to ssh to this instance ?
<allomov> lazyPower: great. thank you one more time
<Guest2308> allomov: yes I have ssh access to instance
<Guest2308> 2014-06-18 14:16:27 INFO juju.cmd supercommand.go:302 running juju-1.18.4-precise-amd64 [gc] 2014-06-18 14:16:27 DEBUG juju.agent agent.go:384 read agent config, format "1.18" 2014-06-18 14:16:27 INFO juju.provider.openstack provider.go:202 opening environment "openstack" 2014-06-18 14:17:27 ERROR juju.cmd supercommand.go:305 no instances found 2014-06-18 14:17:29 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: rc: 1
<dpb1> tvansteenburgh: hey there -- I pushed an update: https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102  sorry about that!
<leftyfb> where can I get help to understand how the nagios/NRPE charms work? Specifically, as an example, with an apache charm being monitored, why isn't the nagios charm monitoring port 80 and/or why isn't the NRPE charm monitoring the apache2 service locally?
<automatemecolema> hypothetical question: what happens if I lose access to my zookeeper? I'm really afraid to use juju for enterprise consumption because of so any unknowns
<tvansteenburgh> dpb1: thanks! i'll have a look after lunch
<marcoceppi> automatemecolema: we haven't used zookeeper in over a year. I assume you mean the bootstrap node, but I have to ask: What version of juju are you using?
<automatemecolema> yes I mean the bootstrap node
<automatemecolema> the latest baked into 14.04 now
<automatemecolema> Also, does anyone have a good place to point me on development around building charms with puppet?
<automatemecolema> Something I've noticed using the trusty juju-gui charm is dragging and dropping your .yaml files into the canvas doesn't work so great
<automatemecolema> Works fine with precise
<automatemecolema> Or do I have it wrong, and my environment has to be running in quickstart bootstrap to get that bundled functionality
<dpb1> Can I mix and match local provider lxc and kvm containers?
<automatemecolema> dpb1 I don't see why not
<automatemecolema> You would specify to local environments
<automatemecolema> two*
<dpb1> automatemecolema: ok, good idea, thx
 * dpb1 searches for special foo to get that work (something with an alternate port for the mongodb process)
<automatemecolema> binford2k so you're saying you feel like there are major concerns with running foreman in the area of performance and reliability? That doesn't really answer the question though, because if you can write something better, then why doesn't something better exist?
<marcoceppi> tvansteenburgh: hey, I've got a merge on its way for charm-tools, last bug before release, will you be around to review in about 20 mins?
<pmatulis> i did 'juju destroy-environment local' and now i would like to start fresh.  but neither 'juju bootstrap' ('environment is already bootstrapped') nor 'juju deploy wordpress' ('environment is no longer alive') works
<sparkiegeek> pmatulis: what does juju status show?
<pmatulis> sparkiegeek: it does show the previous machines (lxc containers).  weird
<sparkiegeek> pmatulis: be brutal and try "juju destroy-environment --force local"
<marcoceppi> tvansteenburgh: https://code.launchpad.net/~marcoceppi/charm-tools/require-series/+merge/223620
<pmatulis> sparkiegeek: actually, i think i messed up my lxc machines.  how do i really start fresh with juju?  can i remove ~/.juju and do the bootstrap thing?
<automatemecolema> I probably wouldn't remove juju
<sparkiegeek> pmatulis: IIRC there's a "kill" plugin
<sparkiegeek> pmatulis: https://github.com/juju/plugins/
<sparkiegeek> try that first :)
<automatemecolema> did you try the force parameter to kill the environment?
<pmatulis> automatemecolema: yeah, that's what i did originally
<automatemecolema> but juju status still shows the environment is still around?
<pmatulis> yeah
<pmatulis> can't i just remove some file(s) under ~/.juju ?
<pmatulis> and then bootstrap thingy?
<automatemecolema> you can remove the environments.yaml file and the file inside the environments directory .env
<automatemecolema> then do a juju init
<automatemecolema> and reconfigure your environments file and try another bootstrap
<avoine> pmatulis: check this out: http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider
<pmatulis> avoine: wow ok, that's a lot of stuff but it looks like what i'm after
<automatemecolema> avoine nice find
<tvansteenburgh> marcoceppi: ack, will review this afternoon
<arosales> pmatulis: I think you can safely keep your environments.yaml around. It's the ~/.juju/environments/local.jenv file that has tripped me up.
<pmatulis> arosales: i removed it but still no dice
<pmatulis> hmm, almost there:
<pmatulis> juju bootstrap
<pmatulis> ERROR cannot use 37017 as state port, already in use
<arosales> pmatulis: if your home directory is encrypted you will need to make sure juju is using a mount point outside your home dir.
<pmatulis> arosales: no encryption
<arosales> ok
<arosales> pmatulis: sounds like mongo is still running
<pmatulis> arosales: killed it, but now back to square one:
<pmatulis> juju bootstrap
<pmatulis> Bootstrap failed, destroying environment
<pmatulis> ERROR environment is already bootstrapped
<arosales> after you kill mongo you'll need to clean up once more from your previous bootstrap
<arosales> http://blog.naydenov.net/2014/03/remove-juju-local-environment-cleanly/ also had some nice script suggestions from the link automatemecolema provided
<arosales> pmatulis: so consider killing mongo in the same swoop when cleaning up lxc and juju
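A minimal sketch of that cleanup sequence, assuming juju's default paths (it is destructive, so read it before running; the exact files vary by juju version and the linked posts cover the fuller lxc cleanup):

```shell
# Rough local-provider cleanup, per the steps discussed above:
# stop the juju-owned mongod (frees state port 37017), then drop
# the stale environment state while keeping environments.yaml.
cleanup_local_env() {
    dir="${1:-$HOME/.juju}"
    pkill -f 'mongod.*juju' 2>/dev/null || true
    rm -f  "$dir/environments/local.jenv"
    rm -rf "$dir/local"
    echo "cleaned $dir"
}

cleanup_local_env    # then re-run: juju bootstrap
```

Passing a different directory as the first argument lets you point it at a non-default JUJU_HOME.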
<pmatulis> grrr
<pmatulis> bootstrap worked but the new wordpress/mysql setup failed:
<pmatulis> http://paste.ubuntu.com/7664967/
<pmatulis> wow i actually learned something.  i looked at the logs ~/.juju/local/logs/unit-wordpress-0.log and found & corrected the error
<pmatulis> all good now, at least according to 'juju status'
<tvansteenburgh> marcoceppi: require-series patch back to you
<marcoceppi> tvansteenburgh: thanks, I was about to write tests, but got distracted
<jose> hey mbruzek, are there any chances you may check the owncloud MP again today?
<pmatulis> sudo find / -name charm | grep wordpress
<pmatulis> /vol1/lxc_images/ubuntu-local-machine-1/rootfs/var/lib/juju/agents/unit-wordpress-0/charm
<pmatulis> where do i find the actual charms?  after deploying wordpress & mysql i found the path above
<pmatulis> but nothing for mysql
<tvansteenburgh> dpb1: i don't see a new commit on apache2/avoid-regen-cert
<dpb1> tvansteenburgh: pushing now.  D'oh!
<tvansteenburgh> :)
<dpb1> r57
<tvansteenburgh> cool, thanks!
<dpb1> tvansteenburgh: I also want to make the same change against trusty, should that be a separate mp?
<tvansteenburgh> eh, i think so? lazyPower?
<tvansteenburgh> or marcoceppi, mbruzek? (see dpb1's questions above) i assume the answer is yes...?
<mbruzek> dpb1 yes
<marcoceppi> dpb1: yes
<marcoceppi> unfortunately
<dpb1> all: thanks
<jcastro> marcoceppi, https://github.com/juju/docs/commit/6bdac65670c9fcedfa5421b615ee08cc0c9d9ccf
<jcastro> what does this metadata field do in the markdown?
<marcoceppi> what?
<marcoceppi> rather, can you re-ask your question jcastro?
<jcastro> see how he added
<jcastro> + Title: blah blah
<jcastro> in the markdown
<jcastro> what does that mean?
<marcoceppi> no idea, there's no plugin for that to my knowledge
<marcoceppi> It's going to render like crap in the live docs
<aquarius> jose, ping
<jose> aquarius: pong
<jcastro> marcoceppi, maybe he added stuff?
<jose> looks like you read that post :)
<aquarius> jose, you wanted my help to charm soonsnap?
<marcoceppi> jcastro: I don't see it anywhere
<aquarius> I did. :)
<jose> aquarius: yeah, check http://ec2-54-85-96-127.compute-1.amazonaws.com/
<marcoceppi> we already track metadata about the page in the navigation
<jose> aquarius: when I click the buttons nothing happened, I just downloaded apache and clones
<jose> cloned*
<aquarius> "ReferenceError: io is not defined" in the console.
<aquarius> that looks relevant.
<jcastro> marcoceppi, commits on truck without review? tsk tsk.
<jose> aquarius: if you want, I can give you ssh access to the box so you can take a look
<marcoceppi> we should get a bot lander so no one is able to commit directly to trunk
<aquarius> you don't have socket.io installed, by the look of it
<marcoceppi> jcastro: like what core is doing
<jose> hmm, /me checks
<aquarius> jose, did you npm install?
<jcastro> marcoceppi, https://github.com/juju/docs/commit/d8504af882455766389ccc095a0551ead78d13b4
<jose> aquarius: not actually
<jcastro> marcoceppi, yeah, we should be doing that anyway
<aquarius> that's likely to be a reasonable part of the problem, then
<marcoceppi> jcastro: looks like he made a new plugin
<aquarius> jose, note that it's a node application. It's not a pure client-side app. So the server is run with "node app.js", as you'll see from Procfile
<jose> ah, ok
<aquarius> (or from "npm start", as defined in package.json)
<jcastro> o/ aq!
<aquarius> It can't be pure client-side until everyone supports webrtc. :)
<aquarius> heya jcastro!
<jose> npm ERR! message failed to fetch from registry: socket.io, weird
<jose> I'm just creating a brand new machine
<jose> aquarius: any idea on why I may get http://paste.ubuntu.com/7665293/ ?
<jose> fresh install, just installed npm
<aquarius> erm
<aquarius> should work
<aquarius> everybody uses express.
<aquarius> might just be npm weirdness
<aquarius> ah
<aquarius> you're using an ancient node
<aquarius> use a newer one.
<jose> hmm, those are the ones from the repos, I may need to get one from a PPA
<aquarius> indeed, yeah
<jose> do you think https://launchpad.net/~chris-lea/+archive/node.js/ is recommended?
<aquarius> trusty has a modernish node
<aquarius> if you're on something old, then use chris lea's ppa, indeed
<aquarius> that's what I use
<jose> awesome, thanks
<jose> I'm on precise, so... :P
<jose> aquarius: awesome, looks like I got it working. thanks! :)
<aquarius> excellent.
<jose> I hope you'll see that charm on the store soon :)
<jose> aquarius: want it to be listed as soonsnap or pubphoto?
<aquarius> don't know
<aquarius> pubphoto was its original codename
<jose> well, it's your app
<jose> I can do whichever you like
<aquarius> soonsnap is what the live real version is called
<aquarius> maybe call it pubphoto
<aquarius> so it doesn't conflict :)
<jose> ok, pubphoto then
<jose> I'll make sure to edit the index.html in order for it to display pubphoto and not soonsnap :)
<jcastro> jose, I need to pick a new charm school schedule
<jcastro> should I just put them directly on the onair cal?
<jose> jcastro: you just let me know and it'll be done
<jose> you can do that too
<jcastro> I can add them
<jose> bear in mind that there may be cases when I won't be available to host
<jcastro> that's fine
<jcastro> I can host
<jcastro> I like to work too. :)
<jose> :)
<jose> remember to let me know when you can have that call
<lifeless> jcastro: you like to work it
<jcastro> I need to step it up before jose ends up doing everything, heh
<jose> aquarius: also, you have your analytics code on the branch
<aquarius> jose, feel free to take that out if you want
<jose> cool, thanks
<wrale_> can i deploy ceph osd and nova compute charms on the same node?  I have 67 nodes with two 3tb disks in each..  i'd like all nodes to be both ceph storage node and hypervisor for openstack
<wrale_> *67 compute nodes
<jose> hey guys, does anyone know how I can run a node.js app on a port lower than 1024 without being sudo/root?
<jose> wrale_: if you notice, there's a machine number for each node. just do 'deploy charmname --to #' where # is the machine number
<jose> not sure if it will work for openstack, though, I haven't played with openstack :)
<wrale_> thanks jose.. i'll try that
<jose> or you can do lxc:# instead of just # and it'll deploy it in an LXC container inside that machine
<jose> wrale_: ^
<jose> let me know how it went
<wrale_> will do.. once my maas node installs and the rest comes together.. :)
<wrale_> using 14.04 LTS now..
<wrale_> first time
<jose> awesome!
<wrale_> maas on 12.04 hated me ..lol .
<jose> if you have any troubles with maas, people in #maas may be able to help you
<wrale_> they ignored me :)  but it's cool now, i hope
<jrwren> jose: if you can use your own nodejs binary, CAP_NET_BIND_SERVICE might work.
<jose> jrwren: figured it out, looks like setcap will be useful
<jose> thanks :)
<achiang> anyone know how to work around https://bugs.launchpad.net/charms/+source/mongodb/+bug/1312389
<_mup_> Bug #1312389: Need to make trusty version of the charm available in the charm store <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1312389>
<achiang> i suppose i should just type juju deploy cs:precise/mongodb instead
<achiang> switching topics... i had a bug in another charm i wrote. i think i've fixed the bug... but how do i redeploy the charm to test it?
<tvansteenburgh> achiang: juju destroy-service mycharm && juju deploy mycharm
<achiang> tvansteenburgh: ok. that's kinda what i figured
<achiang> thanks
<tvansteenburgh> sure thing
<jose> achiang: make sure to specify it's local and set the repository location
<achiang> jose: no, that wasn't it. i had to issue juju resolved <unit> multiple times
<jose> you mentioned redeployment of a charm you fixed, you need to specify those to deploy a local charm
<achiang> jose: i was already deploying a local charm though
<jose> ah, ok
<achiang> juju destroy-service doesn't seem to have removed the machine from my pool
<achiang> i guess i need to do that manually
<jose> correct, it just destroys the service, but not the machine
<jose> terminate-machine #
<achiang> that seems silly
<achiang> if i remove a service, juju should know what machine it's assigned to. if no other services are on that machine, it should remove the machine from the pool automatically
<jose> what in the manual provider?
<tvansteenburgh> but you might want to reuse it for something else
<tvansteenburgh> it's faster to deploy a service to an existing machine
<achiang> tvansteenburgh: well... when i did the 2nd juju deploy mycharm, it created a new machine instead of reusing the existing machine
<tvansteenburgh> yeah, that's the default
<jose> you could've specified --to # and deployed it to an existing machine, though :)
<achiang> of course the knobs are there -- i am simply relating my onramp experience with juju as a new user, who has heard many magical things about it, and discovering that it's not quite so magic
<sarnold> achiang: I think the reasoning also includes "don't destroy the user's data"
<achiang> sarnold: what are the semantics of 'destroy-service' then?
<achiang> the verb 'destroy' would normally imply... destroy ;)
<sarnold> achiang: as I understand it, tear down the service but preserve the machines/data in case you want to collect it before terminating instances or storage
<achiang> well, we have 'destroy-service' and 'remove-machine'
<achiang> my mistake
<achiang> it is remove-service, not destroy-service
<achiang> at least according to the docs
<tvansteenburgh> the latter is an alias
<tvansteenburgh> `juju help commands`
<sarnold> I could see a 'destroy-' variant cleaning up after itself...
<jose> I agree that destroy- sounds a little bit more... aggressive
<jose> new charm on the revq
#juju 2014-06-19
<mwhudson> is there much practical difference between the local provider and using the manual provider and using --to lxc:0 all the time?
<jose> mwhudson: well, manual is for boxes that are not your PC, so you can do what you do with local in an external server :)
<jose> using Juju and isolating each service into an LXC container
<axw> mwhudson: there's not a significant difference really. in the local provider you can do "add-machine" and it creates lxc/kvm containers under the covers, but that's about it as far as differences go I think.
<mwhudson> ok
 * mwhudson is going to try to smash openstack onto a single machine because that's all the arm64 nodes i have to play with right now...
<mwhudson> haha uh, is juju-deployer and the local provider a bad combination?
<jamespage> jacekn, https://code.launchpad.net/~jacekn/charms/precise/swift-storage/n-e-m-with-concat/+merge/221692 appears to have more than just the nrpe support - lots of changes around how rsync is managed as well?
<jacekn> jamespage: yes it enabled rsyncd functionality, otherwise swift will wipe rsyncd.conf
<jacekn> jamespage: so in other words swift charm was not subordinate friendly
<jamespage> jacekn, ah - so the nrpe sub uses rsync as well?
<jacekn> jamespage: yes, *external-master part means nagios is not part of the environment
<jacekn> so it needs to grab configs somehow
<jacekn> if/when we get cross environment relations we can rethink this approach
<X-warrior> Does juju supports amazon subnet (vpc) ?
<lazypower-travel> Not yet. There's an open ticket to work with vpc
<lazypower-travel> You can use the manual provider to interface with aws vpc deployments in the interim... But the aws provider as of today doesn't support it.
<automatemecolema> Can anyone tell me if Juju has a restful API?
<jcastro> marcoceppi, I need  a link to your 2nd troubleshooting charm school
<jcastro> automatemecolema, yeah, looking for the spec, one sec
<automatemecolema> jcastro thanks sir
<marcoceppi> jcastro: https://www.youtube.com/watch?v=75gKKnv_ze8&list=UUm7OifwnZoMCChidCJZQruQ
<marcoceppi> automatemecolema: no, not restful, it's a websocket API
<galebba> can anyone tell me what is the minimum number of nodes required to set up openstack with juju ? can i do an all-in-one with juju ?
<jcastro> marcoceppi, this spec from kapil is from 2012
<marcoceppi> galebba: the BARE minimum is two nodes. Recommended is seven, and if you want HA it's about 12
<jcastro> I am asking for API docs, let's see if they exist
<lazypower-travel> galebba: I've got a bundle that deploys 9 service nodes in a bare openstack setup
<marcoceppi> jcastro: it's a websocket API, not restful, and it's been implemented and the docs exist in the code base (kind of)
<jcastro> marcoceppi, hah, remember this? https://launchpad.net/~jrapi
<galebba> Thanks guys, so the minimum of two would not be counting the bootstrap node i assume ?
<lazypower-travel> galebba: it does include it. it's completely acceptable to deploy Openstack on a single machine into enclosing VMs, depending on the resources provided by that single machine.
<marcoceppi> automatemecolema: https://github.com/juju/juju/blob/master/doc/api.txt and there's a Python library that allows you to connect to the websocket, https://pypi.python.org/pypi/jujuclient/0.0.6
<jcastro> lazypower-travel, do you have a link to that CMU OpenStack day? I am making the conference schedule
<lazypower-travel> jcastro: I don't - i'm still wrangling for a date out of them.
<marcoceppi> galebba: it does, what you do is put all the supporting services on the bootstrap node in VMs, then use the second machine as your nova-compute node
<marcoceppi> (vms being containers, like LXC or KVM)
<lazypower-travel> marcoceppi: good follow up, ty for clarifying my generic statement.
<automatemecolema> maroceppi thanks for the info, ill review it shortly
<marcoceppi> lazypower-travel: yeah, you really don't want to virtualize the compute node, virt in virt is pretty slow
<galebba> ok thanks, i would like to use 3 physical nodes to distribute the boot strap/open stack control nodes. Any pointers to how to do that ?
<lazypower-travel> marcoceppi: i'm aware. re: testing virtualbox inside of KVM
<marcoceppi> galebba: you can, there's no real pointers I can give other than how to do it
<marcoceppi> Don't use ceph, as it needs its own machine and doesn't work well virtualized, so you'd do everything with juju deploy --to lxc:1 (where 1 is the machine you're using for control nodes), then juju deploy nova-compute and create all the relations
<marcoceppi> it'd be a very lightweight deployment of openstack: keystone, mysql, the image service stuff, compute and compute controller
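The layout marcoceppi describes (control services in containers on machine 1, nova-compute on its own metal) can be written down as a juju-deployer bundle. This is an illustrative, heavily trimmed sketch, not a tested bundle; the service list and series are assumptions:

```yaml
openstack-minimal:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      to: "lxc:1"
    keystone:
      charm: cs:trusty/keystone
      to: "lxc:1"
    glance:
      charm: cs:trusty/glance
      to: "lxc:1"
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      to: "lxc:1"
    nova-compute:
      charm: cs:trusty/nova-compute
      to: "2"          # bare metal; don't virtualize the compute node
  relations:
    - [keystone, mysql]
    - [glance, mysql]
    - [glance, keystone]
    - [nova-cloud-controller, mysql]
    - [nova-cloud-controller, keystone]
    - [nova-cloud-controller, glance]
    - [nova-compute, nova-cloud-controller]
```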
<jcastro> automatemecolema, so the core team tells me API docs are a work in progress btw
<automatemecolema> jcastro Do they have a roadmap time frame of when these will be GA? We are on the fence about whether to use juju as our production service orchestration engine or use something more supported
<automatemecolema> We really want to create a Continuous delivery environment using Puppet, Jenkins, and Juju. Having the api docs will allow us to integrate Jenkins with Juju
<lazypower-travel> automatemecolema: actually - you don't need to push directly to the websocket API for that
<lazypower-travel> automatemecolema: there's a juju run command. if you've got your environment configs on your CI server, you can use the juju CLI to perform your workload management from the CI Server
<jcastro> automatemecolema, here's the start: http://godoc.org/github.com/juju/juju/state/api
<lazypower-travel> i may be missing the specific requirements though, and just wanted to be clear that there are a few ways to tackle the integration.
<natefinch> automatemecolema: you may well be better off scripting against the juju CLI client... the API is not really designed to be consumed by outside consumers at the moment.  You certainly *can*... there will just be a steep learning curve
<TheMue> automatemecolema: from a technological perspective it's not RESTful, but it uses a websocket connection to transfer requests and responses encoded in JSON
<automatemecolema> natefinch It really sounds like developing scripts utilizing the Juju CLI is the best route moving forward at this point.
<natefinch> automatemecolema: it definitely is where we have put the most development time.  The CLI is the primary way we expect people to interact with juju, and things which are simple with the CLI may be quite complicated going directly against the API.
<natefinch> automatemecolema: there's basically only one thing the CLI can't do that you can do via the API, which is create watchers to get updated when stuff in juju changes (so like, getting a notification that a new machine was added).
<pmatulis_> where do i get a list of charms that i can use?  https://jujucharms.com/ looks like some kind of demo site
<pmatulis_> and for folks on a cli server, do i need a desktop/browser to see a list?
<jamespage> hazmat, https://code.launchpad.net/~james-page/juju-deployer/pyyaml-fixup/+merge/223776
<jamespage> busted in utopic right now
<hazmat> ack, thanks
<hazmat> jamespage, pushed to trunk
<jamespage> hazmat, ta
<achiang> hi, working through the nodejs charm tutorial... is it appropriate to hack up the mongo-relationship-changed hook? or should that be considered "don't touch if you can avoid"
<achiang> use case is -- i want to run a bunch of data import scripts when mongo comes online
<achiang> (my importers are in python, and the data lives in the source tree)
<achiang> seems like the logical place to do this is in the mongo hook
<purpledog2> all, I am a 100% novice to maas and juju. I am trying to get juju to speak to the maas controller and having trouble. This is my setup.
<purpledog2> I have a maas controller with 2 nodes. Each node is a virsh vm.
<purpledog2> I have juju installed on my x86 laptop with a 14.04 ubuntu trusty environment
<purpledog2> I do not want the maas controller to install ubuntu or anything on the nodes.
<purpledog2> How do I get up and running to installing charms on those nodes ?
<purpledog2> I read that 'juju bootstrap' forces the maas to install ubuntu etc on the nodes which is not relevant for me.
<purpledog2> What do I do after creating the environments.yaml file with the credential info to get to installing charms ?
<purpledog2> ??
<pmatulis_> how do i get a "unit" out of the *pending* state?
<pmatulis_> http://paste.ubuntu.com/7669926/
<purpledog2> anyone ?
<jcw4> purpledog2: the documentation is a little vague on that point :)
<purpledog2> I think you are correct but there has to be some way to get it done .. it seems kinda basic ?
<jcw4> I think juju deploy <charm> is what you're looking for?
<purpledog2> even the documentation on the juju ubuntu site dictates "juju --sync-tools" preceding a bootstrap, which is incorrect
<jcw4> purpledog2: I just noticed though that it looks like you want juju to use the existing ubuntu os on your nodes?
<purpledog2> Do I not need to do anything else before juju deploy charm ?
<purpledog2> How do I dictate which node to deploy the charm to ?
<purpledog2> I need a way to know if basic setup and communication works between the juju client and the maas server ?
<jcw4> purpledog2: (I'm way out of my depth here, but does this help? https://juju.ubuntu.com/docs/charms-deploying.html)
<purpledog2> BTW does the maas server need juju to be installed ?
<jcw4> purpledog2: assuming you started with https://juju.ubuntu.com/docs/config-maas.html
<purpledog2> jcw4: thanks ..  Yes I followed the maas documentation. but I think juju deployment documentation needs differentiation between when the nodes work with VMs that are already created (in which case it doesn't need to really install ubuntu) vs nodes that are just physical bare servers with nothing on them
<jcw4> purpledog2: yeah, I'm not too clear on that, but at the very least jujud will need to be running on the machine
<jcw4> purpledog2: and I think ubuntu is expected to be the base OS
<purpledog2> jcw4: thanks .. I do not think I have jujud running on the maas server. I will research that now. thanks for the tip..
<jcw4> purpledog2: I believe that part of juju bootstrap is getting that installed and running
<purpledog2> jcw4: I really wish I could find some really good tutorial on juju with a hands-on walkthrough of each step.. maybe if I have it figured out I will share it!!
<purpledog2> However if a VM is up already there is no installation required, and I wish the juju docs would mention what is expected when the nodes are virsh nodes.
<jcw4> purpledog2: yeah; usually there are folks in this channel who know enough to help; I'm just jumping in out of interest :)
<dpb1> Anyone know about juju and why it would be causing my syslog to be zero length.  perhaps related to the local provider?
<galebba> setting up juju bootstrap node behind a proxy and getting the following curl error. Everything else seems to be downloading correctly with maas passing proxy/dns . Any idea how to get around this ?  curl: (7) Failed to connect to streams.canonical.com port 443: Connection timed out
<sarnold> jcastro: jamie provided a nice answer to http://askubuntu.com/questions/485401/ but it's currently a vote behind the 'leader' -- mind giving jamie's answer a read and maybe evening out the vote count? :) thanks
<jcastro> sarnold, it helps if he mentions right up front that he helped spec, design, and implement the whole thing
<sarnold> jcastro: good idea
<achiang> hi, i'm working on a charm, and trying to iterate. is it really the case that i have to remove-service and then add-service to test changes? why doesn't juju deploy Just Work on a failed charm? alternatively, does juju have a concept of "upgrade" ?
<achiang> ok, i found upgrade-charm and --force
<achiang> i'm going to try that
<jcastro> yep
<jcastro> achiang, make sure you read juju help upgrade-charm
<jcastro> for a failed deploy
<jcastro> you'll want `juju resolved --retry foo`
<achiang> oh
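The iteration loop achiang is after looks roughly like this (a sketch; "myapp" and the repository path are placeholders, not names from this discussion):

```shell
# Initial deploy of a local charm under development.
juju deploy --repository=~/charms local:precise/myapp
# Edit the hooks locally, then push the new revision to the running service:
juju upgrade-charm --repository=~/charms myapp
# If a hook failed, fix it and re-run the failed hook instead of redeploying:
juju resolved --retry myapp/0
```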
<jcastro> bac, arosales: you guys need to stop stepping on each other in my PR please
<bac> jcastro: i think where there is conflict it is implementer's choice
<arosales> jcastro: /me just adding comments
<jcastro> ok well, I've been trying to go to lunch for 30 minutes but you guys have turned "add quickstart" to "let's do an entire review of every page this touches"
<bac> and i was just adding clarification.  at no time did i call arosales a mutton-head
<jcastro> which is fine, we can just do that after.
<arosales> jcastro: feel free to take lunch now and come back to comments later
<jcastro> I would just like to commit!
<arosales> jcastro: patience :-)
<arosales> comments are good :-)
<jcastro> and that's fine
<arosales> jcastro: my comments are in for your https://github.com/juju/docs/pull/123
<arosales> jcastro: thanks for working on that
<achiang> jcastro: hey, is there a known issue with the nodejs charm? http://pastebin.ubuntu.com/7670829/
<achiang> above occurs when i remove-relation, followed by remove-service
<jcastro> on the phone
<galebba> setting up juju bootstrap node behind a proxy and getting the following curl error. MAAS boots up the node and installs ubuntu just fine. Seems to be curl not setup for proxy ?  Any idea how to get around this ?  curl: (7) Failed to connect to streams.canonical.com port 443: Connection timed out
<automate_> Can anyone tell me how to have juju associate an haproxy charm with an Elastic IP on Amazon?
<natewarr> Does anyone have knowledge about how a juju charm could pick up a specified EIP on EC2? Then for kicks, run these with 2 nodes for HA?
<natewarr>  /delete automate_
<Delair> Hi ALL.. Any juju openstack expert in this IRC .. Need some help to deploy openstack in 2 nodes using juju
<sarnold> hello Delair -- note IRC works best if you just ask whatever questions you want to ask :)
<Delair> sure .. didn't want to bug everybody..
<Delair> Can I install openstack using JUJU + MAAS in 2 nodes only .. ONE to use as controller/network and SECOND to use as compute node only....
<Delair> i did setup juju+maas with no issues
<lazypower-travel> Delair: what you do is put all the supporting services on the bootstrap node in VMs, then use the second machine as your nova-compute node (VMs being containers, like LXC or KVM). Don't use ceph, as it needs its own machine and doesn't work well virtualized. So you'd do everything with juju deploy --to lxc:1 (where 1 is the machine you're using for control nodes), then juju deploy nova-compute and create all the relations. It'd be a very lightweight
<lazypower-travel> deployment of openstack, keystone, mysql, the image service stuff, compute and compute controller
<Delair> but whenever i try to deploy services it automatically tries to choose nodes
<sarnold> hey lazypower-travel :)
<lazypower-travel> o/ sarnold
<marcoceppi> Delair: what you need to do
<marcoceppi> is run juju add-machine
<marcoceppi> this will allocate the second node to juju for use for the compute node
<marcoceppi> run juju deploy nova-compute
<marcoceppi> that will place it on its own node
<marcoceppi> for everything else
<marcoceppi> you need to do juju deploy --to lxc:0 <service>
<Delair> Thanks lazypower and marco
<marcoceppi> that will put it on the bootstrap node as an LXC container
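Put together, the two-node layout marcoceppi and lazypower-travel describe would look something like this (a sketch, not a tested bundle; the service list is abbreviated):

```shell
juju bootstrap                    # first MAAS node becomes machine 0
juju add-machine                  # allocate the second MAAS node to juju
juju deploy nova-compute          # lands on its own machine
juju deploy --to lxc:0 mysql      # control services in LXC containers on machine 0
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 glance
juju add-relation keystone mysql  # then wire up the relations
juju add-relation nova-compute mysql
```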
<lazypower-travel> marcoceppi: read above ^ i got you covered. Paraphrased from earlier.
<Delair> this is exactly what i want and what i tried to do
<marcoceppi> lazypower-travel: ah, didn't see your message with the --to
<lazypower-travel> its all buried :)
<Delair> i bootstrapped to one node
<Delair> and tried manually deploying services using --to option to one node
<lazypower-travel> Delair: did you use --to lxc:0 ?
<Delair> but as soon as i tried to build a relation using sql and keystone it gives me the error message "hook failed: "config-changed"'
<Delair> I tried lxc also
<marcoceppi> Delair: is the error on mysql?
<lazypower-travel> Delair: did you read the README on the MySQL charm?
<Delair> marco the error was on keystone
<lazypower-travel> o nvm
<Delair> lazypower the issue with lxc is that it assign 10.30.0.x ip to each container which i dont know how to access from outside
<marcoceppi> odd, can you run juju status and redirect it to pastebinit? (juju status | pastebinit)
<Delair> @lazypower no i didnt read the README on mysql
<Delair> @marco not sure what is pastebinit
<Delair> i tried to look online to fix the "hook failed: "config-changed"' and looks like it is a bug
<Delair> SO :)
<Delair> is there a system-requirements and step-by-step document which i can follow to install openstack in 2 nodes
#juju 2014-06-20
<pmatulis> i had a hard time with the mysql charm and discovered the following in the logs:
<pmatulis> http://paste.ubuntu.com/7671996/
<pmatulis> i ssh'd in to the respective lxc container and installed git manually. then, voila, everything started
<pmatulis> (side question: why do i need git?)
<jose> pmatulis: weird thing. I know Juju uses git in the background for some stuff, but not sure why it's giving the error
<danharibo> juju local environment seems to be broken ootb, on my trusty install: http://pastebin.com/Aspq0zJd
<danharibo> that's the result after following the getting started guide, no public addresses
<galebba> could anyone tell me why my proxy doesn't seem to work on the juju bootstrap node, with env | grep proxy showing both http and https proxies ? getting ERROR juju.cmd supercommand.go:300 cannot upload charm to provider storage: gomaasapi: got error back from server: 504 Gateway Timeout
<jose> danharibo: is this the first time you are booting in Local?
<jose> galebba: maybe it's an issue with your proxy blocking the storage provider?
<galebba> so if i set my env variable manually on the bootstrap node it works perfectly well. just not through juju
<danharibo> jose: I've run the commands listed on https://juju.ubuntu.com/docs/getting-started.html up to juju-status, after following the lxc configuration
<sarnold> danharibo: how long have you waited?
<danharibo> a few minutes
<sarnold> danharibo: iirc step 1 is "download a cloud ubuntu".. it can take a while
<danharibo> is there any way to get more detailed information? like what it is waiting on?
<lazypower-travel> danharibo: its a one time operation - you're fetching a 300mb cloud image
<lazypower-travel> danharibo: if you pass -v (or --debug) you'll see output on what its doing
<lazypower-travel> danharibo: once that cloud image is cached, it'll create a template container which makes the local provider crazy fast. Seconds to spin up an LXC container, vs the 5 to 10 minutes you'll wait on that cloud image to be fetched.
<galebba> is it possible to do a no-proxy env variable on juju ? i have set http-proxy and https-proxy and it seems my maas server api access is being sent through the proxy which i want to avoid
<lazypower-travel> galebba: are you referring to the proxy being defined in /etc/apt/90-curtain?
<lazypower-travel> or there abouts
<galebba> proxy set with juju set-env like  juju set-env https-proxy=http://x.x.x.x:80/
<lazypower-travel> oh i read that wrong. sorry
<galebba> np, found an old thread saying this should be fixed but cant find the syntax
<lazypower-travel> thumper: you kicking around over there?
<galebba> got it , juju set-env no-proxy=127.0.0.1,172.18.112.140,localhost
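So the combination galebba ended up with is the following (the proxy and MAAS addresses are the examples from this discussion):

```shell
# Route juju traffic (hooks and juju run) through the proxy...
juju set-env http-proxy=http://x.x.x.x:80/
juju set-env https-proxy=http://x.x.x.x:80/
# ...but exempt localhost and the MAAS server so its API calls skip the proxy.
juju set-env no-proxy=127.0.0.1,172.18.112.140,localhost
```

As thumper notes below, these settings only apply inside hooks and `juju run`; they do not change system-level proxy environment variables.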
<thumper> lazypower-travel: whazzup?
<thumper> ah proxy thing
<lazypower-travel> thumper: is there a way to route only the juju traffic through a proxy so we dont blanket out the maas traffic?
<thumper> yeah, galebba's got it
<thumper> lazypower-travel: when you set the proxies through juju, they only get set in the hooks, and for juju run
<thumper> lazypower-travel: it doesn't do any magic system things
<lazypower-travel> ahhh ok. I thought it set system level proxy envs
<thumper> no
<lazypower-travel> well good to have that cleared up. Ty thumper.
<sarnold> thumper: http://askubuntu.com/a/431192/33812  :)
<thumper> I suppose I should say it is fixed now?
<sarnold> that was the first google hit for 'juju proxy' so it'd be as good a place as any for the docs :) hehe
<galebba> i know right
<thumper> updated.
<lazypower-travel> thanks thumper
<sarnold> thanks thumper :)
<jose> lazypower-travel: who can I confirm with that the LXC charm school is taking place tomorrow?
<lazypower-travel> jose: i dont think anyone's around atm since the steam sale's on
<jose> oh that's right
<lazypower-travel> jose: but i'd follow up in the am with jcastro.
<jose> should be good
<lazypower-travel> jose: or you can just email them... might be the better way to go
<jose> will just email the list :P
<jose> thanks!
<danharibo> checking network traffic, nothing... machine-all log also doesn't seem to say it's downloading anything
<lazypower-travel> danharibo: if you run sudo lxc-ls --fancy do you see a template container?
<danharibo> I see juju-precise-template, stopped
<lazypower-travel> danharibo: ok looks like the image was downloaded.
<lazypower-travel> and the template was created as well
<lazypower-travel> this is your first time running juju bootstrap on the local provider correct?
<danharibo> I ran it a few times before, but ran destroy-environment afterwards
<lazypower-travel> danharibo: you may need to nuke your local installation from orbit if its in an inconsistent state
<lazypower-travel> this is available as a plugin - but if you're not savvy on that - https://github.com/juju/plugins/blob/master/juju-clean - the commands are listed here
<marcoceppi> lazypower-travel: you able to do a quick review?
<lazypower-travel> marcoceppi: link me
<lazypower-travel> i need to plugin in 1 sec
<marcoceppi> lazypower-travel: https://code.launchpad.net/~marcoceppi/charm-tools/fix-setup-py/+merge/223850
<marcoceppi> lazypower-travel: just review, I'll do the merge
<lazypower-travel> ack
<lazypower-travel> looking now
<lazypower-travel> dude cool #TIL
<lazypower-travel> find_packages
<marcoceppi> lazypower-travel: yeah, thank tvansteenburgh for that one
<lazypower-travel> marcoceppi: runs on my box
<lazypower-travel> +1
<marcoceppi> for some reason the debian python helpers are a PITA
<marcoceppi> this fixes that
<marcoceppi> thanks man
<lazypower-travel> :)
<lazypower-travel> danharibo: if that doesnt fix it, can you attach a copy of your all-machines && machine-0.log to a pastebin for me?
<danharibo> running juju clean seems to have declogged it, mysql and wordpress charms are "started"
<danharibo> I am getting a 502 from nginx on wordpress' public-address though
<danharibo> oh no it was just a cached page, all works. thanks
<marcoceppi> CLEAN ALL THE THINGS
<jose> \o/
<jose> marcoceppi: are we having the charm school tomorrow?
<marcoceppi> oh god, we are aren't we
<marcoceppi> what's the topic?
<jose> marcoceppi: LXC Troubleshooting
<jose> 19 UTC
<marcoceppi> cool
<mwhudson> ubuntu@mytrusty:~$ juju status
<mwhudson> ERROR failed getting all instances: exit status 1
<mwhudson> what might that mean?
<mwhudson> oh, i logged in before my user was added to the libvirt group
<galebba> Has anyone used juju to install openstack  with Neutron ?
<lazypower-travel> Galebba: lots of talk around that. I'm aware of our OpenStack charmers working with neutron
<Mosibi> galebba: yes, we installed neutron/openstack with juju
<Mosibi> If your question is about vlans, yes, that's not supported yet..
<nottrobin> is there any way to permanently set the location of my local charms in a config file?
<nottrobin> it's a bit annoying typing "--repository=~/charms" all the time
<marcoceppi> nottrobin: set the JUJU_REPOSITORY environment variable
<nottrobin> ooh!
<nottrobin> handy
<nottrobin> thanks marcoceppi
<marcoceppi> np
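marcoceppi's tip in shell form ("mycharm" is a placeholder; the deploy line assumes a charm exists at that path):

```shell
# Point juju at your local charm repository once, instead of passing
# --repository=~/charms on every command.
export JUJU_REPOSITORY=~/charms
# juju deploy local:precise/mycharm   # now resolves under $JUJU_REPOSITORY
```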
<pmatulis> are 'charm-tools' and 'juju.plugins' what everybody uses?  anything else i should be looking at?
<lazypower-travel> pmatulis: juju-quickstart
<pmatulis> lazypower-travel: ok, will look thanks
<automatemecolema> can anyone point me in the direction on how to deploy haproxy with HA using an amazon EIP?
<automatemecolema> with the haproxy charm naturally
<automatemecolema> anyone familiar with the haproxy charm?
<automate_> Can someone point out how to find the addressupdater worker to force EIP changes on services?
<marcoceppi> You can't provision an elastic ip from within a charm, you need to assign it in the control plane
<automate_> So here's a question, say I provision two haproxy servers, I assign an EIP to one of the haproxy servers, it fails, and the other picks up. Is there a way I can make that instance recognize the EIP
<automate_> and How can I force juju to recognize the EIP? Ive read there is an addressupdater worker that check every 15 minutes, but that's not good enough for production
<automate_> I have deployed two haproxy units with juju, and I notice they realize their peer, but how do I front the Haproxy service with only one endpoint?
<automate_> https://gist.github.com/anonymous/29cc347bd9ce1015036e I need these to know about each other and fronted by an EIP Thoughts?
<jcastro> hey marcoceppi
<jcastro> frankban has a point wrt. mac users, I think juju-quickstart is fine imo
<jcastro> bigger battles to fight, etc. etc.
<marcoceppi> jcastro: what about windows users?
<marcoceppi> It's still a poor argument
<marcoceppi> it obscures juju and bothers me a bit with promoting this as juju
<jcastro> well, it's not my fault juju needs quickstart in order to be usable. :()
<marcoceppi> it's great it does all this, but if users get to juju-quickstart before getting to juju we've failed in documentation
<jcastro> it's not really any worse
<jcastro> it's just `juju-quickstart -i` instead of `juju generate-config`
<marcoceppi> juju init*
<jcastro> which was not mentioned in the docs
<marcoceppi> I think quickstart is super important, and deserves to be on the getting started page, but people are here to learn about juju
<jcastro> but that's an evilnick-ism
<marcoceppi> and we've just hidden the  entire core of juju
<jcastro> you're the only one who knows that
<marcoceppi> bootstrapping, deploying, relating, etc
<jcastro> as far as they know/care, quickstart is part of juju
<marcoceppi> but it's not, it's juju-quickstart, not juju quickstart
<marcoceppi> no other juju commands are hyphenated
<marcoceppi> so either we tell people, On Mac OSX run brew install juju juju-quickstart then do juju quickstart -i
<jcastro> IMO getting people into the curses menu to put their keys in is like the #1 thing we need to do
<marcoceppi> sure, lets do that
<marcoceppi> but lets not give users the wrong expectation of the command chain
<jcastro> ok so give me details
<jcastro> what exactly do you want me to change in the commands
<marcoceppi> Everytime you tell a user how to install juju-quickstart, just also include the installation of juju
<marcoceppi> brew install juju juju-quickstart
<marcoceppi> apt-get install juju juju-quickstart
<jcastro> oh ok
<jcastro> is that all?
<marcoceppi> windows: Sorry, can't do that
<jcastro> man why didn't you say that. :)
<marcoceppi> yes, then all you need to do is have `juju quickstart` instead of the -
<marcoceppi> and the command UX is intact, quickstart keeps being amazing
<marcoceppi> and the world rotates on
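marcoceppi's proposal for the docs, spelled out (package names as discussed above; there is no Windows equivalent yet):

```shell
# Ubuntu: install the CLI alongside quickstart
sudo apt-get install juju juju-quickstart
# Mac OS X via Homebrew (the CLI package is named "juju")
brew install juju juju-quickstart
# then run it as a juju subcommand so the command UX stays visible:
juju quickstart -i
```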
<jcastro> rick_h_, any issues? I'm tired of my PR becoming mailing list discussions. :)
<marcoceppi> rick_h_: also, whats the TLDR on Windows + Quickstart?
<rick_h_> marcoceppi: your team can add it next cycle
<marcoceppi> what?
<rick_h_> marcoceppi: :)
<marcoceppi> so quickstart won't install juju on windows?
<rick_h_> marcoceppi: quickstart doesn't work at all on windows
<rick_h_> and it's not planned for this cycle, but is brought up to be important for next
<jcastro> marcoceppi, rick_h_ didn't even know python on windows was a thing. :)
<rick_h_> jcastro: oh we know, but ncurses on windows isn't
<marcoceppi> jcastro: hey man, charm-tools is on there ;)
<rick_h_> marcoceppi: let me know when you put a gui on charm tools
<marcoceppi> haha, no thanks. It was enough "fun" getting charm-tools packaged for windows
<jcastro> I hate to be that guy, but a proper juju snapin for like sccm is the way to go
<rick_h_> marcoceppi: we're handing off quickstart to eco next cycle so you guys can build a webui to it and enable it for windows then
<jcastro> ok so if no one has any issues, I will amend the commands in the docs
<rick_h_> jcastro: ok, it's fine. I was just letting you know you can make things simpler if you want
<rick_h_> but I'm not anti install juju with quickstart
<rick_h_> sometimes comments are just fyi :)
<jcastro> rick_h_, you convinced me, it's Captain Pedantic over there ....
<marcoceppi> hey man, you're the one that had to go document things, that's on you :P
<jcastro> though seriously, quickstart should eventually be in core someday like deployer, etc.
<rick_h_> lol
<marcoceppi> seriously, yes. It simplifies the user experience 10 fold and makes everyone happy
<rick_h_> jcastro: yea, it can't until deployer is and such.
<jcastro> is the packagename "juju" or juju-core in brew?
<rick_h_> jcastro: but true, at some point makes sense. The original goal of quickstart though was to help you avoid finding the ppa, etc.
<marcoceppi> I'd love to see how core handles the ncurses issue on windows
<marcoceppi> jcastro: it's juju
<rick_h_> it started before trusty and there was a decent juju in universe
<jcastro> http://www.projectpluto.com/win32a.htm
<jcastro> ok guys, git pushed
<rick_h_> jcastro: thanks for the docs. Appreciate it
 * marcoceppi finds something else to be pedantic over
<jcastro> marcoceppi, like say ... mysql
<purpledog3> ASkUbuntu: I was wondering if you could help me with a very fundamental question.
<purpledog3> Askubuntu: Its about juju with ubuntu trusty. Its too basic and unfortunately I don't find documentation on it
<jcastro> askubuntu is a bot
<jcastro> but we're humans if you want to ask your question
<jcastro> mbruzek, tvansteenburgh1, have you guys had a chance to review yet?
<mbruzek> jcastro, review what?
<jcastro> you guys are on review duty this week
<purpledog3> jcastro: Ahhh thanks.. .
<purpledog3> jcastro: Do you know the resource I could use to work with juju/maas/openstack? Am a total novice and depending on documentation which I either can't seem to find or which isn't appropriate.
<jcastro> https://insights.ubuntu.com/2014/05/21/ubuntu-cloud-documentation-14-04lts/
<jcastro> it's currently a PDF
<jcastro> but we're working on HTMLizing it
<jcastro> that's the place to start
<purpledog3> thanks.. let me dive into it ..
<purpledog3> I have my own data servers on my table here.
<jcastro> jamespage, that Boyd guy on the juju list was having that problem with ceph-radosgw if either you or zul can respond that would be swell.
<purpledog3> Do you know if it might have an example of how to use 2 VM nodes defined by virsh ... as a maas cluster and work stunts with them ? like setup Openstack etc ?
<jamespage> jcastro, I need to make time to read that doc to figure out what he's asking
<ahasenack> hi guys, any clues about what happened here?
<ahasenack> http://pastebin.ubuntu.com/7675284/
<ahasenack> juju 1.19.3
<automate_> anyone available for questions around the HAproxy charm?
<pmatulis> ask and see
<automate_> I need some insight on how to deploy an HA pair of HAproxy servers in Amazon using an EIP? I'm up for both scenarios of either Active/Passive or Active/Active
<automate_> We are playing with the idea of rolling our own haproxy charm with the haproxy beta package that supports SSL using puppet as the deployer.
<automate_> Has anyone had experience using puppet as the means of deploying your app in a charm, as opposed to building it into a shell script to install packages.
<wrale_> using juju on 14.04... my nodes are multihomed with public unmanaged.. juju bootstrap fails because name resolution fails upon http call.. must a public interface (or NAT) be up where bootstrap happens?
<wrale_> maas is the underpinning of my cluster
<hazmat> automate_, why not just go with an elb charm?
<automate_> hazmat elb isn't near as flexible, and ELB's change addresses all the time, we want to keep our load balancers tied to one endpoint
<lazypower-travel> ahasenack: thats really strange. you got locked out of your mongodb instance?
<ahasenack> lazypower-travel: it was just a normal bootstrap, like many others
<lazypower-travel> ahasenack: when you ran destroy and rebootstrapped did it work?
<ahasenack> yes
<lazypower-travel> hmm interesting
<ahasenack> well, if you look at the logs you can see that it ran destroy on its own
<lazypower-travel> ahasenack: thats the first time i've seen that happen...
<ahasenack> so I was left with nothing running
<lazypower-travel> yeah, it should do that if it encounters an issue while trying to bootstrap
<lazypower-travel> that way its tidy about a startup failure
<ahasenack> sinzui: pointed me at an older bug in the 1.19 series but it was during destroy
<automate_> I'm trying to figure out a way to have different instances to be provisioned to separate availability zones within the same juju environment. Has anyone figured out a clean way to do this? I saw a bug report on this trying to be fixed..
<automate_> Scenario 1: What happens if my bootstrap node dies?
<automate_> Scenario 2: I bootstrapped my environment, everything was working, then a few hours later the environment eats itself and everything is gone.
<jcastro_> automate_, in scenario 1, you lose orchestration
<jcastro_> but like the instances all stay running, you're just down to ssh
<jcastro_> when HA lands you'd just have one of the other bootstrap nodes take over
<automate_> HA for bootstrap nodes are on the roadmap, do you know how long that will be before it's available?
<jcastro_> I do not think we yet have a way to do multizone in one environment
<jcastro_> it's supposed to be landing like RSN now if it hasn't already
<jcastro_> jam, do you know the status of HA?
<jcastro_> I lose track of which team is working on what
<automate_> So, today if my bootstrap node completely dies there is no way to recover it?
<jcastro_> you're pretty much doomed
<sarnold> jcastro_: https://bugs.launchpad.net/juju-core/+bug/1183831
<_mup_> Bug #1183831: unable to specify availability zone <charmers> <constraints> <ec2-provider> <landscape> <reliability> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1183831>
<automate_> :(
<jcastro_> they've been focusing on HA for the node for a while though, it's like the last major thing we need for prod
<rick_h_> jcastro_: juju ensure-availability --help is there now. I think it's fleshing out final bits but is testable
<rick_h_> so don't do it with your prod environment yet
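The command rick_h_ refers to, as a sketch (flag per `juju ensure-availability --help` in the 1.19/1.20 series; per the discussion, not for production environments yet):

```shell
# Replicate the state server across an odd number of machines for quorum.
juju ensure-availability -n 3
juju status    # should now list additional state-server machines
```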
<automate_> Can you explain if there is a way to force juju to know about address changes? I know there is an addressupdater worker, but there might be times where I want juju to know abut it on the fly
<automate_> Scenario would be when I assign a node an EIP
<automate_> I want it to know about the EIP right away
<marcoceppi> automate_: juju should pick that up automatically now, if not now then soon
<marcoceppi> besides, the EIP only affects outside access to it
<marcoceppi> the machine will always show the current one, and the internal address in amazon will be used anyway
<marcoceppi> so it doesn't affect other services connected to a service with an EIP
<automate_> Yea, when I meant address changes, I'm relating strictly to public addresses
<automate_> I'm currently running 1.18.1
<marcoceppi> automate_: well you're already a few juju releases behind, latest stable is 1.18.4 iirc and latest dev is 1.19.3
<automate_> will juju upgrade-juju get me on the latest stable?
<marcoceppi> what are you relating that requires the public address?
<marcoceppi> automate_: yes
<marcoceppi> except for humans, almost all services should just be using the private addresses
<automate_> Scenario: Two haproxy servers, need them in ha pair, I have one assigned an EIP, it fails, I want the other one to get the EIP reassignment
<marcoceppi> that's something you have to do outside of juju though
<marcoceppi> juju won't reassign EIPs
<automate_> Yes, but I need juju to know about the EIP as soon as I reassign it
<marcoceppi> why? Elastic search will just continue to listen to 0.0.0.0
<marcoceppi> traffic will flow, haproxy* will continue to runn
<marcoceppi> not elastic search, sorry about that
<automate_> I'm not sure I'm following you properly
<marcoceppi> I'm not sure I'm following you. Why does juju need to know about the EIP
<marcoceppi> EIPs are transparent to the underlying machine
<marcoceppi> they're done on the software level
<marcoceppi> in Amazon
<automate_> Ok let me explain it a little better.
<marcoceppi> Amazon simply maps public ip to the private ip and routes traffic
<marcoceppi> so the unit knowing it has a new public ip doesn't matter
<automate_> I have two haproxy servers provisioned by juju, naturally they get a public address assigned to them by Amazon, I reassign node 1 with an EIP, juju knows about the auto assigned public address and I need it to refresh and know about the EIP i just assigned it.
<marcoceppi> automate_: okay, I see what you're saying. My question is why does juju need to know that.
<marcoceppi> how does that change how haproxy is running
<automate_> Then I find that node1 just failed, and I need node 2 to pick the load balancing, so I reassign my EIP to node 2, and I need juju to know when I reassign it
<automate_> Am I not exposing the service via juju?
<marcoceppi> yes, you are
<marcoceppi> now let me explain briefly what happens amazon side of things
<marcoceppi> You launch an instance in amazon. That instance gets a private address on eth0 - 10.0.0.2; your other instance gets eth0 10.0.0.3. You have security groups that define ports and access to that instance. Amazon gives you two random public ip addresses and has their switch software map 72.0.0.2 and .3 to those respective private ip addresses. The instance has no idea what its public ip address is. Juju knows because it's querying the data from
<marcoceppi> amazon directly using the API. Now you create EIP 74.0.0.15 and assign it to the first node. 72.0.0.2 goes away and instead 74.0.0.15 routes to the private IP of 10.0.0.2. The instance has no idea this change happened; Juju knows because it's chatting with the API and updates the metadata for that instance in juju. Now you failover, move the EIP to the second instance; now 74.0.0.15 maps to 10.0.0.3 and it has no idea the public ip address has changed.
<marcoceppi> It doesn't need to, it simply continues listening to traffic on eth0 as it was designed to do
<marcoceppi> there isn't a pressing need to have the unit data updated immediately for a public address change in juju. For private addresses there is, because that's what the unit uses when listening and what other services are talking to it on
<sarnold> marcoceppi: very interesting, thanks :)
<marcoceppi> The public IP addresses, EIP or not, are mapped outside of the instance and outside of juju in Amazon's switches. The instance is blissfully unaware of what's going on
<marcoceppi> OpenStack, with quantum/neutron, IIRC works very much the same way
<marcoceppi> providers like Digital Ocean do it differently: there are two NICs on the machine, private and public, and that's where the magic happens
 * marcoceppi spins up an amazon instance to verify that giant text dump is true
<automate_> marcoceppi: thanks for that explanation, that was my understanding of EIP. If www.mywebaddress.com points to 74.0.0.15 and node 1 was assigned that EIP, node 1 fails, and traffic stops until the EIP is reassigned to node 2? Just checking to make sure at least I'm right on the first part
<automate_> marcoceppi, I'm completely on the same page now... I understand that it doesn't matter if juju knows about the public address change right away... I had to fight that battle in my small brain
<marcoceppi> automate_: yeah, so if the node is no longer running
<marcoceppi> traffic will halt; after you reassign it to another node, traffic will flow again after a few mins
<automate_> So... moving on then, I have more of an haproxy specifc charm question around keepalived EIP reassign...
<automate_> Maybe this gets into me having to customize the haproxy charm... I need a keepalived script embedded so if node one goes down then it can talk to the Amazon API to reassign the EIP to the failover node
<marcoceppi> automate_: so, this sounds like a good case for a subordinate charm
<marcoceppi> where you don't have to hack the existing charm, but instead build a charm that only works once attached to another running charm
<marcoceppi> then you can amend the functionality of the service running with additional scripts, configuration, or software
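The subordinate charm marcoceppi suggests would need a small helper for the keepalived notify script that talks to the Amazon API. A hedged sketch, assuming a boto-style EC2 connection (`get_all_addresses` / `associate_address`); the instance IDs and EIP here are made up, and a real hook would also need credentials and error handling:

```python
def failover_eip(conn, eip, new_instance_id):
    """Move an Elastic IP to the node that should now receive traffic.

    `conn` is assumed to behave like a classic boto.ec2 EC2Connection:
    anything exposing get_all_addresses() and associate_address() with the
    same shape works, which also makes this testable without AWS.
    Returns True if a reassociation was performed, False if the EIP was
    already attached to the target instance.
    """
    addr = conn.get_all_addresses(addresses=[eip])[0]
    if addr.instance_id == new_instance_id:
        return False  # EIP already points at this node; nothing to do
    # As discussed above, this only changes the mapping in Amazon's
    # switches; neither instance sees anything change on eth0.
    conn.associate_address(instance_id=new_instance_id, public_ip=eip)
    return True
```

keepalived's `notify_master` option could then invoke a script calling this with the standby node's instance ID.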
<automate_> hmmmm that sounds like the plan for us then
<automate_> So.......one more crazy question....Any charmers using puppet to deploy their apps in a charm? We use puppet enterprise, and want the ability to modify instances with the puppet master
<marcoceppi> so, people have used chef, ansible, saltstack, and there's one or two puppet charms
<marcoceppi> basically it's using puppet standalone to drive machine setup
<marcoceppi> I can't think of any examples off the top of my head at the moment though
<marcoceppi> I'd have to hunt down which charms do that
<automate_> Use case 1: We want to control haproxy configuration with puppet... Deploying the charm-store haproxy charm doesn't really allow us to modify it with puppet
<jose> marcoceppi: in amulet, is it possible to configure a service after deployment?
<marcoceppi> jose: yes
<jose> cool, thanks
<purpledog3> jcastro: I read a good portion of the document you pointed me to: http://insights.ubuntu.com/wp-content/uploads/UCD-latest.pdf?utm_source=Ubuntu%20Cloud%20documentation%20%E2%80%93%2014.04%20LTS&utm_medium=download+link&utm_content= But it does not talk about how to set up a maas controller when you create a cluster with VM nodes that are created with virsh.
<jose> marcoceppi: 'this video is private' :(
<purpledog3> Following the guide http://insights.ubuntu.com/wp-content/uploads/UCD-latest.pdf?utm_source=Ubuntu%20Cloud%20documentation%20%E2%80%93%2014.04%20LTS&utm_medium=download+link&utm_content= //
<purpledog3> juju quickstart does not exist ?
<marcoceppi> jose: I'm uploading a new one
<jose> oh cool
<jose> purpledog3: afaik it's juju-quickstart
<purpledog3> ahaha!! a bug to note and correct in the documentation!! Thanks a bunch jose!!
<arosales> purpledog3: yes you need to have juju installed first
<jose> np :)
<arosales> purpledog3: if you don't have juju installed first then you will need to issue juju-quickstart, and not the juju plugin command "juju quickstart"
<marcoceppi> purpledog3: if you're getting that error, that means you didn't install juju
<jose> there you go :)
<purpledog4> marcoceppi: juju is installed for sure!!
<purpledog4> juju <enter> shows me all the help commands etc
<marcoceppi> purpledog3: what version of juju do you have installed? `juju version`
<purpledog4> marcoceppi: I have a maas controller I setup with 2 nodes added and each node is a VM with ubuntu running on it/
<marcoceppi> juju quickstart should most definitely work
<purpledog4> 1.18.4-trusty-i386
<marcoceppi> purpledog4: can you run `juju help plugins` and report the output to paste.ubuntu.com ?
<purpledog4> ok
<purpledog4> http://pastebin.com/gZf9tvFU
<purpledog4> marcoceppi: I need for the maas controller to not try and commission these nodes since they already have ubuntu etc on them. What can I do ?
<marcoceppi> purpledog4: nothing really, maas will always attempt to reboot the machines and pxe boot ubuntu to install a fresh image
<marcoceppi> that's how maas works
<purpledog4> The VM nodes already have ubuntu running.. if it tears down those nodes I don't see how it can create those again ? Those Vms are hosted on another server and created with virsh.
<purpledog4> Does this mean adding a virsh node is no good and can't work with maas ?
<marcoceppi> purpledog4: it's not going to tear them down
<marcoceppi> it's going to attempt to boot them up
<marcoceppi> and then, when it registers with maas DHCP, it will get a PXE boot image
<purpledog4> marcoceppi: Not sure I understand; a VM node is already created. How can it reboot the VM node? PXE is not natively supported on the hardware I work with (I think I need to get uboot to support PXE)
<marcoceppi> purpledog4: virsh can reboot VMs
<marcoceppi> if it can't PXE it'll just keep the same image
<purpledog4> marcoceppi: Ok.. I have seen virsh do that.. If it can then it is settled; I will use virsh locally to see if it can reboot the VM to get assurance it can. I know PXE is nonexistent.
<purpledog4> Thanks for your help..
<purpledog4> BTW is maas dhcp the same as dnsmasq ?
<marcoceppi> nope, dnsmasq is for domain name resolution
<marcoceppi> DHCP is a networking thing
<purpledog4> So I do need to setup the maas nw interface for it I think
<purpledog4> all this time the VM has been getting an IP address on its own network, so those have been able to ping out but nothing can reach that VM node.. so I guess that is the purpose of the MAAS dhcp I think
<sarnold> marcoceppi: are you sure there? I wouldn't be surprised if dnsmasq is being used for dhcp
<marcoceppi> sarnold: I thought maas was doing something differently with dhcp, I could be wrong though
<sarnold> marcoceppi: it has been a while since I've looked..
<purpledog4> sarnold, marcoceppi: Hmm .. after a lot of struggle to get virsh running and creating nodes, I found that if the local NW was 10.0.0.X the nodes on the VM were able to access the NW just fine.. but had IP addresses of 192.168.x.y
<purpledog4> the nodes could access anything outside but not vice versa, and the documentation indicated that was to be expected.
<purpledog4> It was using dnsmasq etc.
<purpledog4> sarnold: Now with maas dhcp in the mix I will need to research how the VMs get IP addresses, and I am a novice at application work.
<purpledog4> More comfortable at the low level flipping bits/waveforms/RTL/etc. Thanks for the help.. really need it and appreciate it!!
<galebba> could anyone tell me how to clear  "agent-state-info: 'hook failed: "shared-db-relation-changed"' " left over from keystone charm ?
<tvansteenburgh> galebba: if you just want to clear it, `juju resolved`
<galebba> awesome, thank you.. was trying this without luck: juju resolved --retry keystone/0
<jcastro_> hey marcoceppi
<jcastro_> so kirkland and I are doing a charm
<jcastro_> and we fixed the bug we had in a hook
<jcastro_> and then did `juju upgrade-charm`
<jcastro_> and then `juju resolved --retry`
<jcastro_> but the version of the charm on the deployed unit didn't upgrade
<jcastro_> are we missing a step?
<AskUbuntu> Can Ping but Cannot SSH to Openstack VM Instace | http://askubuntu.com/q/486151
<dpb1> tvansteenburgh: hey -- around still?  Anyone I can poke to get this promulgated? https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
<jose> dpb1: someone from the ~charmers team will have to take a look, but bear in mind that there's a queue of other things that are waiting too :)
<dpb1> thx jose. :)
<kirkland> marcoceppi: I'm struggling with the varnish charm...do you know anything about it?
<jose> kirkland: what's the specific prob?
<kirkland> jose: /etc/varnish/* isn't getting automatically updated with the related web service's hostname when making the relation
<jose> hmm, what I see on the hooks is that /etc/varnish/default.vcl is the file that needs to be updated, you say there are no changes there?
<kirkland> jose: right, my hostname is not inserted in the top bit
<kirkland> jose: my service is
<kirkland> jose: so it's at least partially run
<kirkland> jose: hmm, maybe I need to do something more in my charm?
<jose> kirkland: possibly. are you setting on the interface-relation-joined hook "hostname=`unit-get public-address`"?
<jose> if that's not set, then varnish won't know the address of the unit
<kirkland> jose: hm, no, I'm setting it in website-relation-joined
<jose> interface is where the name of the interface goes
<jose> and I assume you're doing `juju add-relation service:website varnish:reverseproxy`?
<jose> kirkland: ^
<kirkland> jose: ah, I think that's it
<jose> let me know how that goes :)
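A hook following jose's advice might look roughly like this in Python. This is a hedged sketch, not the actual varnish charm's code: `relation-set` and `unit-get` are the real juju hook tools, but the wrapper functions and the `run` parameter (defaulting to subprocess, injectable so the logic can be exercised outside a unit agent) are illustrative:

```python
import subprocess

def unit_get(attr, run=subprocess.check_output):
    """Ask the juju unit agent for an attribute, e.g. public-address."""
    return run(['unit-get', attr]).strip()

def relation_set(settings, run=subprocess.check_call):
    """Publish settings on the current relation via the relation-set tool."""
    run(['relation-set'] + ['%s=%s' % (k, v)
                            for k, v in sorted(settings.items())])

def website_relation_joined():
    # Tell varnish where to find this unit: without 'hostname' the
    # reverseproxy side has nothing to insert into its VCL config.
    relation_set({'hostname': unit_get('public-address'),
                  'port': '80'})
```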
<marcoceppi> kirkland: who wrote the charm?
<marcoceppi> jcastro_: yeah, you can't do resolved --retry
<marcoceppi> the upgrade-charm event is still queued
<marcoceppi> so you don't have the new payload yet
<jose> it's event-based, correct
#juju 2014-06-21
<threebadwheels> hi
<threebadwheels> does anyone know how to assign a floating ip to a lxc?
<jose> guys, let's say I have only one AWS account but set two environments with different names but the same credentials. will that work for having 2 diff environments?
<sarnold> jose: I haven't tested but that sure feels like something that should just work
<jose> sarnold: if you can tell me it won't break everything, then I can give it a try :P
<sarnold> jose: you're a charmbot 2000! what could go wrong? :)
<jose> haha, let's do this
<sarnold> jose: (just be sure to look at the storage used for the bucket uploads once you're done -- I got a surprise bill from aws when my free period ended because I hadn't cleaned up my juju bucket -- just clearing the instances wasn't sufficient, of course.)
<sarnold> <3 aws billing alerts
<jose> sarnold: thanks for the tip, will need to make sure! I depend on billing alerts too :P
<jose> sarnold: should I use a different bucket name for the second env?
<sarnold> jose: eek. no idea :)
<jose> will try with the same
<jose> sarnold: voilà! it worked! just needs a different bucket name!
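For reference, the setup jose got working might look roughly like this in `~/.juju/environments.yaml` (a hedged sketch: the environment names, bucket names, and placeholder keys are made up; both environments share the same AWS credentials, but each needs its own control-bucket):

```yaml
environments:
  env-one:
    type: ec2
    access-key: <your-access-key>    # same credentials in both environments
    secret-key: <your-secret-key>
    control-bucket: juju-env-one-bucket
    admin-secret: <some-passphrase>
  env-two:
    type: ec2
    access-key: <your-access-key>
    secret-key: <your-secret-key>
    control-bucket: juju-env-two-bucket   # must differ between environments
    admin-secret: <another-passphrase>
```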
<sarnold> jose: cool :D
<sarnold> charmbot.. 3000? :)
<jose> I was 5000 originally, maybe I can go up to 4999? :P
<sarnold> oh! my apologies.
<jose> (see '/ns info joseeantonior' for more info)
<sarnold> jose: aww, I don't see any charmbot 5000 details there :/
<lazypower-travel> sarnold: yeah he needs a different bucket name :) for reference.
<sarnold> heya lazypower-travel :)
<jose> sarnold: if you can't see it, /ns info charmbot5000
<sarnold> jose: haha
<sarnold> lazypower: no more travel?
<lazypower> sarnold: not until Monday
<sarnold> lazypower: ahhh. all the more time to go get some more of that steak. was it as good as it looked? :) heh
<lazypower> oh man, not only did i have steak, i had cheesesteak
<sarnold> CHEESESTEAK? oh man.
<lazypower> it was amazing
<lazypower> philly cheesesteak in philly
 * jose has never had philly steak
<sarnold> one of these days I'll make it to that side of the rocky mountains and get some from the source...
<lazypower> its worth it. true story.
<jose> https://joseeantonior.wordpress.com/2014/06/20/juju-multiple-environments-with-just-one-account/
<sarnold> jose: charmbot 5001 :)
<jose> \o/
<lazypower> wooo https://twitter.com/cammoraton/status/480124918173294592
<sarnold> \o/
<rbasak> jcastro_, jamespage: FYI, bug 1328958. Seems critical to me for the normal user trying out juju use case.
<Delair> Hi ALL.. Can anybody tell me how I can deploy a service in KVM (i.e. neutron-gateway) when all other services are in lxc mode
<galebba> any idea how to get rid of a lingering dead charm? I tried juju remove-service and restarted the jujud on the target unit but the charm is still there. shows up as agent-state: stopped  life: dead
<galebba> the charm shows up with life as dead and agent as stopped but doesn't get removed. Same result when destroying it from the juju gui. any idea how to remove these dying charms?
<Delair> galebba try juju resolved service-name
<Delair> after destroying the service
<Delair> galebba: if that didn't work then you have to destroy the machine where that service is in the dying state
<galebba> yeah it says the service is not in error state
<galebba> so i guess destroying the machine is the last resort ? i have other services in the same machine
<Delair> but when you destroy the machine you might have to use the --force flag
<Delair> if you have other services then try the following: 1. destroy-service 2. remove-service and then juju resolved service-name
<Delair> this has worked for me a few times
<galebba> i guess first i have to destroy the relations before those ?
<Delair> yes if you have relations then yes
<Delair> then do the same: 1. destroy-relation 2. destroy-service 3. remove-service and then after 1 min juju resolved service-name
<Delair> i will check the logs next time i face that issue but i think juju resolved tries to clear all the pending / dying states
<galebba> could anyone tell me, do i need to set the ppa:juju/stable repos to install openstack on 14.04? From all the docs i see this is not mentioned and i can't seem to set it anyway
<lazypower> galebba: you shouldn't need to, no. 14.04 has a stable juju in the repositories
<galebba> thank you, lazypower
#juju 2014-06-22
<Azendale> I'm using the local lxc provider on trusty, with the "default-series: trusty" option. Whenever I try to deploy a non-trusty charm, it never finishes (stays in pending state), and the log says "machine-0: 2014-06-22 00:28:59 ERROR juju runner.go:220 worker: exited "environ-provisioner": failed to process updated machines: cannot start machine 9: no matching tools available". How can I upload tools for all versions?
<AskUbuntu> Upload all release versions of tools with juju's local (lxc) provider? | http://askubuntu.com/q/486542
<jose> Azendale: I think that is set on your environments.yaml file, lemme double check
<jose> Azendale: nope, I was wrong, you need to use `--upload-tools precise`
<ali1234> Azendale: you have to use "default-series: precise"
<ali1234> otherwise "bad things will happen"
<thumper> anyone around for charmhelpers help?
<jose> thumper: I know a bit about them but may be able to help, what's up?
<thumper> jose: I'm looking for a function to ensure that a symlink exists
<thumper> that checks source and target
<thumper> know of any
<thumper> or shall I write one
<jose> hmm, maybe that's in python itself?
<jose> shouldn't that be something like a 'if file exists'?
 * thumper shrugs and looks
<thumper> os.path.islink exists
<thumper> but how do I find the source?
<jose> what do you mean?
<jose> like, the source where you need to point the function to check?
<jose> well, if you're writing a charm then you should know where that one is installed
<thumper> hmm...
<thumper> I guess I only need to check that the target exists, and is a link
<thumper> yes, in a charm
<thumper> so... should be enough perhaps...
 * thumper pokes
<thumper> jose: ok, so effectively I want something like this:  ensure_symlink('/srv/nginx.conf', '/etc/nginx/sites-enabled/foo.conf')
<thumper> looks at the second param to see if the link exists
<thumper> and if so, make sure it points to the first param
<thumper> if not, delete it
<thumper> then make a link if not there (or just deleted it)
<jose> sorry, /me was looking at another channel
<thumper> so... given a symlink at /etc/nginx/sites-enabled/foo.conf, how can I check it points at /srv/nginx.conf
 * thumper looks at more python docs
<jose> ok, so I wouldn't see a case where it won't point there (if you told the charm to do that)
<jose> thumper: you already got through verifying that the symlink existed, right?
<thumper> sure, os.path.islink works
<jose> ok
<jose> thumper: os.readlink('/etc/nginx/sites-enabled/foo.conf')
<jose> not sure what's the format in which it's returned, though, you may need to give it a try
<thumper> awesome
<thumper> that's it I think
<jose> if you find the way it prints it out and do an if, then problem solved :)
 * thumper upgrades charm again
<jose> thumper: just so you know, os.readlink('/etc/nginx/sites-enabled/foo.conf') should return this "    '/srv/nginx.conf'    " (remove the spaces and the outer quotes)
<jose> so it should be just this:
<jose> '/srv/nginx.conf'
<jose> including those quotes
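The helper thumper describes could be sketched like this (an illustrative example built from the discussion, not an existing charmhelpers function; note that `os.readlink()` is called on the link itself, i.e. the path under sites-enabled, and returns the target it points at):

```python
import os

def ensure_symlink(source, link_path):
    """Ensure link_path is a symlink pointing at source.

    Matches the behaviour sketched above: if the link exists but points
    elsewhere (or a plain file is in the way), remove it, then (re)create it.
    """
    if os.path.islink(link_path):
        if os.readlink(link_path) == source:
            return  # already correct, nothing to do
        os.remove(link_path)
    elif os.path.exists(link_path):
        os.remove(link_path)  # a regular file is squatting on the path
    os.symlink(source, link_path)
```

Usage in the charm would be, e.g., `ensure_symlink('/srv/nginx.conf', '/etc/nginx/sites-enabled/foo.conf')`.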
<thumper> jose: with logging: paste.ubuntu.com/7683418/
<jose> lemme check
<jose> (but gimme a min, I'm finishing deploying a cloud on a cloud)
<thumper> jose: oh, it works
<thumper> don't worry about that :-)
<thumper> I should get around to submitting some nice utility functions for charmhelpers
<thumper> perhaps jcastro_ or marcoceppi should poke me about it on a Friday afternoon
<thumper> I should also extract the nginx code from my app subordinate for python-django charm
<jose> thumper: well, glad I could help :) if you have any questions just go ahead and ask
<thumper> and make it a generic subordinate
<jose> that'd be cool, if it deploys good
<thumper> or alternatively, add nginx to the python-django charm as an option
<jose> thumper: ever played with openstack?
<jose> (like, deploying)
<thumper> only some basics
<jose> ah, nvm, /me missed a letter on the URL :P
<thumper> :)
<AskUbuntu> What does "LDS" stand for in the openstack reference implementation? | http://askubuntu.com/q/486609
<Simhon> questions about maas installations goes here ?
<Simhon> I am installing maas controller from the ubuntu cloud documentation, does the maas should be installed on a desktop or server ubuntu os ? or it does not matter any of the two....
#juju 2015-06-15
<therealmarv> Hi, my charm is on revision 4 here (search for therealmarv) https://code.launchpad.net/charms and here https://code.launchpad.net/~therealmarv/charms/trusty/pybossa/trunk but not on https://jujucharms.com/u/therealmarv/pybossa/trusty/1. I'm doing a major update before pushing it again to the review queue. Can someone explain to me why the revision number does not update on jujucharms.com (it is still on 2)?
<therealmarv> Is there a problem with ingestion ?
 * Mmike grabs food
<redelmann> hi, is anyone having trouble with local deploys (KVM/LXC) and DHCP changes when using the machine with different DHCP servers? (home/work)
<redelmann> juju sets apiaddresses on all agents to the wlan ip
<lazyPower> redelmann|afk: i'm not sure what you're asking - what's happening?
<redelmann> lazyPower, after add a machine "juju add-machine"
<redelmann> lazyPower, for some reason inside added machine
<redelmann> lazyPower, in "/var/lib/agents/machine-n/agent.conf"
<redelmann> lazyPower, apiaddresses: - my.wlan.ip:17070
<redelmann> lazyPower, so agents are using my wlan0 ip that always changes if i connect to another network
<redelmann> lazyPower, after a reboot, if wlan0 ip changes all agents stop working.
<lazyPower> redelmann: did you alter the default lxc networking config?
<redelmann> lazyPower, no, everything is at defaults. fresh installation
<lazyPower> which version of Juju?
<redelmann> lazyPower, i'm using container: kvm in environment.yaml
<redelmann> 1.23
<lazyPower> dimitern: ping ^
<redelmann> lazyPower, i could see   STORAGE_ADDR: 192.168.122.1:8040, which is my virbr0 ip
<lazyPower> redelmann: i'm not sure why that's happening, but i've pinged a dev that's been working on networking support in juju, which may or may not be related
<lazyPower> and he might be able to help square you away. What I suggest is to file a bug w/ this behavior, and ping the list with it to get some eyes on it
<redelmann> ack
<dimitern> lazyPower, pong
<dimitern> redelmann, lazyPower, ack, will look a bit later
<lazyPower> thanks dimitern
<pmatulis> in the about-juju docs [1], what is meant by "reuse whatever they want from other teams"
<pmatulis> [1]: https://jujucharms.com/docs/stable/about-juju
<rick_h_> pmatulis: so let's say you're part of two teams and the application stack is mostly the same (apache, django, postgresql) but my team uses redis and your team uses memcache.
<rick_h_> pmatulis: we can work together, using charm interfaces to reuse the rest of the stack yet the two teams can easily disagree on the caching part of the stack
<rick_h_> e.g. if I find better apache config settings and update the charm, you can reuse that and gain the same benefits
<pmatulis> rick_h_: gotcha, so share some charms but not necessarily all
<rick_h_> pmatulis: exactly, and you can even have two different charms that implement the same interface and swap them out
<rick_h_> pmatulis: so let's say I want haproxy up front but you want to proxy through nginx. As long as we use the same proxy relation interface we can take the same app but proxy it differently just by connecting it to different services
<pmatulis> rick_h_: makes sense, thank you
<rick_h_> pmatulis: np
<kandar> hey
<lazyPower> hazmat: ping
<hazmat> lazyPower: pong
<lazyPower> ahoy, I just noticed a comment on the DO api v1 sunset issue over on our juju provider
<lazyPower> are you still working through that? if not, i'll advise using manual for the time being; v2 contributions appreciated.
<hazmat> lazyPower: if we allow for growing the dependencies, it's easy enough to knock out
<lazyPower> i don't think there's any reason to be stingy with the dependency tree at this time
<lazyPower> considering projects like this exist: https://github.com/koalalorenzo/python-digitalocean
<lazyPower> we could pretty much bind on that API lib, and do the rewrite in a weekend (that's engineering optimism at its finest, right?)
<lazyPower> hazmat: replied. Thanks mate
<hazmat> lazyPower: i'd guess a few hrs ;-)
<lazyPower> Yeah, but you're a wizard harry
<hazmat> lazyPower: i like this lib the most re the DO v2 api, but it's got some unfortunate ssh helpers that widen the dependency tree to include paramiko https://github.com/changhiskhan/poseidon
<lazyPower> ah yeah
<lazyPower> well, lets cross fingers someone in the community steps up to do the v2 migration in the next couple of weeks. It would be nice to see the consumers give back :)
<cholcombe> lazyPower: do you know how maas zones are supposed to work?
<cholcombe> i'm wondering what gets relayed back to my charm and what i'm supposed to do with it
<cholcombe> can amulet use an apt-proxy?
<lazyPower> cholcombe: sorry i was in my audio studio and didn't get your pings until just now
<cholcombe> no worries
<lazyPower> cholcombe: AIUI, MAAS zones are the same as regions, your charm wont see it, as thats abstracted away by the "provider" layer of juju
<cholcombe> oh..
<cholcombe> what if i want to see it? haha
<cholcombe> my charm needs to layout bricks across fault zones
<lazyPower> also, re: amulet using an apt-proxy - it will default to routing through the config of whichever machine you're using. If your charms demand an apt mirror it would be good to set up a squid apt proxy in your test env so they can zero-conf configure and utilize it if it's available.
<lazyPower> that would be something you leverage in the bundle
<lazyPower> thats a layer above the charms concerns
<cholcombe> hmm
<cholcombe> interesting alright
<lazyPower> bcsaller: am I correct in this statement? i'm starting to second guess myself ^
<cholcombe> i'm wondering how to do this now
<lazyPower> cholcombe: i'm 90% certain thats the case
<lazyPower> you use constraints to set the zone
<cholcombe> so with constraints in place is it the idea that juju will hand me units in the right order across zones?
<cholcombe> i'm missing something
<cholcombe> the case i'm thinking of is i have 3 racks of machines.  each rack is defined as a failure zone.  how does my charm get a list of units from those zones?
<cholcombe> i can't be the first one to have thought of this lol
<bcsaller> lazyPower: iirc you used to be able to juju set-env http(s)-proxy xxx or set-env apt-http(s)-proxy and apply those changes across the whole env, Is that what you mean?
<lazyPower> bcsaller: ah no, i was referring to zones
<lazyPower> but that's good to know about the proxies too, it's been a good while since i've tinkered with the proxy settings.
<lazyPower> cholcombe: you define 3 service groups, that deploy to 3 sep. zones as constraints
<lazyPower> eg: "gluster-group1" -- constraints="zone=rack1"     "gluster-group2" --constraints="zone=rack2"
<cholcombe> so effectively my charm has no idea what is going on
<lazyPower> so forth and so on
<cholcombe> all i need to do is layout across servers and i'm good
<cholcombe> alright that's good enough i think
<lazyPower> basically, so long as they can communicate with one another that should work as intended
<cholcombe> right
<lazyPower> you get a full service group, comprised of 3 clusters in different zones.
<cholcombe> and i just need to make sure i layout bricks across whatever i see
<lazyPower> and you don't even need to name them differently afaict, you can just juju deploy and/or add-unit with the constraint in place and it should "just work"
<lazyPower> but ymmv with "just work"
<cholcombe> right
<lazyPower> mostly because you're listening to me :P
<cholcombe> so slightly annoying from an admin perspective but easy from my charm's perspective
<lazyPower> dangerous situation to be in
<cholcombe> heh
<lazyPower> well, if you think about it like this
<lazyPower> the admin has to setup these service pools/zones in the first place
<cholcombe> yup
<lazyPower> how would the charm know what to do that is proper?
<cholcombe> definitely
<cholcombe> it wouldn't
<cholcombe> it doesn't have enough info to act
<lazyPower> i mean, just because you have 8 zones available to you, it doesn't make sense to propagate across all 8 zones
<cholcombe> right
<lazyPower> cholcombe: what kind of music do you listen to?
<cholcombe> electronic while i'm working.  heavy metal when i'm working out :)
<lazyPower> oh really?
<lazyPower> Snap
<cholcombe> why?
<lazyPower> I'll have something for you on dropbox in a couple minutes then
<cholcombe> haha
<lazyPower> Not going to publish this until tomorrow
<cholcombe> ok
<lazyPower> but i digress, would be awesome to get feedback
<cholcombe> sure thing
<cholcombe> so these constraints.  will i have separate clusters for each zone ?
<cholcombe> cause that could get messy
<lazyPower> that awkward moment when you realize you're in the public channel
 * lazyPower facepalms
<cholcombe> :D
<lazyPower> Nah, you can use the same service name
<lazyPower> just ensure when you deploy, and you set the constraints, that you use different zone names for the unit(s)
<cholcombe> i'm going to have to test this lol.
<lazyPower> might be a good idea to model it on the CLI and export from the gui so you get a sense of what it should look like as the final bundle form
<cholcombe> right
<lazyPower> but yeah, juju add-unit gluster --constraints="zone=foobar" should do the trick when you go to add unit(s)
<cholcombe> i know what it should look like from gluster's perspective.  i'm just trying to figure out what my charm ends up seeing
<cholcombe> lemme read the constraint page again
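The layout lazyPower sketches might be modelled as a bundle roughly like this (a hypothetical example: the charm URL, service names, and zone names are all made up, and the zones must match what the MAAS admin defined for the racks):

```yaml
gluster-zones:
  services:
    gluster-rack1:
      charm: cs:trusty/gluster      # hypothetical charm URL
      num_units: 3
      constraints: "zone=rack1"
    gluster-rack2:
      charm: cs:trusty/gluster
      num_units: 3
      constraints: "zone=rack2"
    gluster-rack3:
      charm: cs:trusty/gluster
      num_units: 3
      constraints: "zone=rack3"
```

Equivalently on the CLI: `juju add-unit gluster --constraints="zone=rack2"` for each zone.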
#juju 2015-06-16
<thomi> while doing 'juju bootstrap' on the local (lxc) env I get "ERROR there was an issue examining the environment: required environment variable not set for credentials attribute: User"
<thomi> Any hints as to what that means?
 * thumper thinks
<thumper> thomi: you aren't using lxc
<thumper> or local
<thomi> thumper: oh wait, I think yeah
<thumper> that error comes from the openstack provider
<thomi> thumper: sorry, thinko on my part
<thumper> np
<thomi> forgot I had the env var exported
<thomi> thanks
<lazyPower> thumper: good catch
<thumper> o/ lazyPower
<lazyPower> thumper: btw, i *will* get to your django MPs this week, sorry it's taken me so long to get to them
<thumper> lazyPower: review poke
<thumper> :_
<thumper> heh
<lazyPower> hah
<lazyPower> already on your wavelength mate
<thumper> lazyPower: once that one is in, I'll submit the celery one
<lazyPower> I have something you're going to want to take for a spin i think.
<thumper> I'm using it now
<lazyPower> http://github.com/chuckbutler/dns-charm
<lazyPower> i've been reviving this project from last year quite a bit
<lazyPower> huge feature branch is going to land later this week that includes RT53 as a provider
<thumper> lazyPower: interesting
<lazyPower> I like to think so
<lazyPower> I've got a long road ahead of me w/ the unit tests that are failing
<lazyPower> i think i've failed to encapsulate my contrib code from charmhelpers somewhere, it's failing on the cache file for config() in ci
<lazyPower> but that's future chuck's problem (by future i mean tomorrow)
<thumper> :)
<lazyPower> thumper: i'll hit your MP up first thing when i clock into the office tomorrow, that sound good? that gives you 3 days to refactor before I head out for SF if anything needs some touch ups
<lazyPower> then i'll be out until Thurs of next week
<thumper> lazyPower: sounds good.
<thumper> lazyPower: should be fine though
<lazyPower> aight, you're on the calendar
 * thumper crosses fingers
<lazyPower> It more than likely is :)
<lazyPower> i have faith in your ability to python
<thumper> nice
<thumper> lazyPower: I just remembered another fix that I should submit...
<thumper> lazyPower: although this one is all docs
<lazyPower> YOU WROTE DOCS?
<thumper> no
<lazyPower> oh
<thumper> the readme was stomped over
<thumper> between version 6 and 7 of the charm
<lazyPower> dude dont get me excited like that
<thumper> and they no longer reflect reality
<thumper> however...
<lazyPower> i dont think my heart could take it
<thumper> I am going to write docs
<thumper> around how to write a payload charm for it
<thumper> because that truly sucked
<thumper> messing around working that out
<lazyPower> i wouldn't doubt it
<lazyPower> payload charms are tricky to get write
<lazyPower> *right
<thumper> I did learn a lot though :)
<thumper> would be good to capture that
<thumper> in a way someone else can learn from it
<jam> wallyworld: hey, I just saw your cursor on the resources spec, there was a change to "disk-path" I mentioned.
<wallyworld> jam: i've been making lots of changes and also responding to rick's comments. i understand the default is needed but also think we need to not hard code it to allow deployers to say their resources go elsewhere, eg onto an ebs volume
<wallyworld> we can hard code it if that's the plan, but i think it is a bit limiting? what if the resource won't fit on the root disk?
<jam> wallyworld: can we fit those resources in gridfs?
<wallyworld> jam: i had thought we'd use a separate mongo db so we can shard etc
<wallyworld> but i guess it doesn't matter, we can just hard code
<jam> wallyworld: so I'd like to leave it in the "do we want to add this" pile
<jam> we can decide on it, but I'd rather start simple
<wallyworld> ok
<wallyworld> it's not that much to support, there's much harder stuff first up :-)
<wallyworld> also, isn't the default root disk size on aws 8GB?
<jam> wallyworld: I certainly agree it isn't hard, but it is complexity that we may never actually need.
<wallyworld> that's rather small
<jam> wallyworld: it is, but so is the size of the MongoDB that's running the environment.
<wallyworld> not if we use a separate  db for resources
<jam> wallyworld: where does that DB *live* ?
<wallyworld> well, fair point
<wallyworld> that would be a complication first up
<jam> wallyworld: I guess if we let you tweak the Juju API server to put the Resource cache onto a different disk
<wallyworld> something lke that. but as you say, we can start simple
<jam> wallyworld: so I'm happy to have it as a separate logical Database (like we do for presence and logging)
<wallyworld> yup
<jam> wallyworld: and especially for the large multiple environments having a way to go in and do some sort of surgery to handle scale will be good
<wallyworld> yeah, we always planned to use a separate logical db
<wallyworld> jam: so i'm off to soccer soon, i think i've answered most of rick's questions but i need time away from the doc as it's starting to blur into a mess of words. i'll revisit later and tweak some more. need to add sample output etc. there are still some points needing clarification. hopefully it's getting close
<jam> wallyworld: np, have a good night
<wallyworld> ty, be back after soccer
<Odd_Bloke> If someone could take a look at https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/update_charm-helpers/+merge/262072, it would be much appreciated.
<Odd_Bloke> I failed to add some of the new charmhelpers files, so the ubuntu-repository-cache charm is broken.
<Odd_Bloke> It's a very easy code review. :)
<lukasa> lazyPower: ping =)
<lazyPower> lukasa: pong
<lazyPower> o/
<lukasa> o/
<lukasa> Wanted to get your eyes on this quickly: https://github.com/whitmo/etcd-charm/pull/10
<lazyPower> so, as these etcd units are not raft peers they aren't part of the same cluster?
<lazyPower> just independent etcd nodes on each docker host?
<lukasa> lazyPower: Correct
<beisner> hi coreycb, a merge/review for you re: ceilometer amulet test updates:  https://code.launchpad.net/~1chb1n/charms/trusty/ceilometer/next-amulet-kilo/+merge/261850
<lazyPower> well, s/docker/service/
<lazyPower> ok
<lukasa> lazyPower: Eh, you say tomato...
<lazyPower> hehe, well the bug mentions calico openstack
<lazyPower> but i bet this is for both
<lukasa> =P Certainly on OpenStack we deploy etcd proxies everywhere for scale reasons more than anything else
<coreycb> beisner, ok I'll look later today probably
<lukasa> But also for homogeneity
<lukasa> (Fun word, glad I got to use it)
<lazyPower> ok, i'm good with this. would be excellent to see tests here too but i won't block on that
<lukasa> Well, do you want to hold off a sec?
<beisner> coreycb, ack thanks
<lazyPower> sure
<lukasa> I'm writing the Calico side of things, and I can quickly sanity check by actually running the damn thing
<lukasa> =D
<lazyPower> i'm +1 for that
<lazyPower> while i've got your attention
<lukasa> Awesome, so that'll get done today or tomorrow
<lazyPower> is the docker merge still blocked on CLA?
<lukasa> AFAIK, yes, but I'll double check
<lazyPower> ok let me reach out to my contact and poke them again
<lukasa> lazyPower: Fab, I'm checking on my end as well
<lukasa> lazyPower: Yup, as far as we know we're still waiting on the CLA stuff
<lazyPower> i unfortunately had presumed as much, i just poked my contact again. i think they're dragging their feet on a confirmation from management to sign it.
<lazyPower> i'll run the ropes on this and see if i can't get it expedited
<lazyPower> when you get some free time i'd like to work through whats there with you, i still haven't gotten a good test from it yet, but thats more than likely pebkac
<lukasa> Hopefully I'll be sitting on a little bit of time this week, assuming this etcd charm change goes off without a hitch
<lazyPower> right on
<lazyPower> I'm the lone ranger left on my team prepping for dockercon, so our roles have been reversed this week
<lazyPower> but after the conf i should have some time
<lukasa> =D Nice
<lukasa> Our docker folks are all heads down atm, so I'm manning the fort on the charms side
<lukasa> lazyPower: Still about?
<lazyPower> surely
<lazyPower> whats up
<lukasa> The install hook of the etcd charm assumes that easy_install will be present
<lukasa> But it's not present on an Ubuntu cloud image as far as I know...
<lukasa> So installing the charm explodes =P
<lazyPower> easy_install is shipped in cloud images on CPP clouds
<lazyPower> where are you running these tests?
<lukasa> On a MAAS box
<lazyPower> hmm
<lazyPower> thats bizarre, ok.
<lukasa> Well, it's not necessarily the most up to date MAAS in the world
<lazyPower> i guess we can throw down a quick block of code to install easy_install.
<lukasa> It's easy enough to fix, just need to manually intervene
<lukasa> Well, you could
<lazyPower> but easy_install has been present on everything i've tested on
<lukasa> Or you could just skip the middle-man and use get-pip.py directly to install pip ;)
<lukasa> Which has the advantage of doing it over a secure connection, unlike easy_install
<lazyPower> ah, i'm not a fan of doing the wget | bash method
<lukasa> Oh sure, I mean literally bundle get-pip.py
<lukasa> Just a single file =)
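To make lukasa's suggestion concrete, here is a hedged sketch of what the install hook's bootstrap decision could look like, preferring an already-present pip and falling back to a get-pip.py bundled with the charm. The `files/get-pip.py` path is an assumption for illustration, not the actual etcd charm layout, and this version only reports which path it would take so it is safe to dry-run:

```shell
#!/bin/bash
# Sketch only: decide how to get pip onto a unit that may or may not ship
# easy_install. files/get-pip.py is a hypothetical bundled location.
if command -v pip >/dev/null 2>&1; then
    echo "pip already present: $(command -v pip)"
elif [ -f files/get-pip.py ]; then
    echo "would run: python files/get-pip.py   # bundled single file, no insecure download at deploy time"
else
    echo "would run: easy_install pip          # only works on images that ship easy_install"
fi
```

Bundling get-pip.py sidesteps both the missing easy_install and the insecure-transport concern, at the cost of carrying one extra file in the charm.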
<lazyPower> this all stems from our pip package in archive being busted
<lazyPower> install requests and the world blows up
<lazyPower> stupid python dependencies :|
 * lazyPower rages silently against a problem thats been cropping up more and more
<lukasa> =P This is where I put my hand up as a requests core developer
<lukasa> So this is a little bit my fault
 * lazyPower instantly un-rages and apologizes
<lukasa> =D
<lukasa> It's totally ok
<lukasa> The situation is a mess
<lazyPower> it really is
<lukasa> But HP are paying dstufft full-time to fix it
<lazyPower> system dependencies not being in a venv make this tricky
<lukasa> Yup
<lukasa> Presumably the charm could have a virtualenv, though...?
<lazyPower> thats tricky. we have venv being prepped in our docker charm - but we haven't really leveraged it
<lazyPower> i'm not sure what issues will crop up going that route - but i'm game for trying it out
<lazyPower> that's a hefty feature branch however, as it affects the entirety of the charm
<lukasa> Yeah, I wouldn't do it now
<lazyPower> lets file a bug and explore that at a later date
<lukasa> For now I can just do a juju add-machine and hop on and install easy_install
<lukasa> Then deploy the charm to it directly
<lazyPower> ok, sorry about the inconvenience, but good to know if we have a substrate thats not shipping with batteries
<lukasa> =P It's a pretty minor inconvenience
<lukasa> I think I also have a too-old Juju, so I'm updating that as well while I'm here
<lazyPower> but *handsigns* magic
<lazyPower> be aware that 1.23.x has an issue when destroying the env: it pulls the socket out from underneath you
<lukasa> What's the net effect of that?
<lazyPower> things like bundletester have random bouts of errors when running multiple test cases
<lazyPower> client connections are terminated and you get a stacktrace while destroying an env, but the env *does* get destroyed.
<lazyPower> http://juju-ci.vapour.ws:8080/job/charm-bundle-test-aws/173/console
<lazyPower> is a good example of the output you'll see
<lazyPower> the "reset" bits that loop for ~ 30 lines
<lukasa> Eh, I'm not scared of stacktraces
<lukasa> Oh, btw, we're dropping a new 'feature' that should make docker demos a bit nicer, which we may want to incorporate into the charm
<lazyPower> oh?
<lukasa> But basically, on cloud environments we can set up ip-in-ip tunnels between hosts and run the Calico traffic through them
<lukasa> This means you don't need a cloud that gives you a proper fabric
<lazyPower> nice :)
<lazyPower> when is that expected to land?
<lazyPower> i have a work item this week to get SDN in our bundle we're using @ the conf
<lukasa> It's already in the latest release of Calico, I think the next calico-docker release will contain it
<lukasa> Which I'd expect...today, I think?
<lazyPower> oh nice
<lazyPower> i'll def. tail the repo and when it lands give it a go
<lukasa> We don't plan to call that a productised feature because customers won't deploy Calico in that kind of fabric
<lazyPower> right
<lukasa> But it's useful for demos and trying it out on clouds
<lazyPower> +1 to that
<lukasa> Also, setting up those tunnels involves typing a series of *super* cryptic 'ip' commands into Linux, so charms are perfect for it. ;)
<lazyPower> juju power activate!
<lazyPower> calico will form the network
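For the curious, the "*super* cryptic 'ip' commands" lukasa mentions look roughly like the following ip-in-ip tunnel setup between two cloud hosts. This is an illustrative sketch only, with made-up addresses; the real Calico tooling does considerably more. Applying the rules needs root, so `DRY_RUN=1` (the default here) just prints the commands:

```shell
#!/bin/sh
# Illustrative ipip tunnel between two hosts on a cloud that doesn't give
# you a proper fabric. Addresses and subnets are assumptions.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

LOCAL_IP=10.0.0.1           # this host's address on the cloud network
REMOTE_IP=10.0.0.2          # the peer host
PEER_SUBNET=192.168.2.0/24  # workload subnet behind the peer

run ip tunnel add tunl1 mode ipip local "$LOCAL_IP" remote "$REMOTE_IP"
run ip link set tunl1 up
run ip route add "$PEER_SUBNET" dev tunl1
```

Exactly the kind of repetitive per-host plumbing a charm hook is well suited to automate.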
<MrOJ> Hello to everybody. Is this the place where I can share my troubles with Juju? =)
<gnuoy> jamespage, odl-controller mp https://code.launchpad.net/~sdn-charmers/charms/trusty/odl-controller/odl-cmds/+merge/262095 (no great rush)
<MrOJ> Most of all I have question about juju agents. Is there a way to restore or regenerate agents apipasswords?
<lazyPower> MrOJ: agent configurations are all listed /var/lib/juju
<lazyPower> let me get a direct path for you 1 m
<lazyPower> MrOJ: so assuming your charm name is 'test'
<lazyPower> the agent config path is /var/lib/juju/agents/unit-test-#/agent.conf
<lazyPower> the .conf file is a yaml formatted key/value store of all the data required to communicate w/ the state server. You can update all the values in there if required, including repointing to a new state server, updating the api password, etc.
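Roughly what such an agent.conf looks like: an abbreviated, illustrative sketch only. Key names and exact contents vary by Juju version, and the passwords are per-agent secrets generated at deploy time, not values you can guess:

```yaml
# /var/lib/juju/agents/unit-test-0/agent.conf (abbreviated sketch; illustrative values)
tag: unit-test-0
upgradedToVersion: 1.23.3
stateaddresses:
  - 10.0.0.5:37017
apiaddresses:
  - 10.0.0.5:17070
statepassword: <per-agent secret>
apipassword: <per-agent secret>
cacert: |
  -----BEGIN CERTIFICATE-----
  ...
```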
<MrOJ> Yes I know that. It's a long story but right now I don't have that directory in my system
<MrOJ> Sorry for my english. I'm from Finland and it's not my main language
<lazyPower> no worries MrOJ
<MrOJ> I think that somehow Bug #1464304 might have made this situation
<mup> Bug #1464304: Sending a SIGABRT to jujud process causes jujud to uninstall (wiping /var/lib/juju) <cts> <sts> <juju-core:Triaged> <https://launchpad.net/bugs/1464304>
<lazyPower> yikes!
<MrOJ> I've managed to manually restore agents.conf and all other files in /var/lib/juju and jujud start scripts in /etc/init.
<MrOJ> But if I start jujud-machine-xx it removes /var/lib/juju again in that node.
<MrOJ> In /var/log/juju/machine-xx.log is mention about "invalid entity name or password" and after that "fatal "api": agent should be terminated "
<lazyPower> natefinch: ping
<natefinch> lazyPower: sup
<lazyPower> have you seen behavior like this? is this due to a stale status left over in the state server terminating the restoration of the unit agent? i'm a bit out of my depth here
<lazyPower> natefinch: MrOJ ran into a pretty hefty bug that's terminating a unit out from underneath him
<natefinch> lazyPower: reading history
<natefinch> MrOJ, lazyPower:  ouch, that's a gnarly one.
<MrOJ> Version is 1.23.3
<natefinch> MrOJ: what provider are you using?  (like, amazon, maas, openstack, etc)?
<MrOJ> It's maas
<natefinch> MrOJ: Do you need to keep that machine running, or can you just destroy it and recreate it?
<MrOJ> I need to have it running because it's in production.
<MrOJ> I have a small Openstack cloud running in our company and the machine is part of it.
<MrOJ> Openstack deployment itself is ok
<natefinch> MrOJ: tricky. I'm talking to some of the other devs to see if we have a way to get that machine back in working order.
<MrOJ> I've learned the basics of mongodb and have recovered most of the data straight from there, but I can't figure out how I can restore the apipassword
<MrOJ> natefinch: Thank you
<natefinch> MrOJ: Still doing some tests to try to figure out the best way to get you recovered.
<MrOJ> natefinch: Thanks again!
<perrito666> hey MrOJ :)
<perrito666> let me recap here, the files in /var/lib/juju were lost and you rebuilt them, right?
<perrito666> and all seems ok excepting for the api password
<natefinch> MrOJ: I have to run for a bit, so I'm handing you off to the very capable perrito666.
<MrOJ> perrito666: Yes that's right. I forgot to mention statepassword too..
<perrito666> MrOJ: currently the status for said service says something?
<MrOJ> perrito666: juju status says "agent-state: down"
<perrito666> is it the only unit for that service?
<MrOJ> perrito666: no, but they all say the same
<perrito666> oh, so you have multiple machines/containers in that shape?
<MrOJ> yes that is the situation..
<perrito666> ah, sorry, I had missed that part
<MrOJ> it's ok..
<MrOJ> Actually all my machines are in that situation..
<MrOJ> Except state servers
<MrOJ> I had to restore HA state servers and at the same time I had a dns problem in MAAS.. I didn't know that then. Because of this I bumped into the bug I mentioned earlier
<MrOJ> At least, I think this is what happened..
<perrito666> MrOJ: I am sure we can rescue this, but I am thinking which is the best way: either nuke all the passwords and replace them with one that we can use, or something like that
<perrito666> brb, lunch.
<MrOJ> perrito666: I can restore each unit one by one.. We have only about 50 units so it's not such a big job..
<MrOJ> perrito666: Ok. Take your time and have a great lunch =)
<cholcombe> can i setup a new deployment in each test_case for amulet?  Is that advisable ?
<lazyPower> cholcombe: Typically when the entire topology is undergoing a rapid change it warrants a new test file, as the deployment map is defined in __init__()
<lazyPower> but i'm open to seeing a different pattern emerge :)
<cholcombe> ok interesting
<lazyPower> cholcombe: if you're only adding a unit to the topology, it should be fine to just self.deploy.add_unit() or add a new service. start w/ bare bones and iterate through the test file
<lazyPower> it'll cut down the overall test-run time which is a good thing, right now the integration tests are very slow
<lazyPower> so its kind of dependent on what you're doing
<cholcombe> well the issue is gluster has like 10 different volume types and i want to test each one
<lazyPower> is this something that needs to be defined at deploy time?
<lazyPower> or can you reconfigure the charm w/ the different volume type
<cholcombe> i've been setting the volume type in the charm config and then running deploy
<lazyPower> meaning once its stood up and running, is it possible to reconfigure the charm for that volume type
<lazyPower> or do you *have* to redeploy to gain that volume type
<lazyPower> i'm thinking this is like ceph, that your volumes are defined at deploy, and as its storage you're locked into that volume type for the duration
<cholcombe> pretty much yeah
<perrito666> MrOJ: oh, didnt catch that and had a medium to bad lunch :p
<cholcombe> you set it before you run it and you're locked in
<lazyPower> Yeah, you'll need to do a different permutation of the charm then, which would warrant a new test - as afaik there's no way to destroy a service in amulet to date
<MrOJ> perrito666: I know the feeling =)
<skay> thanks for python-django work
<skay> I haven't been able to look at it in a while, but definitely appreciate the work in the meanwhile
<skay> delurking to give props
<beisner> cholcombe, lazyPower - you might be interested in related in-flight work on the ceph amulet tests...
<cholcombe> oh?
<beisner> the pivot point is different (ubuntu:openstack release) with the same topology
<beisner> WIPs @ https://code.launchpad.net/~1chb1n/charms/trusty/ceph/next-amulet-update/+merge/262016
<beisner> & https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates
<beisner> so that exercises the same ceph topology against precise-icehouse through vivid-kilo
 * cholcombe checking
<cholcombe> yeah that's similar to what i need to do
<beisner> actively working to update and enable kilo and predictive liberty prep
<beisner> ^ on all os-charms that is.
<lazyPower> beisner: wow thats a huge diff
<lazyPower> the tl;dr is you get bundle permutations mid-flight with this?
<cholcombe> yeah really
<beisner> yeah, some refactoring for shared usage by cinder and glance when i get there
<lazyPower> hmm
<lazyPower> you should blog about this :)
<lazyPower> so i can read the blog instead of the diff
<lazyPower> <3
<cholcombe> lol
<beisner> ha!
<beisner> how about a pastebin of the test output?  ;-)   trusty-icehouse:  http://paste.ubuntu.com/11726195/
<beisner> oh i guess that paste includes precise-icehouse too.   just got the kilo stuff working, but no osci results yet.
<lazyPower> beisner: it has no pictures
<lazyPower> i need pictures, and a story to go with it
<beisner> i know i know, needs shine  ;-)
<lazyPower> :D
<lazyPower> so, i'll put you down as writing a blog post next week on this? excellent
<lazyPower> jcastro: ^
<lazyPower> you saw it here first, beisner agreed to blog about his awesome osci permutations code
<beisner> actually, that is on my list o things to do, lazyPower
<lazyPower> I'm being deliberately obtuse to rally support for your cause
<lazyPower> in the form of giving you work items
<lazyPower> that are totally awesome, and i can tweet about
<lazyPower> where do you blog currently beisner?
<lazyPower> i'd like to add you to my feed
<beisner> lazyPower, http://www.beisner.com - it's been mostly idle though as i've been mostly throttled
<lazyPower> ack, thanks for the link
<jcastro> hey jose
<jose> ohai
<jcastro> hey so office hours in 2 days iirc?
<jose> jcastro: yes, want me to host? if so, can we move it up by 2h?
<jcastro> I would like to confirm the time so marcoceppi doesn't make fun of me
<jcastro> I can host, can you resend me the creds just in case though?
<natefinch> MrOJ: looks like we're handing you back to me.  Do you have log files you could share?  all-machines.log on one of the state servers might contain some useful information for figuring out what went wrong.
<jose> jcastro: sure, will do right now, along with some instructions
<jcastro> excellent
<jcastro> my calendar has it for 8pm UTC, is that what you have?
<jose> jcastro: sorry, was having lunch. I do have it at 20 UTC. looks like we're good
<natefinch> MrOJ, ericsnow: we should talk here.  MrOJ, ericsnow is one of the developers that worked on the backup code (along with perrito666).
<ericsnow> natefinch, MrOJ: note that I'm not nearly as familiar with the restore side of things, but I'll help as much as I can
<natefinch> MrOJ: do you have a machine log from one of the machines that killed its state server?
<MrOJ> natefinch, ericsnow: I think I have. just a moment
<perrito666> I am back
<MrOJ> natefinch: Yes I have the log, but the file size is almost 20M
<natefinch> MrOJ: how big is it if you compress it?  It should compress a *lot*
<MrOJ> natefinch: I'll check
<MrOJ> natefinch: Ok.. Now I have log from machine-0 and machine-2.
<MrOJ> natefinch: those files are about 1M compressed
<MrOJ> natefinch: Can I email those to you or somebody else?
<natefinch> MrOJ: email to nate.finch@canonical.com please and thank you
<MrOJ> natefinch: ok.. I'll send those from my work email -> timo.ojala@kl-varaosat.fi
<thumper> lazyPower: cheers for the python-django review
<natefinch> MrOJ: btw, you said you were doing a restore while having DNS issues... why were you doing the restore in the first place?
<lazyPower> thumper: happy to help :)
#juju 2015-06-17
<dimitern> charmers o/
<dimitern> anyone around?
<dimitern> I'm thinking of creating a simple subordinate charm that has actions returning various networking settings - iptables, routes, etc.
<dimitern> So, can a subordinate have actions of its own?
<lazyPower> dimitern: it can
<dimitern> lazyPower, yeah, I've even found a good example with the chaos monkey
<dimitern> :)
<lazyPower> +1 for that
<dimitern> lazyPower, hey, still around?
<lazyPower> o/ dimitern
<dimitern> lazyPower, I have a couple of actions questions if you have some time
<lazyPower> surely
<dimitern> lazyPower, so is there an accepted way to implement an action like "run something and get me the output" ?
<dimitern> lazyPower, like, should the output be a param? or it could be dumped (without extra YAML) by action do / fetch
<lazyPower> i dont think the output necessarily needs to be a param unless you need dynamic output data. action-set will take care of giving you the data points you care about.
<lazyPower> a really good overview of this can be seen by the benchmark actions marco's team implemented. let me fish up one, 1 sec
<dimitern> ah, that'll definitely help
<lazyPower> https://github.com/juju-solutions/siege
<lazyPower> notables: siege is parameterized, the output is parsed from json and stashed in the action dictionary for the run
<lazyPower> that should get you a good start w/ how to implement a well-defined action, and have meaningful results available to consume outside of your env.
<dimitern> thanks a lot! I'll look at that in detail
<marcoceppi> dimitern: you should never just dump yaml
<marcoceppi> dimitern: use action-set instead to relay data back
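To make marcoceppi's advice concrete, here is a hedged sketch of an action script for dimitern's networking subordinate: run a command and relay the results via action-set instead of dumping raw YAML. The action name, keys, and file layout are made up for illustration; a real charm would also need a matching actions.yaml entry. Outside a hook context action-set is not on PATH, so the sketch shadows it with echo to stay dry-runnable:

```shell
#!/bin/bash
# actions/list-routes -- illustrative only, not an existing charm's action.
if ! command -v action-set >/dev/null 2>&1; then
    action-set() { echo "action-set $*"; }   # dry-run fallback outside a hook
fi

routes=$(ip route show 2>/dev/null || echo "no routes")
# Dotted keys become nested maps under "results:" in `juju action fetch`.
action-set routes.raw="$routes"
action-set routes.count="$(printf '%s\n' "$routes" | wc -l | tr -d ' ')"
```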
<lazyPower> morning marcoceppi
<dimitern> marcoceppi, but action-set returns YAML (or JSON I guess) - is there a way to dump a given param as-is?
<lazyPower> action-set returns something? thats news...
<dimitern> lazyPower, sorry, I meant juju action fetch <ID>
<lazyPower> dimitern: did you mean juju action fetch?
<lazyPower> ok
<lazyPower> :)
<dimitern> :)
<lazyPower> i was like "uh oh, what changed"
<lazyPower> yeah, no the fetch aiui returns the full data  feed, you cannot narrow it down to specific keys without parsing it
<lazyPower> maybe in a future revision?
<dimitern> right, well if it does - it does, I'll parse it then :)
<lazyPower> some fancy jq combined with --format=json
<dimitern> yeah, it'll help to have "juju action fetch <ID> [param], returning only that part of the results
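A sketch of the jq idea: ask the CLI for JSON and filter down to one key. The JSON literal here stands in for `juju action fetch <ID> --format=json` output, and the field names under results are illustrative assumptions, not a documented schema:

```shell
#!/bin/bash
# Narrow action results to a single key with jq. Skip gracefully if jq
# isn't installed.
command -v jq >/dev/null 2>&1 || { echo "jq not installed"; exit 0; }

# Stand-in for: fetch_output=$(juju action fetch <ID> --format=json)
fetch_output='{"status":"completed","results":{"outfile":"/tmp/foo.tar.bz2","size":"42M"}}'
printf '%s' "$fetch_output" | jq -r '.results.outfile'
```

The printed path can then be fed straight to juju scp to retrieve the file from the unit.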
<dimitern> lazyPower, so no concept of "output params", e.g. in that example doing a db backup and taking outfile=foo.tar.bz2
<dimitern> this won't actually mean foo.tar.bz2 will be created in the $PWD where you run juju action fetch
<lazyPower> i would suggest you provide the full path to foo.tar.bz2, that way you can parse that key and pipe it to juju scp
<dimitern> right
<lazyPower> But i dont think its possible to set file contents like that and have it automagically fetched. its a 2 step process
<lazyPower> run proc, get data, action on data.
<dimitern> yeah, unless fetch provides the sync-point where the foo.tar.bz2 gets created when fetch is called in a certain way
<dimitern> lazyPower, the tip about jq is awesome though!
<dimitern> thanks :)
<lazyPower> np happy to help dimitern :)
<lazyPower> marcoceppi: silly question, i know how to trigger relations on another host out of band, eg: mysql:17 - but if i'm same host, and in a different relationship context - are there any amenities to help me kick off a *-relation-changed event aside from caching relation data thats missing, and stuffing that in a direct execution to the hook?
<marcoceppi> lazyPower: you mean, kick off a relationship on the same unit that you're currently running a hook?
<lazyPower> yeah. I'm in a position to proxy data between hook contexts
<marcoceppi> lazyPower: like, service foo has  a bar and baz relation, kick off bar from the baz relation?
<lazyPower> yep
<marcoceppi> not really, you just need to abstract away the underlying code for bar so that it's invokable from baz, like a python method
<lazyPower> when i relation_set(baz, mydata) - it triggers execution on the remote host, not the same host.
<marcoceppi> right
<lazyPower> ok, i figured that would be the case
 * lazyPower snaps
<lazyPower> fg
<marcoceppi> bg
<marlinc> Is it possible to take advantage of OpenStack security groups when using Juju to deploy services?
<jrwren> marlinc: to what end?
<marlinc> Well I would like to set up security groups to firewall off some machine parts
<marlinc> Like giving a webserver only access to the ports that NFS uses for storage
<marlinc> And making it only possible to, for example, access the webserver on port 80
<jrwren> marlinc: i cannot answer. sorry.
<marlinc> No problem
#juju 2015-06-18
<bilal> Is there a way to use juju API or some file in juju node where I can get the IPs of all machines that are being used in the whole openstack deployment?
<stub> marlinc: Juju lets you open ports and expose them to outside of the environment per service, which will use OpenStack security groups. Inside the environment however, you need to do your own firewalling. If you are writing your charms in Python, charmhelpers.contrib.ufw can help here.
<stub> charmhelpers.contrib.network.ufw sorry.
<stub> marlinc: I believe features are being worked on that will allow better control inside the environment, but I don't know what release is targeted.
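stub's suggestion boils down to driving ufw from hooks; charmhelpers.contrib.network.ufw wraps rules like these in Python. Sketched here as the raw ufw commands for marlinc's webserver/NFS example, with illustrative addresses and ports. Applying them needs root and ufw on the unit, so when ufw is absent the sketch shadows it with echo and just prints the rules:

```shell
#!/bin/bash
# Illustrative intra-environment firewalling for a webserver/NFS pair.
if ! command -v ufw >/dev/null 2>&1 || [ "$(id -u)" != 0 ]; then
    ufw() { echo "ufw $*"; }   # dry-run fallback
fi

WEBSERVER_IP=10.0.0.7   # assumption: the webserver unit's address

ufw allow proto tcp from any to any port 80                # exposed web traffic
ufw allow proto tcp from "$WEBSERVER_IP" to any port 2049  # NFS only from the webserver
ufw deny 2049                                              # block NFS from everyone else
ufw --force enable
```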
<ezobn> Hi all, are the juju charms ready for deploying DVR in OpenStack Kilo? Can I use gre with them? I use the /juju-proposed repo for juju...
<Mmike> Hi, lads. I upgraded juju on my control node to 1.24 (from 1.23). Now I'd like to upgrade agent across all the deployed services, but one by one. Is that possible? I see that juju upgrade-juju doesn't accept service name....
 * Zetas waves to everyone
<frankban> cory_fu: hi and thanks for the review on the redis charm. I updated it following your suggestions, could you please take another look when you have time?
<cory_fu> frankban: Absolutely.  I have RQ time slotted for today, so I will follow up on it in the next few hours.
<frankban> cory_fu: thanks!
<cory_fu> np
<mbruzek> jose: Can you help me with an on-air charmers meeting?
<mbruzek> jcastro: ping!
<mbruzek> jcastro ^^
<jcastro> mbruzek: yo
<mbruzek> jcastro: I need an on-air meeting
<jcastro> anyone can make one
<jcastro> https://plus.google.com/hangouts
<jcastro> there's a "start a hangout on air" button
<jcastro> mbruzek: is this a one off or a regular meeting?
<mbruzek> jcastro: we got it sorted.
<mbruzek> jcastro marcoceppi set up the meeting
 * jcastro nods
<marcoceppi> https://github.com/juju-solutions/review-queue
<mbruzek> The Review Queue repository, let us know if you have issues
<mbruzek> ^^
<ezobn> Hi all, do the latest kilo OpenStack charms support gre for dvr mode? I use 1.24 juju-core with the latest available charms...
<marcoceppi> https://insights.ubuntu.com/2015/04/15/using-the-services-framework-to-implement-your-charms-intent/
<jamespage> lukasa, bird update for 14.04 just got released - is that good enough to avoid using the PPA?
<lukasa> jamespage: *Possibly*, I promise nothing. ;)
<lukasa> I'll check tomorrow
<cholcombe> can i disable the sudo apt-get update everytime i run amulet?
<cholcombe> or bundletester i mean
<cholcombe> lazyPower: is there a limit on how long bundletester tests can take overall?  I have 9 tests i think at this point
<lazyPower> cholcombe: afaik just the standup side of the test.
<cholcombe> ok
<lazyPower> i think the tests themselves can run for an indeterminate amount of time
<lazyPower> if that's not the case, we should probably get a bug filed against bundletester to make that tuneable
<lazyPower> thumper: https://github.com/chuckbutler/DNS-Charm/releases/tag/v0.2.3
<ezobn> does anybody know dhcp_agents in Kilo release of charms still installed by quantum ? or can like dvr installed by neutron-api/neutron-openvswitch charms ?
<lazyPower> ezobn: i'm not sure what you're asking - can you clarify?
<ezobn> lazyPower, I am trying to install Kilo Openstack using dvr option. So wondering do I still need the quantum charm ?
<lazyPower> ah, i'm not going to be much help i'm sorry. I would suggest you ping the mailing list with that question. Our OpenStack charming team is primarily UK based - and active on the ML as it helps them respond during their TZ operating hours.
<ezobn> lazyPower, NP, just continue experimenting ;-) it is the only way to make sure ... :-)
<bdx> hello, how is everyone??
<bdx> I am trying to "juju deploy <charm> --to lxc:<machine id>" and am having no luck getting ip addresses allocated to my deployed containers
<bdx> see here: https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/627
<bdx> I am using non-maas dhcp/dns
<bdx> everything works minus deploys to containers
<bdx> could someone share some insight as to what extra steps may need be taken when deploying to containers using non-maas dhcp/dns???
<bdx> anything would be useful
<bdx> thanks
<bdx> charmers, openstack-charmers: https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/issues/627
<bdx> https://bugs.launchpad.net/juju-core/+bug/1466629
<mup> Bug #1466629: Containers fail to get ip when non-maas dhcp/dns is used <dhcp> <dns> <lxc> <maas> <openstack-installer> <openstack-provider> <ubuntu-engineering> <ubuntu-openstack> <juju-core:New> <https://launchpad.net/bugs/1466629>
<mbruzek> jcastro: Juju Office Hours are where?
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYdV8bIUC8y55qgI8K7oE749yUCVWW5CqvYM5fe32jcnc9G6qg?authuser=2&hl=en
<jcastro> in here folks!
<jcastro> marcoceppi: mbruzek ^^^
<jcastro> lazyPower: ^
<marcoceppi> thumper: https://design.ubuntu.com/wp-content/uploads/logo-ubuntu_cof-orange-hex.png
<jcastro> if anyone has any questions for the team, we're streaming on http://ubuntuonair.com
<jcastro> https://jujucharms.com/docs/devel/reference-release-notes
#juju 2015-06-19
<lukasa> lazyPower: Ping =)
<dweaver> Having a problem with LXC containers in Juju 1.23.3 on trusty with a vivid kernel (3.19.0-21-generic).  We ask Juju to deploy some LXC containers.  The containers start, but get no IP address.  Anyone want to give us any pointers on how to debug further?
<dweaver> Seems to be a problem with DHCP responses not getting through to the container network namespace.
<lazyPower> lukasa: pong
<lukasa> lazyPower: Mind if I PM?
<lazyPower> no go right on ahead
<frankban> cory_fu: thanks for your review! what do I need to do for the promulgation at this point? just wait?
<cory_fu> We need a full charmer (which I am not, quite yet) to do the promulgation.  It would also be good to make sure marcoceppi has reviewed it, since he is the maintainer of the charms it will be supplementing / replacing.
 * rick_h_ sends bribes of car parts to marcoceppi 
<marcoceppi> frankban: rick_h_ it seems fine to me, I recently had Redis Labs look over it, and had a hard time explaining its structure. I worry it may be slightly more complex a layout for a relatively simple execution
<marcoceppi> but that's not really a blocker, I'll take a pass at it and a few others later today
<frankban> marcoceppi: thanks! what did you find complex about the charm?
<rick_h_> marcoceppi: all good, ty. We're trying to help bring up charms we use in prod and get to where we're helping maintain more.
<marcoceppi> frankban: I had a hard time explaining the logic tree from the services framework, but this is probably ultimately a fail on my part of lack of general understanding of the framework
<rick_h_> marcoceppi: :( we thought this was the cool new way to write charms these days
<rick_h_> we try to keep up :P
<marcoceppi> Don't take my lack of familiarity with a framework as it not being the cool way, I'm not too hip these days ;)
<cory_fu> The framework has its upsides, mainly when dealing with multiple dependencies, but it's acknowledged that it does make the simple case harder to follow
<khuss> i'm using Juju charms to install openstack. I also have to make some additional changes in the nova.conf file. However changes seem to be overwritten when I reboot the machine.
<khuss> How do I make changes in the configuration files so that they are not overwritten by Juju
<marcoceppi> khuss: you need to embody these changes in either the nova charm, or as a subordinate. The OpenStack charms own those configuration files, so you really can't (and shouldn't) make changes out of band of Juju
<marcoceppi> khuss: out of curiosity, what are you trying to change in nova.conf?
<khuss> marcoceppi: changes in the network_api_class. metadata agent configuration etc
<khuss> marcoceppi: if I use a subordinate charm, do I edit the file directly or use some helper functions? Exactly how does Juju determine if files need to be overwritten?
<khuss> marcoceppi: we also have changes in cinder.conf and neturon.conf
<jrwren> khuss: any juju hook might rewrite the file. config-changed and relation add/remove being most likely.
<khuss> jrwren: then there has to be a dependency to make sure that the last charm did the right configuration?
<jrwren> khuss: I am not sure what you mean.
<jrwren> khuss: typically a charm owns a config file.
<marcoceppi> khuss: juju doesn't manage the config files
<marcoceppi> the openstack charms do
<marcoceppi> they just happen to own those files, each charm is different. You either need to modify the openstack charms to do what you want, or in some cases, like cinder and nova, you can build subordinate charms which can communicate with the parent charm (nova, cinder, etc) on what values should be in the configuration file
<khuss> marcoceppi: yes.. i understand. The nova-compute charm edits the nova.conf file. now If I want to add my changes, I can probably create a subordinate charm which will edit the same config file
<jrwren> khuss: There are some facilities in some charms which can help you go off the tested path. e.g. the nova-compute charm has a nova-config setting. You could use that to have the charm write to a different config file and then merge with your needs yourself, but you are on your own :)
<marcoceppi> khuss: no, the subordinate editing the file won't work. The openstack charms have special relations which allow you to convey a context describing the changes it should include when building the configuration file
<khuss> marcoceppi: not sure how the subordinate charms communicate the information with the parent charm.. For example, I want to add "security_group_api=nova" in the nova.conf file
<khuss> marcoceppi: how would subordinate charm communicate this with parent charm
<jrwren> khuss: it wouldn't. the parent charm would need to be updated to support this.
<khuss> you mean to say we need to have a custom version of nova-compute to add some changes in the nova.conf file?
<jrwren> khuss: I think so, but I am not sure.
<marcoceppi> okay
<marcoceppi> khuss: the openstack charms, a lot of them, have been created with a special relationship that allows you to send data, over the relation wire, to tell it how to write some portions of the configuration
<marcoceppi> khuss: this is something that's unique to the openstack charms, it's how vendors can create "charms" for cinder without having to fork the cinder charm over and over again, etc
<marcoceppi> I'm not sure it works as well with the nova charm, but cinder definitely has this concept, though it's more geared for cinder backends, the logic is there
<marcoceppi> Depending on what the changes are, and how many you have to make, will ultimately drive if this should be a subordinate or a fork
<marcoceppi> if it's one or two changes in nova.conf for general/generic options, having it as a configuration option on the charm is probably the best way to go, if it's a lot of stuff, or for a custom compute plugin, a subordinate is probably the way to go
<khuss> marcoceppi: apart from the looking into the code, is there any other way to see what relationships are supported and what type of data can be sent?
<marcoceppi> khuss: OpenStack charms use this notion of "contexts", which are Python classes that allow you to define the file you want to change, the section of that file (if supported), and then the keys/values to set. You then can send that "context" over the relation
<marcoceppi> there's not much documentation around this, and only a few charms (cinder definitely, neutron to an extent, and nova I think as well) support it so far
<marcoceppi> it's best to find a charm that closely resembles what you're trying to do, there are a few cinder charms, and some neutron charms that touch both their services and nova-compute
<khuss> marcoceppi: if you have some examples of "sending context over the relation" it would be great
<marcoceppi> khuss: this is one that I'm familiar with: https://jujucharms.com/u/marcoceppi/cinder-vnx/trusty/4
<marcoceppi> but it's a storage backend for cinder, so it sends a slightly different context. An openstack-charmer could probably give you a better example, mailing juju@lists.ubuntu.com would be a good start to that conversation if none are listening right now :)
<khuss> marcoceppi: ok  thanks I will take a look at this charm
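The "context over the relation" idea marcoceppi describes can be sketched roughly as below. This is a minimal illustration only, not the actual OpenStack interface: the relation key name, helper names, and file path are all assumptions.

```python
import json

def build_nova_context(overrides):
    """Build a minimal 'context' payload: target file -> section -> key/value
    overrides the principal charm should merge when rendering nova.conf."""
    return {
        "/etc/nova/nova.conf": {
            "DEFAULT": dict(overrides),
        }
    }

def serialize_for_relation(context):
    """Relation data is flat strings, so the nested context is JSON-encoded
    under a single key (the real interface may use a different key name)."""
    return {"subordinate_configuration": json.dumps(context)}

# In a real subordinate hook this dict would be passed to relation_set.
payload = serialize_for_relation(
    build_nova_context({"security_group_api": "nova"})
)
```

The parent charm would decode the JSON and feed it into its template rendering; as noted above, the parent has to support this relation for any of it to work.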
<pmatulis> hello, re restrictions (juju block|unblock), does an unblock override a restriction set in environments.yaml ?
<moqq> so, jujud on both of our environments is sitting at 100% cpu
<moqq> for no apparent reason
<moqq> has anyone seen this before?
<gQuigs> moqq: disk isn't full? (that's the only time I've seen it)
<moqq> nope
<moqq> lots of space
<moqq> i think its talking to mongo a lot because the mongo process is spiking quite a bit too
<amcleod-> moqq: strace -c?
<moqq> amcleod-:  95.44    1.453291        4844       300        17 futex
<moqq> amcleod- https://gist.github.com/tysonmalchow/d77900349832ebea23aa
<amcleod-> hm right so maybe db or fs?
<moqq> mongodb is sitting above average too
<moqq> but more spikes and less consistent than juju
<moqq> and if it was waiting on IO shouldn't that be an interrupt/idle wait? why would that cause cpu to spike
<amcleod-> i dont know if userspace io would be shown that way, is there something like gluster on it?
<amcleod-> moqq: im just guessing now
<moqq> no, its an aws block device
<amcleod-> moqq: maybe just strace the process and see what its doing, probably a lot of futex wait and nothing particularly helpful.
<amcleod-> ..
<moqq> yeah
<moqq> that's all i'm seeing
<moqq> most time spent on futex
<moqq> 95%+
<moqq> i dont understand what kind of io it could even be concerned with that much when there are no commands and the cluster is otherwise idle??
<amcleod-> well maybe its not io, maybe its db as you suggested
<amcleod-> http://stackoverflow.com/questions/17211357/debug-a-futex-lock
<amcleod-> ^also not particularly helpful
<amcleod-> moqq: maybe check mongo processes? http://docs.mongodb.org/manual/reference/method/db.currentOp/
<amcleod-> moqq: ....errr.. http://compgroups.net/comp.unix.programmer/futex-high-sys-cpu-utilization/1391360
<moqq> seemed so promising but no dice
<amcleod-> hmm :/
<moqq> trying to connect to mongo now but it seems to be failing.. where does the juju mongo db keep its logs?
<amcleod-> not sure sorry
<jrwren> syslog, I thought.
<hazmat> anyone in town for the docker hackathon?
#juju 2015-06-20
<whit> hazmat, http://www.eventbrite.com/e/conducting-systems-and-services-an-evening-about-orchestration-tickets-16869087896
<hazmat> whit: you may have to promise him bacon
<whit> hazmat, I have access to amazing bacon
#juju 2015-06-21
<StoneTable> Ah, I thought that was hazmat in wwitzel3's photo!
<hazmat> StoneTable: it was ;-)
#juju 2016-06-20
<neiljerram> Morning all!  Does anyone happen to know the purpose of juju-br0 ?
<marcoceppi> neiljerram: juju-br0 I think is used to bridge LXC/LXD containers and KVM machines deployed by juju to a network routable address
<magicaltrout> I don't think it includes LXD containers
<magicaltrout> I have lxd local(although not the latest beta) and no juju-br0
<magicaltrout> just a fat stack of veth adaptors
<marcoceppi> magicaltrout: this would be more `juju deploy --to lxd:` than lxd running on your machine as a provider
<magicaltrout> ah, yeah
<neiljerram> marcoceppi, magicaltrout Thanks. I found some email trails that seemed to say that sometimes you get juju-br0 and sometimes lxcbr0 - but it wasn't clear to me why.
<magicaltrout> I do have lxcbr0, I guess marcoceppi is correct, because lxd local stuff would all have networking at the same level, but if you dumped it into a running box, you'd have a hard time addressing it :)
<andrey-mp> hi! can somebody point me to instructions on how to publish charms? I've read this - https://jujucharms.com/docs/stable/authors-charm-store#submitting     I've made all the needed branches - https://code.launchpad.net/~cloudscaling  and  https://code.launchpad.net/~apavlov-e   but my charms have not appeared in Juju. where is my mistake?
<magicaltrout> its out of date andrey-mp :(
<rick_h_> andrey-mp: working on getting an updated link
<magicaltrout> https://jujucharms.com/docs/devel/authors-charm-store
<rick_h_> andrey-mp: yes, sorry. The new docs should be going live this week
<andrey-mp> thanks! will read and try )
<rick_h_> andrey-mp: let us know if you hit any issues
<andrey-mp> rick_h_: ok
<aisrael> tvansteenburgh: Any known issues w/bundletester and juju 2 beta 9?
<tvansteenburgh> i haven't tried it yet but if beta9 has the service->application rename then it's probably broken
<tvansteenburgh> aisrael: ^
<aisrael> tvansteenburgh: ack, confirming that's broken :/
<rick_h_> aisrael: tvansteenburgh yes, the stack is broken on top of latest beta. We need to get things updated to work with the changes.
<rick_h_> aisrael: tvansteenburgh another round will break in beta10 with the api updates coming as well
<aisrael> rick_h_: Do we have a target date for beta 10?
<rick_h_> aisrael: once a week is the beta request so I'd look at the end of this week
<aisrael> rick_h_: awesome \m/
<petevg> tvansteenburgh aisrael: can also confirm that amulet tests (and therefore bundletester) seem to be broken in beta9.
<tvansteenburgh> petevg: everything is broken
<petevg> At least it's not just me :-)
<tvansteenburgh> sing it to the tune of the lego movie
<petevg> Has anybody filed a ticket about a cheetah dependency breaking charm tools in Python3?
<petevg> Now I've got it stuck in my head :-p
<xilet> with the juju add-storage options, is there any way to use a lvm volume or does it have to be a physical device?
<arosales> Hello, just as an fyi on the GUI, the charm store was updated today and looks to resolve:
<arosales> https://github.com/juju/juju-gui/issues/1685
<arosales> https://github.com/juju/juju-gui/issues/1765
<arosales> but 2 new bugs came up as juju2 is still updating back-end services
<arosales> https://github.com/juju/juju-gui/issues/1781
<arosales> https://github.com/juju/juju-gui/issues/1780
<arosales> marcoceppi: jcastro lazypower-travel ^^ note this week for conferences
<arosales> and thanks to the UI team for working hard to try and keep up with a fast changing juju-core
<bdx> charm store is bugging out
<bdx> http://imghub.org/image/B6cP
<rick_h_> bdx: is that public? Did your login expire? try /logout and login?
<bdx> rick_h_: I did
<bdx> multiple times
<bdx> rick_h_: it shows here -> http://imghub.org/image/BI0X
<bdx> `charm list` shows it -> http://paste.ubuntu.com/17607650/
<rick_h_> bdx: ugh, file a bug please? and I'll get folks to look at it. https://github.com/CanonicalLtd/jujucharms.com/issues
<rick_h_> there was a deploy today so maybe a regression. Can you check if it works for staging.jujucharms.com as well please?
<bdx> yes, omp
<bdx> rick_h_: even worse on staging
<bdx> rick_h_: http://imghub.org/image/BQyu
<bdx> that was after multiple logouts/logins
<cholcombe> thedac, for your keystone-credentials interface are you meant to relate to keystone when using it?
<rick_h_> bdx: oh right, to be expected I guess since you didn't upload your charm to the staging charmstore
<cholcombe> thedac, i imagine the answer is yes
<bdx> rick_h_: the charmstore client is developed at https://github.com/juju/charmstore-client ?
<rick_h_> bdx: yes
<bdx> rick_h_: ahhh, I previously missed the link you posted, you want the bug there instead of the charmstore-client repo?
<rick_h_> bdx: right  i think the bug is in the website. you mention the cli tool working fine
<bdx> entirely - gotcha
<kwmonroe> hey, does hookenv.log work in an action?
<kwmonroe> nm, yes it does
<aisrael> kwmonroe: most, if not all, of the hookenv stuff works inside actions.
<kwmonroe> cool aisrael -- turns out i was debug-logging with -i on the wrong unit and being like, um, where's me logs!?!
<aisrael> :D
<xilet> Is there a way to just attach a block device to a running charm (working with juju 2.0) as a raw device?
<kjackal> hey kwmonroe, r u there?
<marcoceppi> xilet: type: block for storage should be a "raw" disk
<xilet> I am trying to get the syntax down, how would I add that to an existing charm?
<marcoceppi> xilet: it should look something like this: https://jujucharms.com/docs/devel/developer-storage#adding-storage
<marcoceppi> the second example block shows blockdevices
<xilet> yeah, but do I just give it the location: of the block device I want to use? And do I need to redeploy the charm for it or can I use the set-config command for it? (Sorry new to juju)
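To make the linked docs concrete, here is a hedged sketch of a block-storage stanza in a charm's metadata.yaml; the storage name `data` and the range are illustrative, not from any real charm:

```yaml
# metadata.yaml (excerpt) -- illustrative storage stanza
storage:
  data:
    type: block        # raw block device, per the docs linked above
    multiple:
      range: 0-10      # permits attaching storage to a running unit
```

With something like that declared, deploying would use the `--storage` flag (e.g. `juju deploy ./mycharm --storage data=10G`) and attaching to a running unit would use `juju add-storage` rather than set-config; the charm reacts in its storage-attached hook.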
<kwmonroe> hey kjackal!  you're still on holiday.
<bdx> charmers: I charmed up DHC's internal recommendation engine, and introduced one of our devs to charming/charmstore - he's super pumped
<bdx> charmers: this is whelp -> https://github.com/DarkHorseComics/layer-whelp
<bdx> I can't publish it to the store bc the wresource need be kept private
<bdx> the charm layer is g2g for github - needs some tests though
<marcoceppi> bdx: bad ass!
<marcoceppi> bdx: a lightning talk on this at the charmer summit would be sweet
<magicaltrout> didn't he already do one last time around? :P
<bdx> magicaltrout: yeah - crashburnboom
<marcoceppi> magicaltrout: great question
<magicaltrout> "just wait... there is a chart coming" ;)
<magicaltrout> touche
 * magicaltrout can't be bothered googling the crazy e the french use
<bdx> marcoceppi: totally
<bdx> marcoceppi: what are you trying to do with that DjangoYamlTactic ?
<marcoceppi> bdx: convert django.yaml to layer options for backwards compat
<marcoceppi> haven't really gotten around to it
<marcoceppi> (yet)
<bdx> ok, I'm about to dump some cycles the next few days and refine some of the mods I have for it ... how do you feel about approaching writing out settings files similar to python-django? - I already have this going on for memcache and postgres and was about to get busy on mongo and redis ...
<bdx> https://github.com/jamesbeedy/layer-django/blob/modularize_and_refactor/reactive/django.py#L63-84 ?
#juju 2016-06-21
<kjackal> Hey marcoceppi, how are you? All cool? I want to bug you about adding me to charmers group. I was told you hold the keys to paradise!
<magicaltrout> http://goo.gl/zZXHZO I think of it more like this
<kjackal> magicaltrout: oups!
<Raj__> I am connecting to mariadb from remote machine, I am getting error: ERROR 2005 (HY000): Unknown MySQL server host
<Raj__> any configuration changes I need to do on Mariadb for connecting from remote machine
<magicaltrout> how you trying to connect Raj__ ?
<magicaltrout> don't forget the juju unit names don't exist in real world DNS
<Raj__> mysql -h db_host -u root -p mypassword
<magicaltrout> yeah but is db_host an ip, a juju unit, something else?
<magicaltrout> it looks like a standard dns issue more than anything juju specific
<Raj__> db_host is of mariadb
<magicaltrout> yeah i appreciate that.
<magicaltrout> okay if you do
<magicaltrout> ping db_host
<magicaltrout> what happens?
<Raj__> let me try that
<Raj__> ping is working to db_host
<Raj__> 64 bytes from 10.0.3.46: icmp_seq=1 ttl=64 time=0.102 ms 64 bytes from 10.0.3.46: icmp_seq=2 ttl=64 time=0.068 ms
<magicaltrout> weird
<magicaltrout> okay so if you do
<magicaltrout> mysql -h 10.0.3.46 -u root -p mypassword
<magicaltrout> what happens?
<magicaltrout> also if you need to "juju expose"
<magicaltrout> although the mysql error looks different to what i'd expect for a closed port
<magicaltrout> also standard stuff like, are you pinging from the same box you're trying to run the mysql client from? ;)
<Raj__> yes mysql client is installed
<Raj__> if I connect to container and execute :mysql -h 10.0.3.46 -u root -p mypassword  it connects but same command from script has this error
<magicaltrout> in your script are you injecting some variables as the hostname or something?
<magicaltrout> something smells off
<Raj__> yes , variable captured host name
<magicaltrout> okay, I'd echo that or something to make 100% sure you're trying to connect to the correct place
<magicaltrout> or just hard code the ip to test
<magicaltrout> if you replace the var with 10.0.3.46 in the script
<magicaltrout> you'd know if your addressing was screwy or something
<Raj__> ok, I will try that, anything about error: ERROR 2005 (HY000): Unknown MySQL server host
<magicaltrout> that "Unknown MySQL server host" error is 99.9% of the time because it can't resolve the hostname you're passing on -h parameter
<magicaltrout> I can't think of a time when that wouldn't be the error
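magicaltrout's diagnosis can be checked directly: ERROR 2005 is a name-resolution failure, so testing resolution separately from MySQL narrows it down. A small sketch using only the stdlib (no MySQL client needed; the helper name is made up):

```python
import socket

def can_resolve(host):
    """ERROR 2005 (Unknown MySQL server host) almost always means the client
    cannot resolve the hostname; check resolution separately from MySQL."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# e.g. can_resolve("db_host") inside the script would show whether the
# variable being injected actually resolves from that machine.
```

If this returns False for the host the script uses, the problem is the script's variable or DNS, not MariaDB configuration.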
<Raj__> ok,  I will try
<coreycb> has anyone had 2.0 bootstraps fail with "missing CloudCredential not valid"?  seems as if that would mean creds are invalid but my creds look fine to me.
<magicaltrout> https://www.crowdsupply.com/lime-micro/limesdr bought some ubuntu snappy based SDR thing today
<magicaltrout> interesting bit of kit!
<magicaltrout> they're actually based in the same office building as us, i might go and annoy them
<admcleod> magicaltrout: hows things? when're you going to space?
<magicaltrout> lol
<magicaltrout> busy away admcleod, i'm trying to get Saiku 3.9 released, the guys working on Saiku 4, exit from Meteorite, negotiate contracts with NASA, work on various bits of NASA shit that are supposed to be at GA mid next month but way behind schedule......
<magicaltrout> you know
<magicaltrout> the usual ;)
<magicaltrout> fire fighting like the best
<admcleod> as long as you're enjoying it
<magicaltrout> i'll enjoy it when i renew my golf membership and disappear for a few afternoons :P
<admcleod> is there sun there?
<magicaltrout> pissed it down yesterday
<admcleod> 32 here
<magicaltrout> 21 C and sunshine today
<magicaltrout> usual british weather
<admcleod> i expect itll be dark and stormy on the 23rd
<magicaltrout> lol
<admcleod> *fingers crossed i dont get kicked out of the eu*
<magicaltrout> it's gonna be shit if we leave
<magicaltrout> I might join you and get a bunch of passports
<magicaltrout> just in case ;)
<magicaltrout> problem is I don't want to agree with Dave
<magicaltrout> the big forheaded t*at
<magicaltrout> +e
<magicaltrout> be nice if dave swapped to the leave side
<admcleod> haha
<admcleod> not sure how much use my other passport is in terms of living in places. great for holidays...
<magicaltrout> Boris is amusing, but I'd hate to see him as PM
<magicaltrout> crazy git
<magicaltrout> he'd just smoke lots of dope
<magicaltrout> take some class A's
<magicaltrout> if him and Trump win, i'm using my NASA status to put them both on the ISS
<admcleod> just flag the aliens
<admcleod> "its time"
<magicaltrout> hehe
<admcleod> im thinking boris vs trump boxing match
<magicaltrout> trump would suck
<magicaltrout> you would get to see him without his wig though
<magicaltrout> that would be an awful sight
<bdx> elastic-stack-devs: xenial-kibana -> https://jujucharms.com/u/jamesbeedy/kibana/6. I'm working on two actions that will create and remove the httpd user and password, as well as some tests, but it should be g2g
<valeech> running juju 2.0 beta 9. I have successfully bootstrapped a maas environment. I would like to run juju in HA. When I issue the enable-ha command, it fails and says I need to create new controller machines using juju add-machines. Am I misunderstanding something? I thought the enable-ha command would try to spin up new machines on maas to act as juju controllers.
<cholcombe> did the juju 2 status set command change?  I think the answer is no but I want to double check
<shruthima> hi Kwmonroe, i want to discuss regarding IBM-IM issue
<kwmonroe> sure shruthima - i did see your most recent note about deploying ibm-im, but haven't tried juju attach yet..
<shruthima> oh ok. After checking in all ways I feel there is some issue in the code, if I am not wrong, because the resource is getting downloaded but it never reaches the unzip part of the code
<kwmonroe> i'll try that now.  unfortunately, that error isn't very helpful :(  i suspect there's something wrong in the fixpack handler
<kwmonroe> shruthima: i'll also look for ways to be more verbose than simply "install-ibm-im-fixpack' returned non-zero exit status 1"
<shruthima> kwmonroe: ok thanks kevin
<kwmonroe> yo petevg, i had some beef with https://github.com/juju-solutions/bigtop/pull/15 and the status we're showing.  lmk if you'd like more deats on what i think good status looks like.. i don't mind whipping up a PR if you're in a hurry to get this charm +1'd.
<petevg> kwmonroe: thanks for the feedback. I'm not in a hurry to get stuff +1ed -- I just want to make sure that the charm looks like we want it to look when we submit it upstream :-)
<shruthima> kwmonroe: I'll be logging out from chat now. If there is any solution for the IBM-IM issue, could you please email us?
<petevg> kwmonroe: I updated the status messages, and argued with you about the quorum messaging (with the non automated restarts, I think that the check gets to be too complicated -- I'd like to add it back after we add back the automated restarts, though).
<kwmonroe> yup shruthima - will do
<shruthima> thank you kevin :)
<Prabakaran> hi lazypower .. is it possible for me to get apt.<package-name>.installed state in bash if i use options in layer.yaml
<Prabakaran> i have written some piece of code http://pastebin.ubuntu.com/17651116/ ... here i am installing mysql-client using options in layer.yaml
<Prabakaran> here is it possible for me to get the state  apt.mysql-client.installed' ?
<marcoceppi> bdx: there's a layer for kibana, iir
<marcoceppi> Prabakaran: no, mysql-client, when defined as part of the layer options will always be installed before the layer runs
<marcoceppi> apt.mysql-client.installed is set from the apt layer, where you may need to dynamically install deps
<Prabakaran> so i can remove this state  'apt.mysql-client.installed' in this code .. because it will be installed before only
<Prabakaran> http://pastebin.ubuntu.com/17651116/
<Prabakaran> marcoceppi
<Prabakaran> so i can remove this state  'apt.mysql-client.installed' in this code http://pastebin.ubuntu.com/17651116/ .. because it will be installed before only
<kwmonroe> correct Prabakaran
<Prabakaran> Thanks kevin for the confirmation
<Prabakaran> :)
<kwmonroe> np Prabakaran -- you've got a bigger problem in that pastebin though
<kwmonroe> Prabakaran: line 3 here: http://pastebin.ubuntu.com/17651116/.  bash reactive doesn't support the concept of a 'mysql' object in the same way that python does
<kwmonroe> so you can't use a mysql object like you're doing on line 16
<Prabakaran> i was typing this question to you....
<Prabakaran> i just got this answer
<Prabakaran> :)
<kwmonroe> :)  so instead, do something like:
<kwmonroe> my_host=`relation_call --state 'mysql.available' 'host'`
<kwmonroe> and then use $my_host in your mysqlshow -h command
<kwmonroe> similarly, my_port=`relation_call --state 'mysql.available' 'port'`, etc, etc
<Prabakaran> cool <kwmonroe> .. let me just replace those lines
<Prabakaran> <kwmonroe>i dont need to worry about ssh exchange between containers if i use mysql-client apt package right?
<kwmonroe> correct Prabakaran - db consumers can talk to the mysql host with the mysql-client utilty
<kwmonroe> Prabakaran: fyi, the settings you can get from the relation are defined here: http://bazaar.launchpad.net/~charmers/charms/trusty/mysql/trunk/view/head:/hooks/db-relation-joined#L81
<kwmonroe> so host, database, user, and password will give you relevant info, but it doesn't look like 'port' is an option on the relation.  looks like that is always hard coded to 3306.
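Putting kwmonroe's list together: a hedged sketch of composing the mysql client invocation from those relation settings. The helper name is invented; the settings keys (host, database, user, password) are the ones from the linked hook, and the port is the hard-coded 3306:

```python
def mysql_command(settings, port=3306):
    """Compose the mysql client invocation from the relation settings the
    mysql charm publishes (host/database/user/password). 'port' is not on
    the relation and is effectively fixed at 3306."""
    return [
        "mysql",
        "-h", settings["host"],
        "-P", str(port),
        "-u", settings["user"],
        "-p" + settings["password"],  # mysql expects no space after -p
        settings["database"],
    ]
```

A consumer hook would fill `settings` from relation_call/relation_get and hand the list to subprocess, avoiding any shell quoting of the password.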
<Prabakaran> now i got all these.. Thanks kevin <kwmonroe> .... you made me clear on this..
<bdx> marcoceppi: yea, I was just indicating that it supports xenial now :-)
<bdx> big-data: aside from tests and the httpd auth bit, how is https://jujucharms.com/u/jamesbeedy/kibana/7 looking to you guys
<bdx> I'm going to put some finishing touches on it and submit to the RQ
<bdx> just wanted to get some prelim feedback
<kwmonroe> bdx: i haven't been paying much attention on the kibana front, so others may have better feedback.. but how close is yours to the ~containers version (https://jujucharms.com/u/containers/kibana/)?  i'm assuming it's the xenial support..
<kwmonroe> and if that's it, is the plan to merge those and have 1 maintained kibana?  or is there some reason we might need two?
<bdx> kwmonroe: mine is the reactive re-write
<bdx> + xenial
<kwmonroe> oh snap bdx!  that's how much i don't know about kibana :/  i didn't realize the ~containers version wasn't reactive.
<kwmonroe> well bdx, +1 on making it reactive, -2 for your readme skills.  i like that the ~containers readme talks about deployment and dashboards.  would you consider copying the relevant bits from their readme into yours?
<bdx> kwmonroe: entirely, I haven't given any cycles to the readme yet thats for sure :-0
<bdx> but yea, totaly down for a swap out
<bdx> there are a few new dashboard bits I might need to commandeer 4 sure
<bdx> charmers: what is the preferred method of managing the AUTHORS file between layers?
<bdx> charmers: I feel like the AUTHORS file should be generated on charm build by pulling the maintainers out of the top layer?
<marcoceppi> bdx: we've talked about that
<marcoceppi> bdx: one way would be to generate a copyright file using the debian copyright format
<bdx> marcoceppi: and just rm the authors file before pushing?
<marcoceppi> bdx: well, instead of authors, it'd be a copyright file
<bdx> totally
<marcoceppi> but basically, yeah
<marcoceppi> it'd look something like this: https://github.com/juju/charm-tools/issues/136
<marcoceppi> bdx: ^^
<marcoceppi> feel free to pitch in on that issue
<marcoceppi> it's a bit stalled atm
<bdx> I see ... hmmm, seeing as each AUTHOR is represented in their respective layer, can charm build exclude the AUTHORS file, and just have a flag to pull down and include the desired copyright to point at the maintainer from metadata.yaml?
<bdx> possibly the AUTHORS parsing doesn't need to be as complicated as the actual merging of the layers ...?
<bdx> bc it could boil down to just not merging any AUTHORS and generating the desired copyright
<bdx> I'm probably way out in left field here, but it seems reasonable
<aisrael> tvansteenburgh: Do you have an eta on bundletester compatibility with beta 9 (in light of the push to clear the queue)?
<tvansteenburgh> aisrael: no. there is a chain of things to fix. jujuclient -> deployer -> amulet -> bundletester
<tvansteenburgh> aisrael: i'll probably get it all working just in time for beta10 to break it again
<magicaltrout> hehe /o\
<tvansteenburgh> aisrael: strongly suggest you just use beta8 for now
<magicaltrout> aisrael: lets face it... the queue will never be clear! ;)
<magicaltrout> be honest with yourselves :P
<aisrael> tvansteenburgh: lol, beta 8 it is
<aisrael> magicaltrout: A boy can hope!
<magicaltrout> hehe
<petevg> I wound up rolling back to beta7, because apt was having a hard time finding beta8. That may just be weak package manager fu on my part, though.
<petevg> In any case ... yeah: going to wade into the queue with a slightly outdated beta. I've done worse things :-)
<aisrael> And I autoremoved recently, so I don't have a backup of beta 8
<jhobbs> i'm trying to bootstrap juju2 against an openstack cloud and getting errors, probably doing something wrong. Can someone have a look and see if it's something obvious? https://bugs.launchpad.net/juju-core/+bug/1594958
<mup> Bug #1594958: Bootstrapping on OpenStack fails with juju 2 <juju-core:New> <https://launchpad.net/bugs/1594958>
<kwmonroe> jhobbs: i don't have much exp with openstack clouds, but in my ~/.local/share/juju/credentials.yaml, my default-region is set to $OS_REGION_NAME and my tenant-name is set to $OS_TENANT_NAME.  is your OS_REGION_NAME 'RegionKVM' and is your OS_TENANT_NAME 'admin'?
<kwmonroe> and when i say "my X is set to $ENV", i mean it's set to the actual value, not the string "$OS_X_NAME"
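For reference, a hedged sketch of the OpenStack section of `~/.local/share/juju/credentials.yaml` that kwmonroe is describing; the cloud and credential names are illustrative, and each value stands in for the corresponding `OS_*` environment variable:

```yaml
# ~/.local/share/juju/credentials.yaml (excerpt, names illustrative)
credentials:
  my-openstack:
    admin-cred:
      auth-type: userpass
      username: admin        # $OS_USERNAME
      password: secret       # $OS_PASSWORD
      tenant-name: admin     # $OS_TENANT_NAME
```

Where the region lives appears to vary: it may sit in clouds.yaml alongside the endpoint rather than in credentials.yaml, depending on provider and Juju version.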
<jhobbs> kwmonroe: i used add-credential without using a credentials.yaml - the tenant is correct but it didn't ask for a region name there. The region name is set in my clouds.yaml when I added my cloud, that value is correct, but I have two regions, wonder if that could be related
<jhobbs> default-region isn't an accepted value (at least for openstack) in credentials.yaml - it rejects that
<jhobbs> sorry, only one region in this deployment, so that's not related
<kwmonroe> hmph, not sure if i can be any help, but "index file has no data for cloud {RegionKVM <endpoint>} not found" feels like there's some disconnect between the region and the endpoint.  you sure it's port 5000 there?
<jhobbs> yeah - it is; i tested it with keystone client.  I think it's trying to find a set of images for that cloud in the list of images here http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson
<jhobbs> There are some docs on how to setup a private cloud here, but they haven't been updated for 2.0 https://jujucharms.com/docs/devel/howto-privatecloud
<kwmonroe> cory_fu: in bash reactive, is this the right way to make an explicit @hook handler?
<kwmonroe> @hook 'upgrade-charm'
<kwmonroe> function check-fixpack(){
<cory_fu> Yep
<cory_fu> Though I don't think "check-fixpack" is a valid identifier
<cory_fu> kwmonroe: ^
<kwmonroe> whatchu talkin bout willis?
<magicaltrout> the use of a - in a function name... i believe
<kwmonroe> i think you're all nuts
<cory_fu> Hrm.  Apparently it's fine for function names but not variables
<kwmonroe> hey - i didn't even know it was a thing.. apparently ksh/bash/zsh are cool, but ash/dash/csh are not.
<kwmonroe> underscores ftw
<magicaltrout> jesus f**cking christ... there's some idiots on the news tonight
<magicaltrout> its like Im watching Fox or CNN
<kwmonroe> magicaltrout: i think you mean "there *are* some idiots on the news tonight"
<magicaltrout> that might be true! :P
<magicaltrout> admcleod: i might ship all the vote leave's down your way
<magicaltrout> i hear the costa del sol is nice for british people looking for an alternative home
<bdx> jhobbs: I put some heat on that bug for you
<jhobbs> bdx: cool thanks
<jhobbs> anastasiamac: I updated bug #1594958, sorry, must have posted the wrong pastebin
<mup> Bug #1594958: Bootstrapping on OpenStack fails with juju 2 <v-pil> <juju-core:New> <https://launchpad.net/bugs/1594958>
<anastasiamac> jhobbs: tyvm. looking :)
<anastasiamac> jhobbs: do u still have a complete log from this run? if it was at trace-level would be stunning \o/
<jhobbs> bdx: i got juju 1.25 to work against openstack - I had to specify both metadata-source to bootstrap (as a filesystem path) and image-metadata-url (as an http url) in environments.yaml
<jhobbs> anastasiamac: how do i set it at trace level?
<anastasiamac> jhobbs: https://github.com/juju/juju/wiki/Juju-Logging
<anastasiamac> jhobbs: could you please also kindly update the bug to say that it works for 1.25? :) otherwise I'll mark it as invalid for 1.25 :D
<jhobbs> anastasiamac: i updated the bug WRT to 1.25; Is there somewhere to find a log for this other than the juju bootstrap itself?
<jhobbs> *the output of juju bootstrap
<anastasiamac> jhobbs: if u set the environment variable and re-run bootstrap command as u were (with debug-log on) i *think* i'll get enough info
<anastasiamac> jhobbs: at this stage, the command that i see u run does not have --metadata-source...
<jhobbs> uhh
<jhobbs> weird
<jhobbs> anastasiamac: http://paste.ubuntu.com/17668083/
<jhobbs> anastasiamac: i did the set-envs there and it didn't seem to change the output at all
<jhobbs> sorry, set-model-config in this case
<anastasiamac> jhobbs: yeah, i can see in this output and now in mine (bootstrapped) that --metadata-source is not registering... i'll investigate further \o/
<anastasiamac> jhobbs: thank you for input and pain ;)
<jhobbs> anastasiamac: np, thanks for your patience on the logs and for looking into it
<anastasiamac> jhobbs: \o/
<bdx> jhobbs: no WAY! nice!!!!
<bdx> jhobbs: was that documented anywhere?
<jhobbs> no
<jhobbs> bdx: well kind of - the juju output from metadata generate-images says to do one or the other, but you need to do both to get it to work
<bdx> jhobbs: sad, did you file a bug for that specifically?
<valeech> in juju 2.0 is there a way to specify the default series and an admin-secret when bootstrapping a maas 2.0 environment?
<jhobbs> bdx: yeah bug #1594977
<mup> Bug #1594977: juju-1 bootstrap forgets about  metadata-source argument <v-pil> <juju-core:New> <https://launchpad.net/bugs/1594977>
#juju 2016-06-22
<xilet> Another block device question, with lxc I have /etc/lxc/defaults.conf with lxc.cgroup.devices.allow = b 43:* rwm.  If I start an lxc container manually I can attach a storage device and use it normally inside the container. However if I deploy a juju container I can attach the device, it shows up but won't let me access it.
<xilet> Is there another place with juju charms that defines the lxc defaults for those sorts of settings to allow device access?
<xilet> Juju 2.0
<admcleod> magicaltrout: noooo
<admcleod> kjackal: so i'm thinking about our bigtop charms, and the principal and any subordinates will both unpack the bigtop repo to /home/ubuntu/bigtop.deploy right?
<kjackal> yes
<admcleod> kjackal: so this means if we're writing any values to hiera files we have to consider them stateless - we write, we apply, we imagine they're gone (because they might be)
<kjackal> yes, true
<kjackal> now if these values propagate to a resource that is shared across bigtop roles (eg a shared hadoop-core.xml file) then we might get into trouble
<admcleod> kjackal: yeah. well. it would have to be specific values in that file, e.g. hdfs-site.xml ... and i think the top layer's value should take precedence
<admcleod> hey kwmonroe_, your bigtop smoke-test stuff works ootb with sqoop without any template mods (as long as i hardset the env var)
<kjackal_> admcleod: how long does it take to run?
<kjackal_> 3 minutes, looks fine
<petevg> admcleod, kjackal: stateless hiera files are the reason that I stashed the list of Zookeeper nodes in a .json file under the Zookeeper charm's resources directory. Whenever I run puppet, I read from that  file, and pass it in as an override. I'm not sure whether that's a best practice, though.
<admcleod> petevg: what is it you're putting in that json file again?
<petevg> admcleod: the list of zk peers. We override the ensemble var in hieradata, and it ends up getting written to the zookeeper config.
<admcleod> petevg: so you write the peers to the hieradata and then run puppet apply every time the list changes?
<petevg> admcleod: yes.
<admcleod> petevg: cool
<petevg> :-)
<admcleod> petevg: why was it you said you chose not to use leader settings?
<petevg> admcleod: the list is different on each box (each node lists itself first), and it has to get updated when a node joins, right before it figures out who the leader is.
<petevg> ... so it was either throw in a bunch of waits that made things confusing, or just stick the data somewhere else.
<admcleod> petevg: hmm when you say 'figures out who the leader is' are you talking about juju or zookeeper 'figuring'?
<petevg> juju
<petevg> Zookeeper has its own ideas about leadership, which are separate.
<admcleod> right.. i didnt think the leadership election took very long though. if theres one node, its the leader, and if another one joins the first one is still the leader. or so i thought
<petevg> Yes. But the node that is joining doesn't figure that out right away, and I get errors trying to write to the leader.
<petevg> The wait probably wouldn't be a long one ... it might be worth revisiting and refactoring, now that I've got the basic flow of stuff working.
<admcleod> petevg: you might even be able to use stub's leadership layer to wait until election has completed
<admcleod> petevg: https://launchpad.net/layer-leadership
<petevg> admcleod: the tricky thing is that I need to persist the state right away. If the process exits because it is waiting for something, then I lose the state.
<petevg> I'll play around with it a bit.
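The stash-and-override pattern petevg describes above can be sketched as follows. The file layout and helper names (`save_peers`, `load_ensemble_override`) are hypothetical, not the actual Zookeeper charm's code: each unit persists the peer list with itself first, then re-reads it to build the `ensemble` override before every puppet run.

```python
import json
import os


def save_peers(path, unit, peers):
    """Persist the zookeeper peer list, with this unit listed first."""
    ordered = [unit] + [p for p in peers if p != unit]
    with open(path, 'w') as f:
        json.dump(ordered, f)


def load_ensemble_override(path):
    """Re-read the stashed peers and shape them as a hiera override."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        peers = json.load(f)
    return {'ensemble': ','.join(peers)}
```

Because the hiera data itself is treated as stateless (written, applied, possibly gone), the JSON stash is the durable copy and the override is rebuilt on every `puppet apply`.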
<neiljerram> Does anyone know how Juju 2's idea of the current controller/model is stored?
<neiljerram> If I have a long-running test script in one terminal window, where model 'm1' is the default, can I do 'juju add-model m2 && juju switch m2' in another window, without disturbing the first test?
<cherylj> neiljerram: the current controller is stored in your JUJU_DATA (~/.local/share/juju), so if you're using juju switch, it will take effect in all terminals
<neiljerram> Thanks cherylj.
<cherylj> neiljerram: you can use the JUJU_MODEL env var
<cherylj> that should just be local to the terminal it's set in
<neiljerram> Ah, great.
<neiljerram> I was thinking that I should modify my scripts to put an explicit "-m <model>" parameter in every Juju command.  But JUJU_MODEL would be much simpler.
<cherylj> yeah, the commands will look at JUJU_MODEL first, before inspecting what the current model was switched to with 'juju switch'
<neiljerram> This is very nearly perfect... :-)  One slight snag, that I just discovered, is that 'juju add-model' implicitly does a 'juju switch' as well - which means there is a risk of disturbing a test that is already running.
<neiljerram> I suggest it would be better if 'juju add-model' did not do that.  Then I could do 'juju add-model M2; export JUJU_MODEL=M2' in another window, without any disturbance of the existing test.
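The model-selection precedence cherylj describes can be illustrated with a toy resolver. This is a sketch of the documented behaviour, not juju's actual source; an explicit `-m` flag wins, then the per-terminal `JUJU_MODEL` variable, then the model selected globally with `juju switch`:

```python
def resolve_model(cli_model, env, switched_model):
    """Illustrative precedence for which model a juju command targets.

    cli_model: value of an explicit -m flag (or None).
    env: the process environment (JUJU_MODEL is per-terminal).
    switched_model: the global selection stored under JUJU_DATA.
    """
    return cli_model or env.get('JUJU_MODEL') or switched_model
```

This is why `export JUJU_MODEL=m2` in a second terminal does not disturb a test running against the switched model in the first.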
<Prabakaran> Hello  Team, Can i get any sample layered charm to check peer relation in bash? I am asking this for my learning
<arosales> Hello
<arosales> charmes have a question for you
<arosales> I woud like to contribute to https://jujucharms.com/mediawiki-single
<arosales> as the readme is incorrect
<arosales> I follow the contribute link to https://code.launchpad.net/~charmers/charms/bundles/mediawiki/bundle
<arosales> which does not match the download zip
<arosales> :-/
<arosales> so I am guessing this was pushed from a different copy than what the contribute link is pointing to
<marcoceppi> arosales: mediawiki-single is old, wiki-simple is the new one
<arosales> marcoceppi: question still valid but
<arosales> I ask as mediawiki-single is the example on  https://jujucharms.com/get-started
<arosales> marcoceppi: should we update https://jujucharms.com/get-started to use mediawiki-single and un-promulgate mediawiki-single ?
<marcoceppi> arosales: yes, jcastro has been trying to do this for a few weeks now
<arosales> sorry update https://jujucharms.com/get-started to use mediawiki-simple and un-promulgate mediawiki-single
<arosales> marcoceppi: I was also updating CWR to test the bundle at /get-started
<marcoceppi> it's wiki-simple
<arosales> marcoceppi: given jcastro has been trying for weeks is there a github bug I can follow up on?
<marcoceppi> https://jujucharms.com/wiki-simple/
<marcoceppi> probably
<marcoceppi> arosales: https://github.com/CanonicalLtd/jujucharms.com/issues/242
<valeech> Hello. What are some troubleshooting steps I could take to determine why juju 2.0 beta9 gets stuck bootstrapping maas 2.0 beta 7 at the fetching tools stage?
<arosales> wow april
<arosales> marcoceppi: so should we un-promulgate mediawiki-single then?
<marcoceppi> arosales: yes, when the get-started page gets updated
<marcoceppi> otherwise we just break the new user experience even more
<arosales> marcoceppi: ok, I'll work on that
<arosales> marcoceppi: last question
<arosales> http://status.juju.solutions/bundle/cwr-test-410
<arosales> failing on mysql
<marcoceppi> arosales: https://lists.ubuntu.com/archives/juju/2016-April/007132.html
<arosales> should we update wiki-simple to use mariadb?
<marcoceppi> ugh, it's the openstack tests.
<marcoceppi> the charm works
<marcoceppi> the tests don't
<arosales> marcoceppi: awesome on the lists and bugs. Just need to follow up on seeing this done
<arosales> marcoceppi: I think we can un-promulgate mediawiki-scalable per the mail list post
<arosales> and I'll work on updating /get-started so we can unpromulgate mediawiki-single
<marcoceppi> arosales: yup
<arosales> marcoceppi: can you un-promulgate if I ask nicely?
<marcoceppi> arosales: already doing it
<arosales> marcoceppi: thanks
<arosales> marcoceppi: re mysql the bigdata team is hitting the same thing in their tests
<marcoceppi> arosales: because we have openstack tests mixed in the bunch
<marcoceppi> I'll update the charm, but I don't think the OS team will appreciate it
<arosales> easiest way forward is to use mariadb to show green, but the correct way forward is to not run openstack tests
<arosales> marcoceppi: we should keep the openstack tests, but figure out a way to not run them in non-openstack contexts
<marcoceppi> well, not having them in the charm is a good start
<arosales> beisner: is openstack using mysql or percona as the sql db?
<arosales> marcoceppi: sure, we just need to give openstack an alternative to testing mysql in openstack
<beisner> hi arosales, marcoceppi - percona-cluster is the primary focus.  we may still have some test bundles with the mysql charm in play, but i'd say it's safe to remove the keystone bits from the mysql amulet tests.
<beisner> marcoceppi, arosales - what's the status of mongodb for xenial in the charm store?   seems like i saw some convo around that recently but i don't find one avail.
<marcoceppi> beisner: in progress
<beisner> marcoceppi, ack thx.  fwiw, we're a bit blocked on cs: bundles for xenial-mitaka as ceilometer requires mongodb.
<marcoceppi> beisner: but why not just deploy trusty mongodb?
<arosales> beisner: hopefully by end of month we will have an updated mongo as system Z also needs that
<beisner> marcoceppi, arosales - hmm, gonna try that now.    this is for system z s390x openstack validation this wk.
<arosales> marcoceppi: beisner so where do we stand on mysql openstack tests?
<marcoceppi> arosales: sounds like I can just pull the tests
<arosales> beisner: I think we need a special system z binary for mongo on z
<beisner> arosales, marcoceppi - i'd say it's safe to remove the keystone bits from the mysql amulet tests.
<arosales> not yet in the charm
<beisner> i'm about to find out :)
<arosales> beisner: can marcoceppi pull all the openstack tests or just keystone?
<arosales> beisner: no I am telling you re mongo :-)
<beisner> arosales, marcoceppi - refactor mysql tests to suit
<marcoceppi> beisner: \o/
<beisner> dammit arosales :)
<marcoceppi> arosales: I'll have it updated today
<arosales> beisner: but perhaps some happy coincidence has occurred last time I looked. I just know the IBM system z folks were working on a mongo ppa for Z
<arosales> dannf: do you know the status of mongo and xenial or ppa?
 * arosales also searching
<dannf> arosales: well, the guy doing the work on ibm's side left the company. he has a replacement, but i haven't seen a drop from him yet
<arosales> dannf: gotcha, but stock mongo doesn't work on xenial, correct?
<arosales> dannf: and no current s390 mongo ppa that you know of
<dannf> arosales: correct (not on s390x)
<arosales> beisner: ^ :-/
<dannf> arosales: correct. i made one: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb
<dannf> arosales: but mongo FTBFS. i'm sure *we* could fix the build issues, but IBM was supposed to
<arosales> dannf: ok thanks for the update I'll email IBM and see how we can move this forward
<dannf> arosales: i suspect it's just a missing build-dep fwiw
<dannf> arosales: let me forward you the last thread on this...
<arosales> beisner: I'll cc you on mail to IBM in working to get a mongo for s390 we can put into charms
<arosales> dannf: thanks I'll follow up from there
<beisner> arosales, ack.  appreciate it.  be aware that without mongodb, we have no ceilometer.
<dannf> arosales: that, and java seems stalled too :(
<dannf> arosales: i sent them a git tree w/ fixes for their java packages, but *PLONK*
<dannf> (OT here though i suppose)
<beisner> arosales, dannf - sure enough. no mongodb pkgs in ubuntu-ports s390x packages.
<dannf> beisner: yeah, main reason for that is that s390x needs a new upstream version - and upgrading mongo in general in ubuntu is an issue, because upstream mongo doesn't support upgrading from the old version we have to current
<dannf> beisner: solution for that is to version the mongo packages, so that upgrading isn't an issue, but i don't know of anyone working on that
<dannf> s/solution/proposed solution/
<andrey-mp> hi all. can I remove my charm from charm store?
<beisner> arosales, dannf - fyi, raised for tracking and reference in our current validation docs:  https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242
<mup> Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer) <s390x> <uosci> <mongodb (Ubuntu):New> <ceilometer (Juju Charms Collection):New> <ceilometer-agent (Juju Charms Collection):New> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1595242>
<stub> andrey-mp: You can revoke access to it. You can't remove them yet.
<andrey-mp> stub: ok, thanks. i already revoked access.
<stub> Does open-port take effect immediately, or only if the hook terminates successfully?
<arosales> beisner: thanks
<aisrael> tvansteenburgh: Do we know for sure bundletester is working with beta 8? This might entirely be pebkac, but it's acting wonky on me. Hanging when a unit hits an error state. Huh, and failing on `juju api-endpoints -e local.reviewqueue:default` because api-endpoints is gone.
<tvansteenburgh> aisrael: what's the output of juju list-controllers --format yaml
<aisrael> tvansteenburgh: http://pastebin.ubuntu.com/17705681/
<tvansteenburgh> aisrael: okay, i'll need to see the full output of the test run
<aisrael> tvansteenburgh: http://pastebin.ubuntu.com/17705768/
<aisrael> I don't think it should matter, but this is a fresh xenial install, running in lxd (nested)
<tvansteenburgh> aisrael: and you have latest juju-deployer installed?
<aisrael> Hm. That might be a problem. I installed it from archives, and that's 0.6.4, and I installed via pip and that's 0.8. Removing the older one will remove amulet, too, but I can pip install that
<tvansteenburgh> aisrael: if you want to install the deb you need the one in ppa:tvansteenburgh/ppa
<tvansteenburgh> aisrael: same for python-jujuclient
<aisrael> tvansteenburgh: ok, I think this is related to pip installing everything in ~/.local
<tvansteenburgh> aisrael: you can also pip install both of those if you want, latest are on pypi too
<petevg> Following this conversation w/ interest. I saw bundletester hang on an error just now ... the only version of juju-deployer I have is the one from pip, though (0.8.0).
<aisrael> tvansteenburgh: thanks. Let me get this pip stuff straightened out, and I may grab you for a few minutes after standup if I'm still stuck.
<aisrael> petevg: what version of juju?
<petevg> aisrael: 2.0 beta8
<petevg> aisrael: I installed it from the archive, because beta7 was the only thing in my apt cache. I'm also running on xenial. I think that the only major difference in our environments is that I did a hung and destroy session for stray Python packages yesterday.
<petevg> whoops: "hung" -> "huge search and"
<aisrael> I'm going to downgrade to beta7 and see if that makes any difference
<petevg> Cool.
<aisrael> So far, beta7 is working much better
<petevg> Cool. Beta 7 has that issue where it occasionally gets upset when you destroy a model and immediately create another one. If I get frustrated with test hangs, I'll give it a try, though.
<beisner> arosales, do you know - is the manual provider still a thing with juju2?   i'm not succeeding in finding usage/docs.
<arosales> beisner: it is
<beisner> aha https://jujucharms.com/docs/master/clouds-manual
<arosales> beisner: also at https://jujucharms.com/docs/devel/clouds under "manual"
<beisner> arosales, if i need to do both manual machines and containers, do i need to stand up the containers and bring those in the same way?
<arosales> beisner: yes I believe so as you still need the resource
<arosales> and manual won't set up a lxc container for you
<beisner> right, makes sense.  thx arosales
<icey> can interfaces on interfaces.juju.solutions point to other repositories besides github now? I have a gitlab server where I've been storing stuff
<cory_fu> petevg: Hey, have you started on / finished the restart action for Zookeeper yet?
<cory_fu> I just finished a discussion with bcsaller about how we want to handle the actions that would be relevant
<petevg> cory_fu: I finished, tested and pushed.
<petevg> But I can refactor if we have something better :-)
<petevg> (Just realized that I forgot to move the card to review -- just did that.)
<cory_fu> petevg: Actually, looking at your action, nevermind.  Your action is fine the way it is, save for a couple of minor, unrelated comments I will add to the PR.
<petevg> Cool.
<bdx> icey: lol ..... trying to introduce a dependency on your personal gitlab for all ? - Not that I doubt its functionality, availability, or capability ... if people could just add arbitrary repos ... doesn't that seem like something that would decrease the stability of the framework as a whole?
<icey> frankly, I think the whole interfaces.juju.solutions needs some way of specifying "This is still in dev!"
<icey> I can't get our CI to test things that aren't there  :)
<bdx> aaah
<icey> also, I'm attributing these to myself, if you trust me to know what I'm doing, by all means, use it ;-)
<bdx> like a stamp of approval, or "supported" - something to that effect?
<icey> bdx: long term, these things /should/ move under github.com/openstack but for now we haven't merged them in :)
<bdx> interfaces and layers?
<icey> bdx: yeah, I'm working on replacing the current ceph* charms with layers
<jhobbs> Is there a working juju daily ppa somewhere? https://launchpad.net/~juju/+archive/ubuntu/daily looks like it's out of date
<icey> which means, 2 new layers for ceph-mon (ceph-base and ceph-mon), as well as 4 new interfaces
<jhobbs> I need to get a tip or close to tip juju and I was hoping there was a PPA I could use so I don't have to learn how to build it
<icey> SO, I'm going to have these (not yet tested or peer reviewed) layers + interfaces going onto interfaces.juju.solutions
<bdx> icey: I was looking over those ... super cool
<bdx> I see, what is the protocol for testing layers and interfaces? Is there one?
<icey> well
<icey> the open stack team is using gerrit (with jenkins) to run tests
<kwmonroe> anyone know the "callout" box syntax for charm readmes?  docs say "!!! Note: foo" should do it, but it doesn't when the readme is rendered in the store.
<valeech> arosales: Thank you for the help! Got it figured out with the help from #maas
<valeech> Any idea how to get juju 2.0 to deploy trusty on maas machines when doing an add-machine? Everything I have tried deploys xenial even though maas has the default commission and deploy set to trusty.
<ockra> Help. Trying to deploy juju within juju on a localhost (LXD) cloud.
<ockra> Error: Failed to change ownership of: /var/lib/lxd/containers/juju-d8b754-0/rootfs
<ockra> I used --keep-broken for juju bootstrap to read logs
<ockra> Only two lines in there were "read uid map: type u nsid 0 hostid 100000 range 65536"
<ockra> and "read uid map: type g nsid 0 hostid 100000 range 65536"
<magicaltrout> SaMnCo: the chap from Mesosphere seems pretty interested, thanks for the intro
<magicaltrout> be nice to work with them to smooth out the extraction of DC/OS into an archive I can upgrade easier
<arosales> valeech: the maas guys rock :-) glad you got a setup working :-)
<ockra> The error propagates from lxc/lxd "C.shiftowner(cbasepath, cpath, C.int(uid), C.int(gid))"
<aisrael> tvansteenburgh: I may have a ci job hung up. I don't think this should be running for 4+ hours: http://juju-ci.vapour.ws/job/charm-bundle-test-lxc/4678/console
<DenverParaFlyer> Hello all
<DenverParaFlyer> Really having a hell of a time getting Kubernetes running on AWS using juju
<DenverParaFlyer> anyone seen this? ubuntu@612c225cd992:~$ juju deploy local:trusty/kubernetes ERROR unknown schema for charm URL "local:trusty/kubernetes"
<SaMnCo> magicaltrout: cool, let me know if we can help in any way...
<DenverParaFlyer> ahh maybe just need to remove the local:
<bdx> DenverParaFlyer: use relative paths e.g. `juju deploy ./../../wordpress`
<bdx> DenverParaFlyer: if you're using 2.0 ... otherwise 1.x uses the 'local:'
<bdx> prefix
<DenverParaFlyer> thanks. I was following the instructions here: https://jujucharms.com/kubernetes/trusty
<aisrael> DenverParaFlyer: replace local: with cs:
<aisrael> juju deploy cs:trusty/kubernetes
<aisrael> local: assumes you have a local copy of the charm. cs: will download one from the charm store
<DenverParaFlyer> thanks @aisrael!
<DenverParaFlyer> any idea now how I fix this? " hook failed: "etcd-relation-joined" for etcd:client"
<DenverParaFlyer> shows up when I do a 'juju status'
<DenverParaFlyer> already tried "juju deploy trusty/etcd juju deploy local:trusty/kubernetes juju add-relation kubernetes etcd" from the guide @  https://jujucharms.com/kubernetes/trusty
<DenverParaFlyer> similar to this issue? https://github.com/juju-solutions/bundle-observable-kubernetes/issues/17
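The schema confusion in the thread above (juju 1.x accepted a `local:` prefix, juju 2.0 expects either a `cs:` store URL or a filesystem path) can be illustrated with a toy classifier. This is not juju's real charm-URL parser, just a sketch of the distinction bdx and aisrael describe:

```python
def classify_charm_ref(ref):
    """Toy classifier for how a charm reference would be treated."""
    if ref.startswith('cs:'):
        return 'store'             # fetched from the charm store
    if ref.startswith('local:'):
        return 'legacy-local'      # juju 1.x only; rejected by 2.0
    if ref.startswith(('/', './', '../')):
        return 'path'              # juju 2.0 deploys from a local dir
    return 'store'                 # bare names resolve via the store
```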
#juju 2016-06-23
<blackboxsw> bogdanteleaga, it seems in juju2 that behavior has changed from juju1 for the websockets api call to "Action" "RunOnAllMachines". It looked like juju1 would return the synchronous results of the RunOnAllMachines call as the api response, complete with Stdout and Stderr for the commands run. It appears that juju2 now returns immediately from a RunOnAllMachines call with a response that lists a bunch of actions still in "pending"
<blackboxsw> state. So our responsibility is to sniff the Allwatcher Next queue for action-changed deltas. Does that sound about right?
<blackboxsw> I mentioned some misunderstandings I was having with RunOnAllMachines to cherylj a bit earlier today and it sounded like you might have a bit of additional context on what changed in juju2 for the Run* api calls.
<blackboxsw> ... just wanted to queue up the question for tomorrow. Will check in later
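The client-side pattern blackboxsw describes (actions come back "pending", and the caller watches deltas until they reach a terminal state) can be sketched with a stubbed delta feed. Here `next_deltas` stands in for the real AllWatcher Next call; the tuple shape is an assumption for illustration, not the actual wire format:

```python
def collect_action_results(action_ids, next_deltas):
    """Drain delta batches until every action has left 'pending'.

    action_ids: the set of action ids returned by the Run call.
    next_deltas: callable returning the next batch of
                 (action_id, status, results) tuples.
    """
    done = {}
    while len(done) < len(action_ids):
        for action_id, status, results in next_deltas():
            if action_id in action_ids and status in ('completed', 'failed'):
                done[action_id] = (status, results)
    return done
```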
<dimitern> I'm trying to follow https://jujucharms.com/docs/devel/developer-layers-interfaces and looking at existing examples
<dimitern> Is the version: significant? Do I need to define both interface.yaml and metadata.yaml (for the 'peers: ...' section), or just peers.py and interface.yaml?
<stub> dimitern: Just peers.py and interface.yaml. 'version' is only documentation at the moment, but may one day be used for 'juju relation-add' or just flagging charms in need of maintenance.
<dimitern> stub: OK, less stuff to care about keeping in sync :) thanks!
<dimitern> stub: can I define config settings in the interface layer somewhere, so they are available in peers.py?
<stub> dimitern: I don't think so, no. I'm told you are not supposed to put implementation in the interface layer, so you might be overreaching. I think interface layer is just supposed to catch events and set states so the charm or layer providing the real implementation can hook in, and an API for driving the protocol. I'm still somewhat fuzzy on this.
<dimitern> stub: I see, that makes sense
<dimitern> stub: I looked at existing examples and most of them just set/clear states.
<bbaqar> Hey guys. What could be going wrong if i see a unit in an error state in juju status but when I resolve it it gives the error that the unit is not in an error state.
<bbaqar> all services on the node are configured and running properly
<rick_h_> bbaqar: is this from the Juju GUI?
<bbaqar> no from the CLI
<bbaqar> juju stat --format=tabular
<jcastro> who's in the mood to unpromulgate mediawiki-scalable?
<bluetack> Can anyone tell me if there is a way for an action to trigger a hook? I can't seem to find anything in the docs
<stub> bluetack: Your action is running in a hook context, so you could just call ../hooks/whatever
<stub> bluetack: But probably better to factor out the common functionality into a library, and have both the hook and action call it.
<marcoceppi> jcastro: unpromulgated
<bluetack> I tried a library (writing in python), but despite many __init__.py files all over the place, I couldn't import a sibling folder. How would I have a common library directory accessible to both actions and reactive dirs?
<stub> bluetack: If this is for reactive charms, you might want https://github.com/juju-solutions/charms.reactive/pull/66
<stub> bluetack: Also, if you want imports to work sanely with reactive you want https://github.com/juju-solutions/charms.reactive/pull/51
<stub> bluetack: $CHARM_DIR/lib is in your reactive hook's PYTHONPATH, so you can put things in there.
<bluetack> I do and I do, but they're not merged in yet
<bluetack> stud: I'll give the ../hooks/whatever approach a try for now. thanks
<bluetack> stub
<bluetack> my intentions are purely honourable
<stub> bluetack: It might be ./hooks/whatever - I think the cwd is $CHARM_DIR
<bluetack> stub: can I reuse an existing hook?
<bluetack> i.e. I dont have a hooks folder yet
<marcoceppi> bluetack: you shouldn't need to create any hooks, they're generated during the build process
<bluetack> understood. thankyou
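stub's suggestion (factor the shared logic into a library that both the hook and the action import, e.g. from $CHARM_DIR/lib, which he notes is on the reactive hook's PYTHONPATH) might look like the sketch below. The module name and function are hypothetical, not part of any real charm:

```python
# lib/common.py (hypothetical layout): logic shared by a hook handler
# and an action, instead of the action shelling out to ../hooks/whatever.
def restart_command(config):
    """Build the service restart command from charm config.

    Pure functions like this are equally callable from a
    config-changed handler and a 'restart' action script.
    """
    service = config.get('service-name', 'myapp')
    return ['systemctl', 'restart', service]
```

Both the reactive handler and the action would then `from common import restart_command` and pass the result to subprocess, keeping one implementation for both entry points.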
<shruthima> hi kwmonroe , how can i use "resource-get" option in the python code...? could you please provide links of any charms written in python using resources ..!!
<kwmonroe> shruthima: check out charm-svg: https://jujucharms.com/u/marcoceppi/charm-svg
<kwmonroe> shruthima: search resource_get in this source: https://github.com/marcoceppi/layer-charmsvg/blob/master/reactive/charmsvg.py
<shruthima> thanks kevin :)
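A hedged sketch of the resource-get pattern kwmonroe points to in layer-charmsvg: the fetcher is injected so the logic can run outside a hook context, and `usable_resource` is a hypothetical helper, not charmhelpers API. In a real charm, `fetch` would be `charmhelpers.core.hookenv.resource_get`, which returns the downloaded file's path or False when the resource is unavailable:

```python
import os


def usable_resource(name, fetch):
    """Return the resource's path, or None if it is missing.

    Zero-byte files are treated as store placeholders (as discussed
    for the ibm-im charm), so the caller can set a blocked/waiting
    status instead of installing an empty file.
    """
    path = fetch(name)
    if not path or os.path.getsize(path) == 0:
        return None
    return path
```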
<shruthima> kwmonroe: In IBM_IM charm for amulet testing how resources will be fetched..?
<shruthima> we have noticed u have removed 00-setup file and added tests.yaml
<shruthima> kwmonroe : i have tried to deploy ibm-im charm using amulet but it is not fetching resources; it is failing at timeout ..
<cory_fu> mthaddon: Hey, are you around?
<mthaddon> yep
<kwmonroe> shruthima: the ibm-im charm will fetch the placeholder resources from the store when the charm deploys.. it's expected to fail, but i see the status message it was waiting for is incorrect.  fixed here: http://bazaar.launchpad.net/~kwmonroe/charms/trusty/layer-ibm-im/switch-to-resources/revision/19#tests/01-deploy.py
<cory_fu> mthaddon: You're listed as the maintainer of https://jujucharms.com/nrpe-external-master/  I'm looking to merge https://code.launchpad.net/~aluria/charms/precise/nrpe-external-master/donotremove-hostdefs/+merge/290957 since it has been approved, but because we have a new publish / promulgation process, I need it to be pushed & published into the maintainer's namespace (either yours personally, or an appropriate group) from which I can re-promulgate
<kwmonroe> shruthima: until we can host the real resources (i think we've been calling that "resources phase 2" during our calls), we'll just test that the charms deploy and  we see the right status message for the placeholder resources.
<cory_fu> mthaddon: The end goal of this is to give more power to the maintainers to publish updates directly
<shruthima> kwmonroe : is it like once the resources are in charm store only we can test amulet?
<shruthima> oh k thanks kevin :)
<mthaddon> cory_fu: we're migrating away from that charm to https://jujucharms.com/nrpe/trusty - is there a reason you'd prefer nrpe-external-master?
<kjackal> Hey marcoceppi do you have any news for me regarding charmers? Thanks
<cory_fu> mthaddon: I don't prefer it.  Just trying to get an approved MP merged in.  Are you to the point where we can unpromulgate nrpe-external-master entirely and get that change applied against cs:trusty/nrpe if applicable?
<mthaddon> cory_fu: both charms are owned by ~charmers which you're a member of so you should be able to merge and promulgate yourself, no?
<cory_fu> mthaddon: The point is that we're doing away with ~charmers ownership of promulgated charms in favor of ownership by the maintainer (or group, if more appropriate).
<cory_fu> And the one bit that I can't do is push & publish to the new namespace in the store.
<cory_fu> marcoceppi: Do we have a write-up of this new process somewhere?
<mthaddon> cory_fu: I'm familiar with the new process, I just wasn't aware you were actively unowning things from ~charmers
<rick_h_> cory_fu: https://jujucharms.com/docs/devel/developer-getting-started
<cory_fu> rick_h_, mthaddon: By new process, I specifically meant the fact that promulgated charms are no longer to be owned by ~charmers and that we are transitioning them as they are touched during review
<cory_fu> I also believe marcoceppi was working on an email to the mailing list with a full list of charms that will need to be transitioned out of ~charmers ownership
<rick_h_> cory_fu: ah, yea that's not in the charmstore section or the charmers getting started section
<cory_fu> Again, this is what I've been told by marcoceppi, so I'd like to get his feedback, but I think he's on a call and is going to get annoyed at me pinging him.  ;)
<rick_h_> cory_fu: no, you're correct on the goal and how things are built
<rick_h_> cory_fu: we want to celebrate the true authors and make it clear it's not all canonical/one team that is responsible for the charms
<cory_fu> Indeed
<mthaddon> cory_fu: I'm not sure what to suggest in terms of that specific charm. I may be listed as the maintainer in the charm itself, but we consider it pretty much obsolete at this point. Is there a suggested approach for that case?
<rick_h_> mthaddon: cory_fu so are they compatible at all? e.g. can we provide a migration path and move over?
<cory_fu> Yeah, if it's considered obsolete, then we should look at deprecating, migrating people off, and unpromulgating
<aisrael> petevg: psst, don't forget to check the lock icon in the review queue ;)
<cory_fu> aisrael: He can't
<rick_h_> cory_fu: and with the new process you can move what charm is promulgated and it maintains a history so we can look at a true migration path
<mthaddon> but we don't have any way of migrating people off do we? surely that's up to them
<aisrael> cory_fu: how so?
<petevg> aisrael: yeah. I don't have permission to set the locked status. I think that I'm officially waiting for the new review queue to launch to get it.
<mthaddon> rick_h_: nrpe supports the functionality of nrpe-external-master, but you need to set some specific config options. Not sure if that qualifies as a migration path
<cory_fu> aisrael: Locking doesn't work for him or kjackal.  Some bug due to them being newer accounts that was deemed not worth fixing since we're about to move to a new RQ platform anyway
<aisrael> cory_fu: fair point. I meant to point out that the nagios stuff petevg just reviewed were locked to me, so I could have saved you some time
<petevg> aisrael: right. I guess I could look at the locked icon :-) Whoops.
<aisrael> tl;dr, unit tests in the nagios charm in the store are completely busted, so no tests are going to pass
<cory_fu> aisrael: There is also an issue with the charmhelpers code synced into the proposed branch as well, though
<aisrael> cory_fu: Yeah, one of many problems with the charm :/
<cory_fu> mthaddon: Are the config options different than the ones you would set with nrpe-external-master?
<mthaddon> cory_fu: yes, you need to set new config options if you're migrating from nrpe-external-master to nrpe
<rick_h_> cory_fu: mthaddon so it'd be interesting to see if the upgrade charm hook in nrpe could help manage an upgrade from nrpe-external-master to itself
<cory_fu> rick_h_: From the store page, it doesn't look like cs:precise/nrpe-external-master is used in any bundles.  Do we have any other usage stats to see if it's being actively used such that we shouldn't simply unpromulgate it?
<stub> nrpe-external-master is precise, nrpe is trusty. People need to redeploy to switch anyway.
<rick_h_> cory_fu: looking
<cory_fu> stub: Good point
<cory_fu> rick_h_: I feel like we need a better way of indicating to users that something is no longer recommended without breaking their stuff right away.  Something like a flag we could set on the charm store that would cause Juju to emit a deprecation warning when deploying the charm.  *shrug*  Just a thought
<rick_h_> cory_fu: so https://api.jujucharms.com/charmstore/v4/nrpe-external-master/meta/stats/ suggests it's getting used twice today, 5 times this week, etc.
<rick_h_> cory_fu: so it's not a big footprint, but does exist?
<cory_fu> rick_h_: Also, I suppose the fact that there's a MP against it means that it's being used
<cory_fu> So I'm not sure what the appropriate action to take here is
<stub> I'd lay money it is entirely Canonical internal, and a decent chance of all that being CI systems.
<cory_fu> aluria: Since this is your PR, care to chime in?
 * stub wanders off into the night
<aluria> cory_fu: hey, I think I don't have perms to merge it
<cory_fu> aluria: :)  Not looking for you to merge it.  Looking for your input on the conversation about it being dropped in favor of cs:trusty/nrpe
<cory_fu> And whether the usage of cs:precise/nrpe-external-master is entirely Canonical IS / CI
<cory_fu> If so, it looks like the right approach is to transition
<aluria> cory_fu: aah sorry... we're using it in several deployments but will start moving to lp:charms/trusty/nrpe in future ones
<rick_h_> cory_fu: so I think we unpromulgate it, move it to the new namespace, publish it to the juju list and update the readme perhaps
<rick_h_> cory_fu: and let folks like aluria know, maybe look for any other committer in the "recent" tree
<cory_fu> aluria: Do you know if that PR is applicable to nrpe as well?
<cory_fu> rick_h_: New namespace being ~mthaddon?
<aluria> cory_fu: it's not.... nrpe-external-master is written in bash while nrpe is in python (but we'll need to write an MP for nrpe too)
<rick_h_> cory_fu: if he wants it there, or we had an ~unmaintained, or we can do some ~deprecated-promulgated
<cory_fu> aluria: Yeah, sorry, I meant the fix in general, not the specific patch
<aluria> cory_fu: issue is very specific, and it also needs to be addressed in nrpe charm --- hacluster charm is a subordinate for OpenStack endpoints, and deploys /var/lib/nagios/exports/service__${hostname}_blabla.cfg and host__${hostname}.cfg
<aluria> cory_fu: nrpe charm thinks there's only one host definition and removes all to recreate the non-subordinate hostdef one
<aluria> you can end up having service defs without host defs --- patch only removes host defs without service defs
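A small simulation of what aluria describes the patch doing (the file-name patterns follow the `host__${hostname}.cfg` / `service__${hostname}_*.cfg` convention mentioned above, but the function and regexes are my reconstruction, not the charm's actual code): only `host__*.cfg` files with no remaining `service__<hostname>_*.cfg` may be removed, so service definitions are never orphaned.

```python
import re

def removable_host_defs(filenames):
    """Pick the host__<host>.cfg files whose host has no service defs left."""
    hosts = {}        # hostname -> host def filename
    services = set()  # hostnames that still have service defs
    for name in filenames:
        m = re.match(r'host__(.+)\.cfg$', name)
        if m:
            hosts[m.group(1)] = name
            continue
        m = re.match(r'service__(.+?)_[^_]+\.cfg$', name)
        if m:
            services.add(m.group(1))
    return sorted(fn for host, fn in hosts.items() if host not in services)

exports = ['host__node-1.cfg', 'service__node-1_keystone.cfg',
           'host__node-2.cfg']
# node-1 still has a service def, so only node-2's host def is safe to drop
assert removable_host_defs(exports) == ['host__node-2.cfg']
```

Deleting everything and recreating only the non-subordinate host def, as the current nrpe charm does, is what leaves the service defs dangling.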
 * mthaddon is +1 to moving it to a team with a name like ~unmaintained or something
<cory_fu> aluria: Ok, I opened it as a bug on nrpe: https://bugs.launchpad.net/charms/+source/nrpe/+bug/1595612
<mup> Bug #1595612: Be more selective in deleting host__*.cfg files <nrpe (Juju Charms Collection):New> <https://launchpad.net/bugs/1595612>
<aluria> cory_fu: ta
<cory_fu> rick_h_, mthaddon: I'm also +1 to moving it to ~deprecated or ~unmaintained.  I wonder if we should promulgate it from there for a short period to give people like aluria a chance to transition
<cory_fu> The short-term transition could just be adding the namespace, of course
<cory_fu> Also, the question remains about merging this PR.  I assume we need to get that fix in during the move.
<arosales> kwmonroe: do you know if there are docs on how to use the docker box?
<aluria> cory_fu: I ran into this bug which I think it's already solved -- https://bugs.launchpad.net/charms/+source/nrpe/+bug/1473205
<mup> Bug #1473205: nrpe charm creates checks with _sub postfix, breaking compatibility with nrpe-external-master <canonical-bootstack> <nrpe (Juju Charms Collection):New> <https://launchpad.net/bugs/1473205>
<cory_fu> arosales: https://github.com/juju-solutions/charmbox
<cory_fu> The README is pretty good on it
<cory_fu> That one is designed for dev & testing.  If you want a Docker image for just deploying using Juju, you can use the lighter-weight jujubox: https://github.com/juju-solutions/jujubox
<cory_fu> arosales: ^
<marcoceppi> cory_fu: I've gone a bit off the deep end, but I think it's for the best
<marcoceppi> cory_fu: https://github.com/juju-solutions/layer-apache-php/pull/5/files
<marcoceppi> cory_fu: what I'd like to do, is build a layer tactic that merges apache.yaml to layer.yaml prior to validation
<cory_fu> marcoceppi: +1 I've been meaning to update the apache-php layer to use layer.yaml for some time, but I never use it, so I didn't have the time or motivation
<cory_fu> I suppose a custom tactic would be one way to handle migrating.  How many charms are using that layer currently?  Maybe we can submit patches against them all?
<magicaltrout> oooh look at that
<magicaltrout> you can spy on charm summit submissions  in google forms :)
 * magicaltrout bookmarks that page
<magicaltrout> marcoceppi: out of interest excluding merlijn's uploading issues, are there restrictions on resource sizes?
<marcoceppi> magicaltrout: not that I'm aware of
<marcoceppi> magicaltrout: probably around the 50 GB size we have to start asking hard questions
<marcoceppi> like "why" and "what"
<magicaltrout> lol okay
<marcoceppi> magicaltrout: wait
<marcoceppi> you can see the submissions?
<marcoceppi> magicaltrout: eheh, not anymore ;)
<magicaltrout> :(
<magicaltrout> didn't think that was on purpose :P
<marcoceppi> Google forms comes with, by default, "allow users to see summary" which is follish
<marcoceppi> foolish, even
<magicaltrout> hehe
<magicaltrout> well i didn't notice the first time
 * magicaltrout should pay more attention
 * magicaltrout has plans afoot to transfer all of his data center ops to DC/OS and Juju... because..... why not... 
<magicaltrout> reap what you sow and all that
<magicaltrout> marcoceppi: you guys need to get 2.0 out so I can deploy without fear! ;)
<marcoceppi> magicaltrout: that sounds awesome
<marcoceppi> magicaltrout: yeah, I'm pumped for 2.0 landing, we're really close to RC's which means we'll have upgradability between releases
<magicaltrout> woop
<magicaltrout> oh also on a slightly different note
<magicaltrout> http://www.darpa.mil/news-events/2016-06-17 with my NASA hat on I should be working on this soon
<magicaltrout> now I have no idea about what will actually happen, but most of the Darpa stuff gets open sourced
<marcoceppi> bad ass!
<magicaltrout> so automatic model discovery over juju big data sounds pretty cool
<magicaltrout> so if they do make it ASL I'll make sure we can stand up whatever is produced
<magicaltrout> there's a bunch of stuff from the last darpa/nasa project I'm slowly working on bringing to Juju with some of the JPL guys, dark web crawling and stuff
<cholcombe> anyone around that knows mojo?
#juju 2016-06-24
<magicaltrout> can i tell a reactive charm to not execute a block until all the units are in the same state?
<magicaltrout> kjackal: you'll know the answer to that
<kjackal> Hi magicaltrout
<kjackal> let me think...
<kjackal> magicaltrout: ok so, here is how you can do that. You can have a relation of peer type among the service units
<kjackal> you should then make sure the units set the correct state within the interface implementing the peer relation
<magicaltrout> ah right yeah, so just plonk in a "we're all ready for the next step" state in my interface
<kjackal> for example, when you enter the correct state you do something like this inside the interface: "remove_state('wrong.state') set_state('correct.state')"
<kjackal> then inside the reactive part you should have something like @when_not('peers.in.wrong.state') def do_whatever: ....
<kjackal> Let me think....
<kjackal> So there might be another way to do that
<kjackal> You could use the leader layer
<kjackal> the leader should be gathering info on the state of the units and the units should ask the leader if all of them are in the right state
<kjackal> I am sure this solution with the leader is doable, but I do not have it in my head right now, I will need to do some research on this
<kjackal> magicaltrout: ^
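The peer-coordination pattern kjackal describes can be simulated without any Juju machinery (unit names and the `all_peers_ready` helper below are invented; in a real charm the states live on the peer interface and the gate is a `@when`/`@when_not` decorator): each unit publishes a state on the peer relation, and the blocked work only runs once every joined peer reports ready.

```python
def all_peers_ready(peer_states):
    """peer_states maps unit name -> the state string that unit last set."""
    return bool(peer_states) and all(s == 'ready' for s in peer_states.values())

# Each unit flips its own state, mirroring
# remove_state('wrong.state'); set_state('correct.state') on the interface.
states = {'dcos-master/0': 'installing', 'dcos-master/1': 'ready'}
assert not all_peers_ready(states)   # the gated block must not run yet

states['dcos-master/0'] = 'ready'    # the last unit finishes its setup
assert all_peers_ready(states)       # now the gated block may run
```

Note stub's caveat below still applies: this only says the peers that have *joined so far* are ready, not that every unit that will ever join has arrived.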
<kjackal> magicaltrout: any comment on the referendum?
<magicaltrout> yeah i'm just looking at stub 's github page thanks kjackal
<magicaltrout> yeah, fucking shit
<magicaltrout> anyway
<magicaltrout> not great, the uk has had better days
<kjackal> in the bigdata world the need of coordination among units is a bad sign, and is usually outsourced to services like zookeeper. I am sure you already know that
<magicaltrout> DC/OS has an immutable configuration for whatever reason
<magicaltrout> so they expect all masters to be up and running before installing the config on them
<magicaltrout> i'm sure there is a way to do it on the fly, but they don't document it. So, spin up the masters, wait for all the ip's to be addressable and then run the config setup
<stub> magicaltrout: The problem you might trip over is that you don't know when all the units have actually joined the peer relation. You can detect all units that have joined so far are in a particular state, but you don't know how many more nodes are yet to be provisioned and join.
<magicaltrout> yeah i know stub I was just mulling that over
<stub> magicaltrout: The approach I use is have the leader set itself up as a stand alone unit, and as other units join have them grab a lock (using the coordinator layer) and have them join one at a time.
<kjackal> yes stub is right.  Question for you stub, how do you know if you will have 4 or 5 masters for example?
<stub> Assuming your service can dynamically scale out like that
<kjackal> does the leader know upfront the exact number of masters that should join?
<magicaltrout> the 2nd option i have is just to figure out how to make the DC/OS masters scalable but they lock down the setup reconfig so I need to reverse engineer their installer :)
<magicaltrout> the other idea I had for simplicity was just to dump it all in an action
<magicaltrout> that would at least get it going quicker ;)
<stub> kjackal: Unless it is specified by the user in configuration ('I want 3 masters' or 'there will be at least 10 nodes'), then the leader has no way of knowing how many units are expected to join.
<stub> kjackal: So the leader will need to make a decision, starting with 1 master since it is standalone, and revise this decision as more units join.
<kjackal> stub: understood, yes the leader needs to know or guess the number of masters
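The approach stub outlines can be sketched as a toy simulation (no Juju or coordinator-layer APIs here; the class and unit names are invented): the leader sets itself up standalone, later units take a lock and join one at a time, and the member list is revised as each one arrives, so nobody needs the final count upfront.

```python
from collections import deque

class JoinCoordinator:
    """Toy stand-in for the coordinator layer's lock: peers join serially."""

    def __init__(self, leader_unit):
        self.members = [leader_unit]  # leader sets itself up standalone
        self._queue = deque()
        self.lock_holder = None

    def request_join(self, unit):
        """A new peer asks for the join lock; it may have to wait its turn."""
        self._queue.append(unit)
        self._grant()

    def finish_join(self):
        """The lock holder has configured itself; admit it and pass the lock."""
        self.members.append(self.lock_holder)
        self.lock_holder = None
        self._grant()

    def _grant(self):
        if self.lock_holder is None and self._queue:
            self.lock_holder = self._queue.popleft()

coord = JoinCoordinator('dcos-master/0')
coord.request_join('dcos-master/1')
coord.request_join('dcos-master/2')
assert coord.lock_holder == 'dcos-master/1'  # only one unit joins at a time
coord.finish_join()
assert coord.members == ['dcos-master/0', 'dcos-master/1']
coord.finish_join()
assert coord.members == ['dcos-master/0', 'dcos-master/1', 'dcos-master/2']
```

This is what makes the "revise the decision as more units join" idea workable, assuming the service can scale out dynamically as stub notes.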
<magicaltrout> on the bright side... my new 4k monitor just turned up
<magicaltrout> might need to buy a 2nd one now
<kjackal> magicaltrout: I think next step should be to get  3 more 4k monitors so that you would make an 8k!
<magicaltrout> kjackal: I have a thunderbolt 3 port
<magicaltrout> which drives dual hdmi
<magicaltrout> plus a hdmi port on the laptop
<magicaltrout> so.......
<kjackal> do you turn your back on a challenge?
<magicaltrout> hehe
<magicaltrout> only britain staying in the EU
<magicaltrout> one thing I do know
<magicaltrout> today is a really bad day to get paid in USD.....
<icey> lots of aws service failures this morning
<dannf> arosales, beisner : ICYMI - there's a mongodb build for z now - https://bugs.launchpad.net/ubuntu/+source/mongodb/+bug/1595242/comments/1
<mup> Bug #1595242: mongodb xenial s390x packages are needed (blocks ceilometer) <s390x> <uosci> <mongodb (Ubuntu):New> <ceilometer (Juju Charms Collection):New> <ceilometer-agent (Juju Charms Collection):New> <mongodb (Juju Charms Collection):New> <https://launchpad.net/bugs/1595242>
<arosales> dannf: very cool. OpenStack is still blocked until we can put this into a charm
<arosales> dannf: I am working with marcoceppi on this, which we should hit by end of June, but OpenStack won't be able to consume it till after that
<dannf> arosales: i pasted a link to my MP for mongodb that used a ppa for arm64 into that bug
<beisner> dannf, arosales - awesome.  look forward to revisiting :)
<arosales> dannf: https://launchpad.net/~ubuntu-s390x-community/+archive/ubuntu/mongodb is the latest correct?
<arosales> dannf: also thanks for the pointer to the MP for the ARM enablement in mongo
<arosales> kjackal: sorry for being dense here but what steps am I missing @ http://paste.ubuntu.com/17806346/ to reproduce  https://bugs.launchpad.net/juju-core/+bug/1593185
<mup> Bug #1593185: In lxd the containers own fqdn is not inclused in /etc/hosts <addressability> <hours> <lxd> <lxd-provider> <network> <juju-core:Triaged> <juju-core 1.25:Triaged> <https://launchpad.net/bugs/1593185>
<dannf> arosales: yep, that's the latest (and only) :)
<kjackal> arosales: Ok, you deployed the ubuntu. The bug is that you cannot "ping juju-1706c4-0"
<lazyPower> cory_fu - got a sec to chat about https://github.com/juju-solutions/charmbox/pull/40 ?
<kjackal> let me try to find my repro of the bug
<cory_fu> lazyPower: Sure
<lazyPower> cory_fu - just to be clear on intent, we want this to target devel, and only land in devel.
<lazyPower> and this is to reduce the divergence of those two branches?
<arosales> dannf: thanks
<lazyPower> i think this only affects mbruzek's PR that was made slightly before this PR, but this looks pretty sane.
<arosales> kjackal: that does fail for me, but does that break models?
<cory_fu> lazyPower: Yes, that branch (new-devel) should *replace* devel and then devel should remain only that single commit different from master and be constantly rebased up to master
 * arosales doesn't ping hostnames if I can ssh to them
<kjackal> arosales: http://pastebin.ubuntu.com/17395368/
<arosales> but if inter-charm communication is broken, then that is a huge issue
<lazyPower> cory_fu - ok, makes sense. I'll start getting this landed and ping if any additional questions crop up. I'm more interested in making sure I was reading the directions clearly :)
<kjackal> arosales: yes, for a few minutes the connection using the fqdn is broken
<kjackal> arosales: from what dimitern told me there is a dhcpclient that is firing for no good reason and it breaks the resolution
<kjackal> AFAIU adding the hostname to the /etc/hosts file will cause the dhcpclient process not to fire
<kjackal> arosales: our discussion is here: https://irclogs.ubuntu.com/2016/06/16/%23juju-dev.txt at around 10:00
<arosales> kjackal: ok let me see if hadoop-processing works here in lxd
<arosales> if inter-charm communication works, agreed it is a bug, but perhaps not as huge a blocker as I initially thought
<kjackal> arosales: I know for certain kafka start-up is failing because of this bug, petevg can confirm this because he hit this bug when reviewing the kafka charm
<petevg> Yep. Can confirm.
<petevg> I was able to reproduce consistently, too.
<petevg> Should I leave a comment on a ticket?
<arosales> kjackal: is your latest charm https://jujucharms.com/u/bigdata-dev/kafka/trusty or https://jujucharms.com/u/bigdata-dev/apache-kafka/trusty or should I build from layers?
<kjackal> https://jujucharms.com/u/bigdata-dev/kafka/trusty  is the latest BT kafka
<kjackal> you should relate that kafka charm to openjdk and apache-zookeeper
<arosales> kjackal: ok
<arosales> kjackal: thanks
<arosales> I'll post my info to the bug, and I work to follow up next week with the juju team
<kjackal> note that you have to be fast enough (or the machine slow enough) so that the dhcp client won't finish
<arosales> at least it's on the target milestones, though; just need to be sure it gets to a release soon
<jcastro> anyone know where I can find the juju log if debug-log doesn't work?
<petevg> jcastro: if you got to a point where a unit was deployed, you can do "juju ssh <service>/<unit>", and then look in /var/log/juju/
<petevg> If you don't have a working unit, you can try doing "juju deploy --debug <charm>"
<jcastro> nope, the instance launches and the container immediately stops and is removed, so I think it has something to do with provisioning the container
<petevg> Hmmm ... If you do a deploy with the --debug flag, do you get any output that hints at what might be going wrong?
<jcastro> http://paste.ubuntu.com/17808709/
<jcastro> I can launch instances just fine from the lxc command line
<petevg> jcastro: hmmm. That looks normal to me. I'm afraid that you're bumping up against the limits of my knowledge. Does anyone else have any ideas as far as troubleshooting goes?
<jrwren> jcastro: what does `juju status --format yaml` say?
<jcastro> aha!
<jcastro> more information
<jcastro> http://paste.ubuntu.com/17809286/
<jrwren> jcastro: but lxc launch starts an  instance ok?   i'm not familiar with lxd forkstart and how juju starts lxd instances.
<jcastro> yeah, manual launching works, so it's this forkstart that must be the issue
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1577524
<mup> Bug #1577524: Error calling ''lxd forkstart juju-machine-2-lxd-0 /var/lib/lxd/containers /var/log/lxd/juju-machine-2-lxd-0/lxc.conf'': err=''exit status 1'' <ci> <deploy> <intermittent-failure> <juju-release-support> <lxd> <juju-core:Triaged by cherylj> <https://launchpad.net/bugs/1577524>
<jrwren> jcastro: and the container is already gone? can you `lxc info --show-log juju-5c3262-0` ?
<jcastro> nope, the container comes up and goes away long before I can do that
<jcastro> aha, the units log to /var/log/lxd and those are still there, going to go dig
<jcastro> jrwren: can you pastebin me the results of your: lxc profile show juju-default
<jcastro> the error in the logs was a networking one related to the modified template I was using to get openstack running on lxd
<jrwren> jcastro: its pretty empty. http://paste.ubuntu.com/17809808/
<jcastro> that appears to have fixed me up!
<jrwren> WOOT!
<xilet> so I have asked about this a few different ways, but I am still unclear and the documentation is.. lacking.  With juju 2.0 how do I tell a container what device(s) it can access on the hardware it is running on?  I can't find any way to tell it to use '/dev/sdb1' or allow access to '/dev/ttyUSB0' or such. I can do it with lxc but I know that is replaced with lxd.
<cory_fu> jcastro, tvansteenburgh: I'm hitting the same problem testing wiki-simple with the new mysql that I was seeing before: It's not honoring the setup: directive in the mysql charm's tests.yaml file.  That means it had nothing to do with me testing it with local: in the bundle before, because I'm using cs: now
<cory_fu> The "it" in this case is bundletester.  I'm wondering if it's only honoring tests.yaml from the bundle itself?
<tvansteenburgh> cory_fu: please file a bug on bt with steps to repro
<cory_fu> tvansteenburgh: Actually, I jumped the gun.  The problem isn't BT.
<cory_fu> marcoceppi: You're going to love this.
<cory_fu> marcoceppi: Going to need another fix to mysql.  >_<
<cory_fu> My bad
<cory_fu> I'm also running up against the '==' length error from `pip3 list` that's due to an older lib version in trusty which is causing 00-setup to fail.  Does anyone recall what lib caused that issue and if there's a good way to fix it?
<cory_fu> For reference: http://pastebin.ubuntu.com/17813844/
<cory_fu> marcoceppi: Should I just remove this check altogether and always try to install PyMySQL?  https://github.com/marcoceppi/charm-mysql/blob/master/tests/setup/00-setup#L33
<cory_fu> 00-setup now gets called before each case (e.g., charm proof, make lint, etc) so I wanted to minimize network traffic
<cory_fu> Actually, it looks like it doesn't do network traffic if it's already present, so yes, I should just remove it
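The simpler setup check cory_fu lands on can be sketched like this (hypothetical helper; the real 00-setup is a shell script, and the broken check it replaces parsed `pip3 list` output, which is what hit the '==' length error on trusty): test importability first, and only shell out to pip when the module is genuinely missing.

```python
import importlib.util
import subprocess
import sys

def ensure_module(module, package=None):
    """Install `package` via pip only if `module` cannot be imported."""
    if importlib.util.find_spec(module) is None:
        subprocess.check_call(
            [sys.executable, '-m', 'pip', 'install', package or module])

ensure_module('json')  # already in the stdlib, so no pip call happens
```

As noted above, even an unconditional `pip install` skips the network when the package is already present, so the check is mostly about avoiding the subprocess per test case.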
<marcoceppi> cory_fu: do what you must, I support any change
<cory_fu> marcoceppi: https://github.com/marcoceppi/charm-mysql/pull/1
<marcoceppi> cory_fu: ta
<marcoceppi> man, I really want a post commit push process for github now
<marcoceppi> like, commit, build/test, push to norevision/development channel
<cory_fu> marcoceppi: Yeah, +1
<marcoceppi> cory_fu: published
<marcoceppi> cory_fu: I wonder if travis could do this. Like with a travis plugin
 * marcoceppi investigates
<cory_fu> marcoceppi: Huh.  Why'd it go from 52 to 54?
<marcoceppi> cory_fu: it actually was 53 at one point
<cory_fu> Not when I looked at it.  I think you might have forgotten to publish 53.  I saw you mention that it had gone up again but still saw it at 52 and just didn't say anything
<cory_fu> Anyway, it's sorted now
<marcoceppi> cory_fu: I also want to find a way to notify bundle owners
<marcoceppi> like "mysql has been incremented to 54, please check yo bundles bro"
<cory_fu> Indeed
<cory_fu> Shouldn't be very difficult.  The store has a list of bundles using a given charm rev
<marcoceppi> yeah, and we know owners and bug urls
<marcoceppi> I should be able to do if gh || lp - open a bug from bundlebot
<marcoceppi> first, figure out travis
<cory_fu> marcoceppi: Ok, I'm super pissed at the mysql charm right now
<marcoceppi> cory_fu: y u do dis
<cory_fu> marcoceppi: http://pastebin.ubuntu.com/17815449/
<cory_fu> So, the test has the max connections that it is testing hard-coded, but the bundle deploys with a different number.  So the charm test and the bundle are incompatible
<cory_fu> Actually, the bundle gives -1 so there is no way the test will work
<marcoceppi> cory_fu: which bundle, yours?
<cory_fu> wiki-simple
<marcoceppi> weird, shouldn't the test run the configure?
<marcoceppi> OIC
<marcoceppi> it's set prior to setup, so it's not executed live
<cory_fu> So I guess I could fix it by removing reset: false
<marcoceppi> cory_fu: this should fix it in MySQL
<marcoceppi> http://paste.ubuntu.com/17815662/
 * marcoceppi tests
<marcoceppi> cory_fu: this, I think, is the fix
<marcoceppi> https://github.com/marcoceppi/charm-mysql/pull/2
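The mismatch being fixed can be stated in a few lines (a sketch of the intent, not the actual amulet test; 151 is the default `max_connections` MySQL 5.x ships with): the test should derive its expectation from the deployed config, treating the bundle's -1 as "leave MySQL's own default alone" rather than hard-coding a number that the bundle then contradicts.

```python
MYSQL_DEFAULT_MAX_CONNECTIONS = 151  # MySQL 5.x built-in default

def expected_max_connections(configured):
    """-1 in the bundle means 'don't override MySQL's own default'."""
    return MYSQL_DEFAULT_MAX_CONNECTIONS if configured == -1 else configured

assert expected_max_connections(-1) == 151   # what wiki-simple deploys with
assert expected_max_connections(500) == 500  # an explicit override
```

Marco's later point is the cleaner fix regardless: assert that the config file got written with the value, rather than testing that MySQL itself honors max connections.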
<marcoceppi> cory_fu: going to test the bundle with my version
<marcoceppi> cory_fu: what flags are you using on bundletester?
<cory_fu> -vlDEBUG -b bundle.yaml
<cory_fu> marcoceppi: I also have these changes pending: http://pastebin.ubuntu.com/17816358/
<cory_fu> (The seemingly no-ops are eol-whitespace deletes)
<marcoceppi> cory_fu: I have a few as well
<cory_fu> marcoceppi: Still getting the max connections error
<cory_fu> tvansteenburgh1: Is there a way to make BT not delete the tmp directory for a charm being tested as part of a bundle so I can verify it's using the right change?
<cory_fu> I guess I can breakpoint
<cory_fu> Odd.  Second time through (with already deployed charms, to add the breakpoint), it passed the config check.  grr
<lazyPower> cory_fu - https://hub.docker.com/r/jujusolutions/charmbox/builds/byulnqrh6s44sovbabborg/  new charmbox:devel is being built from your branch work. Thanks for submitting that
<cory_fu> lazyPower: Sweet.  I'm looking forward to not having to manually install bundletester and make every time now.  ;)
<cory_fu> (Not that they work with the current betas >_<)
<lazyPower> cory_fu - make sure it got everything you wanted... i had to manually merge due to conflicts
<cory_fu> o_O
<lazyPower> well, you targeted master not devel
<lazyPower> then left explicit instructions this was to be applied against devel
<lazyPower> soooo *throws confetti*
<cory_fu> Oh noooooooooo!
<lazyPower> do i need to back this out?
<cory_fu> lazyPower: I said it should *replace* devel, not merge into it!
<lazyPower> ah ok
<lazyPower> welp easy enough
<lazyPower> 1 sec incoming fix, disregard that build as its garbage
<cory_fu> :)
<cory_fu> lazyPower: The whole point of that commit is to keep the devel branch as *exactly* 1 commit different from master.  Then, any changes that need to happen should go in master and the devel branch is then rebased against master
<lazyPower> no i got the principle
<lazyPower> i just botched the instructions
<cory_fu> Ok, sure
<cory_fu> But yeah, my intention was just to delete the whole of the current devel (or move it to old-devel if you want to preserve it for some reason) and replace it with the new-devel
<lazyPower> https://hub.docker.com/r/jujusolutions/charmbox/builds/bnoyhnumjzrrrfp2gn3emiy/
<lazyPower> that should make a bit more sense then :)
<cory_fu> lazyPower: Build failed.  :(
<cory_fu> Very strange
<cory_fu> marcoceppi: I'm getting intermittent failures on that stupid max connections test.  :(
<marcoceppi> of all the stupid configuration options to test, it has to be /that one/
<lazyPower> cory_fu pip install directive failed
<lazyPower> i'll pull it local and get it building before we submit to the builders again
<marcoceppi> cory_fu: I'm waiting for my tests to finish, then I'm going to make better tests
<marcoceppi> cory_fu: I don't need to test that MySQL knows how to handle max connections, I just need to test the damn file got written
<cory_fu> Yeah, +1
<marcoceppi> make tests great again
<cory_fu> marcoceppi: Mind if I just hand wiki-simple off to you, then, so I can get back to some Bigtop stuff I've been neglecting?
<marcoceppi> cory_fu: uh, sure
<cory_fu> So confident.  :p
<cory_fu> marcoceppi: Actually, I'll go ahead and update wiki-simple with mysql-54 without reset: false so that you don't have to worry about the bundle
<marcoceppi> cory_fu: don't we want reset: false?
<marcoceppi> I know I do
<cory_fu> Yeah, we do, because it'll make the bundle tests faster
<cory_fu> Also, I'm pretty sure that max-connections test is racey anyway, so nevermind, I'll leave the bundle to you after all
<cory_fu> Or you can just let me know when you've fixed up mysql and I'll update the bundle then
<cory_fu> Whatever
<marcoceppi> cory_fu: cool
<lazyPower> cory_fu - it appears upgrading pip before installing the deps resolves the build failure
<magicaltrout> blimey
<magicaltrout> its taken all day to figure out how to make the DC/OS masters scale dynamically
<magicaltrout> but i think I've finally solved the problem
<magicaltrout> \o/
<lazyPower> magicaltrout nice :)
<magicaltrout> they make their configs immutable
<lazyPower> not an easy task from what i've gleaned
<magicaltrout> which is a *right* pain in the backside
<magicaltrout> their official advice if you want to add more masters is to tear down what you have and rebuild
<lazyPower> seriously?
<magicaltrout> yeah
<lazyPower> welp, i'm happy we won't have that in the README
<magicaltrout> but your masters should be pretty static, so its not a huge deal
<magicaltrout> but
<lazyPower> three cheers for magicaltrout
<magicaltrout> if your nodes fail and stuff
<magicaltrout> then eventually you'd run out of masters
<lazyPower> are they not running any kind of consensus on the leaders?
<lazyPower> k8s has the same limitations, but its trivial to add a replica of the apiserver/scheduler-manager
<magicaltrout> its probably an *enterprise* feature ;)
<lazyPower> in our current model the only downside is the PKI
<magicaltrout> although i've not found it there either
<magicaltrout> but its only a bunch of zookeeper backed stuff
<magicaltrout> so i'm not sure why its so static
<magicaltrout> surely thats part of the point of ZK?
<lazyPower> That sounds right, but ive only interfaced with ZK in terms of big data deployments, and didn't fully understand what it brought to the table
<magicaltrout> its just distributed configuration management, nothing special but makes sure all your nodes stay in sync
<magicaltrout> which these days is pretty important
<magicaltrout> also though with DC/OS they have something called Exhibitor
<magicaltrout> which appears to come from netflix
<magicaltrout> which seems to ensure ZK is running and stuff
<magicaltrout> which seems a bit weird
<cory_fu> jcastro: So, I've updated wiki-simple with the latest mysql and verified the tests all pass.  I see that cs:~jorge/bundle/wiki-simple (stable) has Write: charmers (because it was promulgated) but only you have read or write perms to the unpublished channel, so you'll have to push (or grant to me or charmers)
<magicaltrout> its like the old who's monitoring the monitoring scenario
<marcoceppi> jcastro: where is the upstream source for wiki-simple? did you make a gh repo for it?
<cory_fu> marcoceppi: Yes
<cory_fu> marcoceppi: https://github.com/juju-solutions/wiki-simple
<marcoceppi> nice
<magicaltrout> 20GB of Virtual Box VMs to debug dcos locally
<magicaltrout> good job I bought a new laptop
<magicaltrout> cory_fu / lazyPower help me out its been a few weeks. I need a list of all IP's for a service
<magicaltrout> I also need to update all units in my service if a new unit is added
<magicaltrout> in python/reactive
<magicaltrout> can you point me at some code
<magicaltrout> or anyone else....
<cory_fu> magicaltrout: If I understand you correctly, you want something like https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39 (ignore dismiss_joined, that's a bad pattern and should be removed) or possibly something like https://github.com/juju-solutions/interface-namenode-cluster/blob/master/peers.py#L49
<cory_fu> resolve_private_address is defined here: https://github.com/juju-solutions/jujubigdata/blob/master/jujubigdata/utils.py#L427
<cory_fu> hookenv.unit_private_ip() is poorly named because it usually won't actually return an IP, but sometimes will
<magicaltrout> ah yeah the freaky conversations stuff i  remember now
<cory_fu> magicaltrout: Usage would be something like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/lib/charms/layer/zookeeper.py#L70
<magicaltrout> okay and to keep it all in sync i don't need to wire it up, i just need to check for changes i guess
<cory_fu> Though the RelationBase stuff would be better served by getting your instance from a @when decorator
<cory_fu> magicaltrout: Checking for changes like https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/reactive/zookeeper.py#L50
<magicaltrout> yeah thanks a lot cory_fu
<magicaltrout> looks spot on
<cory_fu> magicaltrout: Sorting of lists is important with data_changed, FYI
<magicaltrout> hmm k
<cory_fu> Glad I could help.  I'm about to head out, tho, so further questions will have to be directed at kwmonroe ;)
<magicaltrout> i can sort ip addressed like usual ?
<magicaltrout> addresses
<cory_fu> Sure
<cory_fu> If you're curious how data_changed works, it's pretty simple: https://github.com/juju-solutions/charms.reactive/blob/master/charms/reactive/helpers.py#L168
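The helper cory_fu links really is that simple; a self-contained re-sketch (using a plain dict in place of charms.reactive's unit-data store, and assuming the same hash-of-serialized-data idea as the real implementation) also shows why the sorting advice above matters:

```python
import hashlib
import json

_store = {}  # stands in for charms.reactive's persistent unit-data store

def data_changed(key, data):
    """Minimal sketch of charms.reactive's data_changed: hash the
    serialized data and compare against what we last saw for this key."""
    digest = hashlib.md5(
        json.dumps(data, sort_keys=True).encode('utf8')).hexdigest()
    changed = _store.get(key) != digest
    _store[key] = digest
    return changed

# Sorting matters: peer IPs arrive in arbitrary order, and an unsorted
# list hashes differently each time even when membership is unchanged.
ips = ['10.0.0.2', '10.0.0.1']
assert data_changed('peers.ips', sorted(ips))          # first sight: changed
assert not data_changed('peers.ips', sorted(ips[::-1]))  # same set: no change
```

`sort_keys=True` only normalizes dict keys, not list order, which is exactly why a list of addresses has to be sorted before being handed to the real helper too.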
<magicaltrout> cool, kwmonroe is about as useful as a chocolate fireguard....
<cory_fu> lol
<kwmonroe> mmmm, chocolate
<magicaltrout> hehe
<magicaltrout> alright, i should be good, i've used that pattern somewhere else so i should be able to figure it out
<magicaltrout> thanks cory_fu
<magicaltrout> finally get this stuff pushed to the charm store
<magicaltrout> before my country goes bankrupt and our internet turns into  something akin to north korea
<lazyPower> #brexitproblems
<magicaltrout> hehe
<magicaltrout> i'm  moving to scotland
<lazyPower> I think i'm going to aim for Nova Scotia, and wherever I land in between will be fine with me.
<magicaltrout> or sealand.... lazyPower i  could take over sealand and you could be my only minion whilst i'm king of the fort
<lazyPower> i mean, sure, but i'm a terrible minion
<magicaltrout> hehe
<lazyPower> i'm a minion with megalomania
<magicaltrout> i do like Nova Scotia
<magicaltrout> i'm reigniting the discussion of moving with the mrs whilst she's in a state of wild depression over whats happened ;)
<magicaltrout> Canada is great plus their PM is cool  and  knows his shit
<lazyPower> It certainly is putting uncertainty in my moving plans as well
<magicaltrout> that said, I did like the  Obama / Fallon slow jam the other day
<lazyPower> I was going to head over to the UK and AirBNB it for a month or two closer to fall...
<magicaltrout> that would be so bad in the UK
<magicaltrout> lazyPower: in all honesty, unless the bottom falls out of the economy nothing will happen for ages
<magicaltrout> so i wouldn't worry about changing  plans, just worry about the exchange rate ;)
<magicaltrout> although technically today I got a 10% pay rise on NASA stuff without doing  anything :)
<lazyPower> nice ^5
<magicaltrout> and there is another  devops day in Ghent I'm  sure jcastro would like you to attend in October ;)
<lazyPower> magicaltrout - if you dug arctic life, i'm jammin out a playlist i made in the seattle airport lounge now. http://24.3.228.120:8000/listen.m3u
<magicaltrout> hold on , i shall stab some buttons and have a listen
<lazyPower> buttonstabbing++
<magicaltrout> you're either giving me an incorrect  ip or you're behind a firewall
<magicaltrout> i lie
<magicaltrout> i can't type
<lazyPower> nah my girl is connected on that link. should be g2g if you're plugging it into a shoutcast compliant player
<lazyPower> i'll have to make it a point to setup the pirate radio again so its got an html5 player
<magicaltrout> cool i'm  in
<magicaltrout> such a pain in the balls that Sonos doesn't let  you add new radio stations on a phone/tablet else I'd have you streaming over my hifi ;)
<lazyPower> I hear ya. I just tore down all my audio gear in the house. I was kind of weepy when all that started coming down, because it got really real at that point.
<magicaltrout> awww
<magicaltrout> kwmonroe: i might have been a big harsh earlier... i need a tip ;)
<magicaltrout> cory pointed me to that peer releation side of the interface, and my understanding of relations is master <-> slave type stuff, but I'm after all the IP addresses of the units in the same service not related to each other via add-relations
<magicaltrout> just units that coexist via add-unit
<lazyPower> magicaltrout - so does the charm have a peer relationship? if so, you can scrape it from that. Thats the only way you'll get all service IP's of deployed charms units.  I do something very similar in the etcd 'cluster' interface.
<magicaltrout> well thats what cory pointed me to, but as I don't relate anything I'm not sure how that would work
<magicaltrout> for example, juju deploy dcos-master
<lazyPower> peering is implicit when you add-unit. they automagically get that relationship added.
<magicaltrout> juju add-unit dcos-master
<magicaltrout> ah
<magicaltrout> hmm
<lazyPower> also running support between cuts :P
<magicaltrout> hehe
 * lazyPower flex's
<magicaltrout> told you kwmonroe was as much use as a chocolate fireguard
<lazyPower> between you and cory_fu he gets that all the time
<magicaltrout> hehe
<magicaltrout> first christmas related email of the year \o/
<lazyPower> in june?
<lazyPower> lolwut
<magicaltrout> yup
<magicaltrout> it has santa on and everything
<kwmonroe> hello magicaltrout.  i trust i've left you waiting long enough.  watch the harshies next time.  anyway, the zk quorum relation that cory pointed you to should be what you need.. https://github.com/juju-solutions/interface-zookeeper-quorum/blob/master/peers.py#L39
<magicaltrout> yeah i didn't realise there was an auto peer relationship in the background
<magicaltrout> mystery solved
<kwmonroe> and like lazyPower said, it's implicit, so you'd create a method on that relation that returned a list of all peer ip addrs
<kwmonroe> ah
<kwmonroe> well, i shall stop re-explaining :)
<kwmonroe> also, it's important to note that "auto peering" may not always work.. the charm that you want to peer needs to specify a peer relation in its metadata.yaml.
<kwmonroe> ... which corresponds to an interface with a peers.py, etc, etc, blah, blah.
<magicaltrout> i love the way everyone at canonical always caveats anything i ever want to do with "may not always work" :P
<kwmonroe> why can't you just be happy with mediawiki?  you're always throwing new wrenches and acronyms in our shiny.
<magicaltrout> hehe
<magicaltrout> well i'm on a mission to finish off a bunch of the ones i've started
<magicaltrout> saiku will be done as soon as we ship 3.9, DCOS just needs master <-> master support, PDI just needs Hadoop pluggability
<kwmonroe> and, to be clear, when i said it "may not always work", what i should have said is "it will never work unless your charms has a peer relation in the metadata.yaml; it will always work easy peasy if you do it right."
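The peer relation kwmonroe is describing lives in the charm's metadata.yaml. A minimal sketch of the shape (the relation and interface names here are illustrative, not taken from the actual dcos-master charm):

```yaml
name: dcos-master
peers:                        # every unit of the service joins this implicitly
  cluster:                    # relation name -- illustrative
    interface: dcos-cluster   # must match an interface layer with a peers.py
```

With a stanza like this in place, each `juju add-unit dcos-master` joins the peer relation automatically, which is what makes the list of sibling unit IPs available to the charm.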
<magicaltrout> time to clean up all the charms i've started
<kwmonroe> nice!
<kwmonroe> lmk if/when you're hooking pdi up to hadoop.  we can walk through the hadoop-plugin relation, which will probably just work right out of the box. ;)
<magicaltrout> hehe.  Yeah I need to investigate the PDI side of it. Pentaho devs say "oh you just add the libs", personally i'm more pessimistic
<magicaltrout> we shall see
<kwmonroe> cool
<lazyPower> after this list i'm going to drop into some new stuff i picked up from Pooldor (a belgium artist who'm i've become quite infatuated with recently)
<magicaltrout> not.... artist infactuation!
<lazyPower> *Poldoore
<kwmonroe> lazyPower: i've hacked into your m3u.  thanks for the tunes this fine friday!
<magicaltrout> hehe
<lazyPower> kwmonroe aww yeee bruddah
<lazyPower> glad you could make it
<magicaltrout> i'd also like to port scispark in the not too distant future kwmonroe to try and entice the JPL guys a bit more
<kwmonroe> sweet magicaltrout!  maybe they'll give you another raise ;)
<kwmonroe> it's what 'merica does when we feel bad for other countries.
<magicaltrout> hehe
<magicaltrout> i'll speak to donald
<magicaltrout> he's up in scotland making a tit of himself
<kwmonroe> lol.  i expect nothing less.
<magicaltrout> told the scottish that its a  great day for the uk when scotland backed staying in the EU massively
<magicaltrout> good work Donald!
<magicaltrout> liking this track lazyPower
<magicaltrout> good work
<lazyPower> All Poldoore my friend. no lazy intervention on this perfection
<magicaltrout> hehe
<lazyPower> by far an away my favorite jam he's done
<lazyPower> this rocks my car on every road trip in 2016 so far :)
<magicaltrout> yeah this is some cool stuff, i'll be looking it up next week
<magicaltrout> thats part of the problem working at home
<lazyPower> Thanks for letting me share :)
<magicaltrout> i have music on all day
<magicaltrout> finding new stuff is always hard
<lazyPower> i really do dig getting to alienate people with my taste in music
<magicaltrout> urgh monday sucks, trip into london and a 9pm SFO meeting
<magicaltrout> *bork*
<magicaltrout> oh well tunes for the train at least!
<lazyPower> http://poldoore.bandcamp.com/
<lazyPower> ;)
<lazyPower> i should get an affiliate link, get me on the insider track to getting pre-release jams
<magicaltrout> right i'm offski, got some cricket to umpire in the morning... I know you americans don't understand that concept....
<magicaltrout> thanks for the tunes lazyPower!
<lazyPower> Thanks for tuning in magicaltrout o/ have a good weekend
#juju 2016-06-25
<acacac> mpu help
<jose> has there been any changes to the api that prevent me from getting output from juju status? I know the bootstrap mode is up, but juju status is not returning
<jose> oh nvm, I was working on the wrong env :P sorry all!
#juju 2017-06-19
<kjackal> Good morning Juju World!
<cnf> ohai
<cnf> is there a way to lint bundle yaml files?
<cnf> i keep getting "invalid charm or bundle provided", and i don't know what's wrong
<anrah_> cnf: if you deploy with --debug Juju should show the error
<cnf> anrah_: ah, that's slightly more useful, thanks!
<cnf> hmm, my juju version has no option to reload spaces, it seems
<wpk> It's only in 2.2
<roaksoax> /query/win 8
<cnf> roaksoax: new to irssi, or just mistyping a lot lately? :P
<roaksoax> cnf: ha! happens to me all the time
<cnf> yeah, i noticed :P
<roaksoax> :)
 * roaksoax pulls yet another roaksoax
<cnf> ugh
<cnf> juju is doing weird shit with my network settings again :(
<cnf> it works when MaaS brings it up, juju does stuff, and it no longer works
<cnf> :(
<cnf> why the hell is it adding post-up route add -net 172.20.19.248 netmask 255.255.255.248 gw 172.20.20.254 metric 0 || true ?
<cnf> that makes no sense, and that subnet isn't even associated with that interface
<boolman> when deploying services on lxd containers, the machines get stuck in pending sometimes
<boolman> how do I retry these without destroying my model?
<cnf> boolman: for containers, i have not found a way
<boolman> =/
<cnf> boolman: if you can, change a config value
<cnf> and change it back
<cnf> that tends to trigger a retry
<cnf> ugh "no matching agent binaries available" :(
<boolman> cnf: I will try that next time, thx
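cnf's workaround, spelled out (application and option names are placeholders for whatever your model uses):

```shell
# flipping any config value and flipping it back re-triggers hook execution
# on the stuck units; "myapp" and "loglevel" are placeholders
juju config myapp loglevel=debug
juju config myapp loglevel=info
```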
<cnf> anyone here good with networking, especially in relation to juju?
<cnf> how long can a controller upgrade take? o,O
<cnf> hmm. it's been upgrading for 40 minutes now
<cnf> 45
<BlackDex> cnf: upgrading to juju 2.2?
<BlackDex> That took a while for me
<BlackDex> i think it has something to do with the logs cleaning or something like that
<cnf> yes, and it's juju being retarded again
<cnf> juju and networking bs, as usual
<BlackDex> upgraded maas to 2.2 also already?
<BlackDex> if so, dubble check your subnet/vlan spaces
<BlackDex> since they changed it from subnets to vlan for the space definitiion
<cnf> no, because maas 2.2 breaks shit for me
<cnf> and it can't even access mass
<BlackDex> ah
<cnf> mass
<cnf> maas
<cnf> because it is being stupid
<BlackDex> hmm
<BlackDex> haven't yet played a lot with it yet. So i can't tell if it works for me
<BlackDex> the only thing i know is that upgrading from juju 2.0 to 2.1 and then to 2.2 breaks whole networking for me
<BlackDex> spaces are broken
<BlackDex> can't enrole a lxd with the correct interfaces
<BlackDex> but again, i havent looked at it that much, not debuged it or what ever
<cnf> MaaS 2.1 > 2.2 spaces changed
<cnf> fromn layer 3 to layer 2
<cnf> which is why i am NOT upgrading MaaS at this time
<BlackDex> it was broken from 2.0 > 2.1 for me already
<BlackDex> 2.2 didn't fixed it
<BlackDex> i think i need to start over again
<BlackDex> but that is shitty for a production env
<BlackDex> doubt that you can backup juju, remove the controller, install a new one with the new version and restore the backup
<BlackDex> to be as clean as possible
<cnf> so juju assumes that the machine you use to run the juju command has the EXACT same network connectivity as the controller?
<cnf> wt?
<cnf> jam: poke?
<BlackDex> cnf: No, not that i know
<BlackDex> i have several juju installs where the controller does not have a storage network
<cnf> storage? i'm talking about storage
<BlackDex> i'm just saying that it doesn't matter if the controller isn't connected to the storage network, it still works
<cnf> o,O i'm not talking about storage networks
<BlackDex> i know
<BlackDex> i'm just stating that it doesn't matter!
<cnf> of course not, because it's not relevant?
<BlackDex> So, the answer to your question is NO, it doesn't have to be the EXACT same network
<BlackDex> if i'm correct, they don't even need to be on the same subnet, preverably they do, but as long as they can send commands to each other it will all be fine
<cnf> you misunderstand
<cnf> juju assumes it has the same routing to get to stuff like the MaaS controller
<cnf> if it doesn't, upgrades fail
<BlackDex> ah
<BlackDex> that wasn't clear in your statement ;)
<cnf> hmz, how the hell am I going to fix this mess :(
<BlackDex> i don't know
<BlackDex> the failed juju upgrades i encountered ended up in a backup restore
<cnf> westcoast is 6 am right now, isn't it?
<cnf> i guess jam will get here once i get home ^^;
<jam> cnf: /wave
<cnf> ohai!
<cnf> jam: i fixed my problem (with an UGLY UGLY hack)
<cnf> jam: but do you have some time to debug it?
<cnf> see what caused it, and if it was me doing something wrong, or if i need to submet a juju bug?
<jam> I can at least chat about it a bit
<cnf> cool
<cnf> jam: so i just started a juju upgrade process
<cnf> and my juju controller was trying to access MaaS on an ip that is valid from my laptop, but just doesn't work from the juju controller
<cnf> and I can't find where it sets this, or where i can change it
<cnf> (so i assigned the ip on the juju controller, and used ssh port forwarding to redirect as a temporary hack)
<jam> cnf: so MAAS does have an address that the Juju controller can talk to it from, just not the one from your laptop?
<cnf> jam: right
<cnf> jam: it can talk to the maas controller on the maas network (172.20.20.1 in my case)
<cnf> jam: as a sidenote, in my case the juju controller runs on a KVM on the MaaS controller machine
<jam> cnf: if you're on a kvm on the MAAS controller, wouldn't you have a gateway of the maas controller's IP on the KVM bridge, and the MAAS machine itself would route the traffic to the other IP ?
<cnf> jam: no routing for it on that bridge
<cnf> and it would not be on a kvm in production, anyway
<cnf> so it's internal vs external ip for the MaaS controller
<cnf> the juju controller can only see the internal one
<jam> cnf: so when you register the URL to talk to MAAS, I believe we only support a single URL, but that would be the place to set it
<cnf> jam: register it where?
<cnf> on the client?
<jam> cnf: when you do "juju bootstrap" or "juju add-cloud", etc, we take the URL of the MAAS API
<jam> that includes the IP/hostname of the MAAS controller
<jam> cnf: another option is to use DNS to give the MAAS controller a name and have that resolve to multiple addresses
<cnf> jam: so you are saying i will have an eternal problem with this?
<jam> cnf: I'm not sure how you made it work in the first bootstrap case, as we always just take "here is the URL for MAAS"
<jam> I don't quite understand why Upgrade would be different than Bootstrap/initial deployment
<cnf> jam: i don't know...
<cnf> jam: but it's not an uncommon usecase, i think
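The URL jam refers to is the MAAS endpoint recorded at add-cloud/bootstrap time, and it must be reachable from where the controller will run, not just from the client. A sketch of the clouds.yaml entry (the cloud name is a placeholder; the IP is the one from this conversation; port 5240 is the MAAS 2.x default):

```yaml
clouds:
  my-maas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://172.20.20.1:5240/MAAS   # must route from the controller
```

Registered with `juju add-cloud my-maas maas-cloud.yaml`; juju stores this single endpoint, which is why an address that only works from the client breaks controller-side operations like upgrades.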
<cnf> back
<bdx> http://paste.ubuntu.com/24900445/
<bdx> ghaaaar
<lazyPower> bdx: wow, scaling for the win right?
<bdx> right
<Budgie^Smore> o/ juju world
<lazyPower> \o Budgie^Smore
#juju 2017-06-20
<kjackal> Good morning Juju world!
<anrah_> Is there a way to log what states are active?
<anrah_> oh, bus.get_states() seems to do that
<erik_lonroth_> kjackal: I'm trying it again. For my presentation I had to fake it so I would still like to succeed with this.
<kjackal> hi erik_lonroth_ help me remember is that for proxing bigtop?
<erik_lonroth_> No, I think I managed to get through that. It was stuck on "configuring spark...."
<erik_lonroth_> I'm deploying the bundle again as we speal.
<erik_lonroth_> speak.
<kjackal> cool, I am here to help
<erik_lonroth_> kjackal: Its great! I'm very glad for your help.
<erik_lonroth_> Are you working for Canonical today?
<admcleod_> hehe
<kjackal> erik_lonroth_: yeap!
<kjackal> admcleod_: still here! :)
<admcleod_> kjackal: syncharitiria!
<kjackal> admcleod_ chur
<lazyPower> anrah: Just as a belated follow up - charms.reactive get_states (when in a hook context) will dump a dictionary of the states on the unit.
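For reference, the same dump can be pulled from outside a hook context by wrapping it in `juju run` (the unit name is a placeholder):

```shell
# print the reactive states currently set on a unit; "myapp/0" is a placeholder
juju run --unit myapp/0 'charms.reactive get_states'
```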
<boolman> I'm trying to wrap my head around this issue I have when deploying openstack, http://ix.io/xKB
<boolman> the node clearly has IP's in the range, but for some reason it fails and cannot find a suitable IP
<kwmonroe> erik_lonroth_: how did the spark deploy go?
<Budgie^Smore> o/ juju world
<kwmonroe> \o Budgie^Smore!
<lazyPower> o/ Budgie^Smore
<bdx> put your docker containers on lxdbr0 when 'docker.ready' https://gist.github.com/jamesbeedy/39829a14fdc64583dfda4a6be1812aea
<bdx> :)
<bdx> possibly something similar could be worked into layer-docker?
<bdx> a simple bridge option ... I guess that all goes out the window when k8s and network plugins come into play though
<magicaltrout> our new k8s cluster has 350TB  of ram \o/
<tvansteenburgh> wat
<tvansteenburgh> how many nodes
<magicaltrout> quite a lot
<magicaltrout> i dunno tvansteenburgh i'm waiting for it to come online but we got quoted the core and ram size of the whole cluster
<magicaltrout> its DARPA's attempt to make machine learning automatic, machine learning for machine learning if you will.
<tvansteenburgh> and you're gonna put cdk on this?
<magicaltrout> yeah we have a small cluster going for test stuff so i was on the call an hour ago about getting our test stuff migrated out of our openstack testbed onto bare metal
<tvansteenburgh> so how will this gianormous cluster be provisioned? MAAS?
<magicaltrout> dunno about that yet, that would be my preference of course, have to sell it higher up as well though
<Budgie^Smore> sounds really cool magicaltrout
#juju 2017-06-21
<kjackal> erik_lonroth_: any news?
<magicaltrout> kjackal: a chap called Brian Mullan said I should speak to you! ;)
<kjackal> hey magicaltrout, Brian is a wise man!
<kjackal> what about?
<magicaltrout> ha, he hooked up with me on linkedin kjackal and said he'd seen the questions about dcos and lxd
<kjackal> magicaltrout: wow, I must be more carefull on what write on the web....
<BlackDex_> Hello there. I have a ha-juju env, and i lost 2 nodes one is still available, but that database is giving some locking issues atm. I Think i can fix that, but am i able to remove/repair the cluster or bring it back to just one node again?
<digvijay2016> hi, I am not able to download spectrum scale manager charm
<digvijay2016> can anyone tell me what's the issue
<digvijay2016> here is the link : https://jujucharms.com/u/ibmcharmers/ibm-spectrum-scale-manager/13
<BlackDex> digvijay2016: what happens if you try to deploy it?
<BlackDex> also what does `juju deploy --debug cs:~ibmcharmers/ibm-spectrum-scale-manager-13` tell you?
<digvijay2016> @BlackDex : here is the output http://paste.openstack.org/show/613264/
<BlackDex> oke juju status should now show you that the app is there
<BlackDex> only no unit/machine defined yet for it
<digvijay2016> but why I am not able to download the zip from UI?
<BlackDex> i dont know
<BlackDex> maybe you can download it using the charm-tools
<BlackDex> digvijay2016: `charm-pull-source cs:~ibmcharmers/ibm-spectrum-scale-manager-13` did it for me :)
<BlackDex> `sudo apt install charm-tools` to install those tools
<digvijay2016> let me try
<digvijay2016> working :) thanks.
<BlackDex> yw :)
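BlackDex's charm-tools workflow in full; note that recent charm-tools expose this as a subcommand of `charm` rather than a hyphenated script:

```shell
sudo apt install charm-tools
# download the charm source locally without deploying it
charm pull-source cs:~ibmcharmers/ibm-spectrum-scale-manager-13
```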
<rick_h> BlackDex: hmm, not really a "disable-ha" command. You can run with the warnings. If you can get it to a stable state perhaps you can migrate the models? (what version of juju is this?)
<BlackDex> juju 1.25
<BlackDex> and juju dusable-ha is nice, but if there are no controllers to talk to ;)
<rick_h> BlackDex: heh, fair enough and yea 1.25 is going to be tough...
<rick_h> BlackDex: so if you can get the db back and have the one remaining node speak back to you you can reuse enable-ha I believe to bring up new controller nodes
<BlackDex> i think i have mongo alive again
<rick_h> BlackDex: you might need to make sure to update the environments.yaml and in 1.25 there was some cache file for the controllers/models that had the list of IPs connected and the like
<BlackDex> trying to check if i can remove the replset via mongo it self
<rick_h> BlackDex: so there should be some state in juju tracking the IPs of the nodes and such that might cause at least errors but hopefully can ignore for now
<BlackDex> oke thanks for the tip
<BlackDex> ill check if i can find that :)
<BlackDex> First fix the db
<erik_lonroth_> kjackal: I'll have a look once I'm back home. I'm at work now discussing how we can ramp up our workplan towards juju, maas and and development of charms. We will need education here at some later stage specific to juju development. We are prepared to pay for good education.
<rick_h> BlackDex: yea, have to check the .jenv files to make sure the values in there are ok post-recovery
<BlackDex> oke
<BlackDex> do you maybe know where the user/pass is stored for the mongodb?
<BlackDex> i need to auth but i can't find it
<rick_h> BlackDex: hmm, no. Sorry. It changed a bunch since 1.25 and I've no idea.
<BlackDex> rick_h: found it: https://github.com/juju/juju/wiki/Login-into-MongoDB
<BlackDex> well i can't connect it seems, but that is the right procedure
<BlackDex> connected :). Now lets see if i can fix it all
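A sketch of the login procedure from the wiki page BlackDex found. The agent path and port below are the usual juju 1.25 controller defaults, and the password field name varies between versions (oldpassword vs statepassword), so check the wiki page against your agent.conf:

```shell
# the mongo admin password lives in the machine agent's agent.conf
agent_conf=/var/lib/juju/agents/machine-0/agent.conf
password=$(sudo awk '/^oldpassword:/ {print $2}' "$agent_conf")
mongo --ssl --sslAllowInvalidCertificates -u admin -p "$password" localhost:37017/admin
```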
<kjackal> erik_lonroth_: there are partnership programs that (useed to) include "charm schools" https://partners.ubuntu.com/programmes/charm
<kjackal> erik_lonroth_: I am not sure if charm schools are still included, I guess they are since its on the website
<erik_lonroth_> Thanx for pointing to that!
<rick_h> BlackDex: go go go! :)
<rick_h> msg lazyPower ping-a-do
<rick_h> bah
<lazyPower> rick_h: pong
<kwmonroe> reactive bash question:  http://paste.ubuntu.com/24917986/ on initial install, *both* install_app() and config_foo() are executed, presumably because the config.changed.foo state is initially true.  how can i alter config_foo() so it only gets executed when the user actually runs a "juju config app foo=bar"?
<kwmonroe> cory_fu: bash me brain smarts!  ^
<cory_fu> kwmonroe: Does the foo config option have a default value that is not an empty string?
<kwmonroe> yes cory_fu, it's a true boolean by default
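One way out of the problem kwmonroe describes, as a hedged sketch: have the install handler consume the initial `config.changed.foo` flag (set because the default is non-empty), so the config handler only fires on later, user-driven changes. This assumes the bash reactive helpers (`charms.reactive.sh`); the handler names mirror kwmonroe's paste:

```bash
#!/bin/bash
source charms.reactive.sh

@when_not 'app.installed'
function install_app() {
    # ... install steps, reading the initial value of "foo" directly ...
    # swallow the state set by the default value so config_foo stays quiet
    # until the user actually runs "juju config app foo=bar"
    remove_state 'config.changed.foo'
    set_state 'app.installed'
}

@when 'app.installed' 'config.changed.foo'
function config_foo() {
    # ... react to a real config change ...
    remove_state 'config.changed.foo'
}

reactive_handler_main
```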
<Budgie^Smore> o/ juju world!
<lazyPower> \o Budgie^Smore
<rick_h> The Juju Show #15 from the Canonical Offices in London in 30min!
<rick_h> kwmonroe: tvansteenburgh marcoceppi lazyPower bdx and others that might be interested in joining ^
<bdx> has anyone ran into the "Missing implementation for interface role" when building charms? - http://paste.ubuntu.com/24918410/
<rick_h> Juju Show url to join the hangout: https://hangouts.google.com/hangouts/_/5w4l6x4l4jcgno5mke7ru252vye
<rick_h> Juju Show viewing url: https://www.youtube.com/watch?v=MRQYURC83zQ
<lazyPower> juju docs recap: https://lists.ubuntu.com/archives/juju/2017-June/009132.html
<bdx> hey .. sorry about the abrupt exit there .. looks as though I haven't been plugged in all morning :/
<rick_h> bdx: lol all good. Your answer is in the video
<bdx> excellent
 * rick_h runs for food stuffs now
<magicaltrout> run?
<magicaltrout> hmmmmmmm k8s its been a while
 * magicaltrout trys to figure out the DNS craziness
<bdx> ^ ha
<bdx> magicaltrout: slap deis atop that k8s cluster, point the fqdns at the loadbalancer and call it good :)
<lazyPower> I feel like paas offerings are great for one off projects like client work
<lazyPower> but when it comes to long running infrastructure, i prefer that tight control of a strictly declared manifest
<bdx> I must say, letting deis handle the subdomain routing/proxying on a per application basis is quite nice
<bdx> I'm sure it has its draw backs though
<lazyPower> its no different than the ingress controller
<lazyPower> only it assigns a random domain on a wildcard domain
 * Budgie^Smore is catching up on The Juju Show 
<Budgie^Smore> migrate is definitely nice!
<lazyPower> yeahhhhh boiiii
<Budgie^Smore> hardly seems worth running k8s with only 2 machines though ;-)
<rick_h> Budgie^Smore: woot, it was for demo purposes :p
<Budgie^Smore> oh I see my confusion, I vaguely remember the k8s core bundle needing 4 machines
<Budgie^Smore> oh and you really should becareful when you stick those 2 fingers up with the back facing the person, especially in the UK ;-)
<Budgie^Smore> So I do have a question about charms since you have started working on CentOS ones, is there any thoughts on OS agnostic charms?
<Budgie^Smore> jupyter looks awesome
<Budgie^Smore> rick_h can I +1 for a Charm School Notebook? (or did I miss that being done already?)
<lazyPower> Budgie^Smore: so, the general guidance there was to make good abstractions where it makes sense, but not to cram the kitchen sink into charms because it leads to messy solutions like poorly written cookbooks (as an example, i'm sorry chef fans not picking on ya)
<lazyPower> where you wind up with abstracted abstractions that do 70% of what you want, but fall down because of vendor differences
<Budgie^Smore> LOL oh I get that
<lazyPower> but there are many community members who are asking about this
<lazyPower> so, i think there's a larger conversation that needs to happen between us, and the community, to make this streamlined. Like defining clear interfaces for these plumbing libs, and give users a  consistent interface to bind their work to.
<lazyPower> for example, i shouldn't care what series, i should be able to just say install_package('list, of, things, that, are, awesome') and get a consistent abstraction independent of implementor that does the right thing. installing my package, and thats all it does, anything else on top of that would be either a different abstraction, or manual python.
<lazyPower> at least thats how i would expect it to work, maybe there are better patterns.
<lazyPower> i'm not an expert in all things :)
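The abstraction lazyPower sketches might look something like this; purely illustrative, not an existing charm-helpers API:

```shell
# series-agnostic package install: dispatch on whichever package manager
# the unit's OS actually provides (illustrative only)
install_package() {
    if command -v apt-get >/dev/null 2>&1; then
        sudo apt-get install -y "$@"
    elif command -v yum >/dev/null 2>&1; then
        sudo yum install -y "$@"
    else
        echo "no supported package manager found" >&2
        return 1
    fi
}
```

The charm author then just calls `install_package foo bar` and never branches on series, which is the consistent-interface idea being described.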
<axisys> can I discuss conjure-up in this channel?
<stokachu> Sure
<stokachu> axisys: ^
<axisys>  conjure-up kubernetes fails with Unable to locate package kubernetes
<axisys> stokachu: ^
<axisys> I am on ubuntu 16.0.2 LTS
<stokachu> axisys: what version of conjure-up
<axisys> just ran sudo apt install conjure-up
<axisys> conjure-up 0.1.2
<stokachu> axisys: remove that package and follow the instructions from conjure-up.io
<axisys> conjure-up 2.2.1
<stokachu> Yea
<axisys> stokachu: working .. sweet.. now I can follow the webinar.. thanks a lot!
<stokachu> axisys: np!
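The fix stokachu points to amounts to replacing the long-stale deb with the snap:

```shell
sudo apt remove conjure-up          # drops the old 0.1.2 deb
sudo snap install conjure-up --classic
```

After that, `conjure-up` should report a 2.2.x release, matching what axisys saw.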
<bdx> stokachu: we need to find and obliterate all sources pointing to the apt install method .... I'm sure you hear about it more then I do, but my users end up there too still somehow
<stokachu> bdx: yea i need to file a bug to have it removed in new releases and ive got a package that informs the user to use the snap
<stokachu> just need to go through the sru process
<bdx> ahh, that would make sense
<bdx> now with juju-2.2.0, all of the sudden I can't bootstrap aws
<bdx> my vpc ids that worked before no longer work
<bdx> I get these strange errors now ... writing up a bug in a few
<bdx> gah
<bdx> I'm getting successful bootstraps on other another vpc ... possibly just misconfiguration on my end
<bdx> ahh .. did we lose the "--to 'subnet=subnet-3737373'" with 2.2 ?
<bdx> for the bootstrap placement
<bdx> ?
<bdx> ahh I know whats wrong
<bdx> there is no metadata in us-east-2!
<bdx> how can I go about getting that metadata up there?
<bdx> possibly I'm having an issue with dedicated vs default tenancy
<axisys> Enter your Ubuntu SS credentials during conjure-up kubernets -> aws -> us-east-1 fails with ERROR cannot log into jimm.jujucharms.com .. but I can login to login.ubuntu.com with same credentials
<axisys> what gives?
<axisys> SSO*
#juju 2017-06-22
<kwmonroe> axisys: shot in the dark, but can you login with those creds here: https://jujucharms.com/login/
<axisys> kwmonroe: yes I can.. I just did..
<axisys> but still same failure with conjure-up
<stokachu> axisys: can you file a bug at https://github.com/conjure-up/conjure-up
<axisys> I can deploy a kubernetes model from the jujucharms gui.. no problem
<axisys> using aws
<stokachu> axisys: so you can do it without logging in through jaas
<axisys> stokachu: yes .. using firefox
<axisys> even google chrome
<axisys> I just exported the model yaml..
<axisys> yep.. checked and my aws now has 3 instances of ec2 running..
<stokachu> No I mean use conjure-up but not go through the hosted jaas
<axisys> so from terminal it did not work with cojure-up kubernetes as it failes in SS)
<axisys> SSO
<axisys> failed*
<stokachu> axisys: there are 2 ways to deploy, one is through SSO (jaas) which is what you are picking, the other can go diretly through AWS
<stokachu> so when you select which controller to utilize you pick the second option *self hosted*
<stokachu> at least until we get this jaas issue sorted
<erik_lonroth_> kjackal: Sorry for late reply, we are prepping for midsummer here....
<kjackal> no worries erik_lonroth_
<erik_lonroth_> kjackal: I'm still begugging why spark won come up
<kjackal> do you have any logs we can look at?
<erik_lonroth_> I'm able to find the message you linked to before. Yeah, I'll send some output. sec.
<erik_lonroth_> https://pastebin.com/bqWKGvZE
<erik_lonroth_> juju status gives: https://pastebin.com/DZey05k0
<erik_lonroth_> As you can se, spark is still in "maintenance".
<erik_lonroth_> kjackal: A completely different thing.... I've just bought two  nanopi 64 bit ARM (4 cores!) to play with. How would I do to put those as usable by juju ? =D A fun side project perhaps.
<kjackal> nice erik_lonroth_ :) I haven't done anything like that. I would start with https://www.ubuntu.com/download/server/arm I think SaMnCo has done some experimentation with rpi he might have blogged about it
<kjackal> let me see
<SaMnCo> well I mostly did MAAS on rpi but I backed off around 2.1 bc we did not release binaries for arm anymore
<kjackal> https://askubuntu.com/questions/431526/guide-for-using-juju-in-deploying-to-a-raspberry-pi
<erik_lonroth_> Oh, so you don't release binaries for ARM anymore?
<erik_lonroth_> Was there any technical reason for that? I have a friend that happily would endeavour into that.
<kjackal> erik_lonroth_: which bundle did you deploy?
<erik_lonroth_> juju deploy cs:bundle/hadoop-spark-37
<erik_lonroth_> Let me know what I can investigate
<kjackal> thanks
<erik_lonroth_> Could this be another proxy issue?
<erik_lonroth_> https://pastebin.com/4RKMmBqW
<kjackal> erik_lonroth_: there are some resources we would rather have access to: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/spark/layer-spark/config.yaml
<kjackal> they are under https://s3.amazonaws.com/jujubigdata
<erik_lonroth_> Oh, "resources" - thats a chapter in the juju world I've just recently got in touch with...
<erik_lonroth_> So, you need the "config.yaml" from me? I've used the default and have not changed anything in there yet.
<kjackal> so... erik_lonroth_ can you try a deployment with the bench mark disabled? https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/spark/layer-spark/config.yaml#L16
<kjackal> actually you might be able to do that right now
<kjackal> since your charm is not on error state
<erik_lonroth_> .. OK. Can I "redeploy" easily without having to re-install all machines etc?
<kjackal> juju config spark spark_bench_enabled=false
<erik_lonroth_> I'll test immediately
<kjackal> but your charm is not on an error state... strange...
<erik_lonroth_> Woho! "spark" is now "active".... But I agree that getting stuck in the "maintenance" mode is not very informative.
<erik_lonroth_> kjackal: How to I enable the resources so that I can get the benchmark examples to run ?
<kjackal> you would need to have access to those s3 buckets on amazon
<erik_lonroth_> When I try, I get "access denied"
<erik_lonroth_> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>609A3CEEE51904DE</RequestId><HostId>kg/5Bjg6iytNAlLW5tXY8hP493G7l5kfFPnVrGHXa8nxF40rMncVF9dyJqudPuEs</HostId></Error>
<kjackal> erik_lonroth_: can you please file a ticket for not going into an error state?
<kjackal> I can reach https://s3.amazonaws.com/jujubigdata/ibm/noarch/SparkBench-2.0-20170403.tgz from here
<erik_lonroth_> I could reach that link, but the first one gives me the access denied.
<kjackal> what is the "first one" ?
<erik_lonroth_> https://s3.amazonaws.com/jujubigdata
<kjackal> erik_lonroth_: a yes, https://s3.amazonaws.com/jujubigdata should give you access denied you cannot see what is there
<erik_lonroth_> Ah Ok
<erik_lonroth_> I'll file that ticket for you. What do you want me to point out?
<erik_lonroth_> ... also where do you like me to post the issue?
<kjackal> erik_lonroth_: what you can do is download the benchmark binary from here https://s3.amazonaws.com/jujubigdata/ibm/noarch/SparkBench-2.0-20170403.tgz then host it somewhere you know is accessible from your infrastructure and then when you deploy spark you can set the new url  of the resource by providing this config param: https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/spark/layer-spark/config.yaml#L33
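kjackal's suggestion as a command sketch; the mirror host is a placeholder, and the exact config option name should be taken from the layer's config.yaml linked above:

```shell
# grab the benchmark tarball while you have access to it
curl -LO https://s3.amazonaws.com/jujubigdata/ibm/noarch/SparkBench-2.0-20170403.tgz
# host it anywhere your units can reach (internal web server, etc.), then
# point spark at the mirror -- the option name here is illustrative
juju config spark spark_bench_url=http://mirror.internal/SparkBench-2.0-20170403.tgz
```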
<kjackal> erik_lonroth_: for the ticket describe how you reached that deployment. Provide the logs, and say that you did not reach the ready state. Based on the error you have seen on the logs you would expect an error state but you only got the maintenance
<kjackal> let me tell you where to file the ticket
<kjackal> erik_lonroth_: do you have access here, or do you need to create an account?
<erik_lonroth_> Account where?
<erik_lonroth_> I've got a github account
<kjackal> erik_lonroth_: https://issues.apache.org/jira/secure/CreateIssue!default.jspa
<erik_lonroth_> Ah, I don't think I have an account
<kjackal> the spark layer is upstream, part of bigtop
<kjackal> erik_lonroth_: what about here: https://github.com/juju-solutions/layer-apache-bigtop-base
<kjackal> erik_lonroth_: can you file an issue there?
<erik_lonroth_> I'll try file it on github
<kjackal> thank you erik_lonroth_, kwmonroe will be pleased to hear you found the issue. Many thanks for the tcket
<erik_lonroth_> Can I reference you in github ?
<erik_lonroth_> Whats your usernames there so I can get it in there?
<erik_lonroth_> kjackal: https://github.com/juju-solutions/layer-apache-bigtop-base/issues/59
<kjackal> thank you erik_lonroth_, would you mind if I linked the pastebins you showed me?
<erik_lonroth_> Thats no problem.
<kjackal> thanks
<magicaltrout> thanks for the call kwmonroe, very helpful!
<kwmonroe> thanks for showing up magicaltrout!  and especially for the ping -- i was sitting in that room by myself like "dang, nobody wants to hangout with me".  the priority discussion was super helpful for me.  i'll fire off a minutes email so all the jealous people here know what we're talking about.
<admcleod_> awww such love
<magicaltrout> you're just jealous admcleod_
<admcleod_> it is not even a joke to say that is true.
<kjackal> magicaltrout: I am crashing that meeting next time!
<magicaltrout> nice kjackal
<magicaltrout> ask kwmonroe what his golf handicap is
<kwmonroe> i don't like to brag, but i typically shoot in the 60s.
<kwmonroe> not bad for the front 9, #amirite?
<Budgie^Smore> o/ juju world
<roaksoax> /w/win 7
#juju 2017-06-23
<kjackal> Good morning Juju world!
<Zic> hi here, can I enable alpha-resources (--runtime-config=batch/v2alpha1=true) in CDK?
<tvansteenburgh> Zic: you can do it manually
<tvansteenburgh> Zic: or wait for https://github.com/kubernetes/kubernetes/pull/47912/files
<Zic> oh, +1 for this PR
<Zic> tvansteenburgh: if I want to use these extra args right now in the kube-apiserver's snap, where can I put them?
<tvansteenburgh> Zic: you would need to run `snap set kube-apiserver ...` on the master node
<Zic> tvansteenburgh: like snap set kube-apiserver --runtime-config=batch/v2alpha1=true or with another subcommand?
<tvansteenburgh> Zic, yeah, like that
<Zic> does a command exist to check what the set value was before/after? I'm afraid of removing something by simply replacing it with my extra parameter :o
<Zic> # snap set kube-apiserver --runtime-config=batch/v2alpha1=true
<Zic> error: unknown flag `runtime-config'
<Zic> oops
<Zic> found a way here: /snap/kube-apiserver/current/meta/hooks/configure
<Zic> hmm cannot write it :/
<Zic> how can I add a config-arg to make "snap set" work properly?
<lazyPower> Zic: those args are found in /var/snap/$snap_name/current/args
<Zic> lazyPower: doesn't have it for kube-apiserver :x
<Zic> (hi o/)
<Zic> oh wait, on /var? didn't check it, just /snap
<lazyPower> Zic: yep its in /var
<lazyPower> Zic: there are 2 data directories for snaps.  $SNAP_DATA and $SNAP_USER_DATA
<lazyPower> $SNAP is the /snap path, and all those are the squashfs mounted bits, which are read only
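The args-file edit lazyPower is pointing at can be sketched like this; the snippet works on a scratch copy so nothing on the system is touched, and the daemon unit name in the trailing comment is an assumption:

```shell
# Demo on a scratch copy; the real file (per lazyPower) lives at
# /var/snap/kube-apiserver/current/args
ARGS_FILE="$(mktemp)"
printf -- '--allow-privileged=true\n' > "$ARGS_FILE"   # pre-existing flags survive
# append the flag only if it is not already present
grep -q -- '--runtime-config' "$ARGS_FILE" || \
  printf -- '--runtime-config=batch/v2alpha1=true\n' >> "$ARGS_FILE"
cat "$ARGS_FILE"
# On the real node, restart the service afterwards; this unit name is an
# assumption -- check `systemctl list-units | grep kube-apiserver`:
#   sudo systemctl restart snap.kube-apiserver.daemon
```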
<lazyPower> also please forgive my typing, i'm operating on about 4 hours of sleep, the fire alarms went off in my building last night and kept me up entirely too long :|
<Zic> huhu, here it's the hot weather which kept me from sleeping nicely :)
<lazyPower> seems to be going around eh?
<Budgie^Smore> o/ juju world
<lazyPower> \o Budgie^Smore
#juju 2018-06-18
<wallyworld> thumper: lgtm, i wonder if omitempty should be used though for those attributes
<thumper> wallyworld: it is over the api... but can't hurt I guess
<wallyworld> i was just thinking for older juju that didn't have them
<wallyworld> it wouldn't bloat the wire format
<wallyworld> babbageclunk: thanks for review, could you clarify your last couple of comments about naming? maybe a quick HO?
<thumper> babbageclunk: how goes the raft leadership bits?
<babbageclunk> wallyworld: sorry, was lunching - wanna chat ?
<babbageclunk> thumper: alright - my first idea for funnelling the requests to the raft leader wasn't a goer, doing forwarding between controllers now
<thumper> babbageclunk: so what is the approach going to be?
<babbageclunk> thumper: each controller machine will maintain an API connection to the raft leader, the leadership facade will forward claim requests to it.
<thumper> babbageclunk: do we really want yet more api connections held open?
<thumper> have you considered the pros/cons of using the hub?
<babbageclunk> thumper: well, this will only be one per controller machine.
<thumper> who holds the api connection?
<babbageclunk> thumper: I *thought* about it - but it seemed quite fiddly to turn it into the request/response I'd need.
<babbageclunk> thumper: a new worker in the controller machine agent - basically the same as the apicaller but also watching the raft node for leadership changes.
<thumper> how are the leadership api calls going to get funneled to it?
<babbageclunk> thumper: you mean how will the leadership facade get hold of the API connection? Or something else?
<babbageclunk> Shall we do a hangout?
<thumper> yes, the first bit
<thumper> sure
<babbageclunk> thumper: hadn't fully worked that out - I was thinking in kind of a similar way to the raft-box.
<babbageclunk> thumper: in 1:1
<babbageclunk> ha, too quick
<wallyworld> babbageclunk: hey, free now?
<babbageclunk> wallyworld: sorry, talking to thumper (about you)
<wallyworld> about the raft approach - we did discuss pub/sub
<babbageclunk> yeah, that's what I was saying
<wallyworld> but ruled it out as the model doesn't fit so well
<wallyworld> we want rpc
<babbageclunk> but he's talked me round
<wallyworld> ok, be interested to hear the arguments
<wallyworld> babbageclunk: ping me when done so i can clarify your comment on the method/interface naming?
<babbageclunk> wallyworld: wilco
<babbageclunk> wallyworld: ok done, jump in standup?
<babbageclunk> Today's one though, not tomorrow's one.
<wallyworld> babbageclunk: sorry, missed ping, be right there
<seyeongkim> deploying lxd with juju on artful and bionic doesn't create a proper bridge (it was br-ens4 on xenial in my case); is it a known issue?
<thumper> seyeongkim: I think this has been fixed... which version are you using?
<thumper> but yes, it was a known issue :)
<seyeongkim> thumper ah, it is 2.3.1-xenial
<wallyworld> anastasiamac: here's that PR. a net deletion of 40 lines :-) https://github.com/juju/juju/pull/8827
<anastasiamac> wallyworld: ta, will look soon :)
<wallyworld> no rush
<BlackDex> KingJ: how did you configure maas/juju to put a default gateway for all interfaces? As far as i know you can only have 1 default gateway. If you want more you need to create separate routing tables
<BlackDex> maybe you should see what tcpdump is telling you to figure out what is coming in and then how it tries to go out
<TheAbsentOne> Does anyone see where I go wrong here
<TheAbsentOne> https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/mysql/mysql-proxy/reactive/mysql-proxy.py
<TheAbsentOne> 2nd render does not occur but the flag gets set
<stub> TheAbsentOne: Lines 44 and 69 are both rendering to the same destination, so one render will overwrite the other.
<stub> oh, ignore that
<TheAbsentOne> stub: yeah that would have been an interesting answer if that was the issue
<TheAbsentOne> I'm kinda baffled as I'm really not doing a lot here
<stub> yeah, time to stuff in a pile of print statements or step through it in pdb
<TheAbsentOne> what is pdb? xD
<TheAbsentOne> When I looked with debug-hooks the root flag was set which is weird
<stub> The only thing that will set the flag is your charm. It was either set in a previous hook run, perhaps from much earlier code if you have been using upgrade-charm, or some other handler is setting the flag.
<stub> status_set('active', 'mysql-root done!') <- should get some noise in the logs, even if the render() silently failed.
<TheAbsentOne> I'll try deleting the charms and starting from ground up once again, btw stub do you mind giving a quick look at this as well: https://github.com/Ciberth/gdb-use-case/blob/master/mininimalexamples/redis/redis-proxy/reactive/redis-proxy.py#L32
<TheAbsentOne> This was the way to do it in the docs (I thought) but it fails miserably
<stub> pdb is the Python debugger, so if you know how to drive it you can edit reactive/whatever within a debug-hooks session, stick in the magic 'import pdb; pdb.set_trace()', and drop to the debugger at that point when you execute the hook
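A minimal sketch of that pattern, with hypothetical handler and config names; in a real charm the `pdb.set_trace()` call would sit inside the reactive handler you are debugging, and you would trigger the hook from within a `juju debug-hooks` session:

```python
import pdb

def configure_proxy(config):
    """Hypothetical handler body standing in for reactive/mysql-proxy.py."""
    # Drop into the debugger just before the step that misbehaves; when the
    # hook fires under `juju debug-hooks` you get an interactive pdb prompt.
    if config.get("debug"):          # guard so normal runs are not interrupted
        pdb.set_trace()
    return "user={user} port={port}".format(**config)

print(configure_proxy({"user": "root", "port": 6446}))
```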
<stub> I don't know what flags the redis interface claims to set or when.
<stub>  redis_ep = endpoint_from_flag('redis.available') may fail if it is an old-style interface
<TheAbsentOne> allright thanks stub!
<TheAbsentOne> stub: I was doing something stupid on the redis thing, got it working now (cuz I correctly receive the port number) but host and uri are None O.o Weird stuff. If others are here who have used redis before pls ping me ^^
<TheAbsentOne> also stub I redid it from scratch and it works now more or less for mysql!
<rfowler> $ juju status | grep memcache
<rfowler> memcached                             unknown      1  memcached              jujucharms   21  ubuntu
<rfowler> memcached/0*              unknown   idle   3/lxd/4  10.90.0.86      11211/tcp
<rfowler> why does memcache give a status of unknown?
<rfowler> this is with an openstack base deploy
<rick_h_> rfowler: so it's because the charm hasn't been updated to supply the status messages.
<rick_h_> rfowler: it's a feature that was added a while ago but charms that haven't been updated will show an unknown status since they're not saying "hey, I'm doing maintenance" or such
<rfowler> rick_h_: so this is expected behavior i guess
<rick_h_> rfowler: yea
<rfowler> rick_h_: is there an updated charm for memcache i should be using instead?
<rick_h_> rfowler: no, I think it's just one of those things that as long as it does its job folks haven't put any love into it.
<rfowler> rick_h_: ha, ok
<TheAbsentOne> rfowler: there are indeed many other charms out there that have the same behaviour; I was once wondering too! ^^
<stub> application_name: The name of the remote application for this relation, or None.
<stub> cory_fu: Do you recall in what situation a Relation instance will have an application_name of None? 'Cause one of my users is seeing it, and I can't reproduce it.
<cory_fu> stub: Hrm.  It needs at least one remote unit, so if there's a case where relation.units is empty, it will be None.  That could happen if the remote application is scaled to 0 or after the last unit has triggered the -departed hook but the relation is still established.
<stub> Hrm, I thought this was during install, but maybe the reason I can't reproduce is the failure is happening during teardown.
<stub> oh, unless the deployment script is deploying a 0 unit application, relating, then adding units. which would be weird.
<cory_fu> stub: Hrm.  Could it happen during install?  Maybe if both applications are coming up at the same time and the relation gets established before the remote application has any units?
<cory_fu> Yeah, odd
<stub> I didn't think the relation could be joined until there is at least one remote unit to join with
<cory_fu> stub: I do know that during bundle deploys the application creation is separate from adding the units which is also separate from adding the relation, so I think it's technically possible but it seems like it would be really hard to hit unless something strange went wrong
<cory_fu> stub: Maybe provisioning errors?
<stub> no, he can repeat it
<stub> I think it is a mojo spec, so I'll see if I can find it to reproduce
<cory_fu> stub: Is there another way we can determine the application name without requiring a remote unit?
<stub> No, that is the only way. And to make it worse, with cross model relations one side or the other sees the remote application as juju-abcdef012312 or similar gibberish
<stub> oh, charm goal state will expose it too I think.
<stub> oh, I think I know. I'll trace it through tomorrow.
<stub> But I think *before* the relation-joined hook is run, endpoint_from_name('foo').relations may return one or more relations that have not yet been joined, and they will have no remote units and no remote application name yet.
<stub> maybe that is a reactive bug, or maybe I need to be more careful.
<stub> (or maybe my guess is completely wrong, so need to prove it first)
<cory_fu> stub: Reactive could always filter the relations for ones with no units?  Is it useful to reference relations that don't have any units?
<stub> logging the relation id?
<cory_fu> *shrug*
<stub> I can't think of anything particularly useful. I don't know if it is worth filtering or not.
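A minimal sketch of the filtering idea, using stand-in classes rather than the real charms.reactive API: skip relations that have no remote units yet, since their `application_name` is still None.

```python
class Relation:
    """Stand-in for charms.reactive's Relation (illustrative, not the real API)."""
    def __init__(self, relation_id, units):
        self.relation_id = relation_id
        self.units = units
        # application_name is derived from the remote units (per cory_fu),
        # so it is None when no remote unit has joined yet
        self.application_name = units[0].split("/")[0] if units else None

def joined_relations(relations):
    """Return only relations that actually have remote units."""
    return [rel for rel in relations if rel.units]

rels = [Relation("db:0", ["cqlsh/0"]), Relation("db:1", [])]
for rel in joined_relations(rels):
    print(rel.relation_id, rel.application_name)   # db:0 cqlsh
```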
<wallyworld> veebers: want to jump in on release call?
<veebers> wallyworld: yep omw
<KingJ> BlackDex: In MaaS each subnet has a gateway defined, which seems to translate to MaaS/Juju creating a default route per interface?
#juju 2018-06-19
<anastasiamac_> wallyworld: part  1 - https://github.com/juju/juju/pull/8828 (the simplest...)
<wallyworld> anastasiamac_: will look as soon as i finish talking to vino
<anastasiamac_> nws, thnx!
<wallyworld> anastasiamac_: lgtm with a question
<anastasiamac_> wallyworld: part 2 - api consolidation for cli and worker access - https://github.com/juju/juju/pull/8829 (largely mechanical....)
<wallyworld> ok
<veebers> wallyworld: query, the resource-get command currently returns, well prints, the file path of the named resource. For docker stuff that doesn't make sense, should we update expectations so it either: 1) prints some json/yaml with the needed details (image, secret) or 2) writes a json/yaml file and returns the path to that?
<wallyworld> veebers: it prints the file path because it has streamed the content to that location. i think for other hook commands like config-get we print to stdout
<wallyworld> i'll check
<veebers> wallyworld: right sorry it does print to stdout
<veebers> it's the content of that I'm querying
<veebers> currently resource-get only prints the filepath of the resource (after it's been downloaded/cached/whatever) but for the docker bits we don't want a filepath, we want more
<wallyworld> the content for a docker image resource will be the result of the charmrepo api call
<wallyworld> we print a filepath for file resources because that fits the semantic
<veebers> wallyworld: ok, so the resource-get hook command will change from just ever printing out a file path, to printing out whatever is right for the resource type. filepath for filetype, json for docker-image type
<wallyworld> veebers: correct
<wallyworld> although i think the output format should be either json or yaml
<wallyworld> depending on what the charm asks for
<wallyworld> anastasiamac_: review done, 1 thing to fix i think
 * anastasiamac_ looking
<veebers> wallyworld: ack, so optional arg for format (for when it's not just a straight string of 'filepath')
<wallyworld> yup
<anastasiamac_> wallyworld: i disagree about auth. it is the same auth as for model migration (and the upgrader, I think)
<wallyworld> hmmm, that's wrong then
<wallyworld> auth needs to check for correct callers
<wallyworld> at the very least IsAuthClient() || IsAuthAgent() etc
<wallyworld> which model migration facade?
<anastasiamac_> yeah, i can do that
 * anastasiamac_ digs up the magnifying glass..
<wallyworld> you'd want AuthController() || AuthClient()
<wallyworld> those other cases just checking for != nil should be fixed
<anastasiamac_> not authcontroller... it will be a model surely?... and we do not have an equivalent..
<anastasiamac_> that's why all model-scoped api's auth do that
<anastasiamac_> i guess, i could check if it's any agent (machine|unit|application) at the very least?... i'll update a PR
<anastasiamac_> wallyworld: apparently, i was talking nonsense - there is no precedent for just a nil check :D
<wallyworld> whew, thought i was going crazy
<anastasiamac_> \o/ not yet... and hopefully, not from me :D
<wallyworld> it's AuthClient() || AuthXXXX()
<wallyworld> whatever is the agent that makes the call
<anastasiamac_> well, it's authclient() for one of the apis (the one the cli will use, the client one)
<anastasiamac_> and a 3 agent check for the other...
<anastasiamac_> i've pushed the changes
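The auth check being discussed might look roughly like this in Go; the interface and type names here are illustrative, not Juju's actual facade types. The point is to reject callers explicitly rather than only testing that the authorizer is non-nil.

```go
package main

import (
	"errors"
	"fmt"
)

// Authorizer is an illustrative stand-in for a facade authorizer.
type Authorizer interface {
	AuthClient() bool
	AuthMachineAgent() bool
	AuthUnitAgent() bool
	AuthApplicationAgent() bool
}

var ErrPerm = errors.New("permission denied")

// checkClientAuth guards a CLI-facing facade: clients only.
func checkClientAuth(a Authorizer) error {
	if a == nil || !a.AuthClient() {
		return ErrPerm
	}
	return nil
}

// checkAgentAuth guards a worker-facing facade: the "3 agent check"
// (machine | unit | application) mentioned above.
func checkAgentAuth(a Authorizer) error {
	if a == nil || !(a.AuthMachineAgent() || a.AuthUnitAgent() || a.AuthApplicationAgent()) {
		return ErrPerm
	}
	return nil
}

// fakeAuth is a toy implementation for demonstration.
type fakeAuth struct{ client, machine, unit, app bool }

func (f fakeAuth) AuthClient() bool           { return f.client }
func (f fakeAuth) AuthMachineAgent() bool     { return f.machine }
func (f fakeAuth) AuthUnitAgent() bool        { return f.unit }
func (f fakeAuth) AuthApplicationAgent() bool { return f.app }

func main() {
	fmt.Println(checkClientAuth(fakeAuth{client: true})) // <nil>
	fmt.Println(checkAgentAuth(fakeAuth{}))              // permission denied
}
```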
<vino> wallyworld: I have pushed a commit addressing ur review comments.
<wallyworld> looking
<wallyworld> vino: just need one more test for truncation as there's a coding error
<wallyworld> test would have picked it up
<vino> ok.
<veebers> babbageclunk: did you have some emacs bits so you could like all definitions of a function declared in an interface?
<babbageclunk> veebers: I use go-guru for that
<veebers> babbageclunk: would you know what the command is?
<babbageclunk> veebers: go-guru-implements
<veebers> babbageclunk: ack, cheers!
<veebers> babbageclunk: ah, that's for a type to list what interfaces it implements right? Can you do the other way around?
<veebers> I want to find places where a func signature is implemented
<babbageclunk> If you point to a method in an interface definition, it will list all of the method defs that implement that interface method.
<veebers> oh wait, no I think that's done it ^_^
<babbageclunk> B)
<veebers> babbageclunk: ack, yeah I misread something. Thanks! that's really helpful
<babbageclunk> yeah, it's great - I use it pretty frequently
<vino> wallyworld: pushed a commit
<vino> wallyworld: plz ping me once u have reviewed. I can merge and start the next phase
<wallyworld> vino: looking
<vino_> just a small correction i am doing now
<wallyworld> vino: omit empty bit is not quite right
<wallyworld> `json:"charm-vesion,omitempty" yaml:"charm-version,omitempty"`
<vino_> yes
<vino_> thats wat
<wallyworld> ok :-)
<vino_> i am correcting that now
<vino_> gimme few mins. I found it in CI
<vino_> :q
<BlackDex> KingJ: what does `ip route get <ip-request-comes-from>` tell you? the correct interface/subnet? or a wrong one?
<vino_> wallyworld: fixed the status charm-version issues
<vino_> pushed the commit
<vino> wallyworld:
<wallyworld> looking
<wallyworld> vino: looks good to land
<vino> sure.
<vino> thank u
<vino_> wallyworld: Can we discuss abt the next phase ? whenever u have time. please ping me.
<wallyworld> vino: ok, now?
<vino> sure.
<vino> standup HO ?
<wallyworld> yep
<vino> wallyworld: i didn't git add the version file created in testcharms
<vino> thats the issue
<wallyworld> ah ok
<vino> i will commit that and squash
<vino> thank u
<vino> :)
<wallyworld> kelvin: have to run, had a quick look at charm pr. mostly ok, a few small fixes. will check again later
<kelvin> wallyworld, thx, just got the keys and back home. looking on it now
<stub> Speaking of charm versions, anyone recall when $CHARM_DIR/.juju_charm became a thing?
<TheAbsentOne> suddenly every hook gives an error on "subprocess.CalledProcessError: Command '['config-get', '--all', '--format=json']' returned non-zero exit status 1" Has anyone encountered something similar? Gonna restart machines and charms but might as well ask
<TheAbsentOne> Also, juju status gives no error; it's when I run a hook in debug-hooks
<rick_h_> TheAbsentOne: hmm, are all the agents up and in the green?
<TheAbsentOne> at that moment it was like it lost connection to the agents but status showed everything green yeah rick_h_
<TheAbsentOne> rick_h_: I'm having the same problem now when spinning up a new charm/machine :s I'll show error gimme a sec
<rick_h_> TheAbsentOne: gotcha, yea I know it pings every so often and it needs to miss X pings before it shows down
<rick_h_> TheAbsentOne: kk
<TheAbsentOne> "connection is shut down"
<TheAbsentOne> https://pastebin.com/4zDsica3
<TheAbsentOne> I'm having a feeling it's not my charm but rather something else :/
<TheAbsentOne> rick_h_: the error is also "insta" so I don't think he is running anything
<rick_h_> TheAbsentOne: yea, so the agent being down means that it can't ping the API server and get the details about the config-get information
<rick_h_> TheAbsentOne: might have to restart the agent, check the list of services running for juju related ones
<TheAbsentOne> and what agent is this? The unit agent on the charm?
<TheAbsentOne> How do I check that?
<TheAbsentOne> rick_h_: I got a juju status where I saw agents being lost (both to new/recent charms as to charms that are already running a longer time), but they are green again
<rick_h_> TheAbsentOne: sec, otp.
<rick_h_> TheAbsentOne: oh hmm, maybe the agents came back then
<rick_h_> TheAbsentOne: and you just need to juju resolved the units
<TheAbsentOne> but inside debug-hooks I still get the same error though rick_h
<TheAbsentOne> is there a log I can consult for more information? show-status-log doesn't give me anything
<TheAbsentOne> I think the problem lies with the vmware-cluster, sometimes juju status takes a while and I caught 2 machines being down for a few secs. Weird stuff
<TheAbsentOne> rick_h_: is it correct that if the agent is lost while I'm in debug-hooks it won't re-establish the connection and it is lost?
<rick_h_> TheAbsentOne: doubtful? I'm not 100% sure on the debug-hooks if it can help with that or not
<TheAbsentOne> got it working now rick_h_ think the problem is not caused by me but lies deeper in the infra where I don't have access, thanks for the help though!
<rick_h_> TheAbsentOne: ah ok cool. Glad you got around it
<veebers> Morning all o/
<hml> veebers: morning!
<hml> veebers: is quiet today
<veebers> hey hml o/ how's things? Hopefully quiet is a good thing ^_^
<hml> veebers: not bad… you?
<veebers> hml: can't complain :-)
<wallyworld> babbageclunk: a small +18/-3 PR to fix a regression from yesterday https://github.com/juju/juju/pull/8831
<TheAbsentOne> what does a blocked status mean:  "backend relation required" o.O All I did was juju deploy pgbouncer :o
<TheAbsentOne> ah, I guess it was still waiting for postgresql, nvm my question!
<catbus> if it's following this guideline: https://docs.jujucharms.com/2.3/en/reference-status then you probably should take a look to unblock it.
<TheAbsentOne> thx catbus it's just that it feels like my model/controller is a bit unhealthy. Juju gui keeps loading too :/
<TheAbsentOne> can I manually restart juju gui?
<wallyworld> kelvin: i've had time to do a more thorough review on the charm PR; let me know if you have questions
<babbageclunk> wallyworld: oops, missed that - looking now
#juju 2018-06-20
<kelvin> hi wallyworld im here
<wallyworld> ah, different nic?
<wallyworld> have you pushed changes to charm.v6?
<kelvin> ah.. let me change it!
<kelvin> wallyworld, can i have a few minutes of ur time on HO?
<wallyworld> sure
<kelvin> hi wallyworld do u think the `devices` will be used for GPU only or could be something else in the future?
<wallyworld> whatever k8s supports, like networks cards etc
<wallyworld> that's why we went with the generic "devices" name
<wallyworld> but for now, initially, just gpu
<wallyworld> there's a link in the spec to the k8s device plugin framework
<kelvin> so I define the Device.Name -> nvidia.com/gpu and Device.Type -> gpu.
<kelvin> Is this correct?
<wallyworld> no, the name is something meaningful to the charm
<wallyworld> like "bitcoin-miner"
<wallyworld> type is either "gpu" or "nvidia.com/gpu"
<wallyworld> depending on if the charm wants just a generic/any gpu or specifically an nvidia one
<wallyworld> there are examples in the spec
<wallyworld> initially we'll probably do an exact match on charm metadata device type and k8s
<kelvin> ok, got it. thx
<anastasiamac> wallyworld: when u r bored and wonder what to entertain ur mind with, could u PTAL at https://github.com/juju/juju/pull/8834
<wallyworld> lol, bored
<anastasiamac> this is the last part before storage interface :D and actual providers changes ...
<anastasiamac> lol, i know :D
<wallyworld> anastasiamac: done
<anastasiamac> wallyworld: thnx \o/
<anastasiamac> wallyworld: F vs Func... one the type, the other the name of the variable...
<wallyworld> i see, ok. we tend to use Func for var names elsewhere, but i'm not too worried about that one. the names just grated a bit
<anastasiamac> wallyworld: but, yes, consistency is better and Func everywhere is more explicit
<wallyworld> i had hungarian notation flashbacks
<anastasiamac> wallyworld: agree :) func is better :) i'd even go with 'funk' but m sure it won't b liked either :)
<wallyworld> yes! funk!!!!!
<anastasiamac> :D
<wallyworld> anastasiamac: there's an F on the context as well i think? could do that as a drive by?
<anastasiamac> wallyworld: k...
<anastasiamac> but u do know that there r a few Fs in the codebase :)
<wallyworld> no, didn't know
<anastasiamac> here is one (altho in tests so probably does not count as much... but still exists...) https://github.com/juju/juju/blob/develop/cmd/juju/commands/bootstrap_test.go#L2056
<kelvin> wallyworld, I just moved the Count Validation from Checker to `schema`, and addressed all the other comments, would you mind taking another look? thanks.
<wallyworld> sure
<wallyworld> kelvin: lgtm but we need to drop the checks for specific device type values, see my comments
<wallyworld> the schema should just be to check that a value is supplied
<kelvin> wallyworld, yes, looking now
<wallyworld> juju should validate the actual values based on capability of the cloud
 * wallyworld off to vet for a bit
<kelvin> wallyworld, yes, u r right, we will have to do more detailed checks later on juju side based on the runtime status of the cluster, so here we do not need to validate the type value.
<wallyworld> kelvin: that should be good to land, just go ahead and $$merge$$
<wallyworld> then you can pull tip of master locally and do the deps update
<kelvin> wallyworld, yes, doing it now, thx
<anastasiamac> wallyworld: storage signatures change to accomodate call context... PTAL when u can - https://github.com/juju/juju/pull/8835 :D
<wallyworld> anastasiamac: looking
<anastasiamac> wallyworld: it's not fully ready yet - i need to fix storage provisioner worker, etc... most of logic is there tho... wip :)
<wallyworld> i'll wait till done
<anastasiamac> k
<KingJ> BlackDex: Sorry I missed your reply!
<KingJ> BlackDex: ip ro get shows me a single route using a matching interface and gateway
<BlackDex> very strange indeed then that it doesn't work. That interface is the same interface the requests does its ingress on?
<KingJ> BlackDex: What i'm thinking though is that only one of these network spaces should have a gateway defined to prevent this issue - that way it's always consistent. But i'm not sure which 'space' is best for this, none of the charms share a common space - and presumably each charm needs at least one space with a gateway in order to hit apt etc?
<BlackDex> KingJ: Or just use the apt-proxy
<BlackDex> from juju
<BlackDex> i have built several environments which themselves don't have internet access at all
<KingJ> apt-http(s)-proxy?
<BlackDex> yea, and no dfgw
<KingJ> I have that defined already, but to a value that isn't part of the network spaces - how will it be able to route there? using the system level default gw?
<BlackDex> routing is not available then
<BlackDex> thats true
<BlackDex> maybe your network setup is too complex for what i suggest
<KingJ> Hmm, would you have an example diagram of one of these environments?
<KingJ> I'm always able to make changes ;)
<KingJ> It sorta sounds like I need to remove the default gateway from each of the space subnets, but make sure each charm also has access to a common space that contains a proxy, and set apt-http-proxy to that host.
<BlackDex> they are very simple. Just one network for openstack communication, and a pxe network for the systems. Some containers do not have the pxe network, but do have the apt-proxy stuff. and the maas node has an interface in both networks as does juju
<BlackDex> that way juju can still communicate with all the charms. Also the juju controller uses the maas proxy!
<KingJ> Ahh I see. Mine's broken up in to about 7 spaces similar to this... http://blog.naydenov.net/wp-content/uploads/2015/11/openstack-spaces-e1447000706196.png
<BlackDex> via the `juju model-config -m controller` settings
<KingJ> (which to be fair, does have internal-api as a common space across all charms which i've not done, hmm, that could be the fix...)
<KingJ> Although ceph-osd and ceph-mon don't have a binding to anything other than storage and storage-cluster in the charm, I think I could use extra-bindings to give it an address in the internal-api space too, which would mean all my charms have a binding there and I could use that as my proxy subnet.
<BlackDex> ah yes, that is the only thing i do have, a separate storage network indeed :)
<BlackDex> and that is only connected to the charms which need it
<KingJ> How have you connected that seperate storage space for proxy access?
<BlackDex> not, because ceph-osd and ceph-mon are connected to the internal ;)
<BlackDex> and maas has a nic in the internal also to provide the proxy
<KingJ> Ahhhh
<KingJ> So you've bound your ceph-* charms to internal and a dedicated storage network?
<BlackDex> yes!
<BlackDex> just be sure to correctly configure the bindings or the *network* config options
<BlackDex> that messes stuff up for me sometimes
<BlackDex> for the bindings in a bundle see: https://github.com/openstack-charmers/openstack-bundles/blob/master/development/openstack-base-spaces/bundle.yaml
<KingJ> BlackDex: That's a useful bundle reference. I'm trying to work out how best to bind it to my internal now. I've already got public -> storage-data and cluster -> storage-cluster, so I need to pick something for internal.
<KingJ> I'm not sure how to 'discover' what bindings a charm supports though. So for example, ceph-osd's metadata.yaml lists bindings of 'public' and 'cluster' (https://github.com/openstack/charm-ceph-osd/blob/master/metadata.yaml) but the juju error output implies it supports bindings of "admin", "bootstrap-source", "client", "cluster", "mds", "mon", "nrpe-external-master", "osd", "public", "radosgw"
<KingJ> Where are those extra bindings defined?
<BlackDex> haha, i did a dirty trick to get all those
<BlackDex> a long time a go
<BlackDex> i just added a non-existing binding
<BlackDex> like 'foobar'
<BlackDex> then juju returned me an error telling me which were available
<BlackDex> else i would have to download all the charms and look for it in the source-code, because not everything was in the metadata.yaml of the charms
<KingJ> Ok forgive me, the link I gave to metadata.yaml was for ceph-osd, but the juju output was for ceph-mon. Now that i'm looking at ceph-mon's metadata.yaml I can see the same bindings....
<BlackDex> they have the same :)
<BlackDex> that is correct
<BlackDex> you can also add a default binding btw
<BlackDex> instead of a name use ""
<KingJ> Will that be used in addition to defined ones?
<BlackDex> "": space-name
<KingJ> so e.g., ceph-osd supports bindings to public and cluster, which i've already bound. If I bind default too, will I get a third interface
<BlackDex> yes :)
<KingJ> perfect :D
<BlackDex> it should, as far as i know at least
<KingJ> which solves my problem of "ceph-osd can only bind to public and cluster and i've bound those to spaces other than internal"
<BlackDex> but
<BlackDex> it is better to use "constraints: spaces=space1,space2,space3"
<BlackDex> or both of course
<BlackDex> the constraints will ensure the nic is bridged
<KingJ> So far i've been explicitly binding charms to space(s), and sounds like it's best for me to keep doing that except in cases like the ceph charms where I need an extra space that's not used by the charm's bindings, but instead by core network stuff.
<BlackDex> i'm used to the constraints because that was in 2.x somewhere, and the binding came later
<BlackDex> so, i really don't know if they have the same effect actually
<BlackDex> the binding values are also used by the charm to configure values
<BlackDex> like keystone uses those to configure the pub,int,admin parts
<KingJ> I've just updated my bundle for ceph-mon to have a default binding to my internal space in addition to the existing explicit binds, deployed and I can see now they have 3 interfaces, perfect.
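Put together, the setup KingJ describes might look like this bundle fragment; the space names are examples from this conversation, not defaults, and the `""` entry is the default binding BlackDex mentions, which gives the units an extra NIC in that space:

```yaml
# Hypothetical bundle fragment: explicit endpoint bindings plus a
# "" default binding for every endpoint not listed explicitly.
applications:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
    bindings:
      "": internal-api          # default space for any unbound endpoint
      public: storage-data
      cluster: storage-cluster
```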
<BlackDex> nice :)
<KingJ> So now I can change MaaS to not put default gateways on every subnet, except for the admin subnet.
<KingJ> * internal subnet
<KingJ> so that'll result in a single default gw on all of them
<TheAbsentOne> rick_h_ stub cory_fu zeestrat kwmonroe and all others I'm forgetting: I want to all thank you for all the help the past few months! In 20 hours I need to present my research and generic database implementation. If you guys are interested in the presentation it can be seen here: https://www.youtube.com/watch?v=DPtRJKgNxoA
<TheAbsentOne> Let's hope things go well :fingerscrossed:
<TheAbsentOne> (and sorry for the bad english >.<)
<zeestrat> TheAbsentOne: best of luck!
<TheAbsentOne> thanks!
<anastasiamac> wallyworld:  8835 is ready for review - all unit tests pass locally
<stub> Hmm... is 'juju deploy' preferring xenial over bionic, even for multiseries charms that list bionic first? cs:cassandra lists bionic first, but I get xenial vms unless I override with --series
<stub> My test charm that only lists bionic as a supported series gets bionic as a default just fine
<stub> Hmm, no. The test charm lists the exact same 3 series as supported. So the only difference I can think of is the charmstore, since older versions of cs:cassandra didn't support bionic?
<stub> I'd be interested if other people see the same thing, with 'juju deploy cs:~cassandra-charmers/cqlsh' getting bionic and 'juju deploy cs:cassandra' getting xenial
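As a workaround for the default-series behaviour stub describes, the series can be forced explicitly at deploy time:

```shell
# override the store's default series selection for a multi-series charm
juju deploy cs:cassandra --series bionic
```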
<naturalblue> hi guys/gals. i granted a user admin rights to a model. a few mins later i am getting this error "ERROR cannot log into controller "maas-controller": cannot get discharge from "https://172.17.174.2:17070/auth": cannot acquire discharge: cannot http POST to "https://172.17.174.2:17070/auth/discharge": Post https://172.17.174.2:17070/auth/discharge: proxyconnect tcp: tls: oversized record received with length 20527"
<naturalblue> im not using any proxy between the juju client and the juju maas controller
<naturalblue> juju controllers shows the controller but no user/access. its like it removed my access from the controller model at the same time as giving the user admin to the openstack model
<stub> cory_fu: endpoint.{endpoint_name}.joined gets set when the first relation is made, but there is no event signaling that a second relation has been made to that name
<stub> When two client applications are related to something like a database (ie. juju add-relation cqlsh:database cassandra:database; juju add-relation cqlsh2:database cassandra:database)
<stub> cory_fu: https://pastebin.canonical.com/p/BwRRF8thTp/ is the relevant code, which works great for just one relation. Second relation, no new flag so no handlers in the interface or my charm get triggered.
<stub> cory_fu: I can work around it with a @hook
<stub> Hmm, I also can work around it by watching for endpoint.{endpoint_name}.changed.private-address, which I can reset
<stub> a practical fix might be to clear and set the endpoint.{endpoint_name}.{joined, changed.*} flags when a new relation appears, which a trigger can react to.
<rick_h_> TheAbsentOne: you can do it!
<rick_h_> naturalblue: hmm, what command did you use to grant access?
<TheAbsentOne> Let's hope rick_h_!
<stub> But I think a new endpoint.{endpoint_name} flag will be better, if we can think of a suitable name
<naturalblue> rick_h_: juju grant <username> admin openstack (openstack is model name)
<rick_h_> naturalblue: ok, and can you still run show-model on the openstack model?
<naturalblue> rick_h_: ERROR refreshing models: no credentials provided (no credentials provided)
<rick_h_> naturalblue: can the other user run the command?
<naturalblue> rick_h_: i am both users
<naturalblue> 1 is admin and 1 is naturalblue
<rick_h_> naturalblue: ah ok, from the same terminal?
<rick_h_> or machine I guess
<naturalblue> when i try to login with either user i get
<rick_h_> naturalblue: so...you ran the register command on that machine and gave the controller a new name?
<naturalblue> no
<naturalblue> i was logged into the maas-controller on the client machine
<naturalblue> i ran the juju grant naturalblue admin openstack
<naturalblue> after a minute or 2 i started getting the proxy error
<naturalblue> i logged out and tried to login as both users and am still getting the same message
<rick_h_> naturalblue: ok, bear with me. Having my morning coffee still. There's some cached files in .local/share/juju that have the admin credentials in them normally.
<rick_h_> I'm wondering if they got confused, but if you ran register on another machine it shouldn't have.
<naturalblue> sorry, i wasn't giving out, just giving a better recap. i will check now
<rick_h_> naturalblue: can you run juju commands from the maas-controller then?
<rick_h_> naturalblue: as the new user?
<stub> cory_fu: Oh, I guess endpoint.{endpoint_name}.changed is what I am supposed to hook into, and that will work fine. I don't know why it took me so long to get there :)
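The edge-trigger behaviour stub and cory_fu are discussing can be illustrated with a toy model. This is not the charms.reactive API, just a sketch of why setting an already-set flag fires no handler, while a clear/set cycle forces a new edge:

```python
# Toy model of edge-triggered flags: handlers fire only on the
# transition from unset to set, so a second relation that "sets"
# an already-set .joined flag triggers nothing. Cycling the flag
# (clear then set) forces a fresh edge.

class FlagBus:
    def __init__(self):
        self.flags = set()
        self.handlers = []   # (flag, callback) pairs
        self.fired = []      # record of which flags triggered handlers

    def when(self, flag, callback):
        self.handlers.append((flag, callback))

    def set_flag(self, flag):
        newly_set = flag not in self.flags   # edge: fire only on transition
        self.flags.add(flag)
        if newly_set:
            for f, cb in self.handlers:
                if f == flag:
                    self.fired.append(flag)
                    cb()

    def clear_flag(self, flag):
        self.flags.discard(flag)


bus = FlagBus()
bus.when('endpoint.database.joined', lambda: None)

bus.set_flag('endpoint.database.joined')   # first relation: handler fires
bus.set_flag('endpoint.database.joined')   # second relation: no edge, no fire

# Forcing an edge by cycling the flag, as suggested above:
bus.clear_flag('endpoint.database.joined')
bus.set_flag('endpoint.database.joined')   # fires again

print(len(bus.fired))
```

This is why watching a `.changed` flag (which the framework clears and re-sets) picks up the second relation where `.joined` alone does not.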
<naturalblue> rick_h_: i will see about loggin into the maas-controller now
<rick_h_> naturalblue: k, I wanted to see if it's working on that end and we can probe the model for details to make sure the access is still there for everyone
<naturalblue> ^^ i still have access to the openstack model for naturalblue in the juju gui
<naturalblue> rick_h_: when i try to ssh to the maas-controller it says permission denied
<naturalblue> i have tried with admin and naturalblue. it doesn't even get to a password prompt
<naturalblue> the controller was setup using juju
<rick_h_> naturalblue: is maas deployed with juju or something?
<rick_h_> naturalblue: I mean juju sits on top of maas
<rick_h_> naturalblue: so I'd expect you to be able to ssh to that machine from wherever you set maas up
<naturalblue> i setup the maas-controller from the maas region server. i am on that now as i use it for juju client actions. it is the system i am getting the errors from when i try to run juju commands
<naturalblue> i setup a maas region controller, then installed juju, from that deployed a juju-maas-controller on a different machine. I then setup a naturalblue user. i added an openstack model. i deployed my openstack setup to other machines. i then granted naturalblue admin access to the openstack model. This is where i am now
<naturalblue> all actions were done as the default setup admin user.
<naturalblue> after i granted the naturalblue user admin access to the openstack model, i have lost access to everything it seems
<naturalblue> as both admin and naturalblue
<naturalblue> although in the juju gui i had open for naturalblue, i do have the openstack model there and can do things
<naturalblue> strange!
<KingJ> You can force a charm to deploy on a non-supported series via the juju CLI - how would you do that in a bundle configuration?
<rick_h_> KingJ: we don't have a --force in the bundle atm
<KingJ> rick_h_: Ah I see. For now i'm working around it by deploying with force on the CLI, then using the bundle to configure and relate.
<rick_h_> KingJ: yea, the --force was meant really so folks could test/etc before things got updated but not something you'd want to do in prod with a repeatable bundle
<rick_h_> KingJ: at that point it's best to edit the charm and push the update
<KingJ> rick_h_: Yeah, understandable that that is the preferred approach. Unfortunately this is a charm from the store that hasn't yet been updated with bionic support in the manifest, even though it does work fine on bionic.
<rick_h_> KingJ: gotcha, have you filed a bug on the charm? Should be a link to file bugs if the charm author set a homepage/bugs-url
<KingJ> rick_h_: I did yeah, although unfortunately there's been no traction on the bug since I filed it 3 weeks ago.
<KingJ> I wouldn't mind making the changes myself - if I forked it can I publish it to the store too? (albeit under my username instead of theirs?)
<rick_h_> KingJ: bummer, what charm?
<rick_h_> KingJ: exactly, you can use the charm cli tool to pull down the charm, edit it, and push to your own space
<KingJ> rick_h_: https://jujucharms.com/u/bertjwregeer/snmpd/3
<KingJ> rick_h_: Ah cool, I will look in to doing that in the mean time then
<rick_h_> KingJ: https://docs.jujucharms.com/2.3/en/authors-charm-store
<KingJ> rick_h_: Perfect, thanks for the information and pointers :)
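The fork-and-republish workflow rick_h_ describes looks roughly like this with the `charm` CLI tool (the ~yourname namespace and the revision number are placeholders):

```shell
charm pull cs:~bertjwregeer/snmpd snmpd     # fetch the charm locally
# edit snmpd/metadata.yaml to add bionic to the supported series list
charm push ./snmpd cs:~yourname/snmpd       # push to your own namespace
charm release cs:~yourname/snmpd-0          # publish the pushed revision
```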
<cory_fu> stub: .changed will work, certainly.  I do think that .joined should re-trigger on new units, but that touches on the idea that I think all handlers / triggers should be edge-triggered and there should be an explicit mechanism for forcing an edge (which, under the hood, would just clear / set cycle the flag).  But I was already writing up a more in-depth discussion of that for https://github.com/juju-solutions/charms.reactive/issues/177
<cory_fu> stub: Also, your usage pattern for the endpoint is very different than what I usually recommend, in that you're using the relations collection outside the endpoint class.  It seems like a pretty natural way of doing it but it breaks encapsulation somewhat, mainly because the endpoint class can't influence the relations list, leading to things like your `publish_credentials(rel, False)`.  I wonder if it would be a good pattern to allow the endpoint
<cory_fu> class to provide a subclass implementation for Relation and Unit so that those collections could be used directly with the interface layer being able to extend them like it can the Endpoint, so that we don't have to pass around rel_id or some other synthetic ID?
<stub> cory_fu: This is a peer relation, which have a tendency to break encapsulation
<stub> I would like to move stuff into the endpoint peers.py, but this was the first pass through translating things to reactive
<stub> cory_fu: The Endpoint implementation can already override the relations property to return wrapped relations and access to wrapped units, with effort.
<stub> cory_fu: I wouldn't bother with making things pluggable, at least at this stage. I think it would likely make things more confusing for people with more normal use cases.
<stub> https://pastebin.ubuntu.com/p/YXBBVMXcD4/ is the full client.py file, which certainly could all be moved to provides.py. Encapsulated, just not where you expected it ;)
<stub> And yes, I'm confusing two bits of code in my head and realise this is not a peer relation
<stub> But in general I'm finding the client side easy to publish as an interface, but the server side implementation seems easier this way.
<cory_fu> stub: The server side is always going to feel easier to write that way, but the issue is that it makes it harder for others to create an interface-compatible server, since they have to make sure they re-implement the interface data protocol properly, probably by reading code.  The purpose of the provides side is to provide a documented API for anyone who wants to support the same interface.  Of course, that's not going to happen very often, so it
<cory_fu> feels like pointless work.  :p
<thumper> morning peeps
<wallyworld> kelvin: reviewed - there's a dependency issue, see if my comments make sense
<kelvin> wallyworld, looking now, thx
#juju 2018-06-21
<wallyworld> vino: the bzr command is "bzr revision-info"
<kelvin> wallyworld, do u think we should define the device constraints at `juju/juju/device` or somewhere else?
<wallyworld> hmmm. i think "devices" is ok for now
<kelvin> in the root dir `/devices`?
<vino> ok.
<vino> wallyworld : and for mercurial the hg equivalent is hg id --id
<wallyworld> great
<wallyworld> kelvin: yeah, i think so for now
<kelvin> wallyworld, great, thx
<thumper> wallyworld, kelvin: how about core/devices?
<thumper> i had wanted to move the /network into core/network
<thumper> as long as it has no state or api dependencies
<kelvin> core/devices looks like a good one,
<kelvin> yes, then we can shorten the root dir ls
<wallyworld> thumper: we should move storage as well then
<thumper> wallyworld: yes
<wallyworld> anastasiamac: if you have a moment? https://github.com/juju/names/pull/89
<wallyworld> veebers: want to talk about resources? thumper has stood me up for our 1:1 :-)
<veebers> wallyworld: hah, sure thing
<thumper> wallyworld: we did have a long catch up this morning :)
<wallyworld> we did. but you could have told me :-)
<thumper> wallyworld: sorry, was planning to but got stuck talking to xavpaice
<kelvin> wallyworld, could I have a few minutes of ur time?
<kelvin> thinking if it's reasonable to consolidate the structs (params structs) at api and apiserver in a single place.
<kelvin> to have the api schema shared for client, server side, or other dependencies.
<wallyworld> kelvin: sure, just finished talking to chris, free now
<kelvin> c u in hangout
<kelvin> now?
<babbageclunk> thumper: will the hub drop messages if the handler takes too long?
<babbageclunk> (I mean, if a subscribed handler takes too long)
<anastasiamac> wallyworld: reviewed..
<wallyworld> ty
<wallyworld> anastasiamac: i pushed a change to use a common method
<kelvin> wallyworld, could u take a look at the PR again when u have time? thx
<vino> wallyworld: have a min ?
<wallyworld> sure
<vino> HO
<wallyworld> am in standup
<thumper> babbageclunk: no
<babbageclunk> good
<thumper> did you want to chat about it?
 * thumper is back in the office for a bit
<babbageclunk> no, it's fine
<babbageclunk> thanks!
<thumper> jam: are you back around?
<jam> thumper: hey, just got back, quite the adventure.. are you still here?
<thumper> um... was just about to leave, but can chat
<thumper> jam: did you want to catch up?
<jam> thumper: sure. Just chatting with manadart, will be available in 2min.
<thumper> kk
<manadart> Need a review for: https://github.com/juju/juju/pull/8838
<stickupkid> manadart: i'll take a look
<stickupkid> manadart: just out of curiosity, but jam will probably want to vet the whole thing (?)
<manadart> stickupkid: Assume so.
<stickupkid> manadart: done - just a few questions
<manadart> stickupkid: Shoot.
<stickupkid> manadart: questions in the PR
<manadart> stickupkid: Ta.
<naturalblue> hi. I am trying to deploy a juju maas controller and it starts to deploy, but i notice later on it looks for an IP (MAAS's old IP address) and then fails to deploy. Is there a way to get the details of what it is looking for? I have changed all IPs and restarted all services on MAAS. I am not sure why it is still looking for the old address.
<BlackDex> naturalblue: Did you re-deploy juju?
<BlackDex> the controller?
<BlackDex> If not, i think you need to reconfigure your cloud providers in juju
<naturalblue> ah that may be it. i think although i removed the cloud provider after the controller, i possibly changed its ip after i added it back in.
<naturalblue> I will give it a try and let you know.
<naturalblue> BlackDex: Thanks
<naturalblue> no, i just checked with juju show-cloud openstack-maas and the endpoint shows the correct ip (i.e. the new IP)
<naturalblue> but cloud-init is definitely looking for the old address and then aborting
<BlackDex> during boot?
<BlackDex> pxe boot?
<BlackDex> Please check the if the IP's are correct during the following command: `sudo dpkg-reconfigure maas-region-controller maas-rack-controller`
<naturalblue> during pxe boot, after i ask to bootstrap a new controller, maas rolls the OS, then juju starts the deploy of the controller. Watching the console it's installing, but suddenly i see a cloud-init script that references the old ip address of maas and the system shuts down
<naturalblue> the ip in `sudo dpkg-reconfigure maas-region-controller maas-rack-controller` was incorrect, it was the old one
<naturalblue> i didn't realise i had to run that after changing the ip from the maas web ui and rebooting
<naturalblue> it's asking for the api address now in the dpkg-reconfigure. Do i put in the port of 5240 or is that just for the web gui?
<BlackDex> 5240
<BlackDex> that is the API port
<BlackDex> port 80 will be deprecated in 2.4 or 2.5 if i'm correct
<BlackDex> yea, you can change those in a file somewhere, but this is better, and it restarts the needed services after :)
<rick_h_> naturalblue: it might be in the cache for the cloud in .local/share/juju/clouds.yaml
<naturalblue> BlackDex: rick_h_ : Thanks
<manadart> stickupkid: Are you in any position to push your changes that further strangle out rawProvider?
<manadart> stickupkid: I am having to rewrite provider tests and it makes no sense to be stubbing/mocking out changes to something we are ditching.
<magicaltrout> anyone know why if i try and deploy to a machine into an lxd container
<magicaltrout> it sets my resolv.conf seemingly incorrectly and the charm fails to apt update?
<rick_h_> magicaltrout: bionic?
<rick_h_> magicaltrout: there was a bug around that for bionic I know that was worked on and a fix landed
<magicaltrout> the controller is on bionic the machines are xenial
<rick_h_> magicaltrout: https://bugs.launchpad.net/juju/+bug/1764317
<mup> Bug #1764317: bionic LXD containers on bionic hosts get incorrect /etc/resolve.conf files <bionic> <cdo-qa> <cdo-qa-blocker> <foundations-engine> <kvm> <lxd> <network> <uosci> <juju:Fix Committed by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1764317>
<enrico_> Hello, I am trying to deploy a service with juju. I want to deploy it inside an lxd container. Till now ok. But I want to add 2 NIC, how can I do?
<stickupkid> manadart: i can do, but it's broken a few things... let me just fix what i've got so you can at least see what i've done
<stickupkid> manadart: this is a very much work in progress - https://github.com/juju/juju/compare/develop...SimonRichardson:lxd-schema?expand=1
<stickupkid> manadart: i'm in the middle of refactoring the tests, before I get to the construction of the provider - although I may change my mind whilst I'm writing the tests
<manadart> stickupkid: Ta.
<naturalblue> when i try to install juju gui with juju upgrade-gui i get this "ERROR cannot upgrade to most recent release: cannot retrieve Juju GUI archive info: error fetching simplestreams metadata: invalid URL "https://streams.canonical.com/juju/gui/streams/v1/index.sjson" not found"
<naturalblue> ^^but i have tried a wget and can retrieve the file from both the client and the controller
<naturalblue> ^^ please ignore this. turns out i had a proxy variable in /etc/environment which gets reapplied on a reboot. removed now and variables unset.
<Guest53> Hi I just booted three ceph osd that were shutdown.  One came up fine but the other two say they are stuck in upgrading.  The log message I am getting is DEBUG config-changed Error ENOENT: error obtaining osd_hostname_luminous_start.
<Guest53> Any idea where this luminous start file is located?
<acss> Anyone know why I would see the following in a juju ceph-osd unit log? 2018-06-21 16:03:06 INFO juju-log Monitor config-key get failed with message: b''
<rick_h_> acss: hmm, some sort of unexpected config value the charm couldn't parse?
<acss> This would be based on my ceph.conf file?
<acss> I am confused because I have another OSD functioning fine with the same charm configuration.
<rick_h_> acss: sorry, ran for lunch stuff. So this is the juju config
<rick_h_> acss: so you can see what you've got with juju config ceph-osd
<acss> okay
<acss> where is this located?
<rick_h_> acss: so it's the config juju is told to send to the charms and changes to that config triggers the charm's config-changed event which then causes it to evaluate things and update any local files/apis/settings that need updating for the software in question
<acss> okay thank you
<sfeole> hello all... we have bootstrapped a controller out on AWS cloud and deployed rabbitmq, juju status displays the internal cloud IP address.   However, i want to query rabbitmq via the assigned floating ip address, will juju correctly translate/route  floatingip -> internal cloud ip?
<rick_h_> sfeole: so you need to expose the rabbit unit for the floating IP to be the one to work since by default the firewall won't allow any traffic to it
<sfeole> patriciadomin, ^^
<sfeole> rick_h_, so just,  juju expose <unit>   ?
<rick_h_> sfeole: juju expose rabbitmq or whatever the application name is
<rick_h_> sfeole: that'll set the firewall rules as long as the charm is setup to expose the rabbit port
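Putting rick_h_'s answer together, assuming the application is named rabbitmq-server:

```shell
juju expose rabbitmq-server   # opens the charm's declared ports in the cloud firewall
juju status rabbitmq-server   # the public/floating address shows up in the status output
```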
<sfeole> rick_h_, ok thanks,
<sfeole> patriciadomin, ^^
<patriciadomin> rick_h_ sfeole thx
#juju 2018-06-22
<wallyworld> thumper: babbageclunk will correct me, but the BootstrapRaft() bit is called during bootstrap to set up the cluster; i can't recall exactly what it stores
<babbageclunk> It stores the raft configuration into the raft log
<wallyworld> so for a running system being upgraded, we don't bootstrap
<wallyworld> so miss that api call
<thumper> ah...
<thumper> ok
<thumper> how big a change is the k8s api?
<wallyworld> not that big - need to return a struct instead of a string for getting info to set up a pod
<wallyworld> most of the attrs in 2.4 will be empty
<thumper> that's fine
<thumper> get it done
<wallyworld> will do
<wallyworld> need to pave way for storage in 2.5
<veebers> Is there a way to get 2.2.9 working with snap installed lxd?
<wallyworld> veebers: not sure tbh, may require the apt version
<wallyworld> thumper: here's that refactoring pr for the api params https://github.com/juju/juju/pull/8844
<veebers> wallyworld: ah apt lxc not apt juju that makes sense ^_^
<veebers> shoot, I only recently migrated from apt installed lxd to snap installed
<wallyworld> joy
<veebers> going to spin up an ec2 instance and use that as the host ^_^
<wallyworld> that works
<babbageclunk> wallyworld: what's the right way to get all controller machines from state?
<babbageclunk> wallyworld: ooh, looks like State.ControllerInfo has it - and then I guess I can ask each machine for their address.
<wallyworld> babbageclunk: i think so, let me have a look
<wallyworld> yeah that works but i thought we had another way, i'll poke around
<babbageclunk> I'm struggling to work out how to select the right address for each machine.
<babbageclunk> Trying to follow how the peergrouper does it is doing my head in.
<babbageclunk> I could do it sort-of-heuristically using the api addresses from the agent config: when I find a machine with an address that matches an api address then I can match them up
<anastasiamac_> wallyworld: thumper: unskipping test PRs for reviews - 8840:8843 PTAL?
<wallyworld> babbageclunk: there's got to be a better way
<thumper> anastasiamac_: I'd like to see the premerge check passing before approving
<wallyworld> i can poke around
<babbageclunk> wallyworld: thanks
<anastasiamac_> thumper: only 1 out of 4 had an un-related failure... the other 3 passed...
<wallyworld> babbageclunk: did we need a HO?
<babbageclunk> wallyworld: yeah, might be good - in standup?
<wallyworld> sure
<thumper> anastasiamac_, wallyworld: I'd prefer to push back on the test changes, just because it is introducing more churn without much benefit at this stage
<thumper> I'd prefer them to land post 2.4.0 release
<anastasiamac_> thumper: ack
<anastasiamac_> thumper: in fact, m happy to unskip them in develop branch instead
<veebers> wallyworld: you think the "--agent-stream" addition to upgrade-juju is worth putting in the 2.4 release notes?
<wallyworld> veebers: i think so yeah
<veebers> sweet, I'll make it so
<wallyworld> as it's a usability improvement
<veebers> I'm being dense, how can I change an agent stream for an upgrade? (juju upgrade-juju -m controller --agent-version 2.4-rc2 --agent-stream=proposed won't work because the controller doesn't support that ^_^)
<veebers> ah, rtfm helps; juju model-config -m controller agent-stream=devel
<wallyworld> babbageclunk: how goes the patch?
<babbageclunk> wallyworld: just testing it now - think I have a problem with timing start up, might need to have the raft worker start later.
<wallyworld> you can gate on !upgrading... you doing that?
<babbageclunk> yup
<wallyworld> babbageclunk: so this timing issue is not new then
<babbageclunk> Oh, I mean, the raft worker wasn't gated on upgrade, but now it needs to be, so I'm doing it now
<veebers> wallyworld: you have a quick sec? https://pastebin.canonical.com/p/SvjPwMpgHB/ I set the agent-stream, but upgrade-juju without an explicit version (i.e. 2.4-rc2) chooses 2.3-rc2 (from a base 2.2.9 install). That's expected behaviour right? (If I understand the upgrade-juju help correctly)
<wallyworld> i think it goes +1 by default, can check
<wallyworld> veebers: first look - the code seems to say it will upgrade to the version of the juju client if not specified
<veebers> wallyworld: I think that's right, just re-reading the help docs)
<veebers> wallyworld: ack, that matches my expectations and what I'm seeing. Thanks :-)
<wallyworld> good, docs match code :-)
<veebers> even better ^_^
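Consolidating the upgrade steps veebers worked out above (stream and version values are the ones from the discussion):

```shell
# the 2.2.x controller doesn't accept --agent-stream, so set it as model config
juju model-config -m controller agent-stream=devel
juju upgrade-juju -m controller --agent-version 2.4-rc2
```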
<enrico_> hello, please anyone to help: trying to deploy and getting this ->  no obvious space for container "0/lxd/0", host machine has spaces: "default", "overlay", "provider", "storage" <-- any idea?
<rick_h_> enrico_: check out https://docs.jujucharms.com/2.3/en/network-spaces and then the binding syntax
<rick_h_> enrico_: basically juju is saying you've got 4 spaces and we're not sure how to setup the container because we don't know which network interfaces need to be setup in which way to make sure the application is on the right network
<magicaltrout> how's the hangover rick_h_ ?
<rick_h_> magicaltrout: :) drink responsbily
<enrico_> rick_h_: thank you... I've tried to follow the doc.. Juju recognizes the 4 spaces defined in MAAS and in my bundle I've bound them using the example here https://jujucharms.com/nova-cloud-controller/
<rick_h_> enrico_: ok, so the nova-cloud-controller is what you're putting in 0/lxd/0?
<enrico_> rick_h_: yes, but I've got the same error for the other services I've spread over other containers! so something wrong with my syntax?
<rick_h_> enrico_: maybe, juju isn't liking the binding and fails because it doesn't know what to do
<enrico_> rick_h_: please look at https://pastebin.com/3sXEFgtV as example of my conf
<enrico_> rick_h_: Can I write "mgt: default" when my space is named default?
<rick_h_> enrico_: right, this is application binding so that the endpoint on the application is bound to the space.
<rick_h_> enrico_: looking for example
<Guest53> GM I keep getting the following in three OSDs that have been rebooted.  Two of the OSDs are stuck in upgrading state.  The ceph cluster is operational.
<Guest53> unit-ceph-osd-0: 08:09:20 DEBUG unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd5_luminous_done' doesn't exist
<Guest53> unit-ceph-osd-0: 08:09:20 INFO unit.ceph-osd/0.juju-log osd5 is not finished. Waiting
<Guest53> unit-ceph-osd-0: 08:09:20 DEBUG unit.ceph-osd/0.config-changed Error ENOENT: error obtaining 'osd_osd5_luminous_start': (2) No such file or directory
<Guest53> unit-ceph-osd-0: 08:09:20 INFO unit.ceph-osd/0.juju-log Monitor config-key get failed with message: b''
<Guest53> unit-ceph-osd-0: 08:09:21 INFO unit.ceph-osd/0.juju-log waiting for 16 seconds
<rick_h_> enrico_: so I'm looking at https://jujucharms.com/glance/265 and I don't see any endpoint called mgt
<rick_h_> enrico_: so I think the thing is getting those bindings to be the real endpoints on the charm bound to the space
<enrico_> rick_h_: oooook so the endpoint names are predefined by the charms and I have to use them as they are! Ok, makes sense! Thank you! going to try
<rick_h_> enrico_: right, so that juju can help the charm tell it what data it knows about needs to go over what networks
<rick_h_> enrico_: e.g. ceph has a data plane for ceph->ceph communication that something like keystone will not
<enrico_> rick_h_: yes okok, I was missing that key bit, thank you :)
<rick_h_> enrico_: np, good luck!
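A sketch of the corrected bundle fragment, using endpoint names declared in the charm's metadata instead of invented ones like `mgt`. The endpoint list here is illustrative (check the charm's metadata.yaml for the real names); the space names are the ones from the MAAS setup above:

```yaml
applications:
  glance:
    charm: cs:glance
    num_units: 1
    to: ["lxd:0"]
    bindings:
      "": default           # default space for any unlisted endpoint
      public: provider      # endpoint names must match the charm's metadata
      internal: default
      shared-db: storage    # relation endpoints can be bound too
```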
<acss> Sorry entered as Guest53 can anyone tell me why I would get the key osd_osd5_luminous_done
<acss> from above in the chat
<acss> This seems to be created by juju
<acss> Hello, I have three OSDs and it appears juju is somehow referencing keys to the wrong hosts. On hostname osd6 I keep getting an error unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd5_luminous_done' doesn't exist
<acss> on hostname osd5 I get unit.ceph-osd/0.config-changed Error ENOENT: key 'osd_osd4_luminous_done' doesn't exist
<acss> Can anyone please point me in the right direction to find out why this is happening?
<hml> acss: have you asked in #openstack-charms?
<acss> No I can try that thank you
#juju 2018-06-24
<veebers> wallyworld: I'm pretty sure the cloud region details are all up to date etc. (I did that streams update a little while back), what's the best way to confirm this? Compare the in-tree public-clouds with what's in s.c.c?
<wallyworld> yup. i am pretty sure as well. i thought it was part of the release process to check. but yeah, comparing those 2 is what's needed
<wallyworld> the symptom of a mismatch is to run up a system and then update-clouds will say that stuff is added/removed
<veebers> ack, I'll compare now to ensure /me considers a way to automate it for release/ci
<babbageclunk> wallyworld: if I add the peer grouper as a dependency for the raft transport worker, do I need to also do a context.Get in the manifold (even though I don't actually need a ref to it in the worker)? Or will the dep engine prevent a worker from running until all its deps are running/
<babbageclunk> ?
<babbageclunk> looks like the latter from this log. I'll add code to get it.
<babbageclunk> bah, can't do that unless the peergrouper implements an output function. I'm not sure this is the fix - lacking this dependency doesn't cause a problem in the normal startup process. I'm adding more logging in.
<babbageclunk> wallyworld: ^
<kelvin> wallyworld, I just pushed changes to fix typo stuff, could u take a look when u got time? thanks
<veebers> hmm, using "charm push" with resources . . . ah (as I was typing this) the resources need to be in the homedir if it's snap installed? It failed when I used /tmp/thefile, but not ~/tmp/thefile
#juju 2020-06-15
<thumper> nope, I think it was all me
<thumper> https://github.com/juju/juju/pull/11707 for anyone
<achilleasa> manadart: added some comments to 11683
<manadart> achilleasa: Responded. I think those things are OK.
<manadart> hml: If you've time in your day, can you look at https://github.com/juju/juju/pull/11706 ? It's the one that removes networkingcommon logic.
 * manadart is off home.
<hml> manadart:  will do
<manadart> Ta.
<hml> stickupkid: do you have time for teddybear?
<stickupkid> i do
<stickupkid> daily?
<hml> stickupkid: omw
<achilleasa> looks like we need to update our dep list for kvm on focal... " Package 'libvirt-bin' has no installation candidate"
<achilleasa> hml: on a sidenote, shouldn't I be able to start a kvm machine with a bionic image on a focal host?
<hml> achilleasa: i'd think so
<achilleasa> hml: ah... looks like it's broken for focal on focal :-(
<hml> achilleasa:  what provider?
<hml> achilleasa:  not sure you can start a kvm inside an lxd machine
<achilleasa> manual machine; deploy --to kvm:X
<achilleasa> (manual focal machine)
<hml> achilleasa: i'm having that trouble today too... even --to lxd:<manual> machine
<achilleasa> hml: ' Requested operation is not valid: format of backing image '/var/lib/juju/kvm/guests/focal-amd64-backing-file.qcow' of image '/var/lib/juju/kvm/guests/juju-275797-0-kvm-3.qcow' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)'
<hml> achilleasa:  what type is the manual machine?
<achilleasa> guess I need to open bugs
<achilleasa> focal provisioned via maas
<achilleasa> using bionic boxes works though
<hml> huh
<achilleasa> also, focal libvirt-bin pkg -> libvirt-client
<achilleasa> I will fix that in my kvm PR but the image format one is odd (also: https://bugzilla.redhat.com/show_bug.cgi?id=1801219)
<achilleasa> petevg: I 've created https://bugs.launchpad.net/juju/+bug/1883575 to track ^. Should be able to land a fix in my upcoming kvm PR but we may need to backport
<mup> Bug #1883575: Unable to deploy --to kvm:X on focal hosts <juju:In Progress by achilleasa> <https://launchpad.net/bugs/1883575>
<flxfoo> Hi there
<flxfoo> quick one
<flxfoo> on AWS
<flxfoo> having 3 AZ
<flxfoo> can I create a controller in each zone that would be able to manage the same models?
<flxfoo> let's say in case a zone goes down...
<flxfoo> thanks in advance
<flxfoo> well I guess controller high availability page is what I am looking for :)
<thumper> flxfoo: yes juju controllers work fine across AZs
<flxfoo> thanks @thumper , I ran `juju enable-ha` and it did add another 2 controllers, but in the same AZ...
<thumper> flxfoo: hmm... by default they should try to spread over the AZs
<flxfoo> ok got it `--to` need to be used ...
<thumper> flxfoo: can you add-machine in that model specifying an az?
<thumper> flxfoo: which version of juju?
<flxfoo> `juju enable-ha --to zone=<az1>,zone=<az2>` works
<thumper> sweet
<flxfoo> @thumper: yeah I thought the default would have spread like when deploying, but it appears that one needs to use `--to`, can someone confirm that?
<thumper> flxfoo: I have had it spread before... perhaps it isn't as controlled as we'd like
<flxfoo> @thumper:Just did a little test again, with a new controller (without constraints) and with 3 , it spreads over only 2 az...
<flxfoo> @thumper:version is 2.8.0
<flxfoo> @thumper:which instance type do you use for controllers?
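The placement workaround discussed above can be sketched as follows. Zone names are examples for us-east-1, and the verification step relies on the `availability-zone` value that AWS-provisioned machines report in their hardware line (a sketch, not a guaranteed interface across all juju versions):

```shell
# Sketch: spread HA controllers explicitly over availability zones.
juju enable-ha -n 3 --to zone=us-east-1a,zone=us-east-1b,zone=us-east-1c

# Verify the spread: each controller machine's hardware line includes
# its availability zone on AWS.
juju status -m controller --format=yaml | grep availability-zone
```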
#juju 2020-06-16
<timClicks> wallyworld: what is the timeline for juju scp support in k8s models?
<wallyworld> timClicks: hopefully this cycle but it is a stretch goal as it got punted
<timClicks> ok
<wallyworld> hpidcock: if you have a moment at some point https://github.com/CanonicalLtd/juju-qa-jenkins/pull/470
<hpidcock> wallyworld: sure thing
<hpidcock> wallyworld: lgtm
<wallyworld> ty
<timClicks> 2 easy PRs https://github.com/juju/juju/pull/11704 https://github.com/juju/juju/pull/11712
<thumper> wallyworld: re: bug 1872670, if it is working on eu/us region, surely it isn
<mup> Bug #1872670: AWS m6g.* instances fail with latest juju <juju:Confirmed> <https://launchpad.net/bugs/1872670>
<thumper> isn't juju, but account issue?
<thumper> clearly latest/edge won't work for them
<thumper> but 2.8.0 stable should
<flxfoo> hi there
<flxfoo> interface for apache-solr uses deprecated reactive RelationBase, where can I find documentation to be able to define departed / joined like parts , please?
<stickupkid> manadart, https://github.com/juju/juju/pull/11715
<stickupkid> manadart, might affect your changes you want to propose
<manadart> stickupkid: I also notice that `idle_subordinate_condition` does not use the declared `app_index`.
<stickupkid> ah good spot
<achilleasa> manadart: stickupkid can you take a look (and QA) https://github.com/juju/juju/pull/11716 and https://github.com/juju/juju/pull/11714?
<achilleasa> fun with virtual maas ;-)
<stickupkid> I like how in the integration tests we have .log and .txt extensions
<stickupkid> PICK ONE SIMON!
<stickupkid> I'll fix that at some point
<manadart> achilleasa: Yep.
<stickupkid> manadart, updated https://github.com/juju/juju/pull/11715, you were correct, just ran the whole deploy suite and it worked fine
<stickupkid> deploy suite is hefty
<stickupkid> this annoys me https://paste.ubuntu.com/p/tX7jdGff79/ we really should sort the image server nested containers manadart
<stickupkid> also I've never seen the error/issue when trying to retrieve the environ
<manadart> stickupkid: You now have a gap in the args indexes :)
<manadart> stickupkid: Tell me what I am doing wrong here. I have this: https://pastebin.canonical.com/p/QZKBVhd57V/
<manadart> And this query: `wait_for "network-health-xenial" "$(idle_subordinate_condition "network-health-xenial", "mongodb")"`
<manadart> If I fill out the generated query manually, I get what I expect: https://pastebin.canonical.com/p/dYbxvTZMV8/
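A likely culprit in the failing call above is the comma between the arguments: shell word splitting does not treat commas as separators, so the comma is passed through as part of the first argument. A minimal demonstration (`idle_subordinate_condition` here is a stand-in echoing its arguments, not the real test-suite helper):

```shell
# Demonstrates why `helper "a", "b"` misbehaves in bash: the comma is
# not an argument separator, so it becomes part of $1.
idle_subordinate_condition() {
    echo "arg1=<$1> arg2=<$2>"
}

idle_subordinate_condition "network-health-xenial", "mongodb"
# arg1=<network-health-xenial,> arg2=<mongodb>

idle_subordinate_condition "network-health-xenial" "mongodb"
# arg1=<network-health-xenial> arg2=<mongodb>
```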
<manadart> stickupkid: Gonna eat. Maybe you can school me in a few.
<manadart> stickupkid: Got time for a quick HO?
<manadart> Need a review: https://github.com/juju/juju/pull/11717
<jam> hey manadart. I just came across something unexpected: https://bugs.launchpad.net/juju/+bug/1883703
<mup> Bug #1883703: Snap build not stripping the binary? <build> <juju:Triaged> <https://launchpad.net/bugs/1883703>
<jam> basically, the 'juju' in our snap is 120MB instead of 89MB and it appears it is *not* stripped.
<jam> I can't make it to standup, and I don't really know the priority for it, but I wanted to raise awareness
<manadart> jam: Ack; we will discuss.
<jam> manadart, it seems we use our own extension of snapcraft's go plugin
<jam> and it has https://github.com/juju/juju/blob/develop/snap/plugins/juju_go.py#L107
<jam> which isn't supplying '-ldflags "-s -w '
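The missing link flags jam mentions are the standard Go toolchain switches for stripping. A sketch of what a stripped build looks like (the `./cmd/juju` package path is the juju repo's CLI entry point; the exact snapcraft plugin wiring is not shown here):

```shell
# Sketch: build a stripped juju binary. -s omits the symbol table,
# -w omits DWARF debug info; together they account for most of the
# 120MB -> 89MB difference noted above.
go build -ldflags="-s -w" -o juju ./cmd/juju

# `file juju` should then report the binary as stripped.
file juju
```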
<hml> stickupkid: Iâve gotten this 2 of 2 times on 11715: https://pastebin.canonical.com/p/bkPRmhPRWV/ canât confirm if itâs timing related or not
<achilleasa> back-port for the kvm fix: https://github.com/juju/juju/pull/11718 and https://github.com/juju/juju/pull/11719
<achilleasa> QA should be easy on this one ;-)
<stickupkid> hml, i'll get to that after fixing the issue for manadart
<hml> stickupkid: k
<stickupkid> why does juju status sometimes take 22 seconds on a local lxd setup
<stickupkid> something very strange is going on
<stickupkid> manadart, got a second
<stickupkid> another case of a random lock up when querying juju status https://paste.ubuntu.com/p/yYSnzxJ2pN/
<stickupkid> see timestamp between the two different status timestamps
<stickupkid> should juju hang if a lxd container is down, i wonder if the controller is down?
<stickupkid> hml, re-writing the container integration test, but can confirm the new stuff works for lxd and aws...
<stickupkid> hml, juju action wasn't working as I was targeting the wrong unit
<hml> stickupkid: ack
<hml> stickupkid: i think i have a test failure to deal with… doesn't reproduce locally.
<hml> stickupkid: [LOG] 0:01.949 INFO test unit "u/0" shutting down: preparing operation "install cs:quantal/wordpress-0": failed to download charm "cs:quantal/wordpress-0" from API server: download aborted
<stickupkid> hml, isn't that the mock one?
<hml> stickupkid: unfortunately not… it's in the uniter.loop code… and startup
<hml> no easy way to mock without rewriting, afaik
<hml> rewriting uniter stuff
<stickupkid> "cs:quantal/wordpress-0" it's just that it's using the testcharms though
<hml> stickupkid: yeah, that's what i don't get… it should be using the local charm repo for test
<stickupkid> that's why i was wondering if it was the mock store
<stickupkid> hml, quick ho?
<hml> stickupkid: sure
<pmatulis> does the '--map-machines' option need to be used when deploying with both a bundle and an overlay? i'm thinking that the bundle will define the existing machines and then the overlay needs the option. or is an existing machine completely "pre-deploy"?
<wpk> 2/wind 343
<josephillips> hey
<josephillips> im installing swift
<josephillips> on different zones
<josephillips> but when i perform the relation between swift-storage-zonename swift-proxy
<josephillips> it keeps on missing the relation proxy
<hml> hey wpk
<hml> josephillips: and no errors with the add-relation?
<hml> josephillips:  do you have a pastebin you can share?  i'm not seeing a relation 'proxy' for either of those charms
<hml> pastebin with the output you're seeing
<wpk> hml: o/
<josephillips> sure hml im checking something else
<josephillips> give me one sec
<josephillips> my bad
<josephillips> was a bad configuration of the block device
<hml> josephillips: rgr
<wallyworld> hpidcock: kelvinliu: tlm: see comment on bug 1875481 ... another issue with running juju agent in the workload container. the base linux may not support our jujud
<mup> Bug #1875481: Juju 2.8 error about unit not being the leader <k8s> <juju:Incomplete> <https://launchpad.net/bugs/1875481>
<hpidcock> wallyworld: I thought we were statically compiling jujud
<hpidcock> oh maybe that was only in snaps
<wallyworld> i thought we did too
<wallyworld> hpidcock: ah forgot to mention, there's also an issue with our snap
<wallyworld> bug 1883703
<mup> Bug #1883703: Snap build not stripping the binary? <build> <juju:Triaged> <https://launchpad.net/bugs/1883703>
<wallyworld> i haven't looked closely, theory is the go plugin doesn't supply the necessary flags
<kelvinliu> I found all the jujud images do not have jujuc added
<wallyworld> something to fix for 2.8.1 release
<hpidcock> kelvinliu: that shouldn't affect it, but yeah I noticed the same thing
<wallyworld> kelvinliu: yeah, jujuc is a separate binary that we need to package if we haven't already
<kelvinliu> https://pastebin.ubuntu.com/p/v9d3MRSXS5/
<kelvinliu> it's weird that docker build didn't fail when jujuc was missing
<kelvinliu> docker should complain that ADD failed because jujuc does not exist
#juju 2020-06-17
<hpidcock> kelvinliu: it wasn't added in the rc branch
<hpidcock> only the 2.8 branch
<kelvinliu> ah, ic
<hpidcock> wallyworld: did you want me to quickly look at both issues and land in 2.8?
<wallyworld> hpidcock: if you had time that would be grand
<hpidcock> np
<pmatulis> does the '--map-machines' option need to be used when deploying with both a bundle and an overlay? i'm thinking that the bundle will define the existing machines and then the overlay needs the option. or is an existing machine completely "pre-deploy"?
<hpidcock> wallyworld: https://github.com/juju/juju/pull/11720
<hpidcock> ended up chasing my tail, thought I saw really small binaries of 40mb etc,  for some reason was unable to repro
<wallyworld> looking
<wallyworld> hpidcock: lgtm ty
<wallyworld> i should merge directly
<wallyworld> i'll let the check run pass
<wallyworld> hpidcock: merged
<timClicks> stupid question sorry... is --device just a k8s thing?
<kelvinliu> timClicks: yes
<manadart> achilleasa: QA is failing for me on #11716
<mup> Bug #11716: psycopg: new changes from Debian require merging <psycopg (Ubuntu):Fix Released by amu> <https://launchpad.net/bugs/11716>
<manadart> Stupid mup. The PR, not LP.
<manadart> mup--
<Chipaca> juju#11716
 * Chipaca guesses
 * Chipaca guesses poorly
<achilleasa> manadart: ho?
<manadart> achilleasa: OMW
<achilleasa> manadart: pushed the acpi fix to all kvm PRs; taking a look at the failed QA now
<stickupkid> CR anyone https://github.com/juju/juju/pull/11715
<manadart> stickupkid: I'll do it, then swing back to achilleasa's stuff.
<stickupkid> manadart, run this https://github.com/juju/juju/pull/11715#issuecomment-645285581 if you intend to run it all
<stickupkid> manadart, life is too short
<manadart> stickupkid: Running now.
<manadart> stickupkid: Done. You still looking at mine?
<stickupkid> manadart, nearly done
<achilleasa> manadart: hmmm... so the remove call is skipped because lxd reports the nic type as "broadcast" instead of "bridged"...
<manadart> achilleasa: Yeah, I added some Criticalfs here and was surprised they weren't hit. So it's bugging out earlier.
<achilleasa> also, the lxd from snap on focal seems to remove the entry from the bridge as expected...
<achilleasa> yay :D
<achilleasa> so I need to tweak some things and re-test on bionic
<manadart> achilleasa: I suspect this difference goes hand-in-hand with the profile changes.
<manadart> So we get 3.0.3 on Bionic and the latest Snap on Focal.
<achilleasa> I will amend my comments accordingly
<achilleasa> manadart: updated the PR and verified it works on bionic (I also moved the maybeRemove call a bit higher up); can you take another look?
<manadart> achilleasa: Yep, saw. Upgrading now.
<achilleasa> manadart: if you set juju.core.networking level to DEBUG you should see a log of the call to remove the veth
<manadart> achilleasa: Yep, that one's done. Onto the next.
<manadart> achilleasa: You try and merge the wrong PR?
<achilleasa> manadart: yeah. cancelled the merge once I spotted it... too many open tabs
<achilleasa> the right one just landed
<hml> stickupkid: i think i know why height and width are not unmarshalling in Media… they can return integer or null
<hml> stickupkid: pieced that info out of http://api.snapcraft.io/docs/charms.html
<stickupkid> hml, any reason why we read the resp.Body twice?
<hml> stickupkid: not that i can think of.
<hml> stickupkid: missed that in putting together the spike
<stickupkid> hml, as we're not doing auth yet, I'm going to cull most of the do command and bakery client for now
<stickupkid> hml, we can add as required (which I'm sure will be soon)
<hml> stickupkid: will it be harder to move back in when we do?
<stickupkid> hml, nah, just align to an interface
<hml> stickupkid: rgr
<stickupkid> hml, so the http client and the bakery client both conform to `func (Client) Do(request *http.Request) (http.Response, error)`
<stickupkid> hml, so as long as we do that, we can inject what ever we want
<hml> stickupkid: ack
<cory_fu_> wallyworld: Hey, for CMR, is there a way to get the remote side's model UUID?
<cory_fu_> wallyworld: Actually, I should step back.  What I really want to know is when a CMR is created on AWS, how are the SGs modified so that the two units can talk to each other?  Are specific rules added to the unit SGs?
<timClicks> does the juju/gnuflags package support modifying a Var once it's been added to the flagset?
<timClicks> the juju deploy command includes flags from add-unit directly.. I would like to add " (charm only)" to their help lines when someone execs `juju help deploy`
<cory_fu_> wallyworld: Nm, tested and confirmed that it's the specific rules on the unit SG, which makes sense.
<timClicks> thumper: I keep wanting to make juju help topics a thing again
<timClicks> juju help charm-urls
<thumper> timClicks: I'm ok with that...
<timClicks> I'm going in circles trying to make 'juju help deploy' shorter. It's getting easier to read, but longer.
<tlm> wallyworld: https://github.com/juju/juju/pull/11723
<wallyworld> ok
<wallyworld> tlm: add bug to PR description, mark as in progress...
<tlm> ah yep
<wallyworld> tlm: just to test end-to-end, might be worth hacking an existing charm and trying it, as we've seen once or twice a missing bit in practice, just to be sure etc
<wallyworld> it looks ok, but you never know
<tlm> any suggestions on what charm to hack etc ?
<wallyworld> i use a mariadb one i have unpacked locally. ~juju/mariadb-k8s
<wallyworld> cs:~juju/mariadb-k8s
<wallyworld> just hack the extra line in the yaml
#juju 2020-06-18
<tlm> wallyworld: sent you a dm in case your notifications are still playing up
<timClicks> is it possible to use `juju deploy` on k8s and require that it's deployed to a pod with a pre-existing CRD? Sort of like --devices but for CRDs?
<timClicks> perhaps --constraint would make more sense, if that makes any sense at all
<thumper> timClicks: I'm not sure where you're getting with this
<tlm> wallyworld: got 5 minutes for HO ?
<wallyworld> tlm: i do
<wallyworld> hpidcock: if you have any time given there's a meeting tonight https://github.com/juju/juju/pull/11724
<hpidcock> wallyworld: sure thing
<hpidcock> wallyworld: I'm sorry for the review but here it is https://github.com/juju/juju/pull/11724
<tlm> wallyworld: not urgent but when you get 2 minutes can you check I correctly addressed the logging comment in https://github.com/juju/juju/pull/11671
<wallyworld> tlm: just finished a call, will look after coffee
<timClicks> thumper: you might want to take a look https://discourse.juju.is/t/inline-help-update-juju-deploy/2767/6?u=timclicks
<timClicks> it's a fairly aggressive change so I haven't filed a PR
<thumper> ok, can we look tomorrow during our call?
<thumper> wallyworld: where does the unit cloud container status get updated?
<thumper> I can't seem to find it
<thumper> hmm... in UpdateUnitOperations
<timClicks> thumper: sounds good
<wallyworld> thumper: yeah, there
<wallyworld> hpidcock: thanks for review, i've made changes but also answered a couple of comments, see if you agree
<wallyworld> i gotta go buy dinner, bbiab
<thumper> wallyworld: it seems very hard to set to test
<thumper> if you have a pointer that'd be helpful
<wallyworld> tlm: i think the model controller PR might be missing the loggingConfigUpdater worker
<manadart_> stickupkid or achilleasa: Need a forward merge review: https://github.com/juju/juju/pull/11725. Reasonably substantial.
<stickupkid> seems like a lot of deletions
<manadart_> stickupkid: From https://github.com/juju/juju/pull/11724.
<stickupkid> manadart_, yeah added a comment about him publicly announcing this...
<tlm> wallyworld: no it's there
<flxfoo> Hi all, quick one
<flxfoo> Is there a way to proxying access to instances via the controller? instead of doing direct `juju ssh <id>` ?
<achilleasa> flxfoo: did you try 'juju ssh --proxy <id>'?
<rick_h> achilleasa:  flxfoo and there's a controller config to set it up to work that way I believe but the user has to have ssh access on the controller for the proxying to work
<rick_h> e.g. not great for shared environments/etc
<flxfoo> achilleasa: rick_h thanks guys , appreciated...
<manadart_> stickupkid: Can you tick a forward merge of the same patch? https://github.com/juju/juju/pull/11727
<flxfoo> Hi again
<flxfoo> `juju ssh --proxy` just stall
<flxfoo> I have a few subnets (class c) to form private subnets... i set spaces related to those subnet...
<flxfoo> after deploying, I have a class c IP in "public IP" in juju status
<flxfoo> I guess I am missing something here
<flxfoo> (so direct ssh or ssh --proxy) don't work
<flxfoo> is anybody using aws with private (class c) subnets, with instances not having public ips (I mean having only private ips), able to reach them properly? is there a specific setting perhaps?
<achilleasa> flxfoo: can you try the script in the "what if I need to ssh into a machine with a private IP" section in https://discourse.juju.is/t/how-to-create-and-use-spaces-on-aws/2115?
<flxfoo> achilleasa: will do thanks
<stickupkid> hml, CR https://github.com/juju/juju/pull/11728
<achilleasa> petevg: got a question. If the plan is to extend the open/close-port tools to work with endpoints, how will charms know whether that feature is available to them (they would get an error running the tool with extra args on older controllers)
<achilleasa> do we want to provide a new open/close-port-for-endpoints tool which charms can discover on their path?
<achilleasa> stickupkid: have we done anything similar in the past? ^^^
<stickupkid> achilleasa, not a clue on that one
<rick_h> hml:  ping, do you know if there's one juju operator pod per unit in a k8s model or one pod per application?
<hml> rick_h: i believe it's per application.
<petevg> achilleasa: whoops. Never followed up on your question. Sorry. Short answer: I don't know. Long answer: we should probably put together a list of possibilities, and pick the least bad.
<flxfoo> achilleasa:thanks for your answer... I think I got more detail though
<flxfoo> the controller is on a different vpc
<flxfoo> we enabl peering between the two vpc
<flxfoo> then now if I ssh an instance (with the ssh key) from within the controller I can connect.
<flxfoo> but the client does not
<flxfoo> achilleasa: the script ends up with usage: nc [-46CDdFhklNnrStUuvZz] [-I length] [-i interval] [-M ttl]
<flxfoo>  achilleasa using ssh -J seems to be working
<flxfoo> achilleasa: I used your script as a base for another one with a one-liner... should I post it on the page you mentioned?
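The `ssh -J` approach flxfoo landed on can be sketched as a ProxyJump through the controller; the addresses and user below are placeholders, not values from this conversation:

```shell
# Sketch: reach a unit that only has a private IP by jumping through the
# controller's public address (ProxyJump, i.e. ssh -J).
CONTROLLER=ubuntu@<controller-public-ip>
TARGET=ubuntu@<unit-private-ip>

ssh -J "$CONTROLLER" "$TARGET"

# Equivalent ~/.ssh/config form:
#   Host unit
#       HostName <unit-private-ip>
#       User ubuntu
#       ProxyJump <controller-public-ip>
```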
<tlm> thumper: got 5 minutes for HO ?
<thumper> tlm: sure
#juju 2020-06-19
<timClicks> is any special syntax required for cross-controller and/or cross-cloud CMR?
<hpidcock> wallyworld: did your recent fixes fix this https://bugs.launchpad.net/juju/+bug/1880422 ?
<mup> Bug #1880422: Bootstrapping manual controller on arm64 fails <manual-provider> <juju:Triaged> <https://launchpad.net/bugs/1880422>
<wallyworld> hmmm
<wallyworld> maybe not
<wallyworld> i'd need to look at the code
<hpidcock> I can take a look
<wallyworld> i think he just needs ---constraints "arch=arm64"
<wallyworld> which is separate to my fix, that should already have worked
<wallyworld> thumper: https://github.com/juju/juju/pull/11731
 * thumper looks
<thumper> wallyworld: done
<wallyworld> ty
 * thumper sighs...
<thumper> I thought I had all this sorted, now got weirdness
 * thumper takes a deep breath
<thumper> figured out why some of them are failing
<thumper> FFS
<hpidcock> thumper: did you turn it off and on again?
<thumper> hpidcock: oh I wish it was that easy
<thumper> now to go and fix tests that I've broken with the "fix"
<thumper> this is the branch that will never die
<thumper> ok... I think they are all passing now
<thumper> if that's right, it will now be time to pull the branch apart to land in pieces
<thumper> ugh... still need to write the upgrade step
<thumper> can do that later
<thumper> won't be landing that bit first anyway
 * thumper pulls off a chunk to land
<thumper> wallyworld, hpidcock: https://github.com/juju/juju/pull/11732
<hpidcock> lookin
<thumper> simple PR to just move stuff
<thumper> hpidcock: I'm EODing, if you're happy, can you please $$merge$$ it?
 * thumper out
<hpidcock> sure thing
<achilleasa> flxfoo: curious why you got the usage prompt; perhaps adding 'set -eux' at the top of the function should provide more info about what gets passed to nc? If you do come up with an improved version of the script by all means post it to that page for others to use
<manadart_> achilleasa: Got time to review this one? Follows the original refactor you reviewed: https://github.com/juju/juju/pull/11734
<achilleasa> manadart_: sure
<flxfoo> achilleasa: I think the ssh client version changed the proxycommand, so it no longer uses nc
<stickupkid> manadart_, achilleasa CR https://github.com/juju/juju/pull/11733
<hml> good morning
<manadart_> Morning hml.
<manadart_> Anyone able to review a forward merge? No conflicts. https://github.com/juju/juju/pull/11735
<hml> stickupkid: review please https://github.com/juju/juju/pull/11722
<stickupkid> hml, nice :+1:
<hml> stickupkid:  ty
<rick_h> petevg:  do you have time to pre-forrester?
<rick_h> pre-wave?
<manadart_> stickupkid: https://github.com/juju/description/pull/79
<petevg> rick_h: I've got TGIF at 11am, my time. I'm free now, though. Want to jump into a hangout?
<rick_h> petevg:  sure, tell you what let's hop in your tgif and I'll just eat the gap if that's ok
<petevg> Sounds good :-)
<rick_h> guild heads up I'm going to steal guimaas for demo work again kthx sorry
<stickupkid> achilleasa, ^
<achilleasa> stickupkid: got my own maas now :p
<rick_h> achilleasa:  ooooh, fancy!
<achilleasa> rick_h: there wasn't an easier/quicker way to test the OVS stuff so I finally bit the bullet and set up a local with the edge snap :D
<rick_h> achilleasa:  cool
<stickupkid> hml, find https://github.com/juju/juju/pull/11736
<hml> stickupkid: osm is a bundle, so i'm guessing the bundle contains wordpress
<hml> stickupkid: so that makes sense to me for find
<stickupkid> hml, osm is the k8s stuff
<stickupkid> hml, https://jaas.ai/osm/bundle/44
<hml> stickupkid: maybe not… it's a bundle, but no wordpress
<stickupkid> hml, yeah... interesting though
<stickupkid> hml, also it states that it's a charm, when it's a bundle, so we really should catch that?
<hml> stickupkid: bug in api?
<hml> stickupkid: we do catch it… just need to type the Type :-)
<stickupkid> hml, it's really interesting that they have a leaf called "charm"... I wonder if there is a better word for that
<stickupkid> i.e. we have "type: bundle" and a leaf called "charm"
<stickupkid> you get me?
<hml> stickupkid: not following
<stickupkid> ho?
<hml> stickupkid: sure
<rick_h> petevg:  did get the bootstrap to work with the new larger instances
<petevg> rick_h: cool. I'm still working on azure (I've got my credentials all straightened out. I'll see what I can do to make sure that I get instances that are large enough myself.)
<petevg> rick_h: how much ram and disk did you end up allocating to each node?
<rick_h> petevg:  8gb ram k8s nodes
<rick_h> I think they're two cores
<rick_h> I think the key was getting from the 4gb of ram default nodes to the 8gb of ram ones
<petevg> rich_h: cool. Thx!
<thedac> coreycb: https://pastebin.canonical.com/p/KvKPxrMTkK/
<thedac> cat /root/.pydistutils.cfg
<thedac> [easy_install]
<thedac> find_links = file:///var/lib/juju/agents/unit-octavia-0/charm/wheelhouse/
<thedac> allow_hosts = ''
<thedac> pip list |grep setup
<thedac> setuptools     47.3.1
<thedac> setuptools-scm 1.17.0
<thedac> dpkg -l |grep setuptoo
<thedac> ii  python3-setuptools               39.0.1-2                                    all          Python3 Distutils Enhancements
<thedac> Could it be checking the wrong version of setuptools?
<thedac> coreycb: setuptools_ver = _load_installed_versions('pip3').get('setuptools') is returning None
<petevg> rick_h: what demo are you deploying on your k8s cluster? Just mongodb, or something else?
<rick_h> petevg:  I'm getting a small kubeflow setup from the k8s team
<petevg> rick_h: nice. Share it with me when you get it? (I've got a model theoretically running on AKS, but I need to test it to verify.)
<rick_h> petevg:  yea, I'm using hte mongodb one to test with. Let me get you the bundle I'm using
<rick_h> petevg:  https://pastebin.canonical.com/p/nrdyYwXMhw/
<petevg> thx!
<rick_h> petevg:  yea so that's just mongodb.yml and juju deploy ./mongodb.yaml and should work
 * rick_h has to run the boy to a party, I'll check back in later
<petevg> rick_h: AKS was pretty straightforward to bootstrap. Where would you like me to drop my notes?
<rick_h> petevg:  email would be fine for now if that's cool
<petevg> rick_h: sent. Have a great weekend!
<rick_h> petevg:  ty you too
#juju 2020-06-21
<thumper> https://github.com/juju/juju/pull/11737 for anyone
