[02:33] <seal1> How do I approach sending over a master ssh key to a related slave for the jenkins and jenkins-slave charm? I have taken the following approach and cannot find the documentation of how to access a related unit - https://gist.github.com/slatunje/c357791dcc1b42d4df12
[11:30] <jamespage> gnuoy, could you review - https://code.launchpad.net/~james-page/charm-helpers/lp.1391784/+merge/242422
[11:30] <jamespage> it adds a default backend to all haproxy configurations that will loadbalance over private-address
[11:30] <gnuoy> yep (otp at the mo)
[11:30] <jamespage> gnuoy, ta
[11:54] <gnuoy> jamespage, unit_test fail
[11:54] <jamespage> gnuoy, oh joy
[11:55] <jamespage> gnuoy, let me see
[11:57] <jamespage> gnuoy, fixed and pushed
[11:58] <gnuoy> jamespage, merged
[11:59] <jamespage> gnuoy, thanks!
[11:59] <jamespage> gnuoy, are you ok for me to sync that out to the next charms?
[11:59] <gnuoy> jamespage, absolutely
[12:18] <jamespage> gnuoy, done
[13:16] <mwak> hi
[14:05] <marcoceppi> Is there any charmhelper to run a command as a user?
[14:07] <lazyPower> marcoceppi: not at present, no.
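In the absence of a helper, a charm hook (which normally runs as root) can shell out to `su`; a minimal sketch — the `su_command`/`run_as` names are illustrative, not charmhelpers API:

```python
import subprocess

def su_command(user, command):
    # Build the argv for running `command` as `user` via su.
    return ['su', user, '-c', command]

def run_as(user, command):
    # Execute it; hooks run as root, so su prompts for no password.
    return subprocess.check_output(su_command(user, command))
```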
[14:07] <marcoceppi> lazyPower: ta
[14:07] <mwak> o/ marcoceppi lazyPower
[14:07] <lazyPower> o/ mwak
[14:09] <marcoceppi> \o mwak
[14:13] <mwak> still fighting with the hadoop charm
[14:13] <mwak> I ve rebuilt hadoop on armhf but still the same issue with hdfs
[14:21] <lazyPower> mwak: it sounds more like there's an issue with the setup than with the binaries. but I cant really confirm that at present.
[14:21] <lazyPower> mwak: have you tried a single node setup vs the cluster so there's fewer moving parts to debug?
[14:26] <mwak> lazyPower: you mean juju deploy hadoop and then run the test on the machine?
[14:26] <mwak> right
[14:26] <mwak> ?
[14:28] <lazyPower> yeah
[14:28] <lazyPower> mwak: just the master/slave nodes actually - you'll still need a yarn controller and some computes - so single node is a misnomer on my part.
[14:29] <mwak> I did, but I just respawned to retry
[14:30] <mwak> What I did atm is re-built hadoop from the source on an armhf server, edit the charm to use the oracle jdk
[14:31] <lazyPower> yeah that shouldn't have much effect - it should still work without any issue, since Java is mostly portable among JREs
[14:33] <mwak> yep but there are .so libraries which were not built for arm.
[14:34] <lazyPower> very true, the bins they bundle with it
[14:35] <mwak> lazyPower: I have to start it manually when I deploy on a single node?
[14:36] <lazyPower> mwak: start manually + i think you'll have to edit the configs to point to the local microservices - as those are not set until relationship exchange
[14:36] <mwak> alright
[14:37] <mwak> it is started!
[14:38] <mwak> interesting
[14:38] <mwak> http://pastebin.com/7DEb6bMG
[14:41] <lazyPower> mwak: smells like progress to me
[14:41] <mwak> right, but cpu usage is 0%
[14:41] <mwak> :/
[14:41] <lazyPower> so the job itself is probably failing to start
[14:41] <lazyPower> but the microservices are connecting
[14:42] <mwak> yep
[14:42] <mwak> looks like the problem occurs when relations change
[14:45] <mwak> http://212.47.235.30:8088/cluster/app/application_1421159802799_0001
[14:47] <jamespage> gnuoy, know you are probably a busy man, but could you review the MP's on https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1391784
[14:48] <mup> Bug #1391784: HA failure when no IP address is bound to the VIP interface <openstack> <cinder (Juju Charms Collection):In Progress> <glance (Juju Charms Collection):In Progress> <keystone (Juju Charms Collection):In Progress> <neutron-api (Juju Charms Collection):In Progress> <nova-cloud-controller
[14:48] <mup> (Juju Charms Collection):In Progress> <openstack-dashboard (Juju Charms Collection):In Progress> <percona-cluster (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1391784>
[14:48] <jamespage> they re-introduce the cidr and iface configuration options as fallbacks for when people want to bind VIP's to subnets which are not also configured as subnets for primary addresses
[14:54] <jamespage> dimitern, see the crazy networking stuff that people get up to ^^
[14:54] <jamespage> dimitern, it's an edge case, but one for which I've had two separate bug reports
[14:58] <dimitern> jamespage, wow :)
[14:58] <dimitern> jamespage, is there something juju can do for these short-term?
[14:59] <jamespage> dimitern, not really - its an accidental feature that we took away last cycle ...
[14:59] <jamespage> dimitern, but it does highlight what things people do
[14:59] <jamespage> dimitern, I've categorically never ever deployed an HA solution in that way
[14:59] <jamespage> but apparently some people do
[15:01] <rick_h_> deryck: ! dude
[15:01] <deryck> Hi rick_h_!
[15:01] <dimitern> jamespage, I'll have a closer look for educational purposes then :)
[15:02] <jamespage> dimitern, sounds like a good idea
[15:04] <gnuoy> jamespage, approved
[15:08] <jamespage> gnuoy, thanks!
[15:43] <lazyPower> mbruzek: whit:  https://pythonhosted.org/charmhelpers/api/charmhelpers.payload.html  -- docs on the execd pattern in charmhelpers
[15:53] <lazyPower> dannf: landed your arm64 update to mongodb. Thanks for the patience on this one - looks really good, i like the pattern you used to do this too
[15:53] <dannf> lazyPower: ack, thx!
[16:00] <lazyPower> stub: ping
[16:01] <lazyPower> Looking over your merge against precise postgres charm - does this warrant additional action against the trusty charm as well when accepted for precise? https://code.launchpad.net/~stub/charms/precise/postgresql/manual-replication/+merge/240556
[16:02] <lazyPower> or is this scoped strictly to precise
[16:15] <alexis_> hello everyone
[16:15] <lazyPower> btw mbruzek i landed those arm64 changes. Make sure you sync with master if/when the review you're working on is ready.
[16:15] <lazyPower> o/ alexis_
[16:16] <alexis_> hello how are you
[16:16] <mbruzek> lazyPower: the mongodb tests fail for me.
[16:16] <jcastro> lazyPower, can you check this  pls? https://github.com/juju-solutions/juju-solutions.github.io/pull/21
[16:16] <lazyPower> jcastro: need it merged or just asking if it looks right? cuz +1
[16:17] <jcastro> I didn't want to merge it because I wrote it
[16:17] <jcastro> That's just nice manners right? But yeah, merge pls.
[16:17] <lazyPower> done
[16:17] <lazyPower> :)
[16:28] <lazyPower> mwak: I'm still a bit confused on the diagnosis of what's wrong, but based on the last pm - you need to restart all the hadoop services and are asking how to do so?
[16:29] <lazyPower> cory_fu: ^
[16:29] <lazyPower> can you help mwak triage some port binding and service management in the apache hadoop charm
[16:29] <mwak> lazyPower: at the moment I have
[16:29] <mwak> : running the terasort.sh gives me the following output
[16:29] <mwak> ubuntu@onlinelabs-d2d614b9ffa944a5bfd0731c3e3bf18d:~$ /usr/local/hadoop/terasort.sh
[16:30] <mwak> rm: `in_dir': No such file or directory
[16:30] <mwak> 15/01/13 16:18:36 INFO client.RMProxy: Connecting to ResourceManager at /10.1.8.72:8032
[16:30] <mwak> 15/01/13 16:18:38 INFO ipc.Client: Retrying connect to server: onlinelabs-d2d614b9ffa944a5bfd0731c3e3bf18d/10.1.8.72:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
[16:30] <mwak> ubuntu@onlinelabs-d2d614b9ffa944a5bfd0731c3e3bf18d:~$ lsof -i  | grep 8032
[16:30] <mwak> java    8997 ubuntu  183u  IPv6  13866      0t0  TCP localhost:8032 (LISTEN)
[16:30] <mwak> It looks like there is a bind problem
[16:31] <mwak> curl localhost:8032 -> ok | curl 10.1.8.72:8032 -> refused
[16:34] <lazyPower> mwak: yeah, seems like its bound to local and not the interface
[16:34] <lazyPower> mwak: that should be defined in site.xml
[16:34] <lazyPower> or core-site.xml
[16:34] <mwak> in core-site i have
[16:34] <mwak>     <value>hdfs://10.1.8.72:9000</value>
[16:35] <mwak> What should I do to bind on 10.1.8.72 instead of 127.0.0.1
[16:35] <mwak> ?
[16:35] <mwak> which file to edit
[16:35] <cory_fu> yarn-site.xml
[16:35] <jcastro> lazyPower, I get this http://pastebin.ubuntu.com/9731838/
[16:36] <cory_fu> Looking for yarn.resourcemanager.address
[16:36] <cory_fu> mwak: ^
[16:36] <lazyPower> jcastro: do you have bundler installed?
[16:36] <lazyPower> if so, create a Gemfile in your repository and place the following in it:
[16:36] <cory_fu> That gets populated from either `unit-get private-address` or `relation-get private-address` in the charm, neither of which should return localhost
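To check what actually landed in the rendered config, the Hadoop site XML can be parsed directly; a rough sketch (the sample values mirror the addresses discussed above — in practice you would read the file from the charm's config dir):

```python
import xml.etree.ElementTree as ET

def hadoop_property(xml_text, name):
    # Hadoop site files are <configuration><property><name/><value/></property>...
    root = ET.fromstring(xml_text)
    for prop in root.findall('property'):
        if prop.findtext('name') == name:
            return prop.findtext('value')
    return None

sample = """<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>10.1.8.72:8032</value>
  </property>
</configuration>"""
```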
[16:37] <jcastro> lazyPower, yeah
[16:37] <lazyPower> https://gist.github.com/4ca44ea8c7251d01dadd
[16:37] <lazyPower> that should get jekyll and all the required gems
[16:37] <mwak> cory_fu: the address is correct in yarn-site.xml
[16:37] <lazyPower> then bundle exec jekyll serve
[16:37] <mwak> Oo
[16:38] <cory_fu> Hrm.  Actually, now that I think about it, that's probably the wrong end of the config, there
[16:38] <jcastro> lazyPower, I get "could not locate Gemfile", but also github isn't generating the page either
[16:38] <lazyPower> jcastro: let me pull the repo and see whats up, 1 sec.
[16:38] <lazyPower> this may be environment vs configuration
[16:39] <mwak> cory_fu: hum ok..
[16:39] <cory_fu> mwak: Still looking...
[16:39] <jcastro> lazyPower, afaict I've followed every instruction exactly.
[16:40] <mwak> cory_fu: FYI all values in yarn-site.xml are bound
[16:40] <mwak> to localhost and not the private ip
[16:40] <lazyPower> jcastro: i've got all kinds of jekyll errors spewing at me atm when i pull what's in master
[16:40] <cory_fu> mwak: Wait, what?  I thought you said it had the non-localhost IP address?
[16:41] <mwak> sec
[16:41] <mwak> will paste the file
[16:41] <cory_fu> Ok
[16:41] <mwak> http://pastebin.com/6yB2MnWt
[16:41] <lazyPower> jcastro: i know why its not generating
[16:41] <lazyPower> the post layout expects there to be a date
[16:42] <lazyPower> you need to frontload this with all the data points you would have in a normal blog post
[16:42] <jcastro> for post, not pages though right?
[16:42] <lazyPower> the layout:post is what's telling jekyll what to load and parse this page content file as
[16:42] <mwak> cory_fu: http://pastebin.com/tnK9rsRw
[16:42] <lazyPower> 2 options - add the data points to make this like a blog post so it satisfies the liquid variables, or add a layout that strips those variables it wants.
[16:42] <stub> lazyPower: The PostgreSQL precise and trusty branches are identical, so I'll get them in sync later.
[16:43] <lazyPower> stub: 10-4, thanks for confirmation
[16:43] <cory_fu> mwak: That seems correct.  Not sure why it would have bound to localhost, then
[16:43] <stub> lazyPower: Thanks for the review :)
[16:43] <jcastro> lazyPower, could I use "default" for the post layout instead of post?
[16:44] <lazyPower> jcastro: i added a date and it worked as expected
[16:44] <mwak> cory_fu: to restart the master what should I do? run the stop-all.sh / start-all.sh scripts?
[16:44] <jcastro> lazyPower, I changed it to "default" and it works now as expected
[16:44] <jcastro> lazyPower, thanks for the help
[16:44] <lazyPower> np
[16:45] <jcastro> http://blog.juju.solutions/containers.html
[16:45] <jcastro> blam, instant generation
[16:46] <jcastro> now to fix the content ...
[16:46] <TheFezzer> hey there lazypower, if you’re into reviewing charms, can I show you one?
[16:46] <cory_fu> mwak: I imagine that would work, though the charm uses, e.g., su yarn -c '/usr/lib/hadoop/hadoop-2.4.1/sbin/yarn-daemon.sh --config /etc/hadoop/conf.juju stop resourcemanager'
[16:47] <lazyPower> TheFezzer: is it in the queue? :) we have several in here that have waited their turn to get reviewed.
[16:47] <adalbas> hi! i'm trying to write charms using the python charmhelpers. When I ran "charm create -t python mycharm", i got a different structure than expected (no hooks.py, no charm-helpers.yaml). Does anyone have the same problem?
[16:47] <jcastro> whit, do you have any kubernetes information I could add to this page?
[16:47] <lazyPower> adalbas: probably looking for python-basic
[16:47] <TheFezzer> uhh, i’m not sure I can really do that sort of stuff without a sign-off from Legal. Oracle has /claws/.
[16:47] <lazyPower> adalbas: the charm create -t python generates a services framework charm.
[16:47] <whit> jcastro, looking
[16:48] <lazyPower> TheFezzer: lets take this out of band, if you're working on ISV work we can work something out for you to get regular charm reviews as a partner until its in the queue.
[16:48] <adalbas> lazyPower, so i should use charm create -t python-basic instead, is that it?
[16:48] <lazyPower> s/queue/store/
[16:48] <lazyPower> adalbas: give that a go and see if thats what you're looking for.
[16:48] <adalbas> lazyPower, thanks!
[16:49] <jcastro> lazyPower, unless anyone objects I'm not going to PR work on the team blog, too heavyweight for now, leave that for the posts themselves I figure.
[16:50] <lazyPower> jcastro: i'm not sure what you're telling me - pr work? OH you mean github pull request? i think we're ok with that for now as you work on those topic pages.
[16:50] <lazyPower> but i defer to the team, as there's several of us involved in that.
[16:50] <TheFezzer> Ya, I just really want a once-over aka sanity check
[16:51] <TheFezzer> not any sort of serious review
[16:51] <whit> jcastro, we could add some links I guess, let me get those
[16:55] <whit> jcastro, https://github.com/kapilt/bundle-kubernetes
[16:55] <whit> jcastro, that will be the main entry point
[16:57] <whit> jcastro, should be good to start with
[16:57] <jcastro> ta
[17:00] <jcastro> http://blog.juju.solutions/containers.html
[17:00] <jcastro> ok, now we're cooking!
[17:00] <lazyPower> aww yeee
[17:00] <lazyPower> jcastro: might want to link to the docker charm docs as well...
[17:00] <lazyPower> http://chuckbutler.github.io/docker-charm/
[17:00] <jcastro> yeah, after lunch
[17:00] <jcastro> ta
[17:00] <jcastro> <--- lunch
[17:04]  * marcoceppi lunchy lunch lunch
[17:05] <mwak> cory_fu: don't understand why, but after restart it is the same
[17:05] <mwak> the wrong interface is bound
[17:33] <cory_fu> mwak: Strange.  I'm deploying it now to see if I get the same results, though on aws instead of onlinelabs
[17:34] <mwak> alright
[17:34] <mwak> fyi if i bind on 0.0.0.0 it works
[17:34] <mwak> oO
[17:34] <mwak> all interfaces are listening
[18:00] <captine> evening all.  just started playing with juju on my local machine (desktop) and wanting to know if there is a way to allow other machines on my network to connect to the juju services? I see the juju containers get their own IP address which is different to that of my lan.
[18:00] <captine> some pointers would be much appreciated.
[18:06] <TheFezzer> I found that the sshuttle instructions for OSX.9 were good
[18:06] <TheFezzer> but the ones for X.10 were better
[18:06] <TheFezzer> also welcome to the club :)
[18:07] <lazyPower> captine: i have a write up for that 1 moment
[18:07] <captine> lazyPower, that would be awesome. Thnx
[18:07] <lazyPower> captine: http://blog.dasroot.net/making-juju-visible-on-your-lan.html
[18:07] <TheFezzer> captine if you do either of those, you can just use the address from juju status or from the unit panel in the gui
[18:08] <captine> thanks.  I will check it now.  I want to be able to access some of the services from outside my lan, so as long as I can do some forwarding from the router, it will be perfect.
[18:08] <TheFezzer> nice doc
[18:09] <TheFezzer> :D
[18:09] <lazyPower> Ta :)
[18:21] <cory_fu> mwak: Sorry it took so long, but it finally finished deploying.  On AWS, everything seems to have bound correctly: http://pastebin.ubuntu.com/9733248/
[18:27] <cory_fu> mwak: Also confirmed that the terasort.sh worked.  Let me pastebin the exact deploy steps I used
[18:27] <mgarza> Regarding MariaDB or MySQL charms is it possible for another charm to create multiple databases without using the db-admin relation?
[18:29] <cory_fu> mwak: http://pastebin.ubuntu.com/9733381/
[18:29] <lazyPower> mgarza: you'd need to use the db-admin relationship
[18:30] <lazyPower> mgarza: the defaults for the db relationship assume a single db per service that is related
[18:30] <whit> hey marcoceppi for backporting commits from the trusty version of a precise charm (w/no tests), what's the procedure wrt review?
[18:30] <lazyPower> mgarza: it sounds like a possible reason to add another relationship if you want to do an exchange cycle of what the app needs. i'd like to see that chatter back and forth before anything was merged - but i'm def. interested in your take on how it should work.
[18:30] <marcoceppi> whit: open a merge from trusty version to precise
[18:30] <whit> marcoceppi, thinking of https://code.launchpad.net/~niedbalski/charms/precise/mysql/precise-syncup/+merge/244436
[18:31] <whit> marcoceppi, that's been done, I'm asking what I should do re reviewing
[18:31] <whit> automated test failure is a AWS hiccup
[18:32] <marcoceppi> whit: I'll kick off another test, but that's about it, it's just a normal review. Really just make sure it deploys and works on precise
[18:33] <whit> marcoceppi, thanks
[18:34] <whit> will do
[18:35] <mgarza> lazyPower: Thanks I just wanted to make sure
[18:35] <lazyPower> np
[18:43] <cory_fu> mwak: I can't get onlinelabs to bootstrap, though, so I can't test it there.  :/  (Getting prompted for root password, despite having added my juju ssh key via the website.  Tips appreciated.)
[18:48] <lazyPower> cory_fu: you should add your personal id_rsa - as that's what it's using to connect for the initial bootstrap
[18:48] <lazyPower> *id_rsa.pub
[18:49] <captine> hi lazyPower.  I already had some juju instances and have tried following some articles on cleaning my local juju and redoing it, but keep getting the following "ERROR there was an issue examining the environment: cannot use 37017 as state port, already in use"
[18:49] <captine> when trying to bootstrap again
[18:49] <mwak> cory_fu: gimme a sec
[18:50] <lazyPower> captine: interesting - are you running something on port 37017?
[18:51] <lazyPower> captine: if not, it sounds like there may be a stale api-server around, and can you show me a pastebin of the output from initctl --list | grep juju
[18:51] <lazyPower> sorry, no double tack on the list command:  initctl list | grep juju
[18:52] <captine> no idea.
[18:52] <captine> i also see my eth0 disappeared... only have wireless now..  lol.  fun learning
[18:52] <jose> rick_h_: ping
[18:53] <captine> nothing shows
[18:53] <captine> just blank
[18:55] <marcoceppi> captine: it sounds like you deleted the upstart job before stopping it
[18:56] <captine> mmm.  not sure i follow (am just a simple accountant playing around :))
[18:56] <marcoceppi> captine: if you do a `ps -aef | grep juju`
[18:56] <marcoceppi> you should see 0 results
[18:56] <marcoceppi> but you'll probably see a jujud and a mongod running
[18:56] <captine> i see it is running
[18:56] <captine> do i do a killall -9 or something?
[18:56] <marcoceppi> captine: kill those processes
[18:57] <marcoceppi> just a kill on each process id (the second column) is sufficient
[18:59] <captine> i type "sudo kill xxxx" where xxxx is the id.  after doing it for each, ps -aef shows more processes with different id's.  Like it is respawning or something
[19:00] <lazyPower> well thats fun... we're playing chase the pid huh?
[19:00] <sarnold> pkill can help obliterate lots of pids :)
[19:01] <lazyPower> sarnold: why do i never think of this in the hot seat? good insight.
[19:01] <captine> yip
[19:01] <captine> cannot type fast enough
[19:02] <captine> lol.  pkill didnt help, unless there is an option i should use
[19:03] <captine> would they start up after rebooting?
[19:03] <captine> I can try that
[19:04] <lazyPower> captine: that might resolve it but i know that historically we placed upstart jobs for the containers
[19:04] <lazyPower> it also appears some of the upstart dependency concerns have moved as well
[19:04] <lazyPower> we should see jujud in the listing from initctl
[19:04] <lazyPower> or we used to
[19:04] <captine> :)
[19:04] <captine> change and progress... keeps it interesting
[19:04] <lazyPower> ahhh ok
[19:05] <lazyPower> if you sudo, those tasks should show up
[19:05] <lazyPower> sudo initctl list
[19:05] <lazyPower> also - i re-read the scrollback and know whats going on
[19:05] <lazyPower> you did those bridge instructions on a running environment didnt you?
[19:05]  * lazyPower makes a note to circle back and add that in huge warning text
[19:06] <captine> sudo works
[19:06] <lazyPower> ok, can you sudo service juju-db-<name>-local stop and run the same on the agent?
[19:06] <captine> lazyPower, about the running environment... yip, think i did
[19:07] <lazyPower> that should stop the pids from spawning like crazy
[19:07] <lazyPower> and then we can go back and triage the environment
[19:07] <lazyPower> as right now, its in a very inconsistent state
[19:07] <lazyPower> its using lxcbr0 to communicate according to the jenv, but if you wiped that .jenv file - those lxc containers are still running and panicking that they cannot communicate with the state server
[19:08] <lazyPower> so we'll need to start from scratch
[19:08] <lazyPower> if there's any data you have in those containers - we can attach and preserve that info before wiping them - if its all inconsequential stuff all the better that we can have zero concern about whats in those lxc containers
[19:09] <captine> na
[19:09] <captine> i was just playing with them
[19:09] <lazyPower> ok
[19:09] <captine> no data to save
[19:09] <lazyPower> so lets confirm we have the processes stopped of jujud
[19:11] <captine> i couldnt stop the one service "juju-agent-$USER-local start/running, process 7472"
[19:11] <captine> using the sudo service ... stop, it says it is unrecognized... and I copy pasted it from the service listing
[19:11] <lazyPower> ok
[19:12] <lazyPower> so we have a runaway process... can you pkill the zygote in ps aux to ensure its been nuked with fire?
[19:12] <lazyPower> sarnold: does that sound legit? ^
[19:12] <captine> juju bootstrap is running now though
[19:12] <captine> so lets see
[19:12] <captine> :)
[19:12] <lazyPower> captine: can you pastebin me the output of sudo lxc ls --fancy?
[19:13] <lazyPower> captine: i want to see what lxc containers are left around from that old deployment so we can clean those up as well
[19:13] <captine> lxc: command not found
[19:14] <sarnold> lazyPower: the best part about pkill is that you can use e.g. pkill -u sarnold to kill all the processes from one uid.. I suspect it's still racy, but running it a few times might do the job
[19:15] <lazyPower> sarnold: you probably dont want to pkill everything as the user you're logged in as
[19:15] <lazyPower> i can see that being the cause of nuking your current session
[19:16] <lazyPower> captine: sudo lxc-ls - my brain is failing me hard core today on commands
[19:16] <lazyPower> captine: so to confirm, its sudo lxc-ls --fancy
[19:16] <sarnold> lazyPower: oh, sure, but it works also for e.g. juju :)
[19:18] <jrwren> agent-state-info: 'cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No node available.)'    how do I trigger a retry?
[19:18] <captine> http://slexy.org/view/s202HrnQ3K
[19:18] <lazyPower> jrwren: re-run the bundle deployment
[19:18] <lazyPower> juju-deployer should pick up where it left off
[19:19] <jrwren> lazyPower: *sigh* *grumble*  *ugh*   :p
[19:19] <lazyPower> captine: ok, see all those -local-machine-# containers that are stopped?
[19:19] <captine> lazyPower, do you work for Canonical?  how you guys all remember this stuff is beyond me
[19:19] <captine> i think so
[19:19] <captine> it says stopped
[19:19] <jrwren> juju-deployer does some magic more than just issuing another deploy command?
[19:19] <lazyPower> captine: you'll want to sudo lxc-destroy --name $name, the juju-$series-lxc-template containers can be left as is - they are used as clones to build the service containers.
[19:20] <lazyPower> jrwren: i dont know ;)
[19:20] <lazyPower> jrwren: use deployer though
[19:20] <captine> thanks a mil.
[19:20] <captine> will do so
[19:20] <jrwren> lazyPower: *grumble*
[19:23] <lazyPower> jrwren: <3
[19:24] <lazyPower> captine: no problem, let me know if you run into any further blockers, and thanks a bunch for circling back that there was an issue - and i need to update that guide :)
[19:27] <captine> thanks for the guide.
[19:32] <dalek57> do the juju charmhelpers provide a way to execute bash? I'm just doing subprocess.Popen right now. I'm not finding it in the docs
[19:32] <dalek57> I'm trying to install rvm
[19:35] <jrwren> dalek57: charmhelpers.core has liberal use of subprocess.check_call and subprocess.check_output, which are a bit easier than Popen, if you can get away with using them instead.
[19:39] <dalek57> jrwren: well, I have a shell script for installing rvm saved in the charm directory, and I'd like to run it. If I were just doing this as a bash script, I could just do $CHARM_DIR/helpers/install_rvm.sh. Is there a best practice for doing this in python?
[19:40] <jrwren> dalek57: I'm guessing check_call is what you want. the difference is what it captures as return values in python.
[19:40] <jrwren> or maybe just subprocess.call()
[19:46] <lazyPower> yeah subprocess is the successor to os.popen and family
[19:46] <lazyPower> dalek57: what jrwren outlined is what i would do. subprocess.call(['scripts/rvm_install.sh'])
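Fleshing that out as a hedged sketch — the `helpers/` path and `run_charm_script` name are illustrative, not charmhelpers API. Resolving against `CHARM_DIR` keeps the hook independent of its working directory:

```python
import os
import subprocess

def run_charm_script(relative_path):
    # Resolve against CHARM_DIR so the hook works regardless of cwd;
    # fall back to cwd when running outside a hook (e.g. local testing).
    charm_dir = os.environ.get('CHARM_DIR', os.getcwd())
    script = os.path.join(charm_dir, relative_path)
    # Run via bash so the script needs no executable bit set.
    subprocess.check_call(['bash', script])
```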
[19:52] <marcoceppi> or translate the rvm_install.sh script to python
[19:56] <jrwren> or don't use rvm and package the ruby version you want for ubuntu.
[19:57] <marcoceppi> except the ruby version in ubuntu is not the best
[19:59] <jrwren> all the more reason to make it better.
[20:10] <rick_h_> jose: pong, what's up?
[20:10] <jose> rick_h_: hey! would it be a problem if I push a new docs folder for ES translated ones?
[20:10] <jose> or would it be preferred to not do it yet?
[20:11] <rick_h_> jose: hmm, so we're working on a deployment today/tomorrow that will update our docs loading
[20:11] <rick_h_> If you had a change what would be great would be if you had that change in a fork
[20:11] <rick_h_> jose: and then email us, we'd setup a QA to make sure that your fork worked and upgraded fine from where we're at now
[20:12] <rick_h_> jose: and fix any bugs on our end
[20:12] <rick_h_> jose: and then once we're ready/sure it'll be fine merge your forked branch of the docs into trunk
[20:12] <marcoceppi> jose rick_h_ is there any build process for other languages?
[20:12] <rick_h_> marcoceppi: no, right now we just follow what you all were doing to build
[20:12] <rick_h_> marcoceppi: so that's what I want to see, what's in the changes jose is speaking of
[20:12] <jose> rick_h_: sure - I don't have direct push access. I guess I'll start translating today and will let you know once the branch is ready
[20:12] <marcoceppi> rick_h_: so you're pulling from github or from the bzr branch?
[20:12] <rick_h_> marcoceppi: from github
[20:13] <jose> so, in the docs there is an en folder, I'd like to create an es one so I can translate the docs into Spanish
[20:13] <rick_h_> marcoceppi: so if the org/structure of that process changes we need a heads up to make sure we're ready to follow the change
[20:13] <jose> if that's not supported I'll hold my horses
[20:13] <marcoceppi> jose: it'll be ignored by the build process
[20:13] <rick_h_> jose: well fork the docs and do what you will and we'll work on a path to make sure things work and get pulled together
[20:13] <rick_h_> jose: there's no reason not to start the work and we appreciate the heads up.
[20:13] <jose> ok, cool
[20:13] <marcoceppi> jose rick_h_ we want to do two additional things and this just reminded me that there's two build processes now
[20:14] <jose> thanks
[20:14] <rick_h_> marcoceppi: ok, we talked witht he web team to get docs redirected by the end of the week to jujucharms.com/docs
[20:14] <rick_h_> marcoceppi: so yea, if we've got changes we need to know and sync up.
[20:14] <marcoceppi> rick_h_: we want to create branches for releases of juju so we can tie docs to versions
[20:15] <marcoceppi> rick_h_: let me know when is a good time to talk about that, it's a git change which won't affect master
[20:15] <marcoceppi> but something we'd like to do
[20:15] <marcoceppi> rick_h_: I'd like to talk to you about possibilties before chit chatting on the list
[20:15] <marcoceppi> or we can do vice versa
[20:16] <rick_h_> marcoceppi: sure thing, can we setup a call tomorrow?
[20:16] <marcoceppi> rick_h_: yeah, sounds good
[20:16]  * rick_h_ is out of the office working from the car dealer atm
[20:17] <rick_h_> marcoceppi: either over lunch or after your standup work best?
[20:17] <marcoceppi> rick_h_: either works
[20:18] <rick_h_> marcoceppi: ok, sent. let me know if there's anyone else we should invite (nick?)
[20:26] <marcoceppi> rick_h_: probably, we've chatted about this multiple times, but it wouldn't hurt
[20:26] <rick_h_> marcoceppi: ok added
[20:27] <marcoceppi> rick_h_: this would just be a "is this possible, how would this look" once that's figured out I'll mail the list with the intentions for feedback and go from there
[20:27] <rick_h_> marcoceppi: sure thing
[20:27] <rick_h_> we've thought about and made sure it's possible
[20:27] <rick_h_> marcoceppi: so just have to get down to it
[21:38] <dalek57> Why might subprocess.check_call(("./helpers/rvm.sh && source /etc/profile.d/rvm.sh && rvm install ruby-"+ruby_version).split(), shell=True) create a zombie process, and never complete the installation?
[21:40] <dalek57> rvm is installed, but it doesn't get ruby-2.1.3, and when I juju ssh, I'm told that there is 1 zombie process
[21:41] <lazyPower> dalek57: are you su'ing or re-initializing the shell?
[21:41] <lazyPower> dalek57: sounds to me like you restarted the shell and that caused it to tank, leaving a zombie process
[21:42] <dalek57> lazyPower: does `source` do that?
[21:42] <lazyPower> depends on what you're sourcing
[21:42] <lazyPower> check the file you're sourcing and make sure it isn't reinitializing the shell. i do believe that rvm does that
[21:42] <lazyPower> its fairly heinous like that.
[21:43] <dalek57> yeah, ok. Well, when I run the command when I'm sshed in, it completes fine
[21:43] <dalek57> would I see side effects of it reinitializing the shell?
[21:44] <lazyPower> thing is, when you're ssh'd in you're in a shell
[21:44] <lazyPower> when you reinitialize the shell in subprocess - you're not in a login shell, you're in a thread nested inside the python process that's running
[21:45] <lazyPower> so when you reinitialize the shell, that thread goes away
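Separate from the shell re-initialization, the command quoted at 21:38 has a second pitfall: with shell=True, subprocess on POSIX treats only the first element of a list as the shell command and passes the rest as positional parameters to the shell, so only the first token runs. A small demonstration of the difference:

```python
import subprocess

# With shell=True pass a single string; the whole pipeline runs in one shell:
out_string = subprocess.check_output(
    'echo first && echo second', shell=True).decode()

# With a list plus shell=True, only the first element ('echo') is the shell
# command; 'first', '&&', 'echo', 'second' become $0, $1... and are ignored.
out_list = subprocess.check_output(
    'echo first && echo second'.split(), shell=True).decode()
```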
[21:45] <lazyPower> argh, netsplits :(
[21:57] <sebas5384> hey!! somebody had this error before? https://gist.github.com/anonymous/0a88537322e6b4daa516
[22:06] <lazyPower> o/ seal
[22:06] <lazyPower> er
[22:06] <seal> hi all
[22:07] <lazyPower> sorry seal, netsplits are driving me batty with mistargeting people when trying to reply
[22:07] <lazyPower> sebas5384: o/
[22:07] <lazyPower> sebas5384: where did that error originate? is that coming from your debug-log?
[22:07] <lazyPower> dalek57: wb - did you get any of my response?
[22:08] <seal> Is there a way to test the current hook being executed? I have tried if [[ $JUJU_HOOK_NAME == 'config-joined' ]]; then ... fi but this works within debug but fails when running
[22:09] <lazyPower> I may be wrong, but i think that only gets exported during debug sessions - but marcoceppi may say i'm incorrect. A way to find out seal would be to dump your env at the beginning of the hook for inspection.
[22:10] <seal> mmm
[22:11] <seal> I ask since I am extending the jenkins / jenkins-slave within the hook/install.d/* and I need to execute base on the current hook
[22:11] <seal> I will try the env dump
[22:12] <marcoceppi> seal: it's possible
[22:14] <marcoceppi> seal: but it's done by getting the basename of argv
[22:14] <seal> ah I see
[22:14] <marcoceppi> seal: so, basename $0 should give you hook name
[22:14] <seal> thanks
[22:14] <seal> will try that
[22:15] <marcoceppi> np, that's how it's done in the Python Charm Helpers
[22:15] <marcoceppi> seal lazyPower: for references: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/core/hookenv.py#L160
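The Python equivalent — roughly what hookenv.hook_name() in charmhelpers does per the link above, since juju invokes each hook by its own path:

```python
import os
import sys

def hook_name():
    # Juju executes hooks as e.g. .../hooks/config-changed, so the
    # basename of the running executable is the hook name.
    return os.path.basename(sys.argv[0])
```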
[22:16] <lazyPower> marcoceppi: Gracias. i forgot all about that.
[22:16]  * marcoceppi intensifies hat tipping
[22:17] <seal> marcoceppi: great tip
[22:18] <sebas5384> lazyPower: hey! :)
[22:18] <sebas5384> lazyPower: in the Drupal charm
[22:18] <sebas5384> from nothing ¬¬
[22:18] <lazyPower> sebas5384: that tls error has to be coming from something... its either failing to handshake with the state server or something weird is going on in the env.
[22:19] <sebas5384> yeah maybe it is something like a timeout
[22:19] <sebas5384> i'm on an unstable internet connection right now
[23:13] <dalek57> lazyPower: I'm getting strange behavior out of subprocess, and thought you might have more insight. subprocess.check_call('ls') returns a folder called ruby-install-0.5.0, but when I execute subprocess.check_call('cd ruby-install-0.5.0 && make install'), I get "no such file or directory"
[23:13] <dalek57> I ran the command on the machine manually, and the make install doesn't error out
[23:13] <sarnold> try make -C ruby-install-0.5.0 install instead
[23:13] <lazyPower> ^
[23:14] <sarnold> that avoids the use of the shell built-in 'cd', feels more likely to succeed to me
[23:14] <lifeless> or
[23:15] <lifeless> subprocess.check_call('make install', cwd='ruby-install-0.5.0')
[23:15] <lifeless> IIRC
[23:15] <sarnold> ooo
[23:16] <sarnold> yeah that feels even more likely to work :)
[23:16] <lifeless> you could set shell=True, but honestly, eewwwww.
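A hedged sketch of the cwd= approach — `run_in` is an illustrative wrapper, and note that without shell=True, check_call/check_output want an argv list, so the case above would be subprocess.check_call(['make', 'install'], cwd='ruby-install-0.5.0'):

```python
import subprocess

def run_in(directory, argv):
    # cwd= sets the child's working directory before exec, so no
    # shell built-in `cd` (and no shell=True) is needed.
    return subprocess.check_output(argv, cwd=directory).decode().strip()

# e.g. the equivalent of `cd ruby-install-0.5.0 && make install`:
# subprocess.check_call(['make', 'install'], cwd='ruby-install-0.5.0')
```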
[23:24] <dalek57> sarnold: Thanks!
[23:24] <sarnold> dalek57: check out lifeless's suggestion for cwd= -- it more closely replicates what you were doing with cd ... && make ...