[00:06] <weblife> sarnold: looks like you were right.  It was the socket.  Hadn't opened yet
[00:07] <sarnold> weblife: I wonder if there's a good / lightweight way to wait until a socket is opened..
[00:08] <sarnold> ideally, 'service' wouldn't return until the mongo server is actually running. but if it daemonizes, the child will return very nearly immediately, and the grandchild may not yet be ready..
[00:09] <weblife> I just separated it into its own function and threw it a little later in the install hook.  I am sure there is something else I could do but I am good with throwing the function later
[00:13] <weblife> This charm inits a mongodb instance of its own if there is no waiting instance.  Now i can focus on exporting the database if a mongodb instance is added.
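A hedged sketch of sarnold's "wait until a socket is opened" idea above: poll the port until it accepts connections or give up. It assumes `nc` is available; the host, port, and timeout values are illustrative, not anything the charm actually uses.

```shell
#!/bin/sh
# Poll a TCP socket until it accepts connections, up to a retry limit.
# Returns 0 once the socket is open, 1 if we gave up waiting.
wait_for_socket() {
    host="$1"; port="$2"; tries="${3:-30}"
    while [ "$tries" -gt 0 ]; do
        if nc -z "$host" "$port" 2>/dev/null; then
            return 0            # socket is accepting connections
        fi
        tries=$((tries - 1))
        sleep 1
    done
    return 1                    # gave up waiting
}

# e.g. after `service mongodb start`:
# wait_for_socket localhost 27017 || echo "mongod never came up"
```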
[01:00] <jrwren> juju destroy-environment is my favorite command :)  it reminds me of kill -9 1
[01:01] <sarnold> hehe :)
[01:32] <davecheney> gary_poster: thank you thank you thank you for juju-gui-74
[01:55] <jrwren> trying this: https://juju.ubuntu.com/get-started/local/
[01:55] <jrwren> juju is trying to connect to local mongo on 37017 but mongo is listening on 27017 where is the right place to change it?
[02:01] <thumper> jrwren: which version are you using? 1.12? or 1.13.2?
[02:02] <thumper> jrwren: also, do you have root-dir specified in environments.yaml for the local provider?  if you do, comment it out
[02:02] <thumper> known bug fixed in the dev version
[02:03] <thumper> sudo juju destroy-environment
[02:03] <thumper> and try again
[02:13] <jrwren> 1.12
[02:13] <jrwren> no root-dir set
[02:13] <jrwren> i just changed mongodb.conf to listen on 37017 now bootstrap just hangs at opening state mongo.
[02:14] <jrwren> i don't get connection failed messages like I did before though :(
[02:15] <jrwren> oh... juju-db-$USER-local service.
[02:20] <jrwren> apparently I was impatient on waiting for mongo local to start
[03:13] <thumper> jrwren: I found on my SSD, mongodb starts up in about 2 seconds
[03:13] <thumper> but on another person's normal laptop, it took up to 30s
[03:13] <thumper> I was killing the bootstrap thinking it was a bug and hung
[03:13] <thumper> but no...
[03:18] <sarnold> 30s! ouch.
[03:41] <hazmat> thumper, this is with no prealloc option?
[04:03] <thumper> hazmat: NFI
[04:09] <davecheney> thumper: hazmat we found the same thing on azure
[04:10] <davecheney> hazmat: we already use no-prealloc
[04:10] <davecheney> maybe we're doing it wrong
[04:10]  * davecheney raises an issue
[04:10]  * davecheney lags
[04:12] <davecheney> https://bugs.launchpad.net/juju-core/+bug/1218176
[04:12] <_mup_> Bug #1218176: cmd/jujud: bootstrap may not properly configured mongodb to avoid preallocation <juju-core:Triaged> <https://launchpad.net/bugs/1218176>
[04:32] <kvt> davecheney azure has pretty slow disks.. possibly we also need --smallfiles
[04:32] <davecheney> kvt: the idea is it shouldn't allocate anything
[04:33] <davecheney> just move the file pointer to the end of the file, write a zero, and close the file
[04:33] <davecheney> i'd be quite confident that this is (yet another) mongo bug
[04:35] <kvt> davecheney possibly but i think it does need both
[04:35] <kvt> davecheney prealloc applies to the db file stores (if the fs supports it, it will do sparse allocation)
[04:35] <kvt> the smallfiles also adjusts the journal file size
[04:35] <davecheney> kvt: it's your bug if you want it
[04:35] <davecheney> i was going to flick it to my friends at 10gen and ask for advice
[04:36]  * kvt makes some notes
[04:37] <kvt> hmm.. from the bug it looks like we're already passing smallfiles
[04:55] <davecheney> yes
[04:55] <davecheney> i wonder if they are mutually exclusive
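The no-prealloc behaviour davecheney describes above ("move the file pointer to the end of the file, write a zero, and close the file") can be sketched with `dd`. This is just an illustration of sparse allocation, not what juju or mongo actually runs; the 100MB size and path are made up.

```shell
#!/bin/sh
# Seek to the desired end of the file, write a single zero byte, and
# close it. On a filesystem with sparse-file support this allocates
# almost no blocks despite the large apparent size.
SIZE=$((100 * 1024 * 1024))
dd if=/dev/zero of=/tmp/sparse.demo bs=1 count=1 seek=$((SIZE - 1)) 2>/dev/null

ls -l /tmp/sparse.demo    # apparent size: 100MB
du -k /tmp/sparse.demo    # blocks actually used: near zero on ext4/xfs
```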
[05:51] <melmoth__> anyone understand what the hacluster charm's corosync_pcm_ver option is for?
[05:51] <melmoth__> the install hook does not seem to change the way it installs things, just the way it starts them based on the value of this config
[08:53] <jaywink> hi, just installed stable juju-core, signed up to aws and bootstrapped. It didn't give any errors and I can see a running instance with AWS console. juju status however says nothing there. But new bootstrap attempt says there is an instance, see: http://pastebin.com/c8S7sfTD
[08:53] <jaywink> any ideas?
[09:10] <noodles775> jaywink: That sounds like the bootstrap instance hasn't gotten to the correct state. Does the AWS console tell you that everything is fine with the instance?
[09:11] <noodles775> jaywink: either way, a more consistent error message there would be helpful. I'll create a bug for it, unless you're already doing so?
[09:12] <jaywink> yeah afaik, first time with juju. same happened with local, bootstrap went fine, then said no instances.. unfortunately the verbose flag didn't bring any more errors than was in the pastebin
[09:12] <jaywink> I ended up already resetting everything and upgrading to devel ppa and now everything works, locally at least
[09:13] <jaywink> so not quite sure what to say in a bug :( should have initially bootstrapped the amazon instance with -v I guess
[09:15] <noodles775> jaywink: heh, sorry - I meant that juju providing a more consistent error message would be helpful :)
[09:17] <jaywink> noodles775, there is this, after I upgraded, before I did an rm -rf .juju and terminated the AWS instance, I ran juju stat: http://pastebin.com/eYWK9SqR .. should I file that as a bug?
[09:20] <noodles775> jaywink: I wouldn't think so - it looks like there's confusion over what environment (ec2) you're bootstrapped with. The earlier issue you had in stable is more worrying to me. I'll install stable and see if I can reproduce.
[09:22] <mgz> jaywink, noodles775: if bootstrap gives you an instance, but juju doesn't work, it's worth checking the console log/sshing in and looking at /var/log to find the underlying issue
[09:22] <jaywink> mgz, sorry, terminated the instance already - was a bit hasty I know .. :(
[09:24] <noodles775> mgz: here's what jaywink pasted before you joined: http://pastebin.com/c8S7sfTD
[09:27] <noodles775> jaywink: actually, your last paste confuses me a bit. It looks like you'd first had the old (python version) of juju installed (0.7), which isn't what you should have had from stable?
[09:28] <noodles775> jaywink: Now you should have the new version (juju-core), but I think you'll find that removing the dev PPA and just installing 'juju-core' from stable should work (https://juju.ubuntu.com/docs/getting-started.html )
[09:29] <mgz> thanks noodles775
[09:29] <jaywink> hmm good point, it's possible I installed some version a long time ago without trying it - is juju-core a new package that came after that?
[09:30] <noodles775> jaywink: Yeah, you need juju-core instead of juju (confusing I know :/ ).
[09:30] <jaywink> ok sorry guys for taking your time, should have cleaned up :) I wonder though when I activated juju/stable repo juju wasn't updated, but when I hit devel ppa it was updated
[09:31] <jaywink> could have been me though, I followed the tutorial and ran update && install as shown there .. I guess a dist-upgrade should have been done too
[09:32] <noodles775> jaywink: which tutorial? It may need updating (if it asked you to do 'apt-get install juju' instead of 'apt-get install juju-core'.
[09:33] <noodles775> jaywink: https://juju.ubuntu.com/docs/getting-started.html has the right info, afaict.
[09:35] <jaywink> yes it was juju-core, but since I had an old juju installed I guess I should have done upgrade too. Unless installing juju-core should have automatically done that
[10:27] <mthaddon> if I have two juju-core environments on the same provider, would/should they use the same public-bucket-url ?
[10:28] <mthaddon> specifically the AUTH_* part
[10:28] <mgz> mthaddon: potentially
[10:29] <mthaddon> mgz: what are the implications of sharing that amongst juju envs?
[10:29] <mgz> one of the main reasons for providing that config was so that someone else could upload the tools, then you could use those without remirroring from aws yourself
[10:30] <mgz> we're trying to move towards everyone using simplestreams, and cloud providers putting the simplestreams link in their identity service,
[10:30] <mthaddon> mgz: ah cool - so yeah, it sounds like in this case I do want to do that - as long as the control-bucket and admin-secret is unique per env things are okay, right?
[10:30] <mgz> yup.
[10:30] <mthaddon> thanks
[10:33] <mthaddon> mgz: hmm, I get https://pastebin.canonical.com/96563/
[10:36] <jaywink> sigh ... really stuck with bootstrapping successfully to local .. everything goes fine but services pending forever, over an hour. tried many times. juju debug-log just says "ssh: connect to host 10.0.3.1 port 22: Connection refused" :P
[10:38] <mthaddon> mgz: adding --upload-tools seems to do the trick (doesn't error anyway, will check status shortly)
[10:39] <mgz> mthaddon: geh, the list is breaking things
[10:39] <mgz> so, you don't want --upload-tools
[10:39] <mgz> you want `juju sync-tools` probably
[10:39] <mthaddon> mgz: can I undo --upload-tools?
[10:39] <mgz> unless you're really trying to use a locally built trunk version of juju for testing rather than a stable release
[10:39] <mgz> mthaddon: you can just nuke the container
[10:40] <mthaddon> mgz: we're not using a stable release because we need fixes from trunk - we're using 1.13.2-1~1670 (packaged)
[10:40] <mgz> mthaddon: the other option here is to generate a simplestreams file in one container which references the tools, and point at that. I'm not sure we have good instructions on how to do this yet though.
[10:42] <mgz> mthaddon: you may want `juju sync-tools --source DIR` then
[10:42] <mthaddon> what's DIR?
[10:42] <mgz> the directory where you have the 1.13.2-1~1670 binaries
[10:42] <mthaddon> and will that overwrite what's there now, or do I need to nuke the container first?
[10:43] <mgz> nuke it to be safe, `swift delete CONTAINER` should do it
[10:44] <mthaddon> what's the problem with having done --upload-tools? just want to make sure I understand what's going on here
[10:45] <mthaddon> (things seem to be working as expected in terms of the bootstrap node now responding to juju status okay)
[10:45] <mgz> --upload-tools is a development hack
[10:46] <mgz> what it does, mostly, is build the copy of juju in your local directory, and upload that
[10:47] <mgz> but, confusingly, it also currently has a hack that searches the PATH for a 'jujud' binary
[10:47] <mthaddon> in my case /usr/lib/juju-1.13.2/bin/jujud
[10:48] <mgz> which in this particular case, might do the same as using sync-tools there, but generally just breaks things in really confusing ways if you get in the habit of using it
[10:49] <mthaddon> breaks what kind of things? we've been doing --upload-tools for our envs so far and have production services running in them, so would like to know if we have a booby trap waiting for us
[10:49] <mgz> well, the most likely breakage is we just remove that bit of code, so then it starts complaining about not having the juju-core source and go compiler
[10:50] <mgz> the breakage devs normally hit is having multiple versions of juju around, and getting the wrong one uploaded
[10:50] <mthaddon> ok, so the --upload-tools command itself might break, but if it succeeds the env is okay
[10:51] <mgz> only if you're on a clean machine with only one jujud around
[10:51] <mgz> which should mostly be the case for you guys, but is always breaking us :)
[10:51] <mthaddon> we deploy from one server which will only ever have one version of juju-core installed (as far as I can tell)
[10:52] <mthaddon> thanks for the help
[10:54] <mgz> `juju sync-tools --source `which juju`` is the sane-ish equivalent, if you can start using that instead
[10:57] <mthaddon> mgz: error: unable to select source: specified source path is not a directory: /usr/lib/juju-1.13.2/bin/juju - dirname $(which juju) then says "no tools available"
[10:57] <mthaddon> ls /usr/lib/juju-1.13.2/bin
[10:57] <mthaddon> juju  jujud  juju-metadata
[10:59] <mgz> er... yes, you're better at shell than I :)
[10:59] <mthaddon> mgz: right, but now I'm getting "no tools available"
[10:59] <mgz> I'll investigate what's up and get back to you
[10:59] <mthaddon> cool, thanks
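For reference, the directory form of mgz's sync-tools suggestion above: `which juju` returns the binary's path, so it needs a `dirname` wrapper to become a directory. The fallback path matches mthaddon's install; whether sync-tools then finds tools in that directory is a separate question, as the "no tools available" error in the log shows.

```shell
#!/bin/sh
# Resolve the directory containing the juju binaries; the fallback
# path is illustrative, taken from mthaddon's machine above.
JUJU_BIN="$(which juju || echo /usr/lib/juju-1.13.2/bin/juju)"
TOOLS_DIR="$(dirname "$JUJU_BIN")"
echo "$TOOLS_DIR"
# then: juju sync-tools --source "$TOOLS_DIR"
```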
[11:02] <jaywink> anyone know why juju debug-log (and juju ssh 0) give "ssh: connect to host 10.0.3.1 port 22: Connection refused" after a successful local bootstrap? already tried shutting down ufw ... any tips would be greatly welcome .. running juju-core stable on raring
[11:24] <X-warrior> What is wrong with my configs file? http://pastebin.com/XeSKXzjb when I try to use, I get "command failed: unknown option "git-key"
[12:25] <X-warrior> Oh, I just found that the config.yaml should have options: at the beginning. So I updated it to http://pastebin.com/1jFpW26H but it still doesn't work :S
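For reference, a minimal charm config.yaml with the `options:` structure X-warrior is describing. The `git-key` name comes from the error message above; the type, default, and description here are illustrative guesses, not the contents of the paste.

```yaml
# Options must live under the top-level `options:` key, each with
# type/default/description. The git-key entry below is illustrative.
options:
  git-key:
    type: string
    default: ""
    description: Deploy key used to clone the application's git repository.
```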
[13:46] <X-warrior> what user runs the hooks?
[13:47] <marcoceppi> X-warrior: root
[13:48] <jasondotstar> noob here. I'm using ec2. is there a guide to ensuring that I'm using the free tier only?
[13:49] <jasondotstar> afraid i might see some hidden charges
[13:49] <jasondotstar> :-/
[13:49] <marcoceppi> jasondotstar: by default you get small instances
[13:50] <jasondotstar> marcoceppi as i understand, t1.micro is the free tier
[13:50] <marcoceppi> if you're using Ubuntu as your desktop though you can try juju using the local provider
[13:51] <marcoceppi> jasondotstar: you get 700 hours a month of free t1.micro on aws. however they are not recommended because they are severely underpowered. there is a way to use them though
[13:53] <jasondotstar> marcoceppi ok. so if I simply want to develop charms and use juju to test them, i can use the local provider, which i assume means the charms deploy to my own nodes instead of a specific cloud provider?
[13:54] <marcoceppi> jasondotstar: you can run `juju bootstrap --constraints "cpu-power=0 cpu-cores=0 mem=128"` to get micros with juju. however for developing you can use the local provider which turns your machine into a cloud using LXC
[13:54] <marcoceppi> and local provider is free :)
[13:56] <jasondotstar> marcoceppi right. found this: http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
[13:56] <jasondotstar> marcoceppi seems like a decent place to start.
[14:11] <X-warrior> I'm creating my hooks and there is some code that will be the same in more than one file; I would like to create a third script to hold this code. Should I add it to a subfolder inside hooks? Is there any specific name? Or can I just pick one?
[14:23] <marcoceppi> X-warrior: there's no real convention yet, we recommend putting it in a lib directory in the root of the charm (leaving just hooks to be in the hooks directory)
[14:23] <marcoceppi> but having common code shared via a common location (lib, etc) is a great way and one we consider a best practice for charms
[15:00] <kurt_> Hi All - anyone know why I would get "connection refused" from juju status, but can ssh to juju node direct just fine?
[15:00] <kurt_> http://pastebin.ubuntu.com/6040584/
[15:00] <kurt_> this is juju 1.12
[15:01] <mhall119> smoser: arosales: can you guys start filling in your track summary highlights for today's closing session: http://pad.ubuntu.com/uds-1308-track-summaries
[15:02] <arosales> mhall119, will do. what time does it need to be complete?
[15:02] <smoser> arosales, its for http://summit.ubuntu.com/uds-1308/meeting/21888/track-summaries/
[15:02] <smoser> (19:00 UTC)
[15:03] <smoser> jamespage, if you want to help, that'd be good too.
[15:03] <mhall119> arosales: the summary session is at 1900
[15:03]  * smoser didn't think about this.
[15:03] <arosales> mhall119, is the plan to have smoser and I present our respective tracks
[15:03] <mhall119> also we'll need one of you to be on the hangout
[15:03] <jamespage> smoser, I'll try to
[15:03] <X-warrior`> marcoceppi: nice. so if I would like to execute it from hook, could I execute 'sh ../lib/file' or does the hook execute from another 'context'?
[15:03] <arosales> smoser, jamespage: I'll coordinate with you on who does the presenting.
[15:04] <arosales> mhall119, thanks for the link
[15:04] <marcoceppi> X-warrior`: hooks are executed from $CHARM_DIR which is the root of the directory
[15:04] <marcoceppi> so you'd just `sh lib/file`
[15:04] <X-warrior`> oh sweet
[15:04] <X-warrior`> :D
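marcoceppi's point about `$CHARM_DIR` above can be sketched like this. It builds a throwaway charm tree and runs a hook that reaches `lib/` by a relative path; the file names and helper function are illustrative, not from any real charm.

```shell
#!/bin/sh
# Shared code lives in lib/, hooks in hooks/, and hooks run with
# $CHARM_DIR (the charm root) as the working directory, so relative
# paths like lib/common.sh resolve from the charm root.
set -e
CHARM_DIR="$(mktemp -d)"
mkdir -p "$CHARM_DIR/hooks" "$CHARM_DIR/lib"

cat > "$CHARM_DIR/lib/common.sh" <<'EOF'
log_msg() { echo "common: $1"; }
EOF

cat > "$CHARM_DIR/hooks/install" <<'EOF'
#!/bin/sh
# Relative path works because hooks execute from the charm root.
. lib/common.sh
log_msg "install hook ran"
EOF
chmod +x "$CHARM_DIR/hooks/install"

( cd "$CHARM_DIR" && hooks/install )   # prints: common: install hook ran
```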
[15:26] <arosales> kirkland, hazmat, gary_poster, would love to hear from you guys and really anyone on http://summit.ubuntu.com/uds-1308/meeting/21899/servercloud-s-juju-new-user-ux/
[15:28] <arosales> If anyone has any feedback on getting started with Juju, from the web site to charm authoring please join the uds session: http://summit.ubuntu.com/uds-1308/meeting/21899/servercloud-s-juju-new-user-ux/
[15:30] <arosales> we'll post the hangout url in #ubuntu-uds-servercloud-2 channel, hope to see some folks there
[15:34] <gary_poster> arosales, how do I join hangout?
[15:35] <arosales> gary_poster, be in  #ubuntu-uds-servercloud-2 and i'll post the hangout url shortly
[15:36] <gary_poster> cool arosales thx, there now
[15:36] <arosales> gary_poster, thank you
[16:20] <X-warrior`> Is it normal for bootstrap logging to grow to 3.3GB of logs in less than a week?
[16:55] <sarnold> X-warrior`: I do recall seeing something that said logs were never rotated and running out of disk space was a real problem. I don't know if that's been addressed yet.
[16:55] <X-warrior`> sarnold: uhmm
[17:38] <weblife> Is Juan Negron in here?
[17:42] <jamespage> negronjl, ^^
[17:42] <weblife> negronjl: I am not that good at reading python.  Have you made an option for mongorestore in the mongodb charm?
[17:42] <weblife> jamespage:  thanks I just looked him up on launchpad
[17:52] <mhall119> jamespage: smoser: arosales: which of you is going to give the cloud track summary today?
[17:52] <weblife> negronjl: I am trying to see if I could send a mongodump on a joined relationship to the mongodb instance.  My charm spins up its own mongodb service until one has joined.  I also want to send mongodump files over periodically to my charm in case the mongodb instance crashes.  All of this is to minimize instances for the sake of cash savings.
[17:52] <arosales> mhall119, I am
[17:52] <mhall119> thanks arosales
[17:53] <arosales> mhall119, sure, np.
[17:53]  * smoser goes to sendbeertoarosales.com
[17:53] <arosales> smoser, +1 and I owe you a few too
[17:54] <weblife> lol
[17:54] <kurt_> That was a great session "Amazing First 30 min Juju Experience" - good discussion between you guys
[17:55] <marcoceppi> weblife: he's in Japan ATM, might not reply right away
[17:57] <weblife> marcoceppi: Thanks, I will email him
[18:51] <adam_g> how do i sync in juju-core tools into a firewalled MAAS cluster?
[19:01] <jcastro> 2 new bounties on AU for those who can answer these questions
[19:01] <jcastro> http://askubuntu.com/questions/335720/agent-state-info-hook-failed-config-changed-deploy-wordpress-using-juju
[19:02] <jcastro> http://askubuntu.com/questions/337075/how-can-i-expose-icmp-ports-in-a-hook
[20:17] <kurt_> Any idea why I cannot "juju ssh 0" in 1.12?  "juju status" also gives "connection refused" Time is in sync, but in UTC on charm node.  This worked previously.  verbose output: http://pastebin.ubuntu.com/6041608/
[20:18] <kurt_> And yes, I should have the right mongodb installed
[20:41] <weblife> kurt_: maybe you are missing '\': "juju ssh \0"
[20:41] <weblife> thats what i have to do when I ssh into bootstrap
[20:43] <weblife> or perhaps your bootstrap failed
[20:44] <marcoceppi> weblife: if you can't get status to give you back information, juju ssh won't work (it has to query juju status to get the address for the 0 machine)
[20:45] <marcoceppi> err kurt_ ^
[20:45] <kurt_> marcoceppi: I was just putting this all in to ask ubuntu
[20:45] <kurt_> do you have an idea already?
[20:46] <marcoceppi> kurt_: nope, stick it in ask ubuntu and I can take a look at it later
[20:46] <weblife> marcoceppi: yeah, bad suggestion from me.  I only read the last part of his message after I responded
[20:46] <kurt_> Ok
[20:46] <marcoceppi> kurt_: but it sounds like either you can't reach that ip address (can you ping it?), or mongodb didn't start up
[20:46] <marcoceppi> kurt_: you can try to just `ssh ubuntu@172.16.118.12`
[20:47] <marcoceppi> some more stuff to try to ask ubuntu
[20:47] <kurt_> marcoceppi: that works fine
[20:48] <kurt_> marcoceppi: are you aware of any bugs around mongodb not starting correctly on node reboot?
[20:48] <marcoceppi> kurt_: can you check that the two upstart jobs, which start with juju-, are running (`initctl list | grep juju`)
[20:48] <marcoceppi> kurt_: it's possible
[20:49] <kurt_> is this what you are referring to? juju-db stop/waiting
[20:52] <marcoceppi> kurt_: yeah, start that
[20:53] <marcoceppi> there should be another job, I think, that starts with juju
[20:53]  * marcoceppi checks
[20:53] <kurt_> uh oh
[20:53] <kurt_> marcoceppi: Thu Aug 29 20:52:56 [initandlisten] ERROR: Insufficient free space for journal files
[20:53] <marcoceppi> kurt_: df -h should help you out :)
[20:53] <kurt_> yeah, just did that
[20:53] <kurt_> how much space does the node need?? LOL
[20:53] <marcoceppi> kurt_: out of disk space?
[20:54] <kurt_> si
[20:54] <marcoceppi> kurt_: this might be a problem with logs not rotating
[20:54] <kurt_> '/dev/sda1        18G   17G     0 100% /'
[20:54] <marcoceppi> if you track down the large files, and they're juju related, please open a bug
[20:54] <marcoceppi> we had an issue like this back in juju 0.7 where it just would eat disk space via logs not being rotated
[20:54] <marcoceppi> I figured this was fixed, but it might not be yet
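Until juju rotates its own logs, one hypothetical stopgap is a logrotate stanza for the file kurt_ found below. The /var/log/juju path and the size and rotation counts here are assumptions for illustration, not something juju ships:

```
# Hypothetical /etc/logrotate.d/juju-all-machines stanza; path and
# sizes are illustrative. copytruncate avoids restarting the writer.
/var/log/juju/all-machines.log {
    size 100M
    rotate 5
    compress
    copytruncate
    missingok
}
```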
[20:55] <marcoceppi> kurt_: to answer your question, yes nodes are designed to survive reboot
[20:55] <kurt_> marcoceppi: thanks.
[20:55] <kurt_> -rw-r-----  1 syslog adm  13047238656 Aug 29 20:55 all-machines.log
[20:56] <kurt_> :D
[20:56] <kurt_> that's 12 gigs of space
[20:56] <kurt_> wtf
[20:57] <kurt_> ROFL
[20:57] <jcastro> stay classy juju logger!
[20:57] <kurt_> sorry sir
[20:58] <jcastro> this feels like a bug
[20:58] <jcastro> it should never do that crap
[20:58] <kurt_> I will log it
[20:58] <kurt_> file a bug on it
[20:58] <kurt_> #2 today for me
[20:58] <kurt_> I'm on a roll
[20:58] <jcastro> \o/
[20:59] <jcastro> keep em coming
[20:59] <kurt_> Dave is going to get to know my name
[21:00] <jcastro> mail me your tshirt size and address, it's about time we send you something! jorge@ubuntu.com
[21:04] <weblife> lol. Stay classy
[21:05] <kurt_> Ok, Bug #1218616 filed for your viewing pleasure.
[21:05] <_mup_> Bug #1218616: all-machines.log is oversized on juju node <juju-core> <juju-core:New> <https://launchpad.net/bugs/1218616>
[21:12] <weblife> jcastro: I would like to see a charm school that focuses more on relationships. Just FYI.
[21:17] <weblife> jcastro: Of course I just started watching the Best Practices video.  It sounds like this might be covered.
[21:30] <kurt_> I was just thinking about what weblife was thinking about, in a different way
[21:31] <kurt_> juju-gui should have a way to show how its deployed
[21:32] <kurt_> with juju-1.x - since I can deploy charms/services on the same node, the gui should show me in logical form all services running on a node and how they are related
[21:33] <kurt_> or at least give me the ability to arbitrarily draw a circle or box around particular charms
[21:39] <kurt_> Is the solution to send logging to /dev/null like the user suggests in https://bugs.launchpad.net/juju-core/+bug/1218199 ? That appears to spike the cpu to 100%
[21:39] <_mup_> Bug #1218199: [MAAS] Deploy a charm into machine 0 = log loop <debug-log> <maas> <merge> <juju-core:New> <https://launchpad.net/bugs/1218199>
[23:41] <weblife> if I am running "mongodb-10gen" (the mongodb-recommended deb package), will this be a problem for juju-local?
[23:42] <thumper> maybe
[23:42] <thumper> juju needs an ssl enabled mongodb
[23:42] <thumper> it will fail to connect if this isn't enabled
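A rough external check of thumper's SSL requirement, assuming `openssl` is installed: try a TLS handshake against mongod's port. Port 37017 is juju's local mongo port from earlier in the log; adjust for your setup.

```shell
#!/bin/sh
# A plain (non-SSL) mongod will not complete a TLS handshake, so a
# returned certificate suggests the SSL-enabled build juju needs.
if echo | openssl s_client -connect localhost:37017 2>/dev/null \
        | grep -q 'BEGIN CERTIFICATE'; then
    echo "mongod appears to be SSL-enabled"
else
    echo "no TLS handshake: juju will fail to connect"
fi
```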
[23:49] <jcastro> weblife: for sure we can do that.