[01:24] <lazypower> negronjl: I'm not sure that just stripping the configuration flags is the proper route to go with the modifications I filed. Do you want to retain MMS support in the mongodb charm, or does it make sense to break it into another charm, possibly a subordinate, now that the MMS service is a python application?
[01:24] <lazypower> I'm fine with either method, and would love to contribute to the effort of integration.
[01:56] <negronjl> lazypower, one option would be for you to make the subordinate charm and we can then see about taking the functionality out of mongodb
[02:27] <lazypower> negronjl: Challenge accepted
[02:27] <negronjl> lazypower, rofl .. cool!
[02:28] <lazypower> Let me get some other high priority things out of my queue and I'll get a subordinate prototype in your hands soon
[02:54] <lazypower> I'm working on charming up the Errbit exception catcher - It's going to provide an Airbrake interface, the way it's laid out in my head. I'm a bit confused whether it should be a provides: relationship, or a peer relationship.
[02:55] <lazypower> I initially called it peer, but at the end of the day other charms will have to require it to hook into the server stack (it provides api keys and other fun configuration magic) so - it should probably be a provides: and requires: <optional> on the related charm, yes? or am I misunderstanding the relationship categories?
[02:57] <negronjl> lazypower, If I understand your reasoning correctly, then your theory is correct :)
[02:59] <lazypower> Scope creep like a pro. Love it
[09:49] <Felipe_C> Hi All, Could anyone confirm what JUJU - Manual Provisioning is? Will that give me the possibility of using, for instance, a VPS ( purchased from whatever provider but running Ubuntu)  and deploying services there using only my SSH root account?
[09:52] <axw_> Felipe_C: that's correct. It's not quite ready yet, but that is one use case for it
[09:54] <Felipe_C> Thanks axw_ - Would I assume correctly that this, in itself, could potentially be the single most valuable tool for (not fully cloudfied) web developers, server admins, etc?
[09:57] <axw_> Felipe_C: we Juju developers like to think so :)
[09:57] <axw_> it certainly should take a lot of the pain out of deploying apps to VPSes
[10:00] <Felipe_C> axw_ - with the added benefit of being burstable to a cloud of your choice, such as AWS - provided of course, it allows easy colocation of services on the same machine, which currently is only available using the command line - am I right?
[10:01] <axw_> Felipe_C: right, AFAIK, the GUI does not support "placement directives" (--to on the CLI)
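For reference, a sketch of the placement directives being discussed (service names and machine numbers are examples; this needs a bootstrapped environment, and per axw_ the GUI has no equivalent):

```shell
juju deploy mysql                 # gets its own machine, say machine 1
juju deploy wordpress --to 1      # colocate on machine 1
juju deploy nagios --to lxc:1     # or isolate it in an LXC container there
```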
[10:07] <oatman> morning! I have a mongo error when bootstrapping juju, could someone help me with this?
[10:08] <oatman> sudo juju bootstrap
[10:08] <oatman> ERROR unauthorized: cannot create log collection: unauthorized mongo access: unauthorized db:juju ns:juju lock type:1 client:127.0.0.1
[10:10] <oatman> huh, turns out my mongodb server hadn't come up at boot time, starting it fixed it! Odd error though, for no db connection
[10:11] <oatman> wait
[10:11] <oatman> that doesn't fix it
[10:11] <oatman> ...
[10:21] <mthaddon> mgz: you seen oatman's problem before? ^ (local provider)
[10:22] <mgz> oatman, mthaddon: doesn't look familiar
[10:22] <oatman> mgz, do you think I could wipe my mongo setup?
[10:24] <mgz> we should be spinning up a separate mongo, but it's possible if you had existing mongo config it would confuse the local provider
[10:24] <mgz> purging the package then reinstalling it might be a big-hammer option
[10:30] <oatman> mgz, I think my mongo is crashing, every time I sudo start mongodb, I get a different pid
[10:32] <mgz> oatman: check the logs, they're er... somewhere
[10:35] <oatman> heh
[10:35] <oatman> huh, nothing in them
[10:35] <oatman> wc /var/log/mongodb/mongodb.log
[10:35] <oatman> 0 0 0 /var/log/mongodb/mongodb.log
[10:36] <oatman> how on earth did my mongo server get broken while my machine was off during the night?!
[10:51] <jam> oatman, mgz: Are you trying to start a mongodb for Juju or you have mongodb in your system and you also have a juju local environment?
[10:52] <jam> I've heard there is a bug with mongodb, in that if you start a juju local instance
[10:52] <jam> it also starts a 'mongod' process
[10:52] <jam> and then the global mongodb says "oh, I'm already running, I'll just exit"
[10:52] <oatman> jam, thanks for helping, I'm just using system mongo
[10:52] <jam> even though we're trying to have 2 mongods running
[10:52] <oatman> I think you're right
[10:52] <oatman> ps -e | grep mongo
[10:52] <oatman>  1190 ?        00:00:15 mongod
[10:52] <oatman> (fenchurch2)➜  ~VIRTUAL_ENV  status mongodb
[10:52] <oatman> mongodb stop/waiting
[10:53] <oatman> I'll nuke juju's mongo and spin mine up?
[10:53] <jam> status juju-db
[10:53] <jam> ?
[10:53] <jam> oatman: you should be able to "stop juju-db" and then "start mongodb"
[10:53] <oatman> status: Unknown job: juju-db
[10:53] <oatman> ^
[10:53] <jam> oatman: if this is local provider, then I think we name it differently, 1 sec
[10:53] <oatman> yes, lxc
[10:54] <jam> oatman: juju-db-$USERNAME-local
[10:54] <jam> so "stop juju-db-oatman-local" for example
[10:54] <oatman> status juju-db-oatman-local
[10:54] <oatman> juju-db-oatman-local stop/waiting
[10:55] <oatman> sudo stop juju-db-oatman-local
[10:55] <oatman> [sudo] password for oatman:
[10:55] <oatman> stop: Unknown instance:
[10:55] <jam> oatman: what does psgrep mongod give you?
[10:55] <oatman> I'm not familiar with that command
[10:55] <oatman> ps -e | grep mongod
[10:55] <oatman>  1190 ?        00:00:17 mongod
[10:56] <jam> oatman: so you do have a mongod running, if you use ps -efwww you can probably figure out who started it and who its running for
[10:57] <oatman> yep, it's juju
[10:57] <oatman> ps -efwww  | grep mongod
[10:57] <oatman> root      1190     1  1 10:28 ?        00:00:18 /usr/bin/mongod --auth --dbpath=/home/oatman/.juju/local/db --sslOnNormalPorts --sslPEMKeyFile /home/oatman/.juju/local/server.pem --sslPEMKeyPassword xxxxxxx --bind_ip 0.0.0.0 --port 37017 --noprealloc --syslog --smallfiles
[10:58] <jam> oatman: so I'm surprised juju-db-oatman-local doesn't realize it is running, but you can try just "sudo kill 1190" and see if that lets you start your normal mongodb
[10:59] <oatman> yep, that low-tech method allowed me to start my local mongo
[11:00] <oatman> lets see if that fixes my juju
[11:02] <oatman> jam, mgz killing the rogue juju mongo process fixed it :-)
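The diagnosis jam walked through can be sketched as below. The inspection line is runnable anywhere; the recovery commands assume upstart-era Ubuntu and the local provider's job naming, with `<pid>` a placeholder:

```shell
# List any running mongod with its full command line. The juju local
# provider's copy is recognizable by --dbpath=$HOME/.juju/local/db and
# --port 37017; the system service uses /var/lib/mongodb and port 27017.
ps -efwww | grep '[m]ongod' || echo "no mongod running"

# Recovery, as above (job name embeds $USER; the bare kill is the
# low-tech fallback if upstart lost track of the process):
#   sudo stop juju-db-$USER-local
#   sudo kill <pid>
#   sudo start mongodb
```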
[11:06] <mgz> oatman: ace
[11:31] <noodles775> I'm unable to ssh/debug-hooks - it instead seems to look up the address for the unit indefinitely: http://paste.ubuntu.com/6560877/, anyone able to help?
[11:34] <ashipika> hi.. can somebody please explain relation hook names? i.e. if in the metadata.yaml i have
[11:34] <ashipika> peers:
[11:34] <ashipika>  peers-relation:
[11:34] <ashipika>    interface: somename
[11:34] <ashipika> which hooks need to be implemented?
[11:56] <noodles775> fwiw, I can't say it's related, but I'm now able to ssh/debug-hooks with the unit above after switching environments, exiting the debug-hooks session I had going in the other (openstack) environment, switching back to my local environment and running juju debug-hooks...
[11:57] <noodles775> ashipika: I've not yet worked with peer-relations, and right, the docs seem a bit sparse in that respect.
[11:58] <bloodearnest> ashipika: peers-relation is the name in your charm of the relation. So you want to add peers-relation-relation-joined, for example (and also probably not include 'relation' in the relation name :)
[11:58] <ashipika> noodles775: thnx for reply! i kind of figured it out.. but i still need to test it, which should be.. interesting :)
[11:58] <ashipika> bloodearnest: yes.. figured it out :) my mistake.. *dumbdumb*
[11:58] <bloodearnest> noodles775: are there yet any semantics to required/provides/peer relation types?
[11:58] <ashipika> docs could use a bit of polishing :)
[11:59] <noodles775> bloodearnest: idk - just reading up on peer relations.
[11:59] <bloodearnest> ashipika: naming relations is hard (2 hard problems, etc) - my advice is to be very specific
[12:00] <jam> noodles775: I actually thought "juju debug-hooks" didn't work at all in a local environment, because we don't have proper IP address detection yet
[12:00] <bloodearnest> e.g. we have some relations called cached-website and some called website-cache, difficult to clearly remember which way round they are
[12:00] <noodles775> jam: works well for me :-) Soo much faster.
[12:02] <ashipika> bloodearnest: but i.e. peer_ip=`relation-get ip` should work in the relation-changed hook, correct? given that the relation-joined hook says relation-set ip=$IP
[12:03] <bloodearnest> ashipika: yes, that should work in both -joined and -changed hooks AFAIK. But not sure about -broken or -departed
[12:03] <ashipika> bloodearnest: thnx.. trying to figure it out one step at a time :)
[12:04] <bloodearnest> ashipika: ah wait, I misunderstood your question
[12:05] <bloodearnest> ashipika: you do "relation-set ip=XXX" in -joined, and you want to do "relation-get ip" in changed? Not sure that will work.
[12:06] <bloodearnest> but it might :(
[12:06] <bloodearnest> :)
[12:06] <ashipika> bloodearnest: then who sets these variables?
[12:06] <ashipika> auto-juju-magic? :D
[12:06] <bloodearnest> ashipika: the other side of the relation, usually :)
[12:06] <bloodearnest> ashipika: ip address is automatically provided by juju for example
[12:07] <bloodearnest> ashipika: you shouldn't have to set it
[12:07] <ashipika> ip is just an example.. any variable.. i suppose relation-set should be used somewhere.. if i interpreted it correctly it should be in the relation-joined hook?
[12:20] <bloodearnest> ashipika: yes, joined and changed usually (can often be identical)
[12:20] <bloodearnest> ashipika: relation-set is for sending data to the other side of the relation. So one peer would set some value to other peers
[12:25] <ashipika> bloodearnest: peer relations.. are these supposed to be relation between two units of the same service?
[12:25] <bloodearnest> ashipika: I believe so
[12:26] <ashipika> bloodearnest: so i guess, both units will run both hooks: joined and changed.. am i correct?
[12:26] <bloodearnest> ashipika: the squid charm uses them for cache peering, for example
[12:26] <bloodearnest> ashipika: yes
[12:26] <ashipika> excellent!
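A runnable sketch of the pattern bloodearnest describes, assuming a peer relation named `cluster` in metadata.yaml (the relation name and the `port` key are illustrative). Inside a real unit juju provides relation-set/relation-get; the stubs here only allow a local dry-run of the hook logic:

```shell
#!/bin/sh
# Stub juju's hook tools when running outside a unit (dry-run only):
if ! command -v relation-get >/dev/null 2>&1; then
  relation-set() { echo "would set: $*"; }
  relation-get() { echo "stub-$1"; }
fi

# hooks/cluster-relation-joined: publish this unit's data for its peers.
# (private-address is set by juju automatically; "port" is an example key)
relation-set port=8080

# hooks/cluster-relation-changed: read what the remote peer published.
peer_addr=$(relation-get private-address)
peer_port=$(relation-get port)
echo "peer at $peer_addr:$peer_port"
```

Every peer unit runs both hooks: -joined fires once per peer that joins, and -changed fires whenever a peer publishes data with relation-set.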
[13:17] <ashipika> why do i get
[13:17] <ashipika> Permission denied (publickey, password)
[13:17] <ashipika> ERROR exit status 255
[13:17] <ashipika> when i try juju debug-log?
[13:28] <marcoceppi> ashipika: because there was a problem uploading your ssh key
[13:30] <marcoceppi> Do you have an ssh key generated?
[13:30] <ashipika> user's ssh key?
[13:30] <marcoceppi> ashipika: your ssh key
[13:30] <marcoceppi> ashipika: do you have one?
[13:30] <ashipika> yes
[13:31] <marcoceppi> ashipika: can you `juju ssh 0`
[13:32] <ashipika> nope :(
[13:32] <ashipika> same error
[13:38] <ashipika> is there a requirement on permissions to my ssh keys?
[13:41] <marcoceppi> ashipika: wait, do you have your .pub as well?
[13:41] <marcoceppi> or just the private key
[13:42] <ashipika> .ssh/id_rsa.pub?
[13:42] <ashipika> .ssh contains id_rsa (private) and id_rsa.pub (public)
[13:43] <marcoceppi> ashipika: huh, yeah that's what you need.
[13:43] <marcoceppi> .pub should be 0644
[13:43] <ashipika> both have permission 600
[13:43] <marcoceppi> eh, 600 should be okay too
[13:43] <ashipika> aaah.. ok ok
[13:43] <marcoceppi> ashipika: are you using local provider?
[13:43] <ashipika> null
[13:43] <ashipika> manual
[14:04] <ashipika> one question: in juju status it says cpu-cores=1 even on a multi-core machine..
[14:37] <marcoceppi> jcastro: we need a video on troubleshooting and debugging charms
[14:37] <marcoceppi> ie: juju debug-hooks
[14:48] <jcastro> marcoceppi, ok
[14:48] <jcastro> wanna make one like tomorrow or something?
[14:48] <marcoceppi> jcastro: yeah, just like a quick 10-15 min video
[14:49] <marcoceppi> juju debug-hooks, juju ssh + tailing logs, juju debug-log
[14:51] <jcastro> ok we'll do it tomorrow
[14:52] <jcastro> maybe bust out two or so?
[14:52] <marcoceppi> jcastro: yeah, sounds good
[14:52] <ashipika> any clues as to why juju ssh 0 would not work?
[14:52] <ashipika> or where to start debugging it?
[14:52] <ashipika> (manual provisioning)
[14:55] <marcoceppi> ashipika: if you're getting an ssh key error, it's because your ssh key wasn't copied during enlistment. Manual provider is still under development
[14:57] <ashipika> ok.. thnx
[16:06] <lazypower> Does the EC2 side of root-disk constraints populate in EBS flavor? I've added --constraint "root-disk=50G" to my provisioning request, and the root disk is still at 8gb, i thought maybe it went into the /mnt directory, but that's provisioned with 318GB of ephemeral storage. In either case, it doesn't appear to have taken hold.
[16:19] <marcoceppi> natefinch: Does the EC2 side of root-disk constraints populate in EBS flavor? I've added --constraint "root-disk=50G" to my provisioning request, and the root disk is still at 8gb, i thought maybe it went into the /mnt directory, but that's provisioned with 318GB of ephemeral storage. In either case, it doesn't appear to have taken hold.
[16:19] <marcoceppi> per lazypower
[16:21] <natefinch> it should adjust the root (i.e. /) partition. What version of Juju?
[16:23] <mgz> natefinch: it probably doesn't work with EBS
[16:23] <mgz> the ec2 provider hardcodes /dev/sda1
[16:23] <natefinch> mgz, right, no it doesn't.
[16:23] <mgz> and I'm pretty sure the root device comes up on something else when using EBS
[16:24] <mgz> lazypower: ^
[16:24] <natefinch> sorry, I neglected to pay attention to that part of the question, my bad.
[16:42] <lazypower> Hmm... interesting
[16:42] <lazypower> 1.16.4-unknown-amd64 is the version of juju i'm running. Latest package in BREW
[16:43] <marcoceppi> lazypower: 1.16.5 was released a wee bit ago, brew should have been updated but there may have been a snag
[16:43] <lazypower> Let me run another update, make sure i am indeed on -HEAD
[16:44] <lazypower> Here comes 1.16.5 - it made it in.
[16:45] <mgz> lazypower: if the issue is what it appears to be, nothing has changed in 1.16.5
[16:45] <mgz> if you can avoid using an EBS image, everything should be fine
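For reference, the constraint syntax in question (values and service names are examples). As mgz notes, the ec2 provider hardcodes /dev/sda1, so on an EBS-backed root the resize may silently not happen:

```shell
juju bootstrap --constraints "root-disk=50G"
juju deploy mongodb --constraints "root-disk=50G"
# then check what actually materialized on the machine itself:
juju ssh mongodb/0 'df -h /'
```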
[16:46] <lazypower> Well my concern is the volume for my MongoDB BSON data globs. Those need to reside on a disk larger than 8gb. Let me poke around in the charm store and see if there's a related charm for mapping extra disk space.
[16:46] <lazypower> I dont like the idea of having it live in ephemeral storage. If the box reboots, poof goes the data
[16:46] <mgz> there are certainly charms that add extra (persistent) storage through hacks for use with dbs etc
[16:47] <mgz> I'm not sure of a good one to point you at, the ones I've seen are pretty hairy
[16:47] <lazypower> Yeah thats why I avoided them initially
[16:47] <lazypower> So i completely understand where you're coming from there.
[16:47] <mgz> marcoceppi: have we got any charms/helpers that do sane things with persistent storage?
[16:48] <lazypower> Ideally i can do this manually - but when someone comes behind me to modify this tech stack, I'd like it to be as straightforward as possible. If there are hidden things that weren't performed by juju, chances are they will assume it comes that way out of the box.
[16:48] <marcoceppi> mgz: not really, we don't have a consistent persistent storage story within juju. The best we have is using something like NFS or gluster to string servers together as storage
[16:49] <jcastro> marcoceppi, when doing these README updates, should I care about line wrapping at 80 or not?
[16:49] <marcoceppi> jcastro: no, it gets parsed either way
[17:00] <marcoceppi> jcastro: I just noticed on the merges
[17:00] <marcoceppi> why are you using h2 instead of h1 for headers in the README?
[17:00] <marcoceppi> (## vs #)
[17:04] <jcastro> # Looks too big
[17:04] <jcastro> and the title on the GUI is like the H1
[17:05] <lazypower> I did the same thing Jorge. Using h2's for my sub-headers and h1 for the charm title
[17:05] <jcastro> like if you move to the other tabs in the GUI
[17:05] <jcastro> for the non readme sections, they're H2ish
[17:06] <marcoceppi> jcastro: cool
[17:06] <jcastro> marcoceppi, when doing the audit on mysql I get this
[17:06] <jcastro> W: config.yaml: option vip does not have the keys: default
[17:06] <jcastro> I can probably fix that by setting the proper key right?
[17:07] <marcoceppi> jcastro: yeah, vip needs a default key in the config.yaml, and it needs to be set to empty string ""
[17:07] <jcastro> ok
[17:07] <jcastro> since that changes something that isn't a readme I will MP that one
[17:07] <marcoceppi> jcastro: ack, thanks
[17:08] <marcoceppi> jcastro: that'll allow me to test to make sure "" is acceptable as a default
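A sketch of the fix being described, assuming the standard config.yaml option layout (the description text is illustrative): giving `vip` an explicit empty-string default clears the audit warning.

```yaml
options:
  vip:
    type: string
    default: ""
    description: Virtual IP to use in an HA setup (empty string disables it).
```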
[17:10] <jcastro> I think it's '' isn't it?
[17:10] <jcastro> hmm, this file seems to use " and ' interchangeably
[17:12] <marcoceppi> jcastro: it's interchangeable
[17:12] <jcastro> ok
[17:13] <jcastro> I wonder if we should standardize?
[17:13] <jcastro> at least within the same file?
[17:16] <marcoceppi> jcastro: we should have the GUI render the README's h1 tags smaller
[17:16] <marcoceppi> IMO
[17:17] <jcastro> yeah it wasn't something I did on purpose
[17:17] <jcastro> it was just "oh, the readme will be part of a bigger page, so start as a subheading"
[17:17] <marcoceppi> jcastro: meh, doesn't matter either way
[17:17] <jcastro> well, decide now
[17:17] <jcastro> because I've done 2
[17:18] <marcoceppi> jcastro: Okay, well we can never assume the readme is going to be part of a larger page, it's its own document and should be formatted as such
[17:18] <jcastro> also, am I going crazy or was `juju add readme` real?
[17:18] <jcastro> ok, I'll update the 2 charms now then
[17:18] <marcoceppi> everything else should bend to its will
[17:18] <marcoceppi> jcastro: it's being added, juju charm add readme
[17:18] <marcoceppi> 1.2.4
[17:19] <marcoceppi> jcastro: I'll patch the README too, no need to MR
[17:19] <jcastro> the template you mean?
[17:19] <jcastro> ok I'll do mysql and ubuntu
[17:20] <marcoceppi> jcastro: yeah, the template
[17:20] <marcoceppi> since I'm already there rummaging around
[17:24] <jcastro> indeed
[17:28] <mxc> so i finally "solved" my azure issue... migrated to AWS
[17:29] <mxc> but, quick question about configuration, shouldn't:
[17:29] <mxc> > juju get mongodb
[17:29] <mxc> ERROR service "mongodb" not found
[17:29] <mxc> show me the config file?
[17:30] <marcoceppi> mxc: yes, is mongodb deployed and in juju status?
[17:30] <marcoceppi> jcastro: 1.2.4 charm-tools uploaded to ppa, should be built soon
[17:30] <mxc> ahh, no, not yet
[17:30] <mxc> I was trying to get the scaffolded config file
[17:30] <mxc> thanks marcoceppi
[17:31] <marcoceppi> mxc: yeah, that queries the deployment, to get the config you'll need to either download the charm or look online
[17:31]  * marcoceppi makes a bug to add juju charm config <charm>
[17:31] <mxc> ok, got it.
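Until that lands, a workaround sketch (charm and branch names are examples): since `juju get` only queries a deployed service, fetch the charm source and read its config.yaml directly:

```shell
bzr branch lp:charms/mongodb      # grab the charm source
cat mongodb/config.yaml           # inspect its options before deploying
```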
[17:32] <jcastro> hey
[17:32] <jcastro> that sounds like an awesome idea though
[17:32] <mxc> may be a good idea to add a note about that in the docs..
[17:32] <jcastro> get config options even if it isn't deployed
[17:32] <jcastro> juju info config
[17:32] <jcastro> juju info readme
[17:32] <jcastro> etc.
[17:32] <mxc> please and thank you
[17:32] <jcastro> sorry, I mean charm info config, charm  info readme
[17:33] <mxc> if i wanted to work on that, i'd need to create a launchpad account, install bzr, and relearn bzr, right
[17:33] <marcoceppi> jcastro: yeah, created this: https://bugs.launchpad.net/charm-tools/+bug/1260419
[17:33] <_mup_> Bug #1260419: add config key for listing configuration of a charm <Juju Charm Tools:Triaged by marcoceppi> <https://launchpad.net/bugs/1260419>
[17:33] <marcoceppi> mxc: for the most part, yes
[17:33] <jcastro> yeah that way you can read up on the options, since you have to wait for the instance to fire up anyway
[17:34] <marcoceppi> jcastro: def, probably going to make that a 1.3.0 feature
[17:34] <mxc> part of me wishes canonical would surrender and move to github
[17:34] <marcoceppi> since juju charm info <charm> already exists, I didn't want to make juju charm info readme <>
[17:34] <marcoceppi> too much typing
[17:35] <marcoceppi> mxc: I used to mirror charm-tools and amulet on github, but it's too much work to be able to accept merge proposals from both sources :\
[17:35] <jcastro> we have mirrors of the charms in github currently
[17:36] <mxc> marcoceppi: i know..  trying to have it both ways is a monstrous pain in the ass..  i just have this hopeless hope that shuttleworth gives up and moves everything to github
[18:06] <mxc> is there a way to use a file besides .juju/environments.yaml to bootstrap?
[18:11] <natefinch> mxc: no
[18:12] <mxc> ok, thanks
[18:28] <jcastro> hey marcoceppi
[18:29] <jcastro> so I am unofficially going to also consider warnings as bugs as part of the audit
[18:29] <jcastro> if I'm going to go through each one, might as well get bang for the buck
[18:36] <marcoceppi> jcastro: yeah, warnings are bugs
[18:36] <marcoceppi> errors means they shouldn't even be in the store
[18:37] <jcastro> ack
[19:36] <marcoceppi> jcastro: 1.2.4 has landed, feel free to update
[19:36] <jcastro> ON IT
[19:39] <jcastro> arosales, https://pastebin.canonical.com/101920/
[19:39] <arosales> jcastro, thanks
[19:40] <jcastro> bbiab, new kernel
[19:51] <marcoceppi> jcastro: charm add readme workin' okay?
[19:51] <jcastro> I haven't used it yet
[19:52] <jcastro> the last 2 readmes I did were mostly complete
[19:52] <jcastro> trying it now
[19:55] <jcastro> marcoceppi, hey so
[19:55] <jcastro> if we want to make new charms pass tests
[19:55] <jcastro> where can I see where my charm's tests results are?
[19:55] <jcastro> like, show me an example of a charm that is flunking its tests
[19:56] <andre__> hi
[19:56] <jcastro> marcoceppi, charm add readme works awesome btw
[19:56] <marcoceppi> jcastro: no where, atm, but I'm not saying we should require to pass, we should have a policy that charms should have tests
[19:56] <jcastro> true
[19:57] <marcoceppi> we should have a testing infrastructure up early Jan
[19:57] <jcastro> hey so what do you tell people now  when they submit but they don't have tests
[19:57] <marcoceppi> mid-early jan
[19:57] <Guest97524> I am trying to deploy using juju but I cannot do more than bootstrap
[19:57] <jcastro> Guest97524, what happens?
[19:57] <Guest97524> can anybody help me please?
[19:58] <marcoceppi> jcastro: I'm saying, everything in the queue gets grandfathered, added to audit, and commented that tests will be a req. Everything else going forward, after updating the docs, we fail review because of no tests
[19:58] <Guest97524> so, apparently juju bootstraps successfully
[19:58] <jcastro> right
[19:58] <jcastro> ok so what we need to do is propose "every new charm from now on must include tests" as policy
[19:58] <marcoceppi> Guest97524: what happens when you try to deploy? can you run deploy with --debug --show-log
[19:59] <jcastro> marcoceppi, but does it make sense to do that now before the infra is up?
[19:59] <jcastro> maybe I can strongly hint that that's coming
[19:59]  * marcoceppi shrugs at jcastro
[20:00] <jcastro> ok
[20:00] <jcastro> I'll send a strawman to the list for input
[20:00] <marcoceppi> jcastro: cool
[20:00] <Guest97524> but, when I try to run juju status, it keeps saying: connection refused (--debug flag)
[20:00] <jcastro> basically, new charms will include tests, deal with it.
[20:00] <jcastro> Guest97524, did you wait for the bootstrap to come up first?
[20:00] <marcoceppi> Guest97524: that means bootstrap likely failed, what provider are you using
[20:00] <jcastro> that usually takes a few minutes
[20:01] <Guest97524> I mean, bootstrap doesn't report any errors, even with debug and show-log
[20:01] <Guest97524> I'm using it to deploy on a MaaS environment
[20:01] <marcoceppi> Guest97524: bootstrap is an asynchronous process, it could fail in certain ways
[20:01] <marcoceppi> Guest97524: ah, do you see a node provisioned in your maas master?
[20:02] <Guest97524> the MaaS controller allocates a random machine, but juju doesn't install anything on it
[20:02] <marcoceppi> Guest97524: can you see if there is anything in /var/log/cloud-init* ?
[20:02] <arosales> marcoceppi, jcastro I am a +1 for tests in charms and I acknowledge that existing charms should be grandfathered
[20:02] <marcoceppi> on the maas machine
[20:02] <Guest97524> and there's not even a juju log in /var/logs
[20:03] <arosales> but we should promote the audit to have all charms with test and encourages maintainers to add tests
[20:03] <arosales> going forward policy review should have a check for tests
[20:03] <arosales> jcastro, to confirm you are going to propose to the list as policy, correct?
[20:03] <jcastro> writing it up now
[20:03] <marcoceppi> Guest97524: right, because it likely failed during provisioning. Is there a cloud-init log?
[20:03] <arosales> jcastro, re your paste bin can we also get the Blueprint sync'ed up with the effort and process to reference in your post
[20:04]  * jcastro nods
[20:04] <Guest97524> there is a cloud-init log
[20:04] <marcoceppi> Guest97524: can you pastebin that log to http://paste.ubuntu.com ?
[20:04] <Guest97524> apparently without any errors
[20:04] <jcastro> https://blueprints.launchpad.net/juju-core/+spec/t-cloud-juju-charm-audit
[20:04] <jcastro> this one  right arosales?
[20:05] <Guest97524> sure: http://paste.ubuntu.com/6563207/
[20:05] <arosales> jcastro,  ya  I think that is our latest :-)
[20:06] <marcoceppi> Guest97524: something's wrong. What version of juju are you using?
[20:06] <arosales> jcastro, I would also specifically spell out in your post the main points to review in the audit and tools to help do so (ie charm tools)
[20:06] <Guest97524> I am using version 1.16.5
[20:06] <marcoceppi> Guest97524: and what version of MAAS?
[20:07] <marcoceppi> Guest97524: Are you using the one from the cloud archive?
[20:07] <Guest97524> my client machine is a saucy release, while the maas controller and machines are precise release
[20:08] <arosales> jcastro, the points to review to ensure folks know what a readme should look like (include example usage), description about the charm, icons, tests, passes proof, etc
[20:08] <Guest97524> maas version: 1.5.4
[20:09] <jcastro> arosales, yeah, the issue is that the email is getting long
[20:09] <jcastro> arosales, I was thinking of just adding all of those things to the review policy instead
[20:09] <Guest97524> I got juju from ppa repository (ppa:juju/stable)
[20:09] <jcastro> which is where it should be anyway
[20:09] <arosales> jcastro, I'll fire up a pad
[20:10] <arosales> jcastro, policy should have it for sure.
[20:10] <arosales> perhaps put the details in the blueprint
[20:10] <arosales> and reference it there. .  .
[20:10] <jcastro> yeah
[20:10] <jcastro> good idea
[20:10] <jcastro> I'll do that
[20:10] <arosales> jcastro, thanks
[20:11] <marcoceppi> Guest97524: latest maas is 1.4
[20:11] <arosales> jcastro, I am updating http://pad.ubuntu.com/bxNaItLUMi from your initial thoughts
[20:12] <Guest97524> 'sudo maas --version' gives me 1.5.4, updated it today
[20:17] <Guest97524> it is strange because MaaS allocates the machine, but nothing happens on it
[20:20] <marcoceppi> Guest97524: It's not getting the user-data which drives cloud-init to complete the setup
[20:20] <arosales> jcastro, I updated the perms on the ss to be open to all to edit.
[20:20] <jcastro> marcoceppi, pretend I want to get started writing tests for my charm, you send me to ... ?
[20:20] <marcoceppi> jcastro: the docs I'm writing
[20:22] <jcastro> marcoceppi, when will you have those? ballpark?
[20:22] <marcoceppi> eow
[20:26] <arosales> jcastro, http://pad.ubuntu.com/bxNaItLUMi looks good
[20:27] <arosales> jcastro, to confirm you are going to reconcile the blueprint,  propose testing to the list, and email the list on the Great Charm Audit of 2014, correct?
[20:29] <jcastro> testing has been proposed
[20:29] <jcastro> finishing up the pad now, will reconcile the blueprint and then send to list in about ~10
[20:29] <jcastro> to finish up all those things
[20:32] <arosales> jcastro, thanks!
[20:32] <arosales> no escaping the The Great Charm Audit of 2014 now
[20:56] <dpb1> jcastro: I would like to write some tests for my charms, but I'm having trouble finding out how to write a juju test at all.  I understand how to write python tests, but not how to write a juju test.  other than to come up with my own harness.
[20:57] <jcastro> dpb1, heh, marco is in the process of writing that document right now
[20:57] <jcastro> should be ready tomorrow
[20:57] <jcastro> marcoceppi, do you have a TLDR for dpb1 so he can get started?
[20:57] <lazypower> dpb1: there is an existing document about charm testing. Have you seen it? https://juju.ubuntu.com/docs/authors-testing.html
[20:58] <marcoceppi> lazypower: dpb1 that doc just goes over some bash stuff
[20:58] <marcoceppi> dpb1: I can show you a few example tests I've written if that helps
[20:59] <dpb1> marcoceppi: would love to see that.
[20:59] <marcoceppi> dpb1: here's an example wordpress test https://gist.github.com/marcoceppi/7727543
[21:00] <dpb1> lazypower: I'll look that over.  I've seen it before, but maybe it has more content now.
[21:00] <marcoceppi> dpb1: there's also amulet.PASS and amulet.FAIL statuses you can raise to denote the state of the test
[21:01] <dpb1> marcoceppi: I attempted to use amulet, but ran into issues with subordinates, btw.  I haven't narrowed down the cause yet.
[21:01] <marcoceppi> dpb1: here's another example: https://gist.github.com/marcoceppi/6779616
[21:01] <dpb1> marcoceppi: when I do, I'll let you know.
[21:01] <marcoceppi> dpb1: which subordinate? ones you're deploying or the ones it creates?
[21:01] <dpb1> marcoceppi: when i attempted to relate them.  It had some namespace type of issues.
[21:01] <dpb1> the...
[21:01] <dpb1> sentinels.
[21:02] <marcoceppi> dpb1: yeah, juju broke amulet for a while
[21:02] <marcoceppi> dpb1: that's been patched
[21:02] <dpb1> ok... I'll give it another shot
[21:02] <marcoceppi> I believe since 1.16.1
[21:02] <dpb1> ok
[21:03] <dpb1> jcastro: marcoceppi: I'll give it another shot.  But that was the first thing that came into my mind with the "line in the sand" email, is I didn't really know how to do it yet. :)
[21:04] <marcoceppi> dpb1: yeah, I'm writing docs to dive in to amulet stuff more, as well as improving the generic testing page
[21:04] <dpb1> cool
[21:04] <jcastro> dpb1, yeah I wanted to get that discussion started too
[21:04] <jcastro> so marco can go "see how easy it is?" as a follow up
[21:05] <dpb1> OK, looking forward to the improvements.  I'll give it another shot and get you something specific back if I hit issues.  Thanks.
[21:05] <jcastro> marcoceppi, actually, if you're not going to land that document today it doesn't hurt to reply to my mail with "here's a tldr" with those examples, and just let people know you'll have it ready tomorrow
[21:08] <marcoceppi> dpb1: thanks! any feedback, bugs, whatever, is appreciated. I'm looking to pump a lot of the feedback in to amulet to make it better
[21:14] <dpb1> marcoceppi: great! :)
[22:12] <dpb1> marcoceppi: first question: how do I add the charm from my branch? "d.add('mysql')"
[22:13] <marcoceppi> dpb1: just add it as you would, the framework intercepts it and will deploy from local for a matching name
[22:13] <dpb1> marcoceppi: ok... let me try
[22:13] <marcoceppi> dpb1: if you wanted to deploy a same name charm, you should use a full cs URL
[22:17] <dpb1> marcoceppi: and to test something *on the unit*, I should call out to juju ssh?
[22:18] <marcoceppi> dpb1: at the moment yes, there's an exec endpoint but it's not stable yet
[22:18] <dpb1> ok
[22:18] <marcoceppi> dpb1: let me check, I may have landed the exec already
[22:20] <marcoceppi> dpb1: yeah, so the "run" api endpoint exists in the sentry, but it's not in the amulet library, I'll make sure that gets released tonight
[22:21] <dpb1> marcoceppi: sweet, I'll look for it
[22:21] <marcoceppi> dpb1: yeah, I'll mail the list when the new amulet version drops, docs will coincide with that
[22:21] <dpb1> k
[22:26] <dpb1> marcoceppi: so, here is my first attempt:  I installed amulet via apt-get install amulet from the stable ppa  http://paste.ubuntu.com/6563839/
[22:29] <marcoceppi> dpb1: is swap your charm? Looks like it's not loading from the local branch
[22:29] <marcoceppi> dpb1: the test looks good though, let me file a bug and look into local branch switching
[22:30] <dpb1> marcoceppi: yes, it's a new subordinate that just adds swap to a system
[22:30] <dpb1> I put up a review for it, but was going to use it as a test bed for... writing an amulet test. :)
[22:33] <marcoceppi> dpb1: yeah, I see the problem in the code. I'll have a new release out tonight. Thanks for being a guinea pig ;)
[22:34] <dpb1> marcoceppi: np.  I'll circle back around tonight when I see it and give it some more testing.
[22:34] <marcoceppi> dpb1: many thanks!
[22:51] <mxc> is there a way to fine tune the security group settings on EC2?  For example, if i spin up some charms, juju defaults to opening up 22, 17070, and 37017 to 0.0.0.0
[22:52] <mxc> i thought the "idea" of juju relations was that it only opens the ports it needs to and only to the other machine in the relation
[22:54] <mxc> this seems to be the only option: http://askubuntu.com/questions/156715/can-i-specify-tighter-security-group-controls-in-ec2
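[Editor's note] The workaround in that askubuntu answer amounts to editing the provider-created security group by hand with the EC2 API tools. A hedged command fragment (the `juju-<env>-<machine>` group naming and the example names/addresses are assumptions; check your actual group names first, and note juju may reassert rules it manages):

```shell
# List the groups juju created (typically juju-<env> and
# juju-<env>-<machine-number>).
ec2-describe-group | grep juju

# Revoke the world-open mongodb port, then re-open it only to a
# specific peer address (replace group name and CIDR with your own).
ec2-revoke juju-myenv-0 -P tcp -p 37017 -s 0.0.0.0/0
ec2-authorize juju-myenv-0 -P tcp -p 37017 -s 10.0.0.5/32
```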
[23:17] <mxc> next question, has anyone deployed mongodb with juju and enabled the mongo monitoring service?
[23:47] <marcoceppi> mxc: lazypower was looking in to that
[23:48] <mxc> woohoo
[23:49] <mxc> lazypower: if you're around and want to chat MMS, i'd be happy to push any work I do on it back upstream
[23:51] <lazypower> Surely. I'm working on another charm but i'm certainly available
[23:51] <mxc> msg ok?
[23:51] <lazypower> mxc: my thought was to build MMS into its own subordinate. Its a python app and not packaged as part of MongoDB core. It diverged earlier this year.
[23:51] <lazypower> surely
[23:52] <mxc> actually, never mind, this may be useful to others
[23:52] <mxc> lazypower: i saw that it diverged.  the mongodb config though has placeholders for some mms-fields
[23:52] <mxc> lazypower: i was kind of hoping that it was built in to the charm, but i see it's not
[23:52] <lazypower> According to Jira those are to be treated as leftover artifacts from the exploratory dev cycle.
[23:53] <lazypower> The placeholder config options do nothing. The daemon actually skips them
[23:54] <lazypower> Now, I can see why we would want to put it into the mongodb unit, however, the MMS service was meant to run from anywhere and monitor any MongoDB installation that you have valid connection credentials for. There are performance tuners that want every last drop of system resource to go to the MongoDB Daemon itself. My co-worker is like that. We ended up sticking MMS on one of our Jenkins slaves that doesn't power down.
[23:56] <mxc> can you think of any similar, subordinate service charms to use as a skeleton?
[23:57] <lazypower> Subordinates are surprisingly simple to write. I wrote a papertrail charm that sets up papertrailapp's gem for log monitoring. It's somewhat similar.
[23:57] <lazypower> in a kinda not really sorta way
[23:57] <lazypower> marcoceppi: can you think of any subordinate services that complement a parent charm that mxc could use as a guide?
[23:58] <mxc> papertrail could be close enough
[23:58] <lazypower> mxc: preference between github or launchpad?
[23:58] <mxc> hugely for github
[23:59] <lazypower> https://github.com/chuckbutler/papertrail-charm
[23:59] <mxc> thanks
[23:59] <lazypower> Wait, the readme is outdated here
[23:59] <lazypower> let me see whats up with that
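[Editor's note] For anyone using the papertrail charm as a skeleton: the structural difference between a subordinate and a normal charm is small. A minimal `metadata.yaml` sketch (the names and relation here are illustrative, not copied from the papertrail charm) needs `subordinate: true` plus at least one container-scoped relation:

```yaml
name: my-subordinate
summary: Example subordinate that attaches to a principal service
description: |
  Deploys alongside its principal's unit rather than on its own machine.
subordinate: true
requires:
  # The container-scoped relation is what makes the subordinate
  # co-locate with the principal unit it is related to.
  juju-info:
    interface: juju-info
    scope: container
```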