[00:02] <lazypower> i'm not sure what to do about the log truncation, any suggestions on what you would like me to do next?
[00:03] <hazmat> lazypower, if the log on disk is bigger than the pastebin, the easiest thing to do would be to zip it up and email it to me.. but it doesn't look like it's truncated, just very short
[00:05] <lazypower> it's not any larger than what's in the pastebin
[00:07] <lazypower> however i'm more than happy to send you a tarball of the logs for completeness' sake
[00:09] <hazmat> lazypower, hmm.. sure.. it might be that on the local provider it's in a different log
[00:13] <marcoceppi> lazypower: include /var/log/upstart/juju* in that tarball
[00:16] <_thumper_> marcoceppi: plz file a bug and assign to me for "juju help logging"
[00:16] <hazmat> yeah.. not seeing anything in the log files
[00:17] <marcoceppi> thumper: https://bugs.launchpad.net/juju-core/docs/+bug/1269618
[00:17] <_mup_> Bug #1269618: Need to document log level changing <juju-core:Confirmed for thumper> <juju-core docs:Confirmed for evilnick> <https://launchpad.net/bugs/1269618>
[00:17] <thumper> marcoceppi: ta
[08:46] <stub> Anyone know of a charm using charmhelpers.fetch.configure_sources? I want to see how to spell install_sources & install_keys in the config.yaml
[08:56] <marcoceppi> stub: not that I'm aware of, I can take a quick look though
[08:57] <stub> I've got a whole pile of config options that should be lists. I didn't realize config.yaml even supported that until I saw the example in charmhelpers.config_sources()
[08:58] <stub> Need to work out how to sanely version the config and migrate to a saner structure, deprecating and rewiring
[08:58] <marcoceppi> yeah, that's going to be a ton of fun
[09:00] <stub> def upgrade_charm(): log('┌∩┐(◣_◢)┌∩┐')
[09:00] <marcoceppi> hahaha
[10:14] <marcoceppi> stub: none of the charms in the charmstore use configure_sources :\
[10:17] <stub> marcoceppi: I'll give it a go. Someone has to be first :) My existing extra_archives is bogus anyway, as it specified a space-separated list... which doesn't work.
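For context on the config options being discussed: charmhelpers.fetch.configure_sources reads string config options and parses them as YAML, so a list is spelled as a YAML list inside the string default. The option names install_sources/install_keys come from the conversation above; the exact default values below are illustrative assumptions, not taken from a real charm.

```yaml
options:
  install_sources:
    type: string
    description: YAML list of extra apt sources (sketch; values are examples).
    default: |
      - ppa:some-team/some-ppa
      - deb http://archive.example.com/ubuntu precise main
  install_keys:
    type: string
    description: YAML list of GPG keys, one entry per source (null for PPAs).
    default: |
      - null
      - null
```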
[10:20] <noodles775> stub: I've used it (before I switched to using ansible for a charm I'm working on). Let me see if I can find it. Using ansible makes a lot of that pain go away though.
[10:21] <stub> noodles775: go away or transfer it elsewhere ;)
[10:21] <ashipika> can anybody here help me use juju on my local openstack installation?
[10:22] <noodles775> marcoceppi: Are you using amulet with latest juju-core? Or maybe the issue is with juju-deployer - I'll start digging (http://paste.ubuntu.com/6761334/)
[10:23] <marcoceppi> noodles775: we are using amulet with latest juju-core, but I've not seen that error. It's originating from deployer though
[10:23] <noodles775> Yeah, I'll find out why.
[10:26] <marcoceppi> noodles775: I'm switching gears from charm-tools to amulet for the next few days to fix it up and get it ready for the automated testing infrastructure. if you find anything else let me know or open a bug and it'll get eyes on
[10:28] <noodles775> marcoceppi: will do (I just switched back this morning from other stuff too). BTW, by latest juju-core, I meant trunk (sorry). It works with 1.16.5-saucy-amd64, but not trunk: http://paste.ubuntu.com/6761360/
[10:28] <marcoceppi> noodles775: ah, we've been testing 1.17.0
[10:29] <marcoceppi> noodles775: if you bootstrap with trunk, and run juju status, what does the output look like?
[10:29]  * marcoceppi will be sad if dns-name was dropped from machine output
[10:29] <noodles775> No - it's there, that's the first thing I checked. I'll dig down in a minute (just finishing something else).
[10:30] <marcoceppi> ah, k, cool
[10:30] <marcoceppi> we're going to be testing against devel/latest built release, but I'll keep trunk in mind
[10:31] <noodles775> OK, I'll switch to that too.
[10:51] <noodles775> marcoceppi: jfyi, I must have checked the wrong status earlier. Deploying a charm with trunk does indeed have unexpected output for juju status: http://paste.ubuntu.com/6761458/
[10:51]  * noodles775 switches to 1.17.0
[10:51] <marcoceppi> that makes me a sad panda
[10:53] <noodles775> I'm assuming that's not intended, probably an issue in trunk. Any devs who can confirm?
[10:53]  * marcoceppi usually pings fwereade, I'm sure he loves that
[10:55] <fwereade> marcoceppi, noodles775: hmm, someone was swearing blind that already-exists bug was fixed
[10:56] <fwereade> marcoceppi, noodles775: dns-name is sometimes delayed a bit in status
[10:56] <marcoceppi> cool, as long as it's not a surprise for 1.17.1 I'm fine :)
[10:58] <noodles775> fwereade: great (fwiw, the above was with r2212 of juju-core apparently: http://paste.ubuntu.com/6761485/)
[11:26] <fwereade> noodles775, ok, thanks, that's very good to know
[11:37] <ev> hazmat: hi. I have an environment where juju-deployer -TW -r5 refuses to succeed, no matter how many times I run it. Is this interesting to you? Any debugging information I can provide?
[11:52] <marcoceppi> stub: here's a merge that adds configure_sources support: https://code.launchpad.net/~cmars/charms/precise/haproxy/trunk/+merge/201458
[11:53] <stub> ta
[12:00] <ev> hazmat: http://paste.ubuntu.com/6761718/
[12:06] <stub> marcoceppi: alas it breaks when you want an empty list. I'll file a bug and fix.
[12:07] <marcoceppi> stub: ack, thanks. fwiw, charm-helpers is going to be getting some serious attention by the charmers in the very near future
[12:08] <stub> marcoceppi: does that mean I don't have to fix the bug myself ;)
[12:08] <marcoceppi> stub: very near future, unfortunately not the immediate future :P
[12:09] <marcoceppi> ev: so, jenkins/0 can't be removed because lander-jenkins is in a pending state but also dying. This is why deployer keeps repeating its message
[12:10] <ev> marcoceppi: still a bug in deployer though, no? I mean my understanding of -T is it's destroy-environment without the bootstrap node - I just want the instances to go away, I don't care how violently it's done.
[12:10] <marcoceppi> ev: I don't think deployer has --force functionality yet, but you could run `juju terminate-machine --force 10` then run the juju deployer -TW -r5
[12:10] <marcoceppi> ev: it's more a feature request, --force is a very new flag in juju, I think it landed sometime in the 1.16 series
[12:10] <ev> marcoceppi: won't juju immediately try to recreate the node?
[12:10] <ev> oh right
[12:11] <marcoceppi> ev: no, that hasn't been a feature since the go conversion
[12:11] <ev> ^ vila, in case you're not following along already
[12:11] <marcoceppi> ev: def file a bug/feature request about it, it's great to have as a part of -T
[12:11] <ev> on it now
[12:12] <marcoceppi> the manual terminate-machine is a workaround for now though
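The manual workaround above can be scripted. A minimal sketch, assuming you want to force-terminate every machine except the bootstrap node (machine 0); the helper name and the idea of shelling out to the juju CLI are illustrative, not part of deployer:

```python
import subprocess

def force_terminate_commands(machine_ids):
    """Build a `juju terminate-machine --force` command per machine,
    skipping machine 0 (the bootstrap node)."""
    return [["juju", "terminate-machine", "--force", str(m)]
            for m in machine_ids if str(m) != "0"]

if __name__ == "__main__":
    for cmd in force_terminate_commands([0, 10, 11]):
        print(" ".join(cmd))
        # subprocess.check_call(cmd)  # uncomment on a real environment
```

Running this against machines [0, 10, 11] prints the two force-terminate commands and leaves machine 0 alone.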
[12:14] <vila> ev: pure gold, thanks for the ping
[12:14] <vila> ev: and no I wasn't following so I would have missed
[12:15] <ev> vila: there's a precise backport of 1.16 that we can use for tarmac: http://archive.admin.canonical.com/pool/main/j/juju-core/
[12:16] <vila> ev: that means that we can replace juju-deployer -T with: for $m in (juju all your machines except 0): juju terminate $m ?
[12:16] <vila> cjohnston: ^
[12:16] <ev> that's my understanding. marcoceppi ^ any gotchas there that you can think of?
[12:16] <vila> ev, cjohnston: 1.16 ?  We don't need 1.17 ?
[12:16] <ev> ps. thanks for amulet. It's lovely.
[12:17] <marcoceppi> ev: that would be a temporary workaround until -T uses the --force option; deployer still does things a bit more elegantly
[12:17] <ev> vila: we just need whatever version has the --force flag
[12:17] <ev> marcoceppi: *nods*, thanks
[12:17] <marcoceppi> 1.17.0 def has it, let me check 1.16.5
[12:17] <vila> ev, cjohnston: and why can't we use saucy or even trusty for tarmac ? This is exactly why I'm advocating using ubuntu dev releases for devs (and tarmac needs to be closely in sync with devs).
[12:18] <marcoceppi> vila: ev there's stable builds (1.16, etc) in the cloud-tools archive for precise as well
[12:20] <ev> marcoceppi: filed: https://bugs.launchpad.net/juju-deployer/+bug/1269783
[12:20] <_mup_> Bug #1269783: -T should have a complimentary --force option <juju-deployer:New> <https://launchpad.net/bugs/1269783>
[12:20] <ev> ah cheers
[12:20] <marcoceppi> ev: vila 1.16.5 has --force, so you should be good to use that
[12:20] <vila> marcoceppi: ack
[12:20]  * vila lost the battle that one time ;)
[12:38] <mattyw> marcoceppi, you got a few minutes for a charm helpers question?
[12:38] <marcoceppi> mattyw: I'll do what I can
[12:38] <mattyw> marcoceppi, the only occurrence of an apt-get-install call seems to be in the jujugui contrib?
[12:39] <marcoceppi> mattyw: charmhelpers.fetch.apt_install
[12:40] <marcoceppi> http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/fetch/__init__.py#L76
[12:40] <marcoceppi> fetch will have most apt functions in it
[12:40] <mattyw> marcoceppi, ah brilliant, I missed that
[12:40] <marcoceppi> np!
[12:41] <marcoceppi> mattyw: typically the contrib stuff is hyper specific to that service (ie: openstack-helpers, jujugui, ansible, etc)
[12:41] <mattyw> marcoceppi, ok cool
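The helper marcoceppi points at, charmhelpers.fetch.apt_install, essentially shells out to apt-get. A rough approximation of what such a helper assembles — the exact flags and signature charm-helpers uses may differ, so treat this as a sketch:

```python
def apt_install_cmd(packages, options=None):
    """Approximate the apt-get command a helper like
    charmhelpers.fetch.apt_install would run (sketch, not the real code)."""
    if isinstance(packages, str):
        packages = [packages]
    # the real helper also handles a `fatal` flag and default dpkg options
    return ["apt-get", "-y"] + (options or []) + ["install"] + list(packages)

print(apt_install_cmd("nginx"))  # ['apt-get', '-y', 'install', 'nginx']
```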
[12:42] <mattyw> are there any firm plans of what goes into core - or is it just done quite ad hoc?
[12:48] <cjohnston> ev, vila I'm already on 1.17.0-0ubuntu1~ubuntu12.04.1~juju1
[12:51] <ev> cool
[12:53] <marcoceppi> mattyw: there will be. Basically it needs to be generic and have tests atm, but we're drawing up a roadmap and restructure of charm-helpers to make it a little less ad hoc and a little more structured
[12:54] <marcoceppi> core has really lost its meaning, since it was supposed to be contrib and core: things get put in contrib for testing/fleshing out, then promoted to core. but now fetch and payload are outside of both contrib and core, so not sure what happened there
[13:14] <vila> cjohnston: \o/ Is your setup documented somewhere ? No urgency but that's a blind spot for me so far :-}
[13:16] <cjohnston> vila: I'm still trying to make it all work, so no (or its in progress)
[13:17] <vila> cjohnston: ack
[13:17] <cjohnston> basically it's just a precise instance and trying to get all of the different deps.. that's really it
[13:20] <vila> cjohnston: charmed or setup manually ?
[13:20] <cjohnston> vila: manually at least for now
[13:21] <cjohnston> vila: my tarmac runs more than just CI stuff too
[13:22] <vila> cjohnston: ok, as long as you keep track of what you install there, we'll be fine
[14:00] <hazmat> smoser, the cloudinit docs on readthedocs look nice
[14:05] <smoser> hazmat, thanks. that is mostly harlowja. things can definitely be improved still.
[14:10] <hazmat> yeah.. features overview is a bit light, but i was googling around and reading the format docs yesterday and found them quite helpful.
[14:23] <fwereade> noodles775, ping
[14:24] <noodles775> Hi fwereade
[14:25] <fwereade> noodles775, can you confirm you saw "service already exists" with r2212 client and --upload-tools?
[14:25] <noodles775> I didn't use --upload-tools, dimitern mentioned that I should try that. Let me do so now.
[14:26] <fwereade> noodles775, we have a bug that STM to be the same, with a very inappropriate title: https://bugs.launchpad.net/juju-core/+bug/1259925
[14:26] <_mup_> Bug #1259925: juju destroy-environment does not delete the local charm cache <destroy-environment> <local-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1259925>
[14:26] <fwereade> noodles775, ah, the local provider does that anyway I think
[14:27] <fwereade> noodles775, if you take a look at the jujud installed alongside juju, and the jujud inside one of your containers, I think they'd be bit-for-bit the same
[14:28] <fwereade> noodles775, so as long as your jujud matched your juju you'd be fine
[14:29] <fwereade> noodles775, would you update that bug with whatever detail you have please? I think the name is better now
[14:32] <noodles775> fwereade: I don't understand why you changed the name of that bug - it seems quite different. I actually created a dupe of that bug at the time, and as you can see from the dupe, I had no problems deploying a service with the juju I was using.
[14:34] <noodles775> fwiw, here's the result using juju bootstrap --upload-tools http://paste.ubuntu.com/6762332/
[14:35] <noodles775> Works fine with 1.16.5-saucy-amd64
[14:37] <fwereade> noodles775, ok, about the bugs quickly: I'm confused
[14:37] <fwereade> noodles775, the bug I linked was complaining that a service already existed
[14:37] <fwereade> noodles775, and I thought that matched what you were talking about this morning
[14:38] <fwereade> noodles775, your charm caching bug is IMO not a duplicate of that other one -- but it had a bad title that made it seem like it was
[14:38] <fwereade> noodles775, (the one I changed had a bad title, not your bug)
[14:39] <fwereade> noodles775, am I making sense here?
[14:39] <noodles775> fwereade: not really (at least, not to me :-) ). That is, the paste I showed this morning, http://paste.ubuntu.com/6762349/ , I expected the service to already exist (line 18) because I'd just deployed it (line 5). What I didn't expect was the output of juju status to not contain the details.
[14:40] <noodles775> It's probably clearer on the second paste: http://paste.ubuntu.com/6762332/ (as it shows juju status before the deploy as well).
[14:41] <noodles775> That said, there's something wrong with the installed packages - juju-core is at 1.16.5 while jujud version shows 1.17.1.. http://paste.ubuntu.com/6762349/
[14:41]  * noodles775 apt-get updates to be certain.
[14:43] <fwereade> noodles775, yeah, that mismatch is almost certainly a problem
[14:44] <noodles775> Where's jujud coming from? http://paste.ubuntu.com/6762365/
[14:50] <mattyw> marcoceppi, this is definitely a crazy question - have you ever tried installing juju in a unit?
[14:51] <mattyw> (so that in the unit you can do a juju bootstrap to create a local environment
[14:51] <marcoceppi> mattyw: yeah, a few times, pyjuju used to go ape over it, juju-core should run fine side by side in a deployed unit
[14:51] <mattyw> marcoceppi, juju init complains about juju_home not being set
[14:51] <marcoceppi> mattyw: juju-core doesn't actually install any juju packages, it's all custom paths, etc
[14:51] <mattyw> but afaik it's set by juju - it doesn't need to be present in the env beforehand
[14:52] <marcoceppi> mattyw: yeah, that's because there is no HOME environment variable in the hooks env
[14:52] <mattyw> marcoceppi, oh right - so I should set home in the hook env?
[14:52] <marcoceppi> mattyw: yup, that should resolve that
[14:52] <mattyw> marcoceppi, I'll give it a go - thanks
[14:52] <marcoceppi> mattyw: I'm sure there are a few other minor nuances, but there shouldn't be any major blockers
[15:02] <mattyw> marcoceppi, ok thanks, I'll see how far I can get
[15:04] <marcoceppi> mattyw: np, gl
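The fix discussed above — giving the hook environment a HOME before invoking juju — might look like this in a hook. The path /home/ubuntu and the helper name are assumptions for illustration; the key point from the conversation is only that upstart-run hooks have no HOME set:

```python
import os
import subprocess

def juju_env(home="/home/ubuntu"):
    """Build an environment for running the juju CLI from inside a hook,
    where the hook env has no HOME variable set (assumed path)."""
    env = dict(os.environ)
    env.setdefault("HOME", home)  # juju derives its config dir from $HOME
    return env

env = juju_env()
# subprocess.check_call(["juju", "init"], env=env)  # uncomment on a unit
```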
[15:07] <marcoceppi> lazypower: postgresql is a good example of this
[15:07] <lazypower> haha
[15:07] <marcoceppi> it has a 01_unittest.test
[15:07] <marcoceppi> and a 10_function_tests.test
[15:07] <lazypower> ok, i think i was more granular than that
[15:07] <marcoceppi> juju test plugin, by default, will halt once a test file fails
[15:07] <marcoceppi> so unittesting will cause the whole ship to sink if it fails
[15:08] <lazypower> i have tests titled deploy_test that check the deployed resources, and another config_test that changes the config flags and tests that they are sent over the wire (which feels like i'm testing juju itself and not the charm)
[15:08] <marcoceppi> lazypower: well you can have as many test files as you want, this is just how stuart structured it
[15:09] <lazypower> next up was relationship_test that check db, caching, etc. relationships
[15:09] <lazypower> but if i'm using amulet in a wrong context what should I be using for unit testing? I'll add it to my required reading list
[15:10] <marcoceppi> lazypower: amulet really is there to help you stand up an environment and poke at it with a sharp stick
[15:10] <marcoceppi> lazypower: there's tons of unittesting tools out there, if the charms in python there's a unittest python modules that's part of core
[15:11] <marcoceppi> that allows you to test functions, etc, with a nice set of assertions, bleh
[15:11] <lazypower> Right but I feel like poking at the end environment is still a good way to assert the intent was performed properly
[15:11] <lazypower> but ok, let me spin on the unit test conversation a bit more and i'll come back to this
[15:11] <marcoceppi> lazypower: it's a lot of effort to stand up a deployment only to test if a config key was changed or removed
[15:12] <marcoceppi> unittesting is nice to have, just like integration tests, they're both pieces to a larger quality puzzle
[15:21] <lazypower> marcoceppi, http://paste.ubuntu.com/6762527/
[15:21] <lazypower> like, i understand that's expensive, but I need a deployed unit to validate the intent was completed. Maybe I'm approaching integration testing with a unit testing mindset?
[15:22] <marcoceppi> lazypower: the concept looks fine to me
[15:23] <lazypower> Ok i'm going to finish this suite and put it up on CR
[15:23] <lazypower> ta
[16:01] <marcoceppi> lazypower: congrats, papertrail passed review
[16:04] <mhall119> marcoceppi: I need your help
[16:04] <mhall119> I'm getting this again:
[16:04] <mhall119> Error details:
[16:04] <mhall119> Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[16:05] <mhall119> don't remember what you had me do last time to fix it
[16:05] <marcoceppi> mhall119: what version of juju?
[16:05] <mhall119> 1.16.3-saucy-i386
[16:05] <marcoceppi> mhall119: not that it'll help this, but 1.16.5 is latest, fyi
[16:05] <mhall119> is it in saucy?
[16:05] <marcoceppi> mhall119: is the environment already bootstraped? or are you trying to bootstrap?
[16:06] <marcoceppi> mhall119: it's in ppa:juju/stable
[16:06] <mhall119> if it's not in saucy, it's not the latest for me :)
[16:06] <mhall119> marcoceppi: to be honest, I thought I had destroyed this env, haven't used it in months, but I'm seeing instances still running
[16:06] <marcoceppi> mhall119: fair enough, just a reminder :)
[16:07] <marcoceppi> mhall119: okay, there's like a 10 step process to purge with fire
[16:07] <mhall119> marcoceppi: make jcastro backport 1.16.5 to saucy's archives
[16:07] <mhall119> marcoceppi: I've already done step 1: Admitting that I have a problem
[16:07] <marcoceppi> mhall119: http://askubuntu.com/a/403619/41
[16:08] <jcastro> I was hoping we'd be backporting point releases back to released Ubuntus
[16:08] <marcoceppi> just delete all those files
[16:08] <jcastro> sinzui, is there a plan for that?
[16:08] <marcoceppi> jcastro: we are, 1.16.0 shipped in saucy, saucy-updates has 1.16.3
[16:08] <marcoceppi> it just seems to not have 1.16.5
[16:08] <mhall119> thanks marcoceppi
[16:08] <jcastro> yeah I am just wondering if there's a timed cycle or if it's best effort or what
[16:08] <marcoceppi> jcastro: ah
[16:09] <sinzui> jcastro, 1.16.4 is stalled over discussion about the definition of bug fixes
[16:09] <marcoceppi> mhall119: likely it's the last two paths goofing you up, but the rest are good to clean out
[16:09] <marcoceppi> mhall119: OH, also
[16:09] <jcastro> sinzui, <nod>
[16:10] <marcoceppi> mhall119: if it's been months, delete this too: /var/cache/lxc/cloud-precise/ubuntu-12.04*.tar.gz
[16:10] <sinzui> jcastro, as a policy they will be backported because they are stable, but we need to argue the case when we add a feature, because it addresses how ubuntu stable users interact with Juju; for example, backups need to work like saucy expects
[16:11] <lazypower> marcoceppi, \o/ wooo
[16:11] <mhall119> I thought backports was specifically for new stuff
[16:11] <lazypower> i'm going to be in the charm store
[16:12] <jcastro> sinzui, are we putting new features in these point releases or just pure bugfixes?
[16:12] <sinzui> mhall119, confusion over the term backport. You are correct, jcastro is asking about the SRU for juju in saucy
[16:13] <mhall119> ah, ok
[16:13] <lazypower> marcoceppi, can you think of a good charm to reference for relationship testing? My immediate guess is to parse the DB Config file of a service and regex out the db credentials.
[16:13] <marcoceppi> lazypower: sentries can do that already
[16:13] <sinzui> jcastro, backup of the state-server in juju 1.16.3 and below doesn't work. That was fixed in 1.16.4, but that fix takes the form of a plugin/feature
[16:13] <lazypower> wot
[16:14] <lazypower> marcoceppi, thats not documented is it?
[16:14] <marcoceppi> lazypower: sentries haven't been documented yet
[16:14] <lazypower> Ok i'll take notes and ship them to you when i'm done so you can verify i understand what i think i understand
[16:14] <lazypower> I smell a blog post coming
[16:15] <marcoceppi> d.sentry.unit['ubuntu/0'].relation('db', 'mysql:db') - https://github.com/marcoceppi/amulet/blob/master/amulet/sentry.py#L100
[16:15] <lazypower>     def directory_stat(self, *args): <- that checks if a directory exists?
[16:15] <marcoceppi> lazypower: that's the whole point of the relation_sentry service, it proxies all the relations and acts like a MITM, recording all the relations
[16:16] <lazypower> er sorry i meant to get def directory()
[16:16] <lazypower> i see dir_stat is not implemented yet
[16:16] <marcoceppi> lazypower: dir_stat is
[16:16] <marcoceppi> Look at UnitSentry class
[16:16] <lazypower> https://github.com/marcoceppi/amulet/blob/master/amulet/sentry.py
[16:16] <lazypower> is what i'm reviewing right now
[16:16] <marcoceppi> Look at UnitSentry class
[16:18] <lazypower> wow, ok
[16:18] <marcoceppi> lazypower: I forget what dir_stat data looks like, but it basically is a way to check if dir exists
[16:18] <lazypower> I think i overcomplicated my deploy test concept
[16:18] <lazypower> the sentry does all this already...
[16:18] <marcoceppi> lazypower: yeah, sorry, I should have picked up on that
[16:18] <lazypower> nbd :) learning curve
[16:19] <marcoceppi> most of what you need to know is in that unit_sentry class, it's just the formatting of what each returns which needs to be documented
[16:20] <lazypower> I'm keeping notes that i want to swap with you after your travels, i think we can build a fairly comprehensive documented suite, or at least API reference for Amulet fairly quickly
[16:21] <marcoceppi> lazypower: evilnick did a great job making my README look pretty in the docs
[16:21] <marcoceppi> https://juju.ubuntu.com/docs/tools-amulet.html
[16:21] <lazypower> I did see that.
[16:21] <marcoceppi> just need to expand on it with sentry and the other stuff not in there :\
[16:21] <lazypower> i've been referencing that, and the tests that mbruzek and I paired over yesterday
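Pulling the sentry discussion above together, here is a hedged sketch of an Amulet test that stands up a deployment and uses a unit sentry to poke at the deployed unit. The charm names, the file path checked, and the exact shape of file_stat's return value are assumptions; the test class is skipped entirely when amulet isn't installed, since it needs a bootstrapped environment:

```python
import unittest

try:
    import amulet
    HAVE_AMULET = True
except ImportError:
    HAVE_AMULET = False

@unittest.skipUnless(HAVE_AMULET, "amulet and a bootstrapped env required")
class TestMediawikiDeploy(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.d = amulet.Deployment()
        cls.d.add('mediawiki')
        cls.d.add('mysql')
        # relations must name explicit endpoints, per the channel discussion
        cls.d.relate('mysql:db', 'mediawiki:db')
        cls.d.setup(timeout=900)

    def test_config_file_exists(self):
        # UnitSentry exposes file/directory checks against the live unit
        unit = self.d.sentry.unit['mediawiki/0']
        # path is an assumed example of something the charm would write
        self.assertTrue(unit.file_stat('/etc/mediawiki/LocalSettings.php'))
```

Run with `python -m unittest` from the charm's tests directory; without amulet available the suite reports the class as skipped rather than failing.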
[17:05] <rick_h_> marcoceppi: or jcastro I've qa'ing stuff and removing/resetting my .juju over and over to test out quickstart. I keep getting ERROR TLS handshake failed: x509: certificate signed by unknown authority
[17:05] <rick_h_> any ideas on what else i need to clear/reset/remove to prevent these issues? I see bug #1178312 but that's a bit different
[17:05] <_mup_> Bug #1178312: ERROR state: TLS handshake failed: x509: certificate signed by unknown authority <config> <ui> <juju-core:Triaged> <https://launchpad.net/bugs/1178312>
[17:06] <jcastro> I've seen that error before
[17:06] <jcastro> but can't remember when
[17:07] <marcoceppi> rick_h_: you're blowing away the entire $HOME/.juju dir?
[17:07] <rick_h_> marcoceppi: yea, I was
[17:07] <rick_h_> now it's working
[17:07] <rick_h_> bah
[17:07] <marcoceppi> :)
[17:07] <jcsackett> rick_h_: what environment? I get those periodically when i'm using an sshuttle instance to canonistack that's gone stale.
[17:08] <rick_h_> lxc and juju on trusty is giving me grief here and there
[17:08] <rick_h_> I know it's in flux, I just seemed to get stuck but rm'd the whole thing again and it worked
[17:08] <jcsackett> huh, ok. never seen that on local.
[17:08] <jcsackett> but then, i'm not on trusty yet.
[18:22] <mhall119> marcoceppi: do I need to reboot after following the 10-step program, or should I be okay to bootstrap a new environment and work with it?
[18:23] <diogoviannaarauj> hi you guys
[18:23] <diogoviannaarauj> I'm getting the error "timed out waiting for mgo address from []" when trying to use juju with openstack
[18:23] <diogoviannaarauj> and i can't find anything about it
[18:48] <hatch> somehow my local juju env has been corrupted, there are no lxc's running, juju status shows the env, but I can't destroy it because it can't find a local-machine.conf file. Is there a 'nuke it' option?
[18:51] <hatch> ahh found http://askubuntu.com/questions/403618/how-do-i-clean-up-a-machine-after-using-the-local-provider
[18:53] <marcoceppi> mhall119: no restarts needed
[18:53] <marcoceppi> diogoviannaarauj: this typically means the machine didn't boot
[18:53] <mhall119> thanks marcoceppi
[18:57] <diogoviannaarauj> marcoceppi: oh, will check it out, but going by the openstack panel i can do everything
[18:57] <diogoviannaarauj> marcoceppi: any log i should inspect?
[18:57] <marcoceppi> diogoviannaarauj: so you've got an instance launched?
[18:58] <diogoviannaarauj> i can do it through the openstack interface
[18:58] <marcoceppi> diogoviannaarauj: yes, if you can ssh in to it first, check /var/log/cloud-init*, then run initctl list | grep juju to see if any jobs exist/are running
[18:58] <marcoceppi> Is this a private openstack setup?
[18:58] <diogoviannaarauj> yes
[18:58] <marcoceppi> k
[18:58] <diogoviannaarauj> I'm using openstack with docker
[18:58] <diogoviannaarauj> to create containers
[18:59] <diogoviannaarauj> my friend who is in the project from beginning is joining the room to explain better
[19:00] <diogoviannaarauj> hi jogn00santos, marcoceppi is trying to help us
[19:01] <diogoviannaarauj> he said: if you can ssh in to it first, check /var/log/cloud-init*, then run initctl list | grep juju to see if any jobs exist/are running
[19:02] <john00santos> ok
[19:03] <john00santos> are running
[19:03] <john00santos> juju-db-root-local start/running, process 3353
[19:03] <john00santos> juju-agent-root-local start/running, process 3363
[19:04] <mhall119> stub: is cs:precise/postgresql-56 the charm I should be testing the api-website with?
[19:04] <marcoceppi> john00santos: okay, what happens if you run juju status right now?
[19:06] <john00santos> waiting
[19:07] <marcoceppi> john00santos: run the command with --debug and --show-log flags
[19:07] <john00santos> Ok
[19:07] <john00santos> 2014-01-16 19:07:38 ERROR juju supercommand.go:282 Unable to connect to environment "openstack".
[19:07] <john00santos> Please check your credentials or use 'juju bootstrap' to create a new environment.
[19:07] <john00santos> Error details:
[19:07] <john00santos> timed out waiting for mgo address from []
[19:09] <john00santos> By default it connects using the value in "auth-url"?
[19:11] <marcoceppi> john00santos: does your bootstrap instance have an IP address that reachable by your machine?
[19:12] <john00santos> yes
[19:12] <john00santos> 198.27.90.250 5000
[19:12] <john00santos> vagrant@precise64:~$ echo 1 > /dev/tcp/198.27.90.250/5000
[19:12] <john00santos> vagrant@precise64:~$ echo $?
[19:12] <john00santos> 0
[19:12] <john00santos> vagrant@precise64:~$
[19:15] <marcoceppi> john00santos: The auth-url should be the keystone URL for your openstack, if you got juju to launch and instances, that should be correct
[19:16] <marcoceppi> john00santos: do you have swift setup on this openstack?
[19:16] <john00santos> yes
[19:16] <john00santos> Ok, looking at environment.yaml
[19:17] <marcoceppi> john00santos: what version of juju-core do you have (`juju version`)
[19:17] <lazypower> marcoceppi, so this means the relationship failed to be parsed by amulet, right?
[19:17] <lazypower>  {   u'Error': u'service "relation-sentry" has no "requires-mediawiki_db-mysql_db" relation',
[19:18] <john00santos> vagrant@precise64:~/.juju$ juju version
[19:18] <john00santos> 1.16.5-precise-amd64
[19:18] <marcoceppi> john00santos: okay, good, you have the latest
[19:19] <john00santos>   openstack:
[19:19] <john00santos>     type: openstack
[19:19] <john00santos>     # Specifies whether the use of a floating IP address is required to give the nodes
[19:19] <john00santos>     # a public IP address. Some installations assign public IP addresses by default without
[19:19] <john00santos>     # requiring a floating IP address.
[19:19] <marcoceppi> lazypower: can you run juju status and pastebin the output?
[19:19] <marcoceppi> john00santos: please use http://paste.ubuntu.com for more than one line of text
[19:19] <lazypower> http://paste.ubuntu.com/6763722/
[19:20] <john00santos> ok
[19:20] <marcoceppi> lazypower: that message is correct, there is no requires-mediawiki_db-mysql_db relation
[19:20] <marcoceppi> lazypower: there are no relations to the relation-sentry
[19:21] <lazypower> right
[19:21] <lazypower> its mediawiki:mysql
[19:21] <lazypower> duhhh
[19:21]  * lazypower facepalms
[19:21] <marcoceppi> lazypower: well, relation-sentry is designed to proxy those
[19:21] <marcoceppi> lazypower: did you get that error during d.setup() ?
[19:21] <lazypower> yeah, do i leave off the interface?
[19:21] <lazypower> d.relate('mysql','mediawiki')
[19:22] <marcoceppi> lazypower: you need to be explicit and include the relations
[19:22] <lazypower> https://juju.ubuntu.com/docs/tools-amulet.html#functionality
[19:22] <lazypower> docs are wrong and should probably be updated
[19:22] <marcoceppi> d.relate('mysql:db', 'mediawiki:db')
[19:22] <lazypower> i'll branch and fix
[19:22] <marcoceppi> lazypower: docs are correct
[19:22] <lazypower> d.relate('mysql:db', 'mediawiki:db')
[19:22] <lazypower> thats what i have in my code thats failing
[19:22] <marcoceppi> lazypower: can you add this line prior to d.setup()
[19:23] <marcoceppi> import json
[19:23] <marcoceppi> print(json.dumps(d.schema(), indent=2))
[19:23] <lazypower> surely
[19:23] <marcoceppi> then pastebin the schema that it drops
[19:23] <john00santos> http://paste.ubuntu.com/6763735/
[19:25] <lazypower> http://paste.ubuntu.com/6763752/
[19:26] <marcoceppi> lazypower: can you pastebin /tmp/sentry_etcadv/relation-sentry/metadata.yaml
[19:26] <lazypower> http://paste.ubuntu.com/6763760/
[19:28] <marcoceppi> john00santos: your config looks fine, try running juju destroy-environment, make sure the node that juju launched shuts down
[19:29] <marcoceppi> john00santos: set use-floating-ip to True, rm -f ~/.juju/envs/openstack.jenv; then run `juju bootstrap --show-log --debug`
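The change marcoceppi suggests, as it might look in environments.yaml. The keystone URL, credentials, and bucket name are placeholders; use-floating-ip is the setting being discussed:

```yaml
environments:
  openstack:
    type: openstack
    use-floating-ip: true
    auth-url: https://keystone.example.com:5000/v2.0/   # placeholder
    username: your-user                                 # placeholder
    password: your-password                             # placeholder
    tenant-name: your-tenant                            # placeholder
    region: your-region                                 # placeholder
    control-bucket: some-unique-bucket-name             # placeholder
```

After removing ~/.juju/envs/openstack.jenv (as above), juju bootstrap --show-log --debug picks up the edited config.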
[19:29] <marcoceppi> lazypower: yeah, that's correct
[19:29] <marcoceppi> lazypower: and it's giving you grief about not having a relation? was this a test running against a clean environment?
[19:30] <lazypower> nope, i chained right off of the standup test
[19:30] <lazypower> let me clear and try fresh
[19:31] <marcoceppi> lazypower: that's the problem
[19:31] <marcoceppi> lazypower: the relation-sentry doesn't get upgraded, so it doesn't get the latest metadata.yaml
[19:31] <lazypower> i just hit the setup() portion of the test, so far so good
[19:31] <lazypower> ahhh ok
[19:31] <marcoceppi> so technically relation-sentry doesn't have those defined yet
[19:31] <lazypower> so the sentries cache data then
[19:32] <marcoceppi> lazypower: no, the deployment caches data
[19:32] <lazypower> whatever config comes in initially they hold on to, and are not reprogrammable
[19:32] <marcoceppi> it's a juju feature, can you open a bug to make sure that amulet runs an upgrade each time deployer runs?
[19:32] <lazypower> sure, against amulet right?
[19:33] <marcoceppi> lazypower: yes
[19:33] <marcoceppi> lazypower: I think I just need to add the --update-charms flag to deployer
[19:33] <marcoceppi> could you add that as a note, I'll investigate later
[19:35] <lazypower> https://bugs.launchpad.net/amulet/+bug/1269914
[19:35] <_mup_> Bug #1269914: Upgrade Charm on juju-deployer run <Amulet:New> <https://launchpad.net/bugs/1269914>
[19:38] <marcoceppi> john00santos: any luck?
[19:39] <john00santos> it's uploading tools to the bucket
[19:41] <marcoceppi> john00santos: gotchya
[19:41] <john00santos> I found these strange errors http://paste.ubuntu.com/6763823/, but it ignores them and continues uploading
[19:43] <amrilinux> hi guys !!! plz i need to know about juju !!!
[19:43] <amrilinux> whats juju ???
[19:45] <sarnold> amrilinux: juju is a cloud-provider independent way to orchestrate 'services' -- you can with a few commands or clicks deploy a few hundred machines to run the services you need; check out https://juju.ubuntu.com/ for the details
[19:46] <lazypower> hi amrilinux, what would you like to know about juju?
[19:47] <marcoceppi> john00santos: those are somewhat expected errors, it's just juju trying to find tools and imagemetadata
[19:47] <amrilinux> lazypower: hi ! i don't know anything about it !!
[19:48] <lazypower> Well, Juju is the coolest tool in data center orchestration, distilling away the pain of deployment and setup of your infrastructure. My first suggestion is you visit juju.ubuntu.com and get the overview of the application. I'll be more than happy to answer any specific questions you may have about the platform.
[19:49] <lazypower> amrilinux, ^
[19:49] <rick_h_> http://www.youtube.com/watch?v=1yoIdgdqzLk&list=PLc2mSdaCQVXYBiIRrA72T6yCJi8AKU4lb for the video lovers of learning what something is
[19:51] <john00santos> It did not work ... Error: http://paste.ubuntu.com/6763859/
[19:55] <john00santos> re-running http://paste.ubuntu.com/6763888/ ; generating the image metadata resolves that problem, but it returns to the initial error
[19:56] <marcoceppi> john00santos: Okay, have you created image metadata for your cloud already?
[19:57] <marcoceppi> the second paste shows a failure to find a machine to launch
[19:58] <john00santos> I will create and upload it to the bucket. do you think the problem may be related to the fact that I'm using OpenStack with docker?
[20:00] <marcoceppi> john00santos: it shouldn't be, but I don't think we've tested that combination yet, so I can neither confirm nor deny it works with juju
[20:01] <john00santos> I will upload the metadata and send you the output of that step
[20:01] <marcoceppi> john00santos: thanks
[20:06] <john00santos> The id is the one of the image stored in glance, right?
[20:10] <john00santos> marcoceppi, the id is the one of the image stored in glance, right?
[20:11] <marcoceppi> john00santos: yes, you should see it in the drop down in horizon for the create an instance
[20:13] <john00santos> thanks
[21:03] <john00santos> marcoceppi, I did upload the metadata and ran juju status and the error was the same http://paste.ubuntu.com/6764202/
[21:11] <marcoceppi> john00santos: so you uploaded metadata, then ran a bootstrap?
[21:28] <john00santos> yes
[21:29] <john00santos> http://paste.ubuntu.com/6764297/
[21:32] <john00santos> marcoceppi: yes, http://paste.ubuntu.com/6764297/
[23:11] <john00santos> marcoceppi?!