[00:08] <AskUbuntu> How can I get the file from local server on my intranet instead of internet in hook install | http://askubuntu.com/q/474178
[10:12] <bloodearnest> tvansteenburgh: fyi, after the tmpdir change, this is what I get: http://paste.ubuntu.com/7550440/
[10:12] <bloodearnest> tvansteenburgh: what's doubly odd is that I have added local bzr repo to the two charms that didn't have it
[10:42] <mthaddon> can anyone help with a local provider issue? http://paste.ubuntu.com/7550592/
[12:03] <tvansteenburgh> bloodearnest: hey, just starting my day
[12:03]  * tvansteenburgh reads pastes
[12:07] <bloodearnest> tvansteenburgh: no problem, I'm switching to another task for now, but let me know if you need me to test anything
[12:08] <tvansteenburgh> bloodearnest: cool, will do. should be fairly easy to repro locally (i hope)
[12:11] <bloodearnest> tvansteenburgh: fwiw, the 2 "inverse" charms are autogenerated from the charm-under-test's metadata.yaml, in order to only have to deploy 1 extra unit rather than n when testing.
[12:12] <bloodearnest> and to reduce external dependency count
[12:32] <tvansteenburgh> bloodearnest: are you running HEAD from marcoceppi/amulet?
[12:33] <bloodearnest> tvansteenburgh: yes, installed via pip3 install --user git+git://github.com/marcoceppi/amulet.git
[12:33] <tvansteenburgh> ok thanks
[12:33] <marcoceppi> sounds like we have a bug, also, we should do another release
[12:36] <bloodearnest> tvansteenburgh: only diff is that one line patch you said yesterday
[12:36] <tvansteenburgh> okay
[12:38] <bloodearnest> mthaddon: I've seen that a few times in local on trusty. The juju state server stops taking connections. A destroy and rebootstrap usually fixes, but don't know the root cause. I think I saw a bug on it somewhere...
[12:46] <mthaddon> bloodearnest: thx, will try looking for a bug report
[13:15] <tvansteenburgh> bloodearnest: when you have a min, undo the patch from yesterday and try this instead (works for me): http://pastebin.ubuntu.com/7551446/
[13:27] <bloodearnest> tvansteenburgh: ok
[13:34] <bloodearnest> tvansteenburgh: \o/
[13:35] <bloodearnest> tvansteenburgh: I still got an error, but that was in my charm
[13:35] <tvansteenburgh> bloodearnest: suhweet.
[13:36] <tvansteenburgh> bloodearnest: i'll get a PR submitted for this
[13:37] <bloodearnest> tvansteenburgh: it's odd, I did try with an explicit abs path, but maybe that was before trying HEAD
[13:38] <tvansteenburgh> yeah, you may have run into the vcs problem prior to being on head
[14:58] <khuss> marcoceppi: i am getting "error: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No matching node is available.)" while running juju status
[14:58] <khuss> nodes are in the ready state
[15:09] <bloodearnest> marcoceppi: tvansteenburgh: so, next hurdle - I am trying to use amulet to deploy a "fat charm" that requires a build step before deploying
[15:11] <bloodearnest> I tried running the build step before running amulet, but it seems that if amulet finds a bzr repo in the charm, it checks out the latest revision rather than copying the directory with its local changes?
[15:17] <tvansteenburgh> bloodearnest: i think it's deployer doing that, not amulet (not that it solves your problem)
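A way around the checkout behaviour described above is to hand deployer/amulet a copy of the working tree with the VCS metadata stripped, so nothing downstream can attempt a fresh checkout. A minimal sketch of that workaround (the helper name and approach are assumptions for illustration, not part of the amulet or deployer API):

```python
import shutil


def copy_charm_working_tree(charm_dir, dest_dir):
    """Copy a charm's working tree verbatim, local modifications included.

    Skipping the .bzr/.git metadata means any tool that special-cases VCS
    repos (checking out the latest committed revision instead of copying
    the directory) sees a plain directory and copies it as-is.
    Hypothetical workaround, not amulet API.
    """
    shutil.copytree(charm_dir, dest_dir,
                    ignore=shutil.ignore_patterns('.bzr', '.git'))
    return dest_dir
```

You would run the build step, copy the built charm somewhere disposable with this helper, and point the test at the copy.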
[15:18] <hazmat> bloodearnest, deployer will do buildsteps automatically..
[15:18] <bloodearnest> tvansteenburgh: ah, right. But we use deployer to do exactly that in production deploy
[15:18] <hazmat> bloodearnest, if you have a build hook
[15:18] <bloodearnest> hazmat: we have a "make charm-payload" convention
[15:19] <bloodearnest> hazmat: are there docs on build hooks
[15:19] <hazmat> bloodearnest, build: path_to_script in each service
[15:19] <hazmat> er in each charm
[15:20] <hazmat> bloodearnest, not really re docs.. it was a feature add from your team a while back afaicr
[15:20] <hazmat> ported over from pyjuju days
[15:20] <bloodearnest> hazmat: really? I never heard of it
[15:21] <bloodearnest> maybe from is
[15:21] <bloodearnest> IS
[15:21] <hazmat> bloodearnest, tvansteenburgh that does sound like an amulet issue wrt
[15:21] <hazmat> bloodearnest, oh.. yeah. from is
[15:21] <hazmat> sorry mixed streams
[15:21] <bloodearnest> hazmat: makes sense, but we still didn't know about it. It's a really good idea, will start to use it
[15:22] <bloodearnest> hazmat: so, the charm's metadata needs a build: step?
[15:22] <hazmat> bloodearnest, if you skipped the make charm-payload  and had a build script that does the same ref' it from your deployer config you'd be good
[15:22] <hazmat> but amulet's generating deployer config, so have to patch amulet
[15:22] <hazmat> bloodearnest, yup
[15:22] <hazmat> bloodearnest, not exactly..
[15:22] <hazmat> bloodearnest, each service referencing a charm in a deployer config, needs a build: relative_path_in_charm
[15:22] <hazmat> which specifies an exec to call to build the charm
[15:23] <hazmat> ie. its deployer config not charm metadata.
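Putting hazmat's description together, a deployer config using the build hook might look like the fragment below. Stack, service, and charm names are placeholders; per the discussion above, `build:` is a path (relative to the charm) of an executable deployer runs before deploying:

```yaml
# Hypothetical deployer config illustrating the per-service build: key.
my-stack:
  series: trusty
  services:
    my-service:
      charm: local:trusty/my-charm
      build: scripts/build-payload   # exec'd by deployer to build the payload
      num_units: 1
```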
[15:23] <bloodearnest> hazmat: amulet supports loading a deployer config, which it then modifies with the sentries as needed AIUI
[15:23] <bloodearnest> hazmat: right
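Conceptually, the load-then-modify step described above treats the bundle as plain data: parse the deployer config, splice in the sentry services, and hand the result back to deployer. A rough sketch of that idea (the dict layout and `-sentry` naming are assumptions for illustration, not amulet internals):

```python
import copy


def add_sentries(bundle, deployment_name, sentry_charm='cs:trusty/ubuntu'):
    """Return a copy of a deployer-style bundle dict with one hypothetical
    sentry service added per existing service (illustrative only; amulet's
    real sentry machinery differs)."""
    out = copy.deepcopy(bundle)
    services = out[deployment_name]['services']
    for name in list(services):
        services[name + '-sentry'] = {'charm': sentry_charm, 'num_units': 1}
    return out
```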
[15:23] <hazmat> bloodearnest, you can also use db inspect plugin.. to bypass the need for sentries entirely..
[15:25] <bloodearnest> hazmat: do you still need an actually deployed service to relate your charm to when using db plugin? Or can it fake the whole relation?
[15:25]  * bloodearnest would love to be able to fake relations
[15:25] <hazmat> bloodearnest, no you need the actual relations
[15:26] <hazmat> bloodearnest, fake for speed?
[15:26] <bloodearnest> hazmat: yeah
[15:26] <hazmat> bloodearnest, lxc + btrfs + apt-http-proxy is the speediest combo atm
[15:26] <hazmat> er.. s/lxc/local
[15:26] <bloodearnest> hazmat: yep, that's what I'm running
[15:27] <tvansteenburgh> bloodearnest: fwiw if you want to use Deployment.load() in amulet right now, you'll have to use https://github.com/tvansteenburgh/amulet/tree/lp-1324272
[15:28] <tvansteenburgh> fixes queued up here: https://github.com/marcoceppi/amulet/pull/36
[15:28] <bloodearnest> tvansteenburgh: thanks, good to know
[15:29] <hazmat> tvansteenburgh, so what's the value add of amulet?
[15:29] <bloodearnest> hazmat: but a bootstrap/deploy/settle/test/destroy-environment per test adds up
[15:29] <hazmat> bloodearnest, oh.. that's test framework idiocy
[15:29] <hazmat> bloodearnest, i've patched to just do juju-deployer -TW
[15:29] <hazmat> which saves the bootstrap node
[15:29] <bloodearnest> hazmat: nice
[15:32] <bloodearnest> hazmat: which reminds me - juju-deployer defaults to using the default juju api port, but if you have multiple local environments with non-standard api server ports, is there a way to tell juju this?
[15:32] <bloodearnest> s/tell juju/tell juju-deployer/
[15:32] <hazmat> bloodearnest, which version of deployer you using?
[15:33] <hazmat> bloodearnest, latest versions of deployer detect all of this correctly..
[15:33]  * hazmat wonders if it did before anyways.. 
[15:33] <hazmat> it was parsing the jenv file for the exact state server address and port
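The jenv lookup hazmat mentions can be pictured as pulling host:port pairs out of the environment's .jenv file. A toy sketch of that (the `state-servers` field name and layout are assumptions based on juju 1.x jenv files; deployer itself parses the YAML properly rather than scanning lines):

```python
import re


def state_server_endpoints(jenv_text):
    """Extract (host, port) tuples from the state-servers block of a
    .jenv-style file. Illustrative only: real .jenv files are YAML and
    should be read with a YAML parser."""
    endpoints = []
    in_block = False
    for line in jenv_text.splitlines():
        if line.startswith('state-servers:'):
            in_block = True
            continue
        if in_block:
            # List items look like "- host:port"; last colon splits the port.
            m = re.match(r'-\s*(.+):(\d+)\s*$', line)
            if m:
                endpoints.append((m.group(1), int(m.group(2))))
            else:
                in_block = False  # left the list block
    return endpoints
```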
[15:33] <bloodearnest> 0.3.6, from stable ppa
[15:34] <bloodearnest> on trusty
[15:34] <hazmat> 0.3.8 is latest stable in pypi.. i don't generally push to ppas. there's a daily recipe though
[15:34] <bloodearnest> will try from pypi
[15:37] <bloodearnest> hazmat: installing 0.3.8 with pip install --user -U juju-deployer, I get:
[15:37] <bloodearnest> ImportError: cannot import name ErrorExit
[15:38] <hazmat> interesting
[15:38] <hazmat> bloodearnest, full traceback pastebin?
[15:39] <hazmat> bloodearnest, just did it in a virtualenv.. works fine
[15:40] <bloodearnest> hazmat: http://paste.ubuntu.com/7552241/
[15:41] <hazmat> bloodearnest, i get http://pastebin.ubuntu.com/7552245/
[15:42] <hazmat> bloodearnest, likely your package versions are interfering
[15:42] <hazmat> bloodearnest, that's why i'd recommend a virtualenv
[15:42] <bloodearnest> yeah, just tried in venv, got a different error, no bzrlib
[15:43] <bloodearnest> this is fresh trusty vm, so could be missing some base stuff
[15:43] <hazmat> bloodearnest, re bzrlib.. you have to install bzr separately due to new pip insanity..
[15:44] <hazmat> bloodearnest, pip install bzr --allow-external bzr --allow-unverified bzr
[15:44] <bloodearnest> yeah
[15:44] <hazmat> there's an extant bug for that one against deployer.. its a bit lame though.
[15:44] <hazmat> i'm hoping this will fix the pip insanity.. http://lwn.net/SubscriberLink/599793/a595880fa4546f4c/
[15:45] <hazmat> either that or we get bzr uploaded to pypi
[15:48] <bloodearnest> hazmat: et voila. Will use this env for further testing, thanks
[15:59] <hazmat> bloodearnest, fwiw re test without reboot lp:~hazmat/charm-tools/dont-waste-my-time
[16:16] <rbasak> mgz, sinzui: did you say that bug 1320891 shouldn't be able to happen again? Or is it exactly the same as the external dependency change/availability issue?
[16:16] <_mup_> Bug #1320891: make-release-tarball.bash fails with godeps failure <juju-release-tools:Fix Released by sinzui> <https://launchpad.net/bugs/1320891>
[16:17] <sinzui> rbasak, it is less likely to happen
[16:17] <sinzui> rbasak, since we go over the network to get deps, a failure can always happen, and it will fail because there are unmet deps
[16:24] <rbasak> sinzui: OK, so let's say that in the future we eliminate the network failure possibility by caching all dependencies locally. In this case, is it possible that something like bug 1320891 could recur, or is that fixed now?
[16:24] <_mup_> Bug #1320891: make-release-tarball.bash fails with godeps failure <juju-release-tools:Fix Released by sinzui> <https://launchpad.net/bugs/1320891>
[16:25] <sinzui> rbasak, locally for whom?
[16:25] <rbasak> sinzui: whoever is running make-release-tarball.bash
[16:26] <sinzui> rbasak, understood. there are several competing projects trying to solve go's love of tip
[16:27] <sinzui> rbasak, the issue remains that if you have never run the script before, you don't have a cache. As I/jenkins are origins of source packages, do you mean that we should be using a cache?
[16:28] <rbasak> sinzui: I think so, yes. We should (in the long term) maintain a cache somehow, whether that is via a vendor branch or something else.
[16:28] <sinzui> rbasak, yeah, go won't accept that
[16:28] <rbasak> If I then wanted to run make-release-tarball.bash myself, I'd be able to (ideally) get access to that same cache.
[16:28] <mgz> we could just generate tarballs on jenkins...
[16:28] <sinzui> though I have used proxies and falsified certs to make that happen on closed networks
[16:29] <rbasak> This is basically what we have already in the Debian world of source packages and build dependencies.
[16:29] <rbasak> (but that doesn't apply to go here because of how it is different)
[16:29]  * sinzui is not saying he is proud of that, but putting together a https file server claiming to be someone else to make juju work is awesome
[16:29] <rbasak> :)
[16:42] <mgz> sinzui: what do I still need to do to plug in all the git builder bits...
[16:43] <sinzui> A script that calls or sources the make-release-tarball script, then calls the run-unit-tests script
[16:44] <mgz> sinzui: I can just do that in place in the job, right?
[16:44] <mgz> and I set up the gui tool on the cron
[16:44] <mgz> sinzui: I'll get to it.
[16:44] <sinzui> you can, to prove out what you want to do. I think a proper script can be written later (version controlled)
[17:03] <jose> arosales, mbruzek: ping
[17:03] <mbruzek> jose, we are just finishing up our other meeting.
[17:03] <jose> np
[17:05] <arosales> jose: sorry on my way
[17:33] <james_w`> hi
[17:33] <james_w`> I have a local juju env, and one of the units has started failing
[17:33] <james_w`> it's failing to talk with 127.0.0.1:37017
[17:33] <james_w`> what is supposed to be running there?
[17:34] <sebas5384> james_w`: mongodb i think...
[17:41] <sebas5384> jcastro: do you know what else than change the storage port is needed to bootstrap more than one local env?
[17:41] <sebas5384> can't find anything more about that
[17:41] <jcastro> I believe we don't specifically support multiple local environments
[17:42] <jcastro> james_w`, do all subsequent units also fail to talk? or just the one?
[17:43] <sebas5384> jcastro: oh... i see..
[17:44] <jcastro> sebas5384, I know some people have gotten it to work
[17:44] <jcastro> sebas5384, but it seems to be very "here be dragons"
[17:45] <jcastro> sebas5384, I think splitting it into say, 2 VMs with one LXC environment in each would be cleaner
[17:45] <sebas5384> jcastro: hehe i imagine it's not an easy task
[17:46] <sebas5384> jcastro: tell me more about that, so you're talking about having the juju client in one vm and the lxc in the other vm?
[17:46] <sebas5384> that's possible???
[17:47] <sebas5384> because that would be really awesome, like having a juju-remote thingy hehe
[17:47] <jcastro> no I mean 2 vm's, and inside you have the LXC containers
[17:48] <jcastro> but they'd be 2 different environments
[17:48] <jcastro> so like, take 2 vagrant boxes ... with the LXC containers in each one
[17:48] <sebas5384> so a vm for each juju-local?
[17:48] <sebas5384> ok
[17:54] <jcastro> yeah
[17:56] <sebas5384> jcastro: the docs have something like "Override the shared storage port if you have multiple local providers, or if the default port is used by another program."
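The override quoted from the docs translates to per-environment port settings in environments.yaml, roughly like the fragment below. Environment names are placeholders, and the option names and defaults are based on the juju 1.x local-provider docs being quoted, so double-check them against your version:

```yaml
# Hypothetical environments.yaml giving two local environments
# non-conflicting ports.
environments:
  local-one:
    type: local
    api-port: 17070      # 1.x defaults shown here
    state-port: 37017
    storage-port: 8040
  local-two:
    type: local
    api-port: 17071
    state-port: 37018
    storage-port: 8041
```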
[18:03] <khuss> juju status shows error: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT
[18:04] <khuss> the nodes are in ready state
[18:04] <jcastro> sebas5384, yeah that conflicts with what thumper told me as to what they support.
[18:05] <sebas5384> jcastro: http://askubuntu.com/questions/470917/can-i-have-multiple-local-juju-environments
[18:05] <jcastro> sebas5384, can you ask on the list what the status of support is on multiple local providers?
[18:05] <sebas5384> i'm trying that approach
[18:05] <jcastro> ok
[18:05] <sebas5384> jcastro: of course :)
[18:05] <jcastro> also, we're having the Ubuntu Online Summit in 2 weeks
[18:05] <sebas5384> uhuu
[18:05] <sebas5384> where?
[18:05] <jcastro> if you want to join in on the hangouts, we're going to do like a bunch of classes, etc.
[18:06] <jcastro> it'll be online, on g+ and here
[18:06] <sebas5384> great!! jcastro i'll be online then
[18:06] <sebas5384> jcastro: if there's any thing i can help :)
[18:06] <jcastro> I'm sending an announcement to the list
[18:07] <jcastro> sebas5384, yeah people can submit sessions for whatever they want, I'll put the instructions in the email
[18:07] <sebas5384> some folks here are now using the drupal charm with a juju-local :)
[18:07] <sebas5384> great! i will be waiting the info
[18:27] <khuss> has anybody seen this error message while trying to deploy a service?
[18:27] <khuss> ERROR 2014-05-30 13:26:36,274 maasserver ################################ Exception: No matching node is available. ################################
                ERROR 2014-05-30 13:26:36,274 maasserver Traceback (most recent call last):
                  File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 112, in get_response
                    response = wrapped_callback(request, *callback_args, **callback_kwargs)
                  File "/usr/lib/python2.7/dist-packages/dj
[20:08] <james_w`> jcastro: it appears to have spread to all units
[20:08] <james_w`> (sorry, had to dash to the dentist)
[20:09] <james_w`> I thought mongo would be run on the bootstrap machine, rather than each individual unit
[21:21] <arosales> Juju is a finalist at IBM's App ThrowDown
[21:21] <arosales> http://ibmappthrowdown.tumblr.com/
[21:22] <arosales> feel free to browse the selections and cast your vote
[22:41] <sebas538_> more than one juju-local running in the same machine, hell yeah!
[22:42]  * sebas5384 is happy
[23:12] <designated> does anyone know if juju charms have a problem accepting a bonded, vlan-tagged interface as opposed to something like eth0? I created a 2x10GbE bond with 802.1q tagging; will referencing bond0.10, for example, present a problem for charms?
[23:59] <sebas538_> hey!