[05:37] <stub> tvansteenburgh, marcoceppi : I broke your Jenkins
[06:04] <weblife> Thought someone here may appreciate this: https://mastersproject.info/blog/54de76fb03b198870a6daa12
[06:12] <trave> anyone around here connect juju into digitalocean, using kapilt/juju-digitalocean ? i feel that i am close, just having trouble with it timing out.
[07:51] <trave> after setting up a free-tier account on AWS, things work like a charm. thanks anyway :]
[08:55] <lazyPower> trave: i have successfully used/am-using digital ocean w/ the plugin
[08:56] <lazyPower> if you're still on that path. the AWS account provider is however one of the more stable/tried/tested providers in the toolbox
[09:09] <trave> lazyPower, yea, i was likely pretty close with digitalocean, just wasnt picking up my ssh_key identity .pub name or something.. but this AWS path seems pretty stable. I take it the m1.small is about $18/mo.? Just to get my feet wet, how many charms can I throw at one of those instances? I want to toy with IO.js and Redis.
[09:09] <lazyPower> trave: give or take - Digital Ocean is fairly straightforward once you get the flow - and i've found that often i have to manually add the machines myself since the provisioning of VMs takes longer than their highly touted 60 seconds
[09:10] <lazyPower> juju add-machine ssh:root@host
[09:10] <lazyPower> then its in the pool of available machines - however - to answer your aws question - 2 machines (1 for bootstrap, 1 for your apps/containers)
[09:10] <trave> maybe thats all im missing there, is it doing a timeout when im bootstrapping the first instance... now that ive spun up and destroyed a few attempts on AWS, i think i might go back to trying D.O. again
[09:11] <lazyPower> i'll be around if you want help troubleshooting
[09:11] <trave> cool, thanks man :]
[09:11] <lazyPower> their provisioner gets wonky late at night
[09:11] <lazyPower> maintenance windows and all that
[09:12] <trave> yea, i might catch some zzzs soon, still got work in the a.m. but feel accomplished having waded my way this far, so far.
[09:12] <lazyPower> http://blog.dasroot.net/juju-digital-ocean-awesome.html
[09:12] <lazyPower> i did a fairly decent writeup of the flow to get moving w/ juju on DO
[09:13] <lazyPower> but that also was submitted for the official docs - so not much to do here other than boost my analytics :)
[09:13] <trave> Oh yes, I did watch that, which is what got me as far as I did. :D thanks for putting that together, maybe i'll absorb more the second time watching it
[09:14] <lazyPower> trave: alright - i'll be off on another workstation so ping me if you need me for anything.
[09:28] <trave> lazyPower: probably a dumb question, but heres where i get stuck when spinning up a new digitalocean-0 bootstrap instance... it asks for the root password, even though ive specified an ENV_VAR for DO_SSH_KEY, (do I need to add .pub to the end of that value?) Since I dont yet know what that instances generated root password is, do I let it hang there for a bit while I use the DO UI to go reset that to something else which emails it to me?
[09:28] <lazyPower> ah, no - you probably need to add your SSH key to DigitalOcean's admin web ui so it loads that into the instance(s)
[09:29] <trave> Its listed in there, strange.
[09:29] <lazyPower> http://i.imgur.com/dBakNhu.png
[09:30] <lazyPower> what version of the plugin do you have installed with pip? 0.5.1 assuming?
[09:30] <trave> yea, ive got it in there, and ive used it successfully before, scp'ing it to my other instance by hand.
[09:30] <trave> checking..
[09:32] <trave> juju-docean==0.5.1
[09:32] <lazyPower> juju version 1.21.1?
[09:32] <trave> 1.18.4-unknown-amd64
[09:32] <lazyPower> whoa, thats crusty
[09:32] <trave> ;]
[09:32] <lazyPower> sudo add-apt-repository ppa:juju/stable
[09:33] <trave> im on osx
[09:33] <lazyPower> ah
[09:33] <lazyPower> we're shipping 1.18 on mac still? wat
[09:33]  * lazyPower makes note to follow up on that later today
[09:34] <lazyPower> it should still work regardless - the manual provider code hasn't changed.
[09:34] <lazyPower> let me check the bug tracker on hazmat's repo real fast, 1 sec
[09:34] <trave> i'll try the --devel flag
[09:34] <lazyPower> ehh
[09:34] <lazyPower> lets stick with 1.18 for the time being until i can verify
[09:34] <trave> k :]
[09:35] <lazyPower> ah
[09:35] <lazyPower> is the key you're using to authenticate against docean ~/.ssh/id_rsa?
[09:35] <trave> i named it something else
[09:35] <trave> i can make an id_rsa
[09:35] <lazyPower> i bet thats why its having a derp moment
[09:35] <lazyPower> https://github.com/kapilt/juju-digitalocean/issues/29
[09:36] <trave> aha, you da man.
[09:36] <trave> k, making new key, trying again.
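Per issue #29 linked above, the plugin only looks for the default key path, so the workaround is a default-named key. A minimal sketch (guarded so it won't overwrite an existing key; the path is the assumption):

```shell
# juju-docean (issue #29) only finds ~/.ssh/id_rsa, so generate one there
# if it doesn't exist yet. KEYFILE is overridable for experimenting.
KEYFILE="${KEYFILE:-$HOME/.ssh/id_rsa}"
if [ ! -f "$KEYFILE" ]; then
    mkdir -p "$(dirname "$KEYFILE")"
    ssh-keygen -t rsa -b 2048 -N "" -f "$KEYFILE"
fi
ls "$KEYFILE" "$KEYFILE.pub"
```

The matching .pub still has to be registered in DigitalOcean's web UI so new droplets boot with it.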
[09:47] <trave> well, it did something different at least: juju_docean.exceptions.ProviderError: Failed to get running instance digitalocean-0 event: {u'status': u'OK', u'event': {u'droplet_id': 4177198, u'percentage': u'90', u'event_type_id': 1, u'id': 43954384, u'action_status': None}}
[09:47] <trave> ERROR exit status 1
[09:47] <lazyPower> it timed out - thats in relation to this: https://github.com/kapilt/juju-digitalocean/issues/27
[09:48] <lazyPower> it gets pokey now and again unfortunately
[09:48] <trave> k, trying again :]
[10:01] <lazyPower> hit paydirt yet trave?
[10:02] <trave> no, just what keeps seeming like timeouts
[10:02] <trave> you think the --devel build is in an unstable state?
[10:06] <lazyPower> i dont know
[10:06] <lazyPower> its worth a go
[10:07] <lazyPower> are you tracking the vm launch in a browser while you bootstrap?
[10:07] <trave> or, is 512MB memory/20GB disk the smallest tier that DO does, would spinning up a smaller instance be faster at all to be under that 3min window?
[10:08] <trave> yea, i can see it spinning up a digitalocean-0 new instance, the CLI throws the error before its done provisioning that box
[10:08] <trave> my next feat will be to try out the --devel build, i bet it will work like a charm
[10:09] <lazyPower> eh
[10:09] <trave> ;]
[10:09] <lazyPower> its not a juju problem :( its a DO problem atm
[10:09] <trave> yea
[10:09] <lazyPower> the only thing i can suggest is either keep trying until the API gets un-pokey, or do a pure manual bootstrap
[10:09] <lazyPower> which is slightly more inconvenient
[10:09] <lazyPower> but it doesn't have any timing issues, you spin up the machine, specify the bootstrap host in your environments.yaml and you're off and running
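The manual flow lazyPower describes boils down to an environments.yaml stanza along these lines (the environment name and host address are placeholders for whatever droplet you spun up):

```yaml
environments:
  do-manual:
    type: manual
    bootstrap-host: 203.0.113.10   # public IP of the droplet
    bootstrap-user: root           # DigitalOcean images log in as root
```

Then `juju bootstrap -e do-manual` runs against that machine directly, with no provisioning timeout in play.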
[10:10] <trave> gotcha. yea, i'll try through the day tomorrow, maybe their performance is up during the peak hours
[10:10] <lazyPower> should be better then yeah.
[10:10] <lazyPower> its that or try a different DC
[10:10] <trave> yep, several more avenues to try, this is good for me to broaden my skillset and learn
[10:11] <trave> thank you so much for the help, youve been great
[10:11] <lazyPower> I'm sorry it hasn't been as smooth as cheddar tonight however
[10:11] <trave> i'll be around :]
[10:11] <lazyPower> if only everything were as reliable :)
[10:11] <trave> :D
[10:35] <trave> lazyPower: hey, a simple: brew uninstall juju; brew install juju; now gives me version: 1.21.1-yosemite-amd64
[10:35] <lazyPower> \o/
[10:36] <trave> trying yet again ;]
[10:40] <trave> woot! bootstrap complete.
[10:40] <trave> is there any reason the examples show 2g memory for the bootstrap machine? this time I just used a 512m box, that should be fine, right?
[10:54] <lazyPower> yeah thats fine
[10:55] <lazyPower> the example was with constraints - and thats why you see the higher mem limit
[10:57] <trave> again, thanks. :]
[12:48] <lazyPower> hazmat: thanks for the reply, thats basically what i had figured. Juju isn't doing anything special outside of using the agent and falling back to that specified key
[12:49] <hazmat> lazyPower: its very badly documented in juju
[12:49] <hazmat>  search authorized-keys over here https://juju.ubuntu.com/docs/config-general.html
[12:50] <hazmat> and afaics thats the only doc for it.
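For anyone following along, the settings in question live per-environment in environments.yaml; a sketch (values are examples only):

```yaml
environments:
  myenv:
    type: ec2
    # either paste one or more public keys inline...
    authorized-keys: ssh-rsa AAAAB3Nza... user@host
    # ...or point at a specific .pub file instead
    authorized-keys-path: ~/.ssh/id_rsa.pub
```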
[12:51] <lazyPower> https://github.com/juju/docs/issues/268
[12:51] <lazyPower> left a TODO to follow up on. cheers
[12:52] <hazmat> lazyPower: thanks
[12:53] <hazmat> lazyPower: for the other issue i have a timeout option sitting in my working copy...
[12:53]  * lazyPower fist pumps
[12:53] <lazyPower> so we can see that branch land soon, very nice
[12:54] <lazyPower> i was considering pulling the repo this weekend and working on the v2api support if you haven't already looked into it and getting something shot over for review
[12:54] <lazyPower> i got sidetracked with my tinfoil hat, i must admit
[12:58] <lazyPower> hazmat: it may be prudent to close #29 as thats a juju issue and not a plugin issue.
[12:59] <lazyPower> unless you plan on supporting upload ssh key to DO and setting config to leverage that key
[13:48] <tvansteenburgh> stub: i killed that jenkins job
[13:51] <stub> tvansteenburgh: Ta.
[13:53] <stub> ERROR failed to bootstrap environment: cannot start bootstrap instance: no OS images found for location "West US", series "trusty", architectures ["amd64" "arm64" "armhf" "i386" "ppc64el"] (and endpoint: "https://management.core.windows.net/")
[13:53] <stub> I think the next one is doomed too
[13:54] <stub> Still, am interested in how things go with the other providers. I had some code paths not running under lxc that failed with the other providers.
[13:56] <tvansteenburgh> stub you want me to kill this one too?
[13:56] <stub> tvansteenburgh: Only if you think a retry will fix that error. 'no OS images found for location' doesn't fill me with confidence of it working today.
[13:58] <tvansteenburgh> stub: i'll let it run i guess. azure is the environment most often broken. it may work later though
[13:59] <stub> tvansteenburgh: How many environments are there btw? About 10?
[13:59] <tvansteenburgh> 5
[13:59] <tvansteenburgh> lxc, hp, aws, azure, joyent
[14:00] <stub> I might have told bundletester to run the unittests twice then.
[14:01] <tvansteenburgh> stub: well let me know if you need stuff killed or whatever, happy to help
[14:01] <stub> ta. This run looks happy for the next hour in any case.
[14:02] <tvansteenburgh> stub: also next time you want to start one, let me know, i want to try it on the new jenkins setup i've been working on
[14:02] <stub> ok
[14:13] <hazmat> lazyPower: definitely not uploading people's public keys.. too much responsibility.. i'll close it out.
[15:12] <jcastro> jose, what day do you get to SCALE?
[15:12] <jose> jcastro: tomorrow 3pm local time
[15:12] <jose> I'm getting ready to fly out later today
[15:12] <jcastro> oh excellent, me and marcoceppi as well, we can just sit down and bash out the details of our talk
[15:12] <jcastro> which isn't until saturday anyway
[15:13] <jose> yeah, sounds good to me
[16:00] <lazyPower> marcoceppi: mbruzek, tvansteenburgh - whats our stance on incoming tests that only stand up the service? Enough to satisfy the requirement or do we gently guide them to write an actual functional test?  https://code.launchpad.net/~jose/charms/precise/quassel-core/add-tests/+merge/246001 <- is the test in question
[16:01] <lazyPower> I'm inclined to say the latter, but its not just my opinion here.
[16:01] <mbruzek> gently guide
[16:02] <mbruzek> lazyPower: even if he did a ps aux | grep quassel that would make me happy
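The bar mbruzek is setting is low; a sketch of that kind of process-liveness check (the process name is parameterized - for this charm it would presumably be quasselcore; the default of `sh` is only so the sketch runs anywhere):

```shell
#!/bin/sh
# Fail the test unless the service's process is actually running -
# a step up from "deploy finished without error".
proc="${1:-sh}"
if ps aux | grep -v grep | grep -q "$proc"; then
    echo "OK: $proc is running"
else
    echo "FAIL: $proc not found" >&2
    exit 1
fi
```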
[16:02] <lazyPower> yeah, we have enough weak tests i feel
[16:02] <lazyPower> +1
[16:03] <jose> +1 to that, nice tests are easy enough to make
[16:03] <jose> (I made those in a hurry since Paul wanted the charm approved and had no tests, no results yet)
[16:04] <mbruzek> ack
[16:04] <lazyPower> I promise this wasn't a poke @ your effort(s) jose :)
[16:04] <jose> question, though
[16:04] <jose> should 'simple' tests be renamed to 10-deploy, or should I keep them as 99-autogen?
[16:05] <lazyPower> if you're not doing anything to actually validate, the 99-autogen gives me a clear indicator of what's been done
[16:07] <jose> ok
[16:29] <marcoceppi> jose: yeah, leave them as 99-autogen, we'll be sniffing that out in the future to find charms that need better tests and it makes it so we can track unmaintained charms
[16:29] <jose> ok
[17:28] <stub> tvansteenburgh: Do you recall what constraints the test vms have? I can install the charm on our local OpenStack, but with little RAM the install hook fails due to the OOM killer.
[17:29] <stub> tvansteenburgh: I don't think I'm going to get anything more useful out of this build, so you can kill it. I might have hung it again too sorry.
[17:29] <stub> tvansteenburgh: Happy to have it kicked off in your new Jenkins if you want to give it a workout.
[17:30] <tvansteenburgh> stub: will do
[17:30] <tvansteenburgh> stub: in the new setup we bootstrap with mem=2G by default
[17:31] <stub> I'll test that here. I know it fails with 1G, and installs with 8G
[17:31] <stub> I may be able to tune the requirements down at my end if it is needed.
[17:32] <tvansteenburgh> stub: http://juju-ci.vapour.ws:8080/computer/charm-bundle-slave/
[17:32] <stokachu> jcastro: lazyPower do you guys know if juju http proxy settings are exposed in charms?
[17:33] <stokachu> for things like wget to make use of
[17:33] <lazyPower> I'm not sure tbh
[17:33] <jcastro> yeah, that's a good question
[17:33] <tvansteenburgh> stub: new tests running, the main job kicks off one job per substrate
[17:33] <stub> tvansteenburgh: Oh, cool. No more interleaving of output.
[17:33] <lazyPower> tvansteenburgh: is my docker test still in the queue or did it get wiped?
[17:33] <stub> lazyPower: Its running now
[17:33]  * lazyPower does a dance
[17:33] <lazyPower> ta
[17:34] <stub> lazyPower: oh, it *was* running
[17:34] <tvansteenburgh> stub: yeah, much better for several reasons. will be the default soonish
[17:34] <lazyPower> stokachu: i'm reasonably certain that if its set as an env variable it will be, however it would be best to confirm with someone like wwitzel3
[17:34] <tvansteenburgh> lazyPower: pfft i thought that was stub's job and killed it
[17:35] <lazyPower> tvansteenburgh: >.>
[17:35] <jcastro> http://askubuntu.com/questions/430865/juju-http-proxy-and-no-proxy-settings
[17:35] <tvansteenburgh> lazyPower:  what was the test url
[17:35] <jcastro> stokachu, ^^^
[17:35] <jcastro> "The proxy options are exported in all hook execution contexts, and also available in the shell through "juju ssh" or "juju run"."
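In other words: set the proxy keys once in environments.yaml and every hook (plus `juju ssh` / `juju run`) sees the usual lowercase environment variables, which wget and friends honour automatically. A sketch (the environment type and proxy address are placeholders):

```yaml
environments:
  myenv:
    type: maas
    http-proxy: http://10.0.3.1:3128
    https-proxy: http://10.0.3.1:3128
    no-proxy: localhost,127.0.0.1
```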
[17:35] <lazyPower> tvansteenburgh: i didnt get one, i kicked it off from the revq
[17:36] <tvansteenburgh> lazyPower: if it's any consolation, it had just started
[17:36] <stokachu> jcastro: ah nice find!
[17:36] <tvansteenburgh> lazyPower: no i mean the url of the thing being tested
[17:36] <lazyPower> https://bugs.launchpad.net/charms/+bug/1413775
[17:36] <mup> Bug #1413775: New Charm - Docker <Juju Charms Collection:New> <https://launchpad.net/bugs/1413775>
[17:38] <tvansteenburgh> lazyPower: is this it? lp:~lazypower/charms/trusty/docker/trunk
[17:38] <lazyPower> yep
[17:38] <stub> manual deploy to openstack vm with 2G seems ok, no oom
[17:39] <tvansteenburgh> lazyPower: queued up behind stub's jobs in the new queue
[17:39] <lazyPower> ta
[17:43] <stub> @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
[17:45] <stub> tvansteenburgh: You may need to have the authorized_keys scrubbed between jobs, like http://paste.ubuntu.com/10276641/
[17:45] <stub> or just nuke both of them entirely I guess
[17:46] <stub> oh, here it goes...
[17:46]  * stub watches his job on hp
[17:51] <tvansteenburgh> stub: yeah good point
[17:52] <stub> DEBUG:runner:2015-02-17 17:37:21 Error getting env api endpoints, env bootstrapped?
[17:52] <stub> The lxc job bombed
[17:53]  * tvansteenburgh looks
[18:02] <stub> grr... my install hook failed on joyent, but I can't reproduce locally or on our openstack :-/
[18:03] <tvansteenburgh> stub: no idea what happened on lxc, it's running again here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/8/console
[18:04] <stub> ta
[18:05] <tvansteenburgh> stub: we're capturing all-machines.log at the end of the tests now, that might yield a clue re joyent
[18:05] <stub> tvansteenburgh: It should, ya.
[18:06] <stub> I think Cassandra is crashing and we are waiting for the install hook to timeout and fail, rather than slow bootstrapping.
[18:07] <sh00k> I have what seems like a pretty dumb question that I'm hoping someone in here might be able to help me with. I already have an environment configured and a machine bootstrapped in EC2. I am now on a different client machine and want to connect to the existing environment/bootstrap. I've already tried migrating the entire 'juju' folder to the other client machine, but I am getting an error back when trying to run 'juju status' of: WARNING discar
[18:07] <sh00k> 70/environment/d4c68949-2e62-4604-8aa2-xxxxxxxxxxxxx/api"
[18:07] <sh00k> ERROR Unable to connect to environment "aws".
[18:07] <sh00k> Please check your credentials or use 'juju bootstrap' to create a new environment.
[18:07] <sh00k> Error details:
[18:07] <sh00k> no instances found" any thoughts?
[18:09] <lazyPower> sh00k: did your $JUJU_HOME/environments/aws.jenv make it during the copy?
[18:09] <lazyPower> if it did, are you able to ping the ip of the state server?
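Those two checks sketched as a shell snippet (environment name "aws" assumed, as above):

```shell
#!/bin/sh
# 1) did the .jenv survive the copy? 2) which API address should be reachable?
JUJU_HOME="${JUJU_HOME:-$HOME/.juju}"
jenv="$JUJU_HOME/environments/aws.jenv"
if [ -f "$jenv" ]; then
    echo "found $jenv"
    grep -A 2 'state-servers' "$jenv"   # ping/nc this host:port next
else
    echo "missing $jenv - recopy it from the original client"
fi
```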
[18:09] <sh00k> Yup, i double checked that - as well as the file permissions
[18:09] <sh00k> stand by let me try that
[18:11] <sh00k> Damn, so i'm guessing that's it. Ping requests timed out :/
[18:11] <lazyPower> that would be a key cause, if your network reachability isn't there.
[18:11] <lazyPower> its bit me a couple times too - dont feel like the lone stranger sh00k :)
[18:12] <sh00k> hahah thanks a lot
[18:16] <marcoceppi> tvansteenburgh: what do I need to change in the review queue for the new testing stuff?
[18:17] <tvansteenburgh> marcoceppi: new jenkins endpoint, and you'll get one callback per substrate now
[18:18] <marcoceppi> Docs for this?
[18:18] <tvansteenburgh> marcoceppi: HAHAHAHAH
[18:19] <marcoceppi> ROFL
[18:19] <tvansteenburgh> marcoceppi: i figured we could just bang it out at the sprint
[18:19] <marcoceppi> tvansteenburgh: I plan to make a bunch of changes this week to revq
[18:19] <marcoceppi> That works too
[18:19] <tvansteenburgh> marcoceppi: oh ok, i'll type something up
[18:20] <tvansteenburgh> sooner would be better i guess
[18:20] <tvansteenburgh> i'll document it
[18:20] <marcoceppi> Either way is fine. Does the old style work?
[18:20] <tvansteenburgh> the old one still works for now
[18:20] <marcoceppi> Rough docs are fine. Even a bunch of cmd outputs
[18:21] <tvansteenburgh> i haven't changed it, just built the new thing on the side
[18:21] <tvansteenburgh> ack re docs
[18:21] <marcoceppi> Cool
[18:21] <tvansteenburgh> lunch time
[18:42] <lazyPower> marcoceppi: the endpoint/json stuff you sent me will largely remain the same yes?
[18:44] <marcoceppi> lazyPower: yes, if it changes I'll let you know
[18:45] <marcoceppi> Will also be exposing more data
[18:45] <lazyPower> nice
[19:03] <lamont> will juju destroy-env remove volumes that may have been attached to instances?
[19:03] <lamont> or simply detach?
[19:04] <lazyPower> simply detach
[19:04] <lazyPower> lamont: you're referring to extraneous volumes, such as those that are provided by block-storage-broker correct?
[19:04] <lazyPower> or additional block devices allocated to the node, not just the baseline disk that comes with the VM?
[19:05] <lamont> lazyPower: correct
[19:05] <lamont> specifically one that our scripts euca-attach-volume'd
[19:06] <lazyPower> yeah, those are not deleted on machine termination
[19:06]  * lamont does his destroy-env then
[19:06] <lazyPower> lamont: take a snapshot to be safe :)
[19:06] <lamont> now you tell me.
[19:06] <lamont> volume still there, marked available
[19:06] <lamont> woo typing!
[19:07]  * lazyPower grins
[19:07] <lazyPower> nothing like a dose of doubt right after you send a command no?
[19:07] <lazyPower> sorry about that - just happened to think its always better to err on the side of caution
[19:08] <lamont> yep
[19:08] <lamont> you were about 8 seconds after I said 'y'
[19:09] <lazyPower> i wont steer you wrong... on purpose.
[19:09] <lamont> hehe
[19:09] <lamont> alls well that ends well
[19:09] <lazyPower> however with storage landing in core soon - you might want to be a bit more persnickety about snapshotting before destroying things.
[19:10] <lazyPower> i'm not 100% sure how it will work, i'd need to fish up some planning docs
[20:45] <lazyPower> tvansteenburgh: did you kick off that test manually?
[20:45] <lazyPower> i ask because the bug hasn't been updated, and i assume that is a byproduct of how the test was run
[20:46] <tvansteenburgh> lazyPower: correct
[20:49] <tvansteenburgh> lazyPower: i expected to see the results here though, not sure why they're not http://reports.vapour.ws/charm-summary/docker
[20:49] <tvansteenburgh> jog might know
[20:49]  * jog looks
[21:10] <travnewmatic> hello all, i know there's a separate maas channel but i'm not getting any response on there
[21:10] <travnewmatic> can anyone help me with enlisting a node?
[21:11] <travnewmatic> the node boots from pxe, and ends up at an ubuntu login prompt, but it doesnt appear as a node in my maas control panel on either the region controller or cluster controller
[21:12] <travnewmatic> i can ping the pxe'd node
[21:35] <tvansteenburgh> lazyPower: http://reports.vapour.ws/charm-summary/docker
[21:51] <marcoceppi> tvansteenburgh: that's awesome dude
[21:53] <tvansteenburgh> marcoceppi: what's awesome?
[21:53] <marcoceppi> tvansteenburgh: that URL
[21:53] <tvansteenburgh> ah, yeah, thank jog :)
[21:54] <marcoceppi> jog: sweet url
[21:56] <marcoceppi> awesomeee http://reports.vapour.ws/charm-summary/mysql
[21:56] <tvansteenburgh> marcoceppi: yeah, mysql has a long history of passing proof and lint!
[21:56] <marcoceppi> \o/
[21:57] <tvansteenburgh> lol
[22:44] <mwak> o/