[05:37] tvansteenburgh, marcoceppi : I broke your Jenkins
[06:04] Thought someone here may appreciate this: https://mastersproject.info/blog/54de76fb03b198870a6daa12
[06:12] anyone around here connect juju into digitalocean, using kapilt/juju-digitalocean ? i feel that i am close, just having trouble with it timing out.
[07:51] after setting up a free-tier account on AWS, things work like a charm. thanks anyway :]
=== rcj is now known as Guest33360
[08:55] trave: i have successfully used/am-using digital ocean w/ the plugin
[08:56] if you're still on that path. the AWS provider is however one of the more stable/tried/tested providers in the toolbox
[09:09] lazyPower, yea, i was likely pretty close with digitalocean, just wasn't picking up my ssh_key identity .pub name or something.. but this AWS path seems pretty stable. I take it the m1.small is about $18/mo.? Just to get my feet wet, how many charms can I throw at one of those instances? I want to toy with IO.js and Redis.
[09:09] trave: give or take - Digital Ocean is fairly straightforward once you get the flow - and i've found that often i have to manually add the machines myself since the provisioning of VMs takes longer than their highly touted 60 seconds
[09:10] juju add-machine ssh:root@host
[09:10] then it's in the pool of available machines - however - to answer your aws question - 2 machines (1 for bootstrap, 1 for your apps/containers)
[09:10] maybe that's all i'm missing there, is it doing a timeout when i'm bootstrapping the first instance... now that i've spun up and destroyed a few attempts on AWS, i think i might go back to trying D.O. again
[09:11] i'll be around if you want help troubleshooting
[09:11] cool, thanks man :]
[09:11] their provisioner gets wonky late at night
[09:11] maintenance windows and all that
[09:12] yea, i might catch some zzzs soon, still got work in the a.m. but feel accomplished having waded my way this far, so far.
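The `juju add-machine ssh:root@host` flow lazyPower describes above — adding a slow-provisioning droplet by hand rather than waiting on the plugin — looks roughly like this. The droplet IP and machine number are illustrative, not taken from the channel:

```shell
# Create the droplet in DigitalOcean's UI (or let the plugin's droplet
# finish coming up), then hand it to juju's manual provisioner over ssh:
juju add-machine ssh:root@104.131.0.10   # illustrative droplet IP
juju status                              # the machine now shows in the pool
juju deploy redis --to 1                 # place a workload on it by machine number
```

These commands target a live juju environment, so this is a sketch of the flow rather than something runnable in isolation.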
[09:12] http://blog.dasroot.net/juju-digital-ocean-awesome.html
[09:12] i did a fairly decent writeup of the flow to get moving w/ juju on DO
[09:13] but that also was submitted for the official docs - so not much to do here other than boost my analytics :)
[09:13] Oh yes, I did watch that, which is what got me as far as I did. :D thanks for putting that together, maybe i'll absorb more the second time watching it
[09:14] trave: alright - i'll be off on another workstation so ping me if you need me for anything.
[09:28] lazyPower: probably a dumb question, but here's where i get stuck when spinning up a new digitalocean-0 bootstrap instance... it asks for the root password, even though i've specified an ENV_VAR for DO_SSH_KEY (do I need to add .pub to the end of that value?). Since I don't yet know what that instance's generated root password is, do I let it hang there for a bit while I use the DO UI to reset that to something else, which emails it to me?
[09:28] ah, no - you probably need to add your SSH key to DigitalOcean's admin web ui so it loads that into the instance(s)
[09:29] It's listed in there, strange.
[09:29] http://i.imgur.com/dBakNhu.png
[09:30] what version of the plugin do you have installed with pip? 0.5.1 assuming?
[09:30] yea, i've got it in there, and i've used it successfully before, scp'ing it to my other instance by hand.
[09:30] checking..
[09:32] juju-docean==0.5.1
[09:32] juju version 1.21.1?
[09:32] 1.18.4-unknown-amd64
[09:32] whoa, that's crusty
[09:32] ;]
[09:32] sudo add-apt-repository ppa:juju/stable
[09:33] i'm on osx
[09:33] ah
[09:33] we're shipping 1.18 on mac still? wat
[09:33] * lazyPower makes note to follow up on that later today
[09:34] it should still work regardless - the manual provider code hasn't changed.
[09:34] let me check the bug tracker on hazmat's repo real fast, 1 sec
[09:34] i'll try the --devel flag
[09:34] ehh
[09:34] let's stick with 1.18 for the time being until i can verify
[09:34] k :]
[09:35] ah
[09:35] is the key you're using to authenticate against docean ~/.ssh/id_rsa?
[09:35] i named it something else
[09:35] i can make an id_rsa
[09:35] i bet that's why it's having a derp moment
[09:35] https://github.com/kapilt/juju-digitalocean/issues/29
[09:36] aha, you da man.
[09:36] k, making new key, trying again.
[09:47] well, it did something different at least: juju_docean.exceptions.ProviderError: Failed to get running instance digitalocean-0 event: {u'status': u'OK', u'event': {u'droplet_id': 4177198, u'percentage': u'90', u'event_type_id': 1, u'id': 43954384, u'action_status': None}}
[09:47] ERROR exit status 1
[09:47] it timed out - that's in relation to this: https://github.com/kapilt/juju-digitalocean/issues/27
[09:48] it gets pokey now and again unfortunately
[09:48] k, trying again :]
[10:01] hit paydirt yet trave?
[10:02] no, just what keeps seeming like timeouts
[10:02] you think the --devel build is in an unstable state?
[10:06] i don't know
[10:06] it's worth a go
[10:07] are you tracking the vm launch in a browser while you bootstrap?
[10:07] or, is 512MB memory/20GB disk the smallest tier that DO does? would spinning up a smaller instance be faster at all, to get under that 3min window?
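The issue #29 workaround above boils down to the plugin only looking for a key at the default `~/.ssh/id_rsa` path. A minimal sketch of generating a default-named key and registering its public half with DO, assuming standard OpenSSH tooling (the `KEYDIR` override is purely for illustration):

```shell
#!/bin/sh
# Generate a key at the default path the juju-docean plugin expects.
# (-N "" gives an empty passphrase purely for illustration; use one in practice.)
set -e
KEYDIR="${KEYDIR:-$HOME/.ssh}"
mkdir -p "$KEYDIR"
chmod 700 "$KEYDIR"
if [ ! -f "$KEYDIR/id_rsa" ]; then
    ssh-keygen -t rsa -b 4096 -N "" -f "$KEYDIR/id_rsa"
fi
# Print the public key; paste this into DigitalOcean's "SSH Keys" admin page
# so new droplets boot with it in root's authorized_keys:
cat "$KEYDIR/id_rsa.pub"
```

With the key at the default path, the plugin stops falling back to the root-password prompt.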
[10:08] yea, i can see it spinning up a new digitalocean-0 instance, the CLI throws the error before it's done provisioning that box
[10:08] my next feat will be to try out the --devel build, i bet it will work like a charm
[10:09] eh
[10:09] ;]
[10:09] it's not a juju problem :( it's a DO problem atm
[10:09] yea
[10:09] the only thing i can suggest is either keep trying until the API gets un-pokey, or do a pure manual bootstrap
[10:09] which is slightly more inconvenient
[10:09] but it doesn't have any timing issues, you spin up the machine, specify the bootstrap host in your environments.yaml and you're off and running
[10:10] gotcha. yea, i'll try through the day tomorrow, maybe their performance is up during the peak hours
[10:10] should be better then, yeah.
[10:10] it's that or try a different DC
[10:10] yep, several more avenues to try, this is good for me to broaden my skillset and learn
[10:11] thank you so much for the help, you've been great
[10:11] I'm sorry it hasn't been as smooth as cheddar tonight however
[10:11] i'll be around :]
[10:11] if only everything were as reliable :)
[10:11] :D
[10:35] lazyPower: hey, a simple: brew uninstall juju; brew install juju; now gives me version: 1.21.1-yosemite-amd64
[10:35] \o/
[10:36] trying yet again ;]
[10:40] woot! bootstrap complete.
[10:40] is there any reason the examples show 2g memory for the bootstrap machine? this time I just used a 512m box, that should be fine, right?
[10:54] yeah that's fine
[10:55] the example was with constraints - and that's why you see the higher mem limit
[10:57] again, thanks. :]
[12:48] hazmat: thanks for the reply, that's basically what i had figured. Juju isn't doing anything special outside of using the agent and falling back to that specified key
[12:49] lazyPower: it's very badly documented in juju
[12:49] search authorized-keys over here https://juju.ubuntu.com/docs/config-general.html
[12:50] and afaics that's the only doc for it.
[12:51] https://github.com/juju/docs/issues/268
[12:51] left a TODO to follow up on. cheers
[12:52] lazyPower: thanks
[12:53] lazyPower: for the other issue i have a timeout option sitting in my working copy...
[12:53] * lazyPower fist pumps
[12:53] so we can see that branch land soon, very nice
[12:54] i was considering pulling the repo this weekend and working on the v2 api support, if you haven't already looked into it, and getting something shot over for review
[12:54] i got sidetracked with my tinfoil hat, i must admit
=== mhilton is now known as mhilton-lunch
[12:58] hazmat: it may be prudent to close #29 as that's a juju issue and not a plugin issue.
[12:59] unless you plan on supporting uploading an ssh key to DO and setting config to leverage that key
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== Guest33360 is now known as rcj
=== rcj is now known as Guest49948
[13:48] stub: i killed that jenkins job
[13:51] tvansteenburgh: Ta.
[13:53] ERROR failed to bootstrap environment: cannot start bootstrap instance: no OS images found for location "West US", series "trusty", architectures ["amd64" "arm64" "armhf" "i386" "ppc64el"] (and endpoint: "https://management.core.windows.net/")
[13:53] I think the next one is doomed too
[13:54] Still, am interested in how things go with the other providers. I had some code paths not running under lxc that failed with the other providers.
[13:56] stub you want me to kill this one too?
[13:56] tvansteenburgh: Only if you think a retry will fix that error. 'no OS images found for location' doesn't fill me with confidence of it working today.
[13:58] stub: i'll let it run i guess. azure is the environment most often broken. it may work later though
[13:59] tvansteenburgh: How many environments are there btw? About 10?
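For reference, a null-provider environments.yaml of the kind lazyPower's "pure manual bootstrap" suggestion implies — which is also where the under-documented authorized-keys option from the config-general docs lives — might look like this. The host, user, and key path are illustrative, not taken from the channel:

```yaml
environments:
  do-manual:
    type: manual                      # the manual/null provider
    bootstrap-host: 104.131.0.10      # droplet IP you spun up by hand
    bootstrap-user: root
    # the general option discussed above; either inline the key material
    # with authorized-keys, or point at a .pub file:
    authorized-keys-path: ~/.ssh/id_rsa.pub
```

With bootstrap-host set, `juju bootstrap` ssh'es into the existing machine instead of asking a cloud API to provision one, which sidesteps the DO provisioning-timeout problem entirely.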
[13:59] 5
[13:59] lxc, hp, aws, azure, joyent
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[14:00] I might have told bundletester to run the unittests twice then.
[14:01] stub: well let me know if you need stuff killed or whatever, happy to help
[14:01] ta. This run looks happy for the next hour in any case.
[14:02] stub: also next time you want to start one, let me know, i want to try it on the new jenkins setup i've been working on
[14:02] ok
[14:13] lazyPower: definitely not uploading people's public keys.. too much responsibility.. i'll close it out.
=== jhobbs_ is now known as jhobbs
=== mhilton-lunch is now known as mhilton
[15:12] jose, what day do you get to SCALE?
[15:12] jcastro: tomorrow 3pm local time
[15:12] I'm getting ready to fly out later today
=== kadams54 is now known as kadams54-away
[15:12] oh excellent, me and marcoceppi as well, we can just sit down and bash out the details of our talk
[15:12] which isn't until saturday anyway
[15:13] yeah, sounds good to me
=== kadams54-away is now known as kadams54
[16:00] marcoceppi: mbruzek, tvansteenburgh - what's our stance on incoming tests that only stand up the service? Enough to satisfy the requirement or do we gently guide them to write an actual functional test? https://code.launchpad.net/~jose/charms/precise/quassel-core/add-tests/+merge/246001 <- is the test in question
[16:01] I'm inclined to say the latter, but it's not just my opinion here.
[16:01] gently guide
[16:02] lazyPower: even if he did a ps aux | grep quassel that would make me happy
[16:02] yeah, we have enough weak tests i feel
[16:02] +1
[16:03] +1 to that, nice tests are easy enough to make
[16:03] (I made those in a hurry since Paul wanted the charm approved and had no tests, no results yet)
[16:04] ack
[16:04] I promise this wasn't a poke @ your effort(s) jose :)
[16:04] question, though
[16:04] should 'simple' tests be renamed to 10-deploy, or should I keep them as 99-autogen?
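mbruzek's "ps aux | grep quassel" suggestion above amounts to a one-line liveness assertion that a test script can carry beyond just standing the service up. A hedged sketch — the `quasselcore` process name and the `check_running` helper are illustrative, not from the charm in review:

```shell
#!/bin/sh
# Minimal "is the daemon actually up" check for a charm test, rather than
# only asserting that the deploy finished.

check_running() {
    # Bracket the first character so the pattern never matches the grep
    # process itself in the ps listing.
    first=$(printf '%.1s' "$1")
    rest=${1#?}
    ps aux | grep -q "[$first]$rest"
}

# A real 10-deploy style test would fail hard when the daemon is absent:
#   check_running quasselcore || { echo "quasselcore not running"; exit 1; }
check_running ps && echo "sanity: ps shows up in its own listing"
```

Even this small step up from a bare deploy catches the common failure mode where the install hook succeeds but the service never starts.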
[16:05] if you're not doing anything to actually validate, the 99-autogen gives me a clear indicator of what's been done
[16:07] ok
[16:29] jose: yeah, leave them as 99-autogen, we'll be sniffing that out in the future to find charms that need better tests and it makes it so we can track unmaintained charms
[16:29] ok
[17:28] tvansteenburgh: Do you recall what constraints the test vms have? I can install the charm on our local OpenStack, but with little RAM the install hook fails due to the OOM killer.
[17:29] tvansteenburgh: I don't think I'm going to get anything more useful out of this build, so you can kill it. I might have hung it again too, sorry.
[17:29] tvansteenburgh: Happy to have it kicked off in your new Jenkins if you want to give it a workout.
[17:30] stub: will do
[17:30] stub: in the new setup we bootstrap with mem=2G by default
[17:31] I'll test that here. I know it fails with 1G, and installs with 8G
[17:31] I may be able to tune the requirements down at my end if it is needed.
[17:32] stub: http://juju-ci.vapour.ws:8080/computer/charm-bundle-slave/
[17:32] jcastro: lazyPower do you guys know if juju http proxy settings are exposed in charms?
[17:33] for things like wget to make use of
[17:33] I'm not sure tbh
[17:33] yeah, that's a good question
[17:33] stub: new tests running, the main job kicks off one job per substrate
[17:33] tvansteenburgh: Oh, cool. No more interleaving of output.
[17:33] tvansteenburgh: is my docker test still in the queue or did it get wiped?
[17:33] lazyPower: It's running now
[17:33] * lazyPower does a dance
[17:33] ta
[17:34] lazyPower: oh, it *was* running
[17:34] stub: yeah, much better for several reasons.
will be the default soonish
[17:34] stokachu: i'm reasonably certain that if it's set as an env variable it will be, however it would be best to confirm with someone like wwitzel3
[17:34] lazyPower: pfft i thought that was stub's job and killed it
[17:35] tvansteenburgh: >.>
[17:35] http://askubuntu.com/questions/430865/juju-http-proxy-and-no-proxy-settings
[17:35] lazyPower: what was the test url
[17:35] stokachu, ^^^
[17:35] "The proxy options are exported in all hook execution contexts, and also available in the shell through "juju ssh" or "juju run"."
[17:35] tvansteenburgh: i didn't get one, i kicked it off from the revq
[17:36] lazyPower: if it's any consolation, it had just started
[17:36] jcastro: ah nice find!
[17:36] lazyPower: no i mean the url of the thing being tested
[17:36] https://bugs.launchpad.net/charms/+bug/1413775
[17:36] Bug #1413775: New Charm - Docker
[17:38] lazyPower: is this it? lp:~lazypower/charms/trusty/docker/trunk
[17:38] yep
[17:38] manual deploy to openstack vm with 2G seems ok, no oom
[17:39] lazyPower: queued up behind stub's jobs in the new queue
[17:39] ta
[17:43] @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
[17:45] tvansteenburgh: You may need to have the authorized_keys scrubbed between jobs, like http://paste.ubuntu.com/10276641/
[17:45] or just nuke both of them entirely I guess
[17:46] oh, here it goes...
[17:46] * stub watches his job on hp
[17:51] stub: yeah good point
[17:52] DEBUG:runner:2015-02-17 17:37:21 Error getting env api endpoints, env bootstrapped?
[17:52] The lxc job bombed
[17:53] * tvansteenburgh looks
[18:02] grr... my install hook failed on joyent, but I can't reproduce locally or on our openstack :-/
[18:03] stub: no idea what happened on lxc, it's running again here http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/8/console
[18:04] ta
[18:05] stub: we're capturing all-machines.log at the end of the tests now, that might yield a clue re joyent
[18:05] tvansteenburgh: It should, ya.
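The askubuntu answer quoted above (proxy options exported in all hook execution contexts) means a charm hook needs no special handling: the standard `http_proxy`/`https_proxy`/`no_proxy` variables are already in the hook's environment, and tools like wget pick them up automatically. A sketch of a hook fragment demonstrating this (the package URL is illustrative):

```shell
#!/bin/sh
# Fragment of a charm install hook. Juju exports the environment's proxy
# settings into every hook execution context, so the standard variables
# are already set and wget/curl honour them without extra flags.
echo "hook sees http_proxy=${http_proxy:-<unset>}"
# no --proxy flag needed; wget reads http_proxy from the environment:
# wget -q http://archive.example.com/some-package.tar.gz
```

Running the same fragment via `juju ssh` or `juju run` sees the same variables, per the quoted answer.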
[18:06] I think Cassandra is crashing and we are waiting for the install hook to time out and fail, rather than slow bootstrapping.
[18:07] I have what seems like a pretty dumb question that I'm hoping someone in here might be able to help me with. I already have an environment configured and a machine bootstrapped in EC2. I am now on a different client machine and want to connect to the existing environment/bootstrap. I've already tried migrating the entire 'juju' folder to the other client machine, but I am getting an error back when trying to run 'juju status' of: WARNING discar
[18:07] 70/environment/d4c68949-2e62-4604-8aa2-xxxxxxxxxxxxx/api"
[18:07] ERROR Unable to connect to environment "aws".
[18:07] Please check your credentials or use 'juju bootstrap' to create a new environment.
[18:07] Error details:
[18:07] no instances found" any thoughts?
[18:09] sh00k: did your $JUJU_HOME/environments/aws.jenv make it during the copy?
[18:09] if it did, are you able to ping the ip of the state server?
[18:09] Yup, i double checked that - as well as the file permissions
[18:09] stand by let me try that
[18:11] Damn, so i'm guessing that's it. Ping requests timed out :/
[18:11] that would be a key cause, if your network reachability isn't there.
[18:11] it's bit me a couple times too - don't feel like the lone stranger sh00k :)
[18:12] hahah thanks a lot
[18:16] tvansteenburgh: what do I need to change in the review queue for the new testing stuff?
[18:17] marcoceppi: new jenkins endpoint, and you'll get one callback per substrate now
[18:18] Docs for this?
[18:18] marcoceppi: HAHAHAHAH
[18:19] ROFL
[18:19] marcoceppi: i figured we could just bang it out at the sprint
[18:19] tvansteenburgh: I plan to make a bunch of changes this week to revq
[18:19] That works too
[18:19] marcoceppi: oh ok, i'll type something up
[18:20] sooner would be better i guess
[18:20] i'll document it
[18:20] Either way is fine. Does the old style work?
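sh00k's migration problem above has two halves: get the .jenv into place, and make sure the state server is reachable from the new client. A sketch assuming the default `$JUJU_HOME` of `~/.juju` (the hostnames and the "aws" environment name are illustrative):

```shell
#!/bin/sh
# On the new client, recreate the environments directory and bring over
# the .jenv, which carries the API endpoints and credentials.
JUJU_HOME="${JUJU_HOME:-$HOME/.juju}"
mkdir -p "$JUJU_HOME/environments"
# scp old-client:~/.juju/environments/aws.jenv "$JUJU_HOME/environments/"
# A copied .jenv is useless if the API endpoint is unreachable, so check
# network reachability before suspecting credentials:
# ping -c 3 ec2-state-server.example.com
# juju status -e aws
echo "environments dir ready at $JUJU_HOME/environments"
```

As the channel worked out, a timing-out ping to the state server explains the "Unable to connect to environment" error even when the .jenv and its permissions are correct.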
[18:20] the old one still works for now
[18:20] Rough docs are fine. Even a bunch of cmd outputs
[18:21] i haven't changed it, just built the new thing on the side
[18:21] ack re docs
[18:21] Cool
[18:21] lunch time
=== tvansteenburgh is now known as tvan-lunch
=== kadams54 is now known as kadams54-away
[18:42] marcoceppi: the endpoint/json stuff you sent me will largely remain the same yes?
[18:44] lazyPower: yes, if it changes I'll let you know
[18:45] Will also be exposing more data
[18:45] nice
=== kadams54-away is now known as kadams54
[19:03] will juju destroy-env remove volumes that may have been attached to instances?
[19:03] or simply detach?
[19:04] simply detach
[19:04] lamont: you're referring to extraneous volumes, such as those that are provided by block-storage-broker, correct?
[19:04] or additional block devices allocated to the node, not just the baseline disk that comes with the VM?
[19:05] lazyPower: correct
[19:05] specifically one that our scripts euca-attach-volumes'd
[19:06] yeah, those are not deleted on machine termination
[19:06] * lamont does his destroy-env then
[19:06] lamont: take a snapshot to be safe :)
[19:06] now you tell me.
[19:06] volume still there, marked available
[19:06] woo typing!
[19:07] * lazyPower grins
[19:07] nothing like a dose of doubt right after you send a command no?
[19:07] sorry about that - just happened to think it's always better to err on the side of caution
[19:08] yep
[19:08] you were about 8 seconds after I said 'y'
[19:09] i won't steer you wrong... on purpose.
[19:09] hehe
[19:09] all's well that ends well
[19:09] however with storage landing in core soon - you might want to be a bit more persnickety about snapshotting before destroying things.
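The moral of lamont's near miss above — destroy-environment only detaches manually attached volumes, leaving them "available" but unprotected — is to snapshot first. A command sketch using the euca2ools the scripts in question already use (the volume id and environment name are illustrative):

```shell
# List volumes attached to the environment's instances, snapshot anything
# you care about, then tear the environment down; the volumes survive as
# "available", but the snapshot is the real safety net.
euca-describe-volumes
euca-create-snapshot vol-1a2b3c4d        # illustrative volume id
juju destroy-environment myenv
```

These commands target live cloud resources, so this is a flow sketch rather than something runnable in isolation.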
[19:10] i'm not 100% sure how it will work, i'd need to fish up some planning docs
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
=== roadmr is now known as roadmr_afk
=== _thumper_ is now known as thumper
=== roadmr_afk is now known as roadmr
=== kadams54 is now known as kadams54-away
=== kadams54-away is now known as kadams54
[20:45] tvansteenburgh: did you kick off that test manually?
[20:45] i ask because the bug hasn't been updated, and i assume that is a byproduct of how the test was run
[20:46] lazyPower: correct
[20:49] lazyPower: i expected to see the results here though, not sure why they're not http://reports.vapour.ws/charm-summary/docker
[20:49] jog might know
[20:49] * jog looks
[21:10] hello all, i know there's a separate maas channel but i'm not getting any response on there
[21:10] can anyone help me with enlisting a node?
[21:11] the node boots from pxe, and ends up at an ubuntu login prompt, but it doesn't appear as a node in my maas control panel on either the region controller or cluster controller
[21:12] i can ping the pxe'd node
[21:35] lazyPower: http://reports.vapour.ws/charm-summary/docker
=== menn0_ is now known as menn0
[21:51] tvansteenburgh: that's awesome dude
[21:53] marcoceppi: what's awesome?
[21:53] tvansteenburgh: that URL
[21:53] ah, yeah, thank jog :)
[21:54] jog: sweet url
[21:56] awesomeee http://reports.vapour.ws/charm-summary/mysql
[21:56] marcoceppi: yeah, mysql has a long history of passing proof and lint!
[21:56] \o/
[21:57] lol
=== kadams54 is now known as kadams54-away
=== thumper is now known as thumper-dogwalk
[22:44] o/
=== thumper-dogwalk is now known as thumper
=== kadams54-away is now known as kadams54