[01:02] Valduare: o/
[01:02] o/
[01:02] how's it going
[01:02] oh wow, a solid 3 hours later. heh heh
[01:02] lol
[01:02] Good. Just got back from a pretty decent burger chef
[01:03] i’ve been sitting here for 3 hours waiting for someone to respond to my hey guys … I didn't want to be rude and leave anyone hanging
[01:03] jk :P
[01:05] hah well it's my mistake for not reading the timestamp :)
[01:13] is juju local working okay now?
[01:15] lazyPower: I'm still not up and running with juju
[01:15] keeps bugging out
[01:15] I'm at the point now where it won't let me add machines at all
[01:17] meaning the manual provider workspace has all but given up hope?
[01:17] you can no longer spin up a new vm and register it?
[01:18] the existing vm's are dirty and won't respond to any deployment commands? they just fail?
[01:18] i'm reaching here, what's it doing?
[01:21] all of the above
[01:21] so I spun up a new vm with a fresh install of ubuntu and can't even provision it
[01:21] https://bugs.launchpad.net/juju-core/+bug/1300264 I got this bug listed
[01:21] <_mup_> Bug #1300264: manual provisioning requires target hostname to be directly resolvable
[01:27] oi
[01:28] pretty fancy bug - it's triaged :P
[01:28] i blame sarnold for all of the above Valduare. Not that he really did anything... directly... but he makes a great scapegoat
[01:28] sarnold: we're still buds right?
[01:28] lol
[01:29] lazyPower: <3
[01:29] sarnold: <3
[07:25] Clarification on JuJu for web apps and pages | http://askubuntu.com/q/442259
[08:28] hazmat I did not get what I need to do for the arch constraint
[08:29] it's a juju client problem?
[11:19] frankban: FFe in bug 1282630 approved, but could you please confirm that my statement in comment #5 was accurate?
[11:39] conversion from commencing to ready state in maas | http://askubuntu.com/q/442350
[12:08] overm1nd, output?
[12:08] overm1nd, from -v verbose mode
[12:08] overm1nd, juju should be detecting the host arch when ssh'ing and retrieving the correct tools for it
[12:11] hi hazmat in PVT for the log
[12:13] rbasak: yeah quickstart 1.3 does not require 1.18, it just includes changes to make quickstart work well on 1.18. rick_h_: relatedly, what's the plan for FFe vs upcoming quickstart changes (e.g. joyent and/or maas provider support)?
[12:13] overm1nd, what's the CLI you're using?
[12:13] overm1nd, are you passing --upload-tools
[12:13] i think i might have defaulted that to on.. which is the problem
[12:13] I'm not doing anything special
[12:13] I just added the arch param
[12:15] juju docean bootstrap --constraints="mem=512M, region=ams2, arch=amd64" -v
[12:15] this is the command I'm using
[12:17] frankban: we can ask rbasak if it'll be possible to get a late update or else we'll just have to make that a ppa release.
[12:18] frankban: we've been delayed so long with things, not sure if it's possible to still get it in trusty or not
[12:19] rick_h_: ack, ok thank you. the joyent fixes we needed seem to be landing now, so the joyent card could be unblocked very soon
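A minimal workaround sketch for bug #1300264 above, assuming the manual provider's `juju add-machine ssh:user@host` form from the juju 1.18 era; the IP and hostname below are illustrative placeholders, not values from this conversation.

    # Register the machine by IP so no DNS lookup is needed
    # (192.0.2.10 and newvm.example are placeholders).
    juju add-machine ssh:ubuntu@192.0.2.10

    # Or make the hostname directly resolvable first, then register it:
    echo '192.0.2.10 newvm.example' | sudo tee -a /etc/hosts
    juju add-machine ssh:ubuntu@newvm.example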
[12:20] frankban: yea, with the delay we've had in that I've assumed we probably won't be able to get that into trusty
[12:20] frankban: but we can ask/check if there's some way
[12:25] hazmat I see, --upload-tools is passed when creating the machine
[12:25] but I don't get why it should be omitted
[12:26] it's not mandatory to have juju tools on the started new machine?
[12:29] overm1nd, juju will pull the right binaries on its own
[12:29] ah ok
[12:30] overm1nd, upload-tools is there because it's simple and reliable; juju's pull-binaries mechanism involves a few roundtrips to query various sources.
[12:30] so it's failing because architectures are different between the client and the machine
[12:30] but upload-tools will fail in this case
[12:30] but the pull of binaries would work
[12:30] understand
[12:31] and the pull is based on the arch param I will pass right?
[12:32] overm1nd, no.. that's ignored.. for manual provider juju is going to inspect the machine via ssh and determine the architecture
[12:32] ok
[12:32] manual provider == docean plugin
[12:33] thank you I will try to mess with the code to exclude the upload-tools
[12:34] overm1nd, fix pushed to git
[12:34] you are super kind and fast!
[12:34] overm1nd, you're welcome.. let me know if that solves it for you
[12:35] sure
[12:42] hazmat working great :)
[12:42] overm1nd, awesome, i'm gonna push a new release, there's a few other minor fixes.
[12:42] agree
[12:45] Hazmat are you still sprinting?
[12:45] lazyPower, no.. just bcsaller ... i'm running :-)
[12:45] Zombies? :)
[12:45] apocalypse happens
[12:46] I'm in your neck of the woods is why I ask.
[12:47] Next time we'll have to coordinate and have an outing
[12:48] lazyPower, ah.. sounds good.. you're DC or SF?
[12:48] I'm in DC until tomorrow afternoon
[13:28] marcoceppi: could you help me to completely remove juju and reinstall it from scratch? You may want to take a glance at this thread: http://askubuntu.com/q/436975/152405
[13:30] onrea: what platform are you on?
[13:30] can*
[13:30] Ubuntu 13.10
[13:30] 64 bit
[13:31] onrea: sudo apt-get purge juju-core juju juju-local
[13:33] I've copied these two files manually to /var/cache/lxc/cloud-precise
[13:33] https://cloud-images.ubuntu.com/server/releases/precise/release-20140227/ubuntu-12.04-server-cloudimg-amd64.tar.gz (229 MB) https://cloud-images.ubuntu.com/server/releases/precise/release-20140227/ubuntu-12.04-server-cloudimg-amd64-root.tar.gz (223 MB)
[13:33] should I delete them, too?
[13:34] What about these packages: postgresql-9.3, charm-tools, cloud-utils
[13:36] I'm getting "agent-state-info: '(error: template container "juju-precise-template" did not stop)'"
[13:37] It's still going - just a particularly slow machine. I can recover afterwards, but is this timeout tunable anywhere?
[13:38] I can't find any environments.yaml reference documentation.
[14:57] lazyPower, wasn't there something in the queue to update owncloud?
[14:58] jcastro: yeah that's part of the ceph update that zchander wrote
[14:58] is that in the queue I can't see it
[14:58] it needs a bit more work, but it's coming along nicely
[14:58] i already reviewed it and sent it back with requested mods
[14:58] do you need the link?
[14:58] yes please
[14:59] jcastro: https://code.launchpad.net/~xander-h/charms/precise/owncloud/ceph-support
[15:00] that guy was on irc wasn't he?
[15:01] zchander: <-- he's here.
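A fuller removal sketch for onrea's question above, extending marcoceppi's purge command. The lxc cache path comes from the conversation itself; the client state directory is an assumption about the usual juju 1.x location, and this is not an official uninstall procedure.

    sudo apt-get purge juju-core juju juju-local
    rm -rf ~/.juju                             # client config/state (assumed location)
    sudo rm -rf /var/cache/lxc/cloud-precise   # the manually copied cloud images
    # postgresql-9.3, charm-tools and cloud-utils are separate packages;
    # purge them individually only if nothing else on the system needs them.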
[15:04] hey maybe we should have cherry picked the version upgrade
[15:06] has anyone ever seen an lxc machine that gets terminated as soon as it connects to the api server? http://paste.ubuntu.com/7194643/
[15:12] agent-state-info: '(error: error executing "lxc-start": command get_cgroup failed to receive response)'
[15:12] anyone see that before?
[15:12] hmm that's a new one
[15:12] oh I know why, nevermind!
[15:12] I am trying to LXC inside a local deployment, duh
[15:13] Jcastro the upgrade was incomplete. It needed the mysql db migrations
[15:14] lazyPower, ahh, I see
[15:17] hazmat, is `to: lxc:0` a valid line for a deployer file?
[15:17] I want to stick all the services in this bundle on 0, in containers
[15:18] but if I do "to: 0" for each service that's hulksmash right?
[15:29] nevermind, I forgot to add the backports on precise for lxc
[15:34] jcastro, http://pythonhosted.org/juju-deployer/config.html#placement
[15:35] yep I read that
[15:35] double checking re 0..
[15:37] jcastro, yeah that should work, kvm:0 and lxc:0
[15:38] * hazmat adds a unit test for that one
[15:39] jcastro, in general if you can i recommend doing the co-location form, especially for distributed bundles
[15:39] jcastro, ie you can get the entire bundle into a single machine that's not the bootstrap machine, using colocated services
[15:39] with containers
[15:40] jcastro, otherwise when people are pulling in these bundles, they're going to overload their machine 0 if they pull more than 1
[15:40] er.. use more than one compact bundle
[15:40] hazmat, this one is more specifically for a VPS case
[15:40] like, I want to go cheap
[15:41] jcastro, fair enough ... have you tried the docean plugin ? ;-)
[15:41] I have not
[15:41] jcastro, well it's tailor-made for the go cheap
[15:41] are your instructions awesome?
[15:41] jcastro, i think so.. they beat the heck out of the deployer docs .. ;-)
[15:41] jcastro, https://github.com/kapilt/juju-digitalocean
[15:42] jcastro, you tell me
[15:42] I liked the docs, they provided examples, just not the one I was looking for
[15:42] I'll give it a shot later today
[15:42] jcastro, the wording on them is torturous and geeky.. re deployer.. i could barely parse it and i wrote it.
[15:43] jcastro, like what the heck does this mean "The serenade service is overriding the default deploy-with unit by explicitly specifying a unit index for the deployment. These are not unit id based but rather based on extant unit count of the remote service starting with 0."
[15:43] I had never even heard of serenade
[15:43] jcastro, it's an example service in the placement docs of deployer
[15:43] hazmat, I can fix it up, where do the docs live? I can do that later
[15:44] jcastro, they're in the source tree for deployer.. lp:juju-deployer
[15:44] cd docs
[15:44] ack
[15:44] sphinx and rst :-)
[15:45] I was happily using the lxc provider this afternoon when suddenly it stopped working after a juju destroy-environment. I can bootstrap fine but when I try and provision a machine it gets stuck in pending. I've cleared out /var/cache/lxc/. The only smoking gun I can find is in the machine-0.log:
[15:45] ERROR juju runner.go:220 worker: exited "environ-provisioner": no state server machines with addresses found
[15:46] what state server machines is it referring to (or is this a red herring) ?
[15:49] rbasak: juju is making a change that breaks quickstart that they're aiming for 1.18. We've gotten through our original FFe, is there something we can do/file to let us get one more update in so that we don't have a broken juju/quickstart combo in trusty?
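Returning to the deployer placement thread above, a minimal bundle sketch of hazmat's advice to co-locate services in containers beside one service rather than pinning everything to machine 0. The service names are illustrative only, and the exact placement syntax should be double-checked against the deployer docs linked above.

    # Hypothetical bundle.yaml for juju-deployer
    cat > bundle.yaml <<'EOF'
    compact:
      series: precise
      services:
        wordpress:
          charm: cs:precise/wordpress
          num_units: 1
        mysql:
          charm: cs:precise/mysql
          to: lxc:wordpress=0    # container beside wordpress, not the bootstrap node
      relations:
        - [wordpress, mysql]
    EOF
    juju-deployer -c bundle.yaml compact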
[15:50] rick_h_: this concerns me. What happens in the future if juju makes another change that breaks quickstart?
[15:50] rick_h_: it sounds to me that --distro-only needs to be the default then.
[15:50] Otherwise the juju-quickstart in Trusty could forever be broken.
[15:50] jamespage: ^^
[15:51] rick_h_, rbasak: this is a fix, not a new feature right?
[15:51] rbasak: yes, the email about it went out overnight. frankban we can support both paths right? e.g. it'll be backwards compatible and we're just missing the forward working side?
[15:51] rick_h_: setting that aside...
[15:51] Yeah
[15:51] For the immediate problem, we can just distro-patch a fix or something.
[15:51] No FFe needed for that.
[15:52] rbasak: jamespage yes, so this is a fix, they're moving where you get ip address info. Quickstart needs to grow support for the new location
[15:52] But I do think there's a bigger problem here, and we need to address it. Otherwise we're just putting it off until after Trusty release, when we won't be able to fix it, or have far more limited options.
[15:52] rbasak: ok, thanks. We should have a fix before EOW.
[15:52] yes - no FFe needed
[15:53] rbasak: well, I'm not sure even distro-only would help. The idea is that juju updated and we need to stay in sync. If they get a non-distro juju, from a ppa, then quickstart is in that ppa and would upgrade?
[15:53] we're getting a last minute heads up on the change before their stable release
[15:57] rick_h_: we have the opportunity of fixing it in distro - we can upload both a new juju-core that breaks things, and a new juju-quickstart that can accommodate that change.
[15:58] rick_h_: however, after release, if juju-quickstart in Trusty uses the PPA by default, then users using the distro juju-quickstart will get a new juju-core from the PPA, and that will forever be broken.
[15:58] rick_h_: an SRU might be possible in that situation, but I don't think it's appropriate to have that breakage happen, and need an SRU, for end users.
[15:59] I am trying to work around public ip address changes on a unit not firing hooks by blocking in the hook, and waiting for 'unit-get public-address' to change
[15:59] rbasak: yea, good point and I follow it now.
[15:59] talking with frankban about it
[15:59] but it never does, even after juju status is reporting a public ip
[15:59] rick_h_: ack. To make it clear, I'm fine with uploading a fix for the immediate issue.
[15:59] rbasak: +1
[16:00] I understand that some other charms use the same workaround, but I haven't found them
[16:00] bloodearnest: I'm not an expert in this area. But my understanding is that your hook should exit, and you'll have a hook fire again when something changes.
[16:01] * rbasak isn't sure how that applies to public-address though
[16:01] rbasak: sadly that doesn't yet work for public ip address changes
[16:01] Ah
[16:01] or volume changes
[16:01] What's a volume change?
[16:02] rbasak: when a new volume is added or removed from the instance
[16:02] Is there a big piece I'm missing here? What does juju know about "volumes"?
[16:03] Is there any documentation on that, please?
[16:03] * rbasak is interested.
[16:05] rbasak: juju knows nothing about volumes yet, that's the problem :)
[16:05] rbasak: charms know about volumes though
[16:05] bloodearnest: you get a udev event there though, right? So you can call back into a hook context using juju-run.
[16:05] rbasak: the postgresql charm would be a good place to look
[16:06] rbasak: if you're running the development release, yes. We're on 1.16 in prod.
[16:06] but I need to deploy this ASAP
[16:07] rbasak: once 1.18 is out, then this hacky workaround will be replaced by a less hacky workaround :)
[16:13] rbasak: talked with frankban and we agree distro default will be done with this bug fix to make that work
[16:14] rbasak: the only thing we'll need to look at with you as a follow up is that we want anyone that gets quickstart from pypi to get the ppa by default
[16:14] rbasak: but you use that for the packaging in the distro, so we'll need to agree on some way to have that setting flipped. But that can go after we get 1.18 unblocked
[16:14] rick_h_: ack. Hold fire for a few minutes though? otp.
[16:15] rbasak: rgr
[16:33] rick_h_, frankban, jamespage: I filed bug 1301481. Is there a bug for the immediate issue?
[16:33] <_mup_> Bug #1301481: juju-quickstart will be broken in Trusty after juju-core updates in its stable PPA
[16:34] rbasak: yep #1301464
[16:34] <_mup_> Bug #1301464: The mega-watcher for machines does not include containers addresses
[16:35] rick_h_: thanks. Just to check, we should have juju-core in Ubuntu and juju-quickstart in Ubuntu tasks for these, right, since we need to track when they are fixed in each corresponding Ubuntu package?
[16:35] rbasak: hmm, yes, you're right.
[16:36] OK, thanks. I'll do that now.
[16:36] thanks rbasak
[16:38] bloodearnest: ah, that makes sense. Thanks!
[16:52] marcoceppi, gnuoy: nice easy one for either of you if you have time - https://code.launchpad.net/~openstack-charmers/charm-helpers/icehouse/+merge/213888
[16:53] jamespage: ack
[16:57] marcoceppi, ta
[17:29] Glad there is a manual environment. Anything I should know about manual environment usage? Set a 12.04 LTS server up as virtual machine host and install the juju, python-pycurl, and openssh packages. anything else?
[17:32] webbrandon: don't install juju, just install openssh
[17:33] damn, too late, lol
[17:33] marcoceppi: thanks
[17:37] marcoceppi: make develop in amulet complains about missing setuptools...
[17:38] tvansteenburgh: then your venv didn't set up
[17:38] tvansteenburgh: make clean_all
[17:38] then make develop
[17:39] well i have to make venv first right?
[17:40] marcoceppi: http://pastebin.ubuntu.com/7195228/
[17:42] tvansteenburgh: ah, so python3 venv isn't working
[17:42] tvansteenburgh: what version of Ubuntu are you on?
[17:42] trusty
[17:43] 2 tvansteenburgh@trusty-vm:~/src/amulet⟫ venv/bin/python --version
[17:43] Python 3.4.0
[17:43] tvansteenburgh@trusty-vm:~/src/amulet⟫ venv/bin/python -m ensurepip
[17:43] /home/tvansteenburgh/src/amulet/venv/bin/python: No module named ensurepip
[17:43] tvansteenburgh: is python3-pip installed?
[17:43] how can I make juju set up virtual machines in my manual environment on its own, instead of using 'juju add-machine user@host' - more like when I add a service through, say, aws.
[17:44] webbrandon: you can't
[17:44] well, other than use maas
[17:44] man my typing is bad today
[17:45] marcoceppi: just installed it
[17:45] marcoceppi: thanks, Maybe I will try MAAS.
[17:45] tvansteenburgh: do a clean_all then make install again
[17:45] same error
[17:45] i wonder if i can install ensurepip with pip
[17:45] heh
[17:46] apparently not
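Stepping back to bloodearnest's public-address workaround above: a minimal sketch of the blocking-hook approach on juju 1.16, where address changes don't fire hooks. `unit-get` is a real hook tool and `CHARM_DIR` is set in the 1.x hook environment; the state file name and polling interval are assumptions for illustration.

    #!/bin/sh
    # Inside a hook: block until 'unit-get public-address' reports a
    # new, non-empty address, then record it for the next comparison.
    state="$CHARM_DIR/.last-public-address"
    old=$(cat "$state" 2>/dev/null || true)
    while true; do
        addr=$(unit-get public-address)
        [ -n "$addr" ] && [ "$addr" != "$old" ] && break
        sleep 10
    done
    echo "$addr" > "$state"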
[17:47] tvansteenburgh: do you have python3-setuptools ?
[17:48] yeah
[17:48] the make file was written for saucy
[17:48] so there is probably something missing
[17:49] if anyone is into Altcoin mining I would love your participation: https://github.com/webbrandon/mpos-charm (don't pay too much attention to the readme, been a week since i updated it.)
[17:50] marcoceppi: my python 3.4.0 is missing a stdlib module for some reason
[17:50] so maybe not the makefile at fault
[17:50] tvansteenburgh: wat
[17:50] i can try reinstalling python from source i guess?
[17:51] tvansteenburgh: uhhhh, I wouldn't
[17:51] yeah, no `ensurepip`
[17:51] tvansteenburgh: I was looking at this
[17:51] http://bugs.python.org/issue19406
[17:51] not sure if it helps
[18:03] sorry to be buggin today. I am working with the manual environment on 12.04 LTS as virtual machine host and when I want to add a new machine it says: ERROR machine is already provisioned. Tried all methods of adding but it gives me the same every time, what did I miss or is something wrong?
[18:05] or do I actually need to create my own virtual machines and connect to those?
[18:06] think I misunderstood the manual environment. Switching to MAAS per your suggestion marcoceppi
[18:13] marcoceppi: got it to work with this:
[18:13] - python3 -m venv venv
[18:13] + python3 -m venv venv --without-pip
[18:16] tvansteenburgh: cool
[18:16] tvansteenburgh: makes sense since we install pip later on
[18:16] right
[18:17] tvansteenburgh: pull req welcome :)
[18:17] yeah, will do
[18:19] twittersphere just sent me this: https://bugs.launchpad.net/ubuntu/+source/python3.4/+bug/1290847
[18:19] <_mup_> Bug #1290847: Command "python3 -m venv" fails if "--without-pip" isn't given
[18:31] tvansteenburgh: yeah
[18:53] so.. when you set up a private openstack cloud - how does juju know what images map to what releases, etc? is it expected that you set up your own simplestreams data?
[20:05] ok, I got the same "[Errno 111] Connection refused" again but this time while using JujuBox. The issue showed up after I vagrant suspend and vagrant resume
[20:06] The issue is shown in the juju-gui
[20:06] maybe there is a service not running after a reboot?
[20:07] I can actually access wordpress (the juju I deployed) fine, but not juju-gui
[20:09] lemao: is this in Firefox?
[20:09] safari
[20:09] lemao: there's a known issue where firefox has issues re-establishing the websocket connection when the machine the gui runs on is rebooted
[20:09] rick_h_: and this was working yesterday with a fresh JujuBox vagrant box launched
[20:10] rick_h_: is there a workaround?
[20:10] lemao: looking for the bug, one sec
[20:11] lemao: hmm, so using chrome worked. Closing and reopening the browser maybe? It's a card of work to investigate atm
[20:12] rick_h_: ok. Let me try it.
[20:36] okay getting started with maas has taken longer than expected
[20:39] only reason I'm doing this is because of aws storage size for a single host. they need to make tools that integrate their server environments directly into their S3 bin by default when reaching local capacity ;0)
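Back on the amulet venv issue above, the pieces of the fix in one place, assuming a trusty box hit by bug #1290847 (python3.4 shipped without a working ensurepip): create the venv without pip, then bootstrap pip by hand. The get-pip.py URL is the usual upstream bootstrap script, and the final install line is only roughly what `make develop` would do afterwards.

    python3 -m venv venv --without-pip
    curl -sS https://bootstrap.pypa.io/get-pip.py | venv/bin/python
    venv/bin/pip install -e .    # approximate stand-in for 'make develop'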
[20:43] where do the logs go for the maas server you can normally HTTP to
[20:45] Fishy__: /var/log/mass on that host iirc
[20:45] er maas obv :)
[20:45] celery.log is showing the wrong ip, hummm
[20:51] or maybe my webserver isn't running at all?
[21:04] hi marcoceppi, i've proposed a branch for charm-tools that i'd like you to look at when you've got a chance: https://codereview.appspot.com/83700043. it makes it easy to point charm-proof at different instances of charmworld for bundle proofing.
[21:09] how's it going guys
[21:10] hey Valduare :)
[21:10] lazyPower: see? no waiting three hours :P
[21:11] lol
[21:11] very nice
[21:11] I'm getting this channel trained up right! :P
[21:11] sarnold: clever :P
[21:11] lazyPower: :D <3
[21:12] if I brew uninstall juju
[21:12] how can I make sure everything is cleaned up
[21:20] rick_h_: It seems that a service was not running in the juju-gui machine. I had to 'sudo restart jujud-unit-juju-gui-0' and it is now working
[21:21] rick_h_: maybe the Javascript cached in the browser was trying to connect to the missing service.
[21:24] one other problem I am having with this Vagrant/JujuBox setup: juju debug-log # I get: Permission denied (publickey,password).
[21:25] however, juju ssh testcharm/0 works fine
[21:26] what is juju debug-log trying to do? ssh into each of the machines?
[21:32] how much memory does the juju-bootstrap vm need
[21:35] ok, debug-log is trying to ssh into machine-0, which doesn't have the juju key in the authorized_keys file. This seems to be a problem with JujuBox
[21:41] wellll, the private key has a passphrase so this must be another f...g user error
[21:41] thanks for listening :-)
[22:38] gah
[22:38] i don't get it
[22:38] i’ve made all new vm’s and can't provision them
[22:42] Valduare: what output are you seeing ?
[22:42] hmm
[22:42] think I just found something odd
[22:42] the juju-bootstrap vm - my user account on there is using my sparkleshare id_rsa key, not the one from the .ssh/ dir
[22:43] ugh
[22:54] re-installing the vm’s again from scratch and this time doing a snapshot of the fresh install lol
[23:22] question
[23:22] since my VMs are behind NAT
[23:23] should i be tunneling into the juju-bootstrap?
[23:23] so it can run the commands with their local ip
[23:23] instead of trying to port forward specific ssh ports for each vm
[23:24] Ok, I destroy'ed and up'ed the JujuBox vagrant box, vagrant ssh'ed into the box and juju debug-log will not work - so this doesn't seem to be a user error. Where should I report this?
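A hypothetical diagnostic for lemao's JujuBox report above. On juju 1.x, debug-log ssh'es into machine 0 and tails the consolidated log, so the client's juju key has to be in machine 0's authorized_keys (in JujuBox, machine 0 is the vagrant box itself). The key path below is the usual 1.x client location, but treat it as an assumption.

    # Inside the JujuBox VM: check whether the juju client key made it
    # into machine 0's (i.e. this box's) authorized_keys:
    grep -c juju ~/.ssh/authorized_keys

    # If it is missing, appending it is one possible workaround;
    # ~/.juju/ssh/juju_id_rsa.pub is the assumed juju 1.x key location.
    cat ~/.juju/ssh/juju_id_rsa.pub >> ~/.ssh/authorized_keys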
[23:39] arrg can't get by commissioning
[23:43] Valduare: before we move on to that question, i'm still waiting for details for your first request of 09:42
[23:44] my history doesn't go back that far
[23:44] what did i say
[23:45] 09:38 < Valduare> i’ve made all new vm’s and cant provision them
[23:45] ah
[23:46] so I blew away the vm’s and started from scratch
[23:46] even uninstalled juju with brew uninstall
[23:46] then re-installed 1.17.7
[23:46] and it is not able to add machines
[23:46] no host found
[23:49] going to log into ubuntu brb
[23:51] can we bootstrap envs in the trusty 14.04 beta ?
[23:52] ghartmann: yes and no
[23:52] yes you can deploy cs:trusty/ubuntu
[23:52] but that is about it
[23:54] I am getting an error when trying to bootstrap
[23:55] and it seems to be related to a 1.17 issue solved in 1.18
[23:55] ghartmann: can you give some details please
[23:56] sudo juju bootstrap used to work on 13.10
[23:56] it seems that now we can't run it as sudo
[23:56] ghartmann: which provider are you using ?
[23:56] local provider
[23:56] ghartmann: do the docs say to use sudo ?
[23:57] * davechen1y isn't sure, this has changed
[23:57] oh look
[23:57] they do say you have to use sudo
[23:57] it used to require sudo, somewhere on the webpages
[23:57] yup, docs are wrong
[23:57] but it seems that now it won't need it anymore
[23:58] juju will ask if it needs sudo permissions
[23:58] the error I get now is
[23:58] ERROR bootstrapping a local environment must not be done as root
[23:59] but without sudo it gives a panic
[23:59] "panic: runtime error: invalid memory address or nil pointer dereference"
[23:59] ghartmann: can you paste the full panic message
[23:59] on the channel ?
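A short sketch of the 1.17+ local-provider flow discussed above: newer clients refuse to bootstrap as root and escalate privileges on their own, so the old `sudo juju bootstrap` habit from the 13.10-era docs now produces the "must not be done as root" error. The environment name is a placeholder.

    # Old (13.10-era docs):  sudo juju bootstrap   <- now errors when run as root
    # New (juju 1.17+): run as your own user; juju asks for sudo itself.
    juju switch local
    juju bootstrap -v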