[00:27] marcoceppi: no, I don't recall an answer
[00:27] marcoceppi: just weirdness
[00:28] marcoceppi, sarnold: we have on our roadmap the ability to use juju where there is no storage api
[00:28] but it is down the line, closer to feb/mar next year at least
[00:28] thumper: cool :)
[00:52] thumper: for juju-plugins is the -e flag parsed in core or are all flags now passed to the plugin? (and is JUJU_ENV set by juju-core anymore)?
[00:59] Hey guys, I'm really new to this and I'm running into a problem. I have a MAAS server setup, and when I try to bootstrap juju, it keeps erroring with "ERROR could not access file 'provider-state': Get http://[old ip]/MAAS/api/1.0/files/provider-state/: dial tcp [old ip]:80: no route to host.
[01:00] marcoceppi: not sure, someone has "tweaked it"
[01:00] The juju yaml file was changed to the new IP, but it's wanting to look at the old IP for some reason. Thoughts?
[01:00] BrianH: what version of juju?
[01:01] marcoceppi: 3.2
[01:01] BrianH: that version of juju does not exist, what does `juju version` say?
[01:02] Oooh, sorry, 1.16.0-saucy-amd64
[01:03] Setting up a Zentyal 3.2 server at the same time :P
[01:03] BrianH: no worries! :)
[01:03] BrianH: have you run `juju destroy-environment` ?
[01:03] rather, try running that again
[01:03] then bootstrap with --debug flag
[01:05] Same error, but with the debug spam.
[01:05] I'd have to switch my IRC client to the machine to pastebin the results.
[01:06] BrianH: run destroy again, then tell me what the contents of the ~/.juju/environments/ directory looks like
[01:07] it has a maas.jenv file
[01:07] BrianH: that's where it's getting the settings from. thumper: shouldn't destroy-environment delete that? BrianH, delete that jenv file then bootstrap again
[01:08] yes destroy-environment should delete the jenv file
[01:08] also, thumper, if I should bother someone else let me know. You're just an easy nick to remember ;)
[01:08] :)
[01:08] * thumper is just writing emails ATM
[01:08] It's not deleting it. Looks like it's filled with tons of old info in there (catted the file before deleting). 1 sec ...
[01:10] BrianH: btw, the pastebinit package and program is really helpful :)
[01:11] it attempted to retrieve tools, then errored out: "ERROR cannot start bootstrap instance: cannot run instances: somaasapi: got error back from server: 409 CONFLICT"
[01:11] err, gomaasapi*, not somaasapi
[01:11] hah
[01:12] BrianH: we know what you meant
[01:12] 409 conflict can mean a ton of things, likely it means there are no instances available for your user
[01:13] instances on the MAAS?
[01:13] BrianH: yes
[01:13] BrianH: make sure you've got nodes enlisted and available for your user in the dashboard. Make sure you have your ssh keys, user authentication, etc all configured. Run destroy-environment again (for good measure), make sure the jenv is deleted (if it's not we'll need to talk about getting that filed as a bug), bootstrap again with --debug and pipe it to pastebinit (which can be installed on all ubuntu distros)
[01:14] I've done plenty of virtualization before (KVM, etc.) but this cloud stuff is so confusing, haha.
[01:14] BrianH: the maas stuff can be a bit dodgy to get up and running at first
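A minimal sketch of the stale-.jenv cleanup marcoceppi walks BrianH through above — illustrative only, with a hypothetical script name; `juju destroy-environment` is supposed to do this for you (see bug 1246429 later in this log):

    # clean_jenvs.py -- illustrative only; hypothetical helper name.
    # Removes stale cached environment files like the maas.jenv above.
    import glob
    import os

    for path in glob.glob(os.path.expanduser("~/.juju/environments/*.jenv")):
        print("removing stale environment cache: %s" % path)
        os.remove(path)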
[01:14] I haven't set up any nodes on the MAAS yet. Do I need to do that first?
[01:14] BrianH: yeah
[01:15] BrianH: so, the way this works with MAAS+Juju is it's not like EC2 or openstack where juju will tell the provider to create a machine
[01:15] BrianH: maas is designed to solve the problem "I have all this hardware and I want to use juju to drive it"
[01:16] BrianH: so you must tell maas about your hardware/machines first, then juju will use the pool of available machines to deploy stuff to it
[01:16] Ah, gotcha.
[01:16] It's so hard to find "for dummies" tutorials on this stuff. :)
[01:16] bootstrapping requires a machine in the provider to do the orchestration. So if you don't have a machine enlisted you'll get a conflict from the maas api, aka "YOU ASKED ME TO DO SOMETHING AND I CAN'T"
[01:17] BrianH: yeah, maas still has a bit of a learning curve to it unfortunately
[01:17] marcoceppi, how'd the reboot debug turn out?
[01:18] marcoceppi there's a bug in the last release re plugins not receiving JUJU_ENV
[01:19] hazmat: I just realized that I have 1.15 bootstrapped, the log is basically empty
[01:19] atm its a four layer lookup (cli, env var, home env, env file default)
[01:20] that gets duplicated in every plugin
[01:20] hazmat: yeah, I was about to start writing a python plugin helper
[01:21] that you can call to inherit an argparse that has the same stuff that the juju cli does, but I wanted to make sure that was expected behavior now and not a regression
[01:22] olafura, if you need some guidance let me know.. but in truth things would probably be easier with a core plugin, pyjuju dev is basically dead, and support is questionable, but either way i'd be happy to help get you started.
[01:22] marcoceppi, its a documented regression
[01:23] marcoceppi, future planned versions should get the JUJU_ENV var passed
[01:23] hazmat: awesome
[01:23] Hmm, the node is stuck "Commissioning". Does this usually take a while?
[01:24] BrianH: it can take a bit of time, IIRC this is maas doing some pxe booting stuff
[01:24] marcoceppi, although they also need to support -e in that context, core isn't parsing their args for them
[01:24] hazmat: well it'd be easier to just write an argparse that read -e and whose default value was os.environ['JUJU_ENV']
[01:25] marcoceppi, definitely
[01:26] marcoceppi, didn't see a bug, just filed 1246156 to ref the issue
[01:26] hazmat: bug?
[01:26] hazmat: awesome
[01:26] Need help configuing juju on Windows 8 | http://askubuntu.com/q/368232
[01:29] hazmat: thank you for that, pyjuju is great for debugging at least for me. I want to get it into a core plugin when I have worked out the kinks. I think a fork of goamz with Cloud Stack specific quirks and a juju provider with some ec2_uri and s3_uri configuration options is the best way to go.
[01:35] hazmat: I might commit some logging functions throughout juju-core so I can better see what's going wrong.
[01:36] olafura: I think that's what --debug and --show-log are for
[01:41] marcoceppi: I changed the IP address of my MAAS server and the web interface is barfing with an Internal Server error. Any way I can fix this?
[01:41] I tried restarting avahi-daemon, but still the same.
[01:41] BrianH: uh, it's a django application, there's a configuration for it somewhere. It's been a wee bit of time since I've used maas
[01:42] BrianH: you can try to run `sudo dpkg-reconfigure maas-cluster-controller`
[01:42] BrianH: that should allow you to re-enter the settings
[01:43] marcoceppi: I know and they are very helpful, I was just warning that if I find somewhere it's missing and it would help, then I would commit. It looks like the Go code might have better debugging output than the Python version.
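A rough sketch of the plugin-helper idea hazmat and marcoceppi discuss above: plugins must parse -e themselves, falling back to the JUJU_ENV variable. This covers only the first two layers of hazmat's four-layer lookup (the ~/.juju defaults are omitted), and it is not the helper marcoceppi actually wrote:

    # juju_plugin_args.py -- hypothetical helper, not marcoceppi's.
    import argparse
    import os

    def base_parser(description=""):
        parser = argparse.ArgumentParser(description=description)
        parser.add_argument(
            "-e", "--environment",
            default=os.environ.get("JUJU_ENV"),
            help="juju environment to operate on (defaults to $JUJU_ENV)")
        return parser

    if __name__ == "__main__":
        args = base_parser("example juju plugin").parse_args()
        print("operating on environment: %s" % args.environment)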
[01:43] BrianH: err, maybe just run dpkg-reconfigure maas
[01:44] Hmm, I ran the first one, entered the IP, still same error.
[01:44] Same after dpkg-reconfigure maas
[01:45] BrianH: what about dpkg-reconfigure python-django-maas
[01:45] Same.
[01:46] I'll try rebooting it.
[01:46] BrianH: is there an /etc/maas* file/directory?
[01:46] Yes
[01:46] You managed to catch me just a few days before trying to build my own small maas cluster :\
[01:46] BrianH: try searching through those files for the old IP and replace with the new ip
[01:47] marcoceppi: Will do. I appreciate all the help. :)
[01:47] heh, I'm lazy enough I'd aim for dpkg --purge and just start with a clean slate, hehe
[01:47] sarnold: thought about that, but didn't want to bork anything he's got enlisted
[01:47] sarnold: not sure how maas would handle that, though I guess it would just re-enlist it
[01:48] marcoceppi: I don't have anything enlisted at the moment. It's all virtualized, so it's easy to set up something too
[01:48] marcoceppi: yeah, that'd be painful if much were using it.. but I figured renumbering the maas controller wouldn't happen after it'd been in use for a while :)
[01:48] BrianH: if that doesn't resolve it, then sarnold's suggestion of purge and start again might not be a bad idea
[01:48] Cool beans. I might just do that. :)
[01:48] BrianH: also, what distro is the maas-master? precise?
[01:48] marcoceppi: your approach has the benefit of -learning- how it works. :)
[01:48] s/distro/release/
[01:49] No, it's on saucy
[01:49] BrianH: cool, saucy has a "better" version of maas
[01:50] Ah, good to know. :)
[01:50] I read there were lots of improvements from the LTS release, so I figured I'd try getting it running with Saucy first.
[01:50] olafura, sounds good
[01:51] BrianH: you can use the cloud-tools archive to get the most recent version of maas/juju on precise, but if you're on saucy that's fine (for now)
[01:51] I'm just a poor college student trying to learn all this cool, amazing cloud stuff, haha.
[01:52] is there any easy trick to make something run only on the first time a db-relation-joined happens for a particular database?
[01:52] BrianH: ah, in which case, saucy will do fine for you
[01:52] olafura, there's a sample bare-bones skeleton provider for core in https://code.launchpad.net/~fwereade/juju-core/provider-skeleton/+merge/189638
[01:52] so that if I remove-relation and then add-relation again, it doesn't run again
[01:52] this is for populating a database with initial data, but not over-writing existing data if it happens to rejoin
[01:52] mhall119, store local state
[01:52] or adding another unit
[01:52] mhall119: you can use files in the $CHARM_DIR to indicate this, for instance after doing the operations `touch .db-populated` then have a check for that file
[01:53] hazmat: Thank you, I'll look at that
[01:53] I probably have the most high-tech home network in my entire town (probably better than most small businesses around here too).
[01:53] mhall119, ie. store some local state the first time x happens, and check the state before doing x again.
[01:53] ok, so that's the usual way of doing it?
[01:53] and what's the hook that is called when remove-relation happens?
[01:54] mhall119, general case yes, specifics vary based on problem at hand.
[01:54] db-relation-removed?
[01:54] mhall119: it gets a little trickier in a multi-unit layout.
You'll probably need to devise a way to check the database if that's done
[01:54] mhall119, db-relation-broken
[01:54] thanks
[01:54] marcoceppi: yeah, I have at least 2 instances of the django app connecting to one instance of the db
[01:54] mhall119, what marcoceppi said is key.. ie check the db as the source of truth / sync between multiple units.
[01:55] mhall119: I had this problem in the discourse charm, I ended up just having the charm run a query against postgresql to see if it had done the seed or not
[01:55] only one set to be an admin node though, and only admin nodes set up the database, so that should be okay
[01:55] mhall119: ah, then just touching a file to track state should suffice
[01:55] marcoceppi: I can do that, write a custom django management command that checks the db and updates it if needed
[01:56] mhall119: that'd be the foolproof, multi-peer, way of doing it
[01:56] but if you design the charm to only have one admin node ever, then local state should suffice
[01:57] it's designed to *expect* one admin node ever
[01:57] the more I think about it, the more I want to recommend you write a task. What if you want to HA your admin nodes?
[01:57] if somebody were to make two, it would behave in undefined but very likely undesirable ways
[01:57] mhall119: okay, well that part's up to you then; if admin isn't designed to scale there's probably bigger issues to worry about
[01:57] marcoceppi: If I understand the webops correctly, the admin node doesn't actually get exposed to the outside world, so it would never need HA
[01:58] mhall119: well, it might want HA if the instance were to go away, then you'd possibly want failover so the nodes can talk to a new admin. But I'm just speculating, you know the service better than me (want to make sure you have all the info to make an informed decision)
[01:59] marcoceppi: the code is the same, the only thing that makes the admin node the admin node is juju set admin_node=True that tells the charm to run syncdb, migrate, and other DB setup commands
[02:00] the non-admin doesn't talk to the admin, or vice-versa
[02:00] mhall119: cool
[02:00] mhall119: gotchya
[02:00] so a state file should suffice for now
[02:00] sounds like it
[02:05] marcoceppi: heh, I just scratched the VMs for my server and node and am rebuilding from scratch. I'll set the static IP from the get-go so this doesn't happen again.
[02:32] marcoceppi: Btw, while setting up this new server, I discovered it's dpkg-reconfigure maas-region-controller for address changes.
[02:33] :)
[02:40] BrianH: awesome! good to know
[02:41] BrianH: I know you mentioned a complicated home networking setup, so you may already know that maas will basically try to own its own network and does addressing for nodes via dhcp
[02:41] marcoceppi: Yep, I have my dhcp server running on a Zentyal server.
[02:42] BrianH: right, but maas runs its own and assumes it is the controller of the network
[02:42] maas can use an external dhcp server
[02:42] hazmat: oh, cool. I wasn't sure
[02:43] marcoceppi, afaicr its just don't install maas-dhcp and configure next-server for dhcp to point to maas or use avahi
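Picking up the charm-state thread from earlier ([01:52]–[02:00]): a minimal sketch of the touch-a-state-file pattern marcoceppi describes, written here as a Python hook (most charms of this era used bash hooks; the marker file name follows his example):

    #!/usr/bin/env python
    # hooks/db-relation-joined -- sketch of the state-file pattern.
    # Seeds the database only once; a later remove-relation/add-relation
    # cycle finds the marker and skips the seed.
    import os

    MARKER = os.path.join(os.environ["CHARM_DIR"], ".db-populated")

    def seed_database():
        # placeholder for the real seeding (e.g. django syncdb/migrate)
        pass

    if __name__ == "__main__":
        if os.path.exists(MARKER):
            print("database already seeded; skipping")
        else:
            seed_database()
            open(MARKER, "w").close()  # the `touch .db-populated` step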
[03:05] marcoceppi: Ok, I have a new server and node setup. It's still saying "Commissioning" under the status, but the node won't fire up (It's a VirtualBox VM, so I imagine I need to start it manually?). When I start it and it attempts to PXE boot, I get an error about "Nothing to boot: No such file or directory"
[03:06] BrianH: yeah, VirtualBox and maas don't play together because VirtualBox doesn't pxe boot
[03:06] It sees the Next server and gets its own IP.
[03:06] BrianH: I tried this over a year ago with poor results: http://marcoceppi.com/2012/05/juju-maas-virtualbox/
[03:06] Isn't there a VirtualBox PXE boot image? I think it was iPXE?
[03:07] I'm using that tutorial.
[03:07] BrianH: right, it has PXE boot but not WOL
[03:07] got those two mixed up
[03:07] Ah, gotcha.
[03:08] It's getting late over here, brain is slowing down
[03:08] BrianH: I need to update this article with how to use vMAAS instead. Since MAAS has better virtual support built in (KVM/libvirt support)
[03:12] marcoceppi: Nice, I'll keep an eye on it then. I gotta crash for the evening (early day of classes tomorrow). I appreciate all the help you've given. Thank you. :)
[03:16] BrianH: o/ have a good one
[03:20] Hi, I am trying to get juju to bootstrap on a fresh private OpenStack. I am having issues at the bootstrap level.
[03:20] sodre: what version of juju are you using? `juju version`
[03:20] sodre, could you pastebin your juju bootstrap -v --debug
[03:20] output
[03:22] one sec.... its uploading ...
[03:26] http://pastebin.com/t0H8DVVG
[03:34] hazmat, the pastebin link is up.
[03:34] sodre, thanks
[03:34] marcoceppi, it is 1.16
[03:35] I am running on saucy and trying to bootstrap a precise image.
[03:37] sodre, and you have a precise image loaded into glance?
[03:37] I've used smoser's scripts to load up images into openstack.
[03:37] It created a bucket called simplestreams
[03:38] and yes, a bunch of images in glance.
[03:39] from oneiric to saucy. Both daily and released images.
[03:40] sodre, can you try running $ juju sync-tools
[03:41] it failed, can I paste the error here ?
[03:42] sodre, sorry could you re-run with -v --debug and pastebin it
[03:42] unless its a one-liner.. its generally nicer to pastebin blocks
[03:43] okay. it goes in pastebin then.
[03:43] http://pastebin.com/BJJUrasS
[03:43] sodre, so basically juju needs to find two pieces of info.. tools which it uploads and an image to run them on
[03:43] sodre: there's a command pastebinit that you can install and pipe output to
[03:44] both are located in a file format called simplestreams
[03:44] hmm
[03:45] there's two commands .. one to generate the tools simplestreams, and another to generate the image simplestream.
[03:45] let me install pastebinit...
[03:45] okay.
[03:46] how do these two commands work ?
[03:47] sodre, they basically stick a file into ostack swift with contents from either the upload or listing of tools (in the case of tools) or an explicitly passed-in image id in the case of the image command
[03:47] the image command is done as a plugin.. juju metadata -h
[03:47] but... holding off on that for a moment
[03:47] okay.
[03:48] sodre, the inability to list the bucket looks suspect in the last pastebin
[03:48] sodre, how'd you install openstack?
[03:48] agreed. I can list them using swift without problems.
[03:49] I installed using JUJU/MAAS
[03:49] hmm
[03:49] the only difference is that I am using radosgw
[03:50] sodre, so from the first pastebin the issue is the need for cloud images for juju to find
[03:51] I agree, I can post the output of glance image-list.
[03:51] the fact that it's uploading tools again on bootstrap is suspect imo, but i think it's just because of the lack of simplestreams metadata for the tools; its not the fatal issue, just annoying
[03:52] yes. I faced the same issue when bootstrapping MAAS
[03:52] glance image-list > http://paste.ubuntu.com/6327950/
[03:55] sodre, try this (precise amd64 image) juju metadata generate-image -i 907ca55d-a2e4-47c5-b26b-4be12bd78ecc -r http://m1basic-05.vm.draco.metal.as:5000/v2.0
[03:57] okay
[03:57] boilerplate image metadata written to .juju...
[03:58] what is my "public" bucket ?
[03:58] sodre, 'juju-dist' bucket
[03:59] okay.
[03:59] sodre, from the first pastebin it looks like its looking here.. http://m1basic-04.vm.draco.metal.as:80/swift/v1/admin-juju/streams/v1/index.json
[03:59] that was the control-bucket
[03:59] I can create a juju-dist
[04:00] sodre, it doesn't look like your public-bucket is set up correctly.. i believe juju is introspecting keystone metadata here.. the url its getting back is.. sodre, let's try that first
[04:01] er.. is
[04:01] swift://simplestreams/data/streams/v1/index.json
[04:01] which isn't valid, so its not really looking in juju-dist in this case
[04:01] okay..
[04:02] sodre, we can try and fix that later, but first we can just drop the simplestreams data into your control-bucket at that location
[04:02] hazmat: that generate-image command above is wrong
[04:02] -r is region
[04:02] -u is endpoint
[04:03] looks like -r was being used with an endpoint url
[04:03] :) should i start again ?
[04:03] wallyworld_, cool, maybe we should document it ;-)
[04:03] wallyworld_, so -r RegionOne and -u http://m1basic-05.vm.draco.metal.as:5000/v2.0
[04:03] hazmat: the doco is currently in the command when you do help, but real doco is a wip
[04:04] yes
[04:04] wallyworld_, cli help output sadly isn't an example of what a user needs to do.
[04:04] yeah i know. doco is on the todo list
[04:05] i'm way past EOD so i'm going to wander into the night.. sodre you're in good hands with wallyworld_
[04:05] okay. Thanks hazmat!
[04:07] wallyworld_: I am in the process of rerunning juju bootstrap. After that I'll upload the generated image metadata.
[04:07] bootstrap won't work without the correct image metadata
[04:07] both tools and image metadata need to be in place for bootstrap to work
[04:07] correct. That is what hazmat was trying to fix for me.
[04:09] do you have a different way to go about it ?
[04:09] tl;dr: you need to generate image metadata and upload it to your private storage. tools will be synced automatically if not present and bootstrap should run
[04:10] or you could upload the tools yourself, but best to let juju do it. i assume you are running 1.16?
[04:10] correct.
[04:10] I have a bucket called simplestreams
[04:10] cool. so "juju metadata generate-images -i xxxxx -r region -u endpoint"
[04:10] no
[04:10] upload streams/v1/* to private storage
[04:11] any bucket in particular ?
[04:11] the dir structure is analogous to cloud-images.canonical.com
[04:11] the root of the private storage i *think* from memory
[04:12] so when you run generate, you will have a streams/v1 dir somewhere
[04:12] upload that tree to private storage
[04:12] it just gave out the .json files directly
[04:12] then use the validate-image command to ensure it is correct
[04:12] but I know what you mean now.
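The swift upload wallyworld describes can also be done programmatically. An illustrative python-swiftclient equivalent, assuming Keystone v2 credentials in the usual OS_* variables and the 'admin-juju' control bucket from this log:

    # upload_streams.py -- illustrative; assumes python-swiftclient and
    # OS_* credentials in the environment.
    import os

    from swiftclient.client import Connection

    conn = Connection(
        authurl=os.environ["OS_AUTH_URL"],
        user=os.environ["OS_USERNAME"],
        key=os.environ["OS_PASSWORD"],
        tenant_name=os.environ["OS_TENANT_NAME"],
        auth_version="2.0")

    container = "admin-juju"  # the control-bucket seen in this log
    for name in os.listdir("streams/v1"):
        path = os.path.join("streams/v1", name)
        with open(path, "rb") as f:
            # keep the streams/v1/ prefix so juju can find the metadata
            conn.put_object(container, "streams/v1/" + name, contents=f)
            print("uploaded %s" % path)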
[04:13] it has changed in recent builds so i might be misremembering exactly what 1.16 does
[04:13] use validate-images before you bootstrap to make sure it is all ok, save wasting time
[04:13] it came back with an error.
[04:14] ERROR index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0} not found
[04:14] ERROR exit status 1
[04:14] is that from juju metadata validate-images?
[04:14] yes
[04:14] can you paste your index file?
[04:15] power2_mine.
[04:15] also run with --debug
[04:15] so i can see where it is trying to look
[04:15] power2_mine.
[04:15] okay
[04:15] whoops
[04:15] sorry
[04:16] what's power2_mine?
[04:16] atm its an old password ;-)
[04:16] index.json > http://paste.ubuntu.com/6328011/
[04:16] lol
[04:16] screen saver and multi-monitor fail
[04:17] hazmat: i need to validate your account details and current password, can you send to me :-P
[04:18] sodre: that index file looks ok, so it seems it is not being uploaded to the right place
[04:18] sodre: can you run validate-images with --debug?
[04:19] validate-images --debug > http://paste.ubuntu.com/6328025/
[04:20] sodre: where did swift://simplestreams.... come from? that's not right
[04:20] yeah.. good point.
[04:20] maybe I should clean my env again.
[04:20] wallyworld_, that's probably from keystone as a default unconfigured value
[04:21] ahhhhh
[04:21] I had an old openstack.jenv laying around.
[04:21] oh ok. keystone should not be returning anything for the product-streams endpoint else juju will use it
[04:21] yeah, those jenv files are a bit of a trap
[04:21] yeap.
[04:22] wallyworld_, it looks like he could upload directly to the control-bucket 'admin-juju', not optimal but functional
[04:22] ugh.. stale jenvs..
[04:22] hazmat: yeah, right now, you do need to upload to the control bucket
[04:22] alright. should I just get rid of the image-metadata-url from .jenv ?
[04:22] yep
[04:23] wallyworld_: sodre: https://bugs.launchpad.net/goose/+bug/1209003
[04:23] <_mup_> Bug #1209003: juju bootstrap fails with openstack provider (failed unmarshaling the response body)
[04:23] I'm off for a sec, but will be back in ~ 1 hr
[04:24] jam: right now it's a simplestreams config issue. hopefully the goose bug won't matter once that gets sorted
[04:25] jam: that looks like some of the errors I am seeing as well.
[04:25] alright. Let me try again with a clean jenv.
[04:26] it looks like I posted it to the wrong place...
[04:28] sodre: you only need image-metadata-url if you want to get tools from a place other than 1) your private cloud storage, 2) the configured endpoint in keystone
[04:29] latest validate-images --debug http://paste.ubuntu.com/6328061/
[04:29] Ideally I would like to host it internally. But right now I just want it to work :)
[04:32] sodre: so, it looks like it can find the index file now. but there's a mismatch on region/endpoint. looks like the endpoint in the json is http:// . are you sure it should not be https:// ?
[04:32] I don't think the default install used https
[04:32] it should be the same as your auth_url
[04:32] let me double check.
[04:33] it is http
[04:34] hmmm. can you paste the whole output without the truncation?
[04:34] as per keystone catalog
[04:34] it is expecting to match what is in your env file
[04:35] are you using the auth-url setting in your env file?
[04:35] Okay. do you want the output of export ?
[04:35] or the output from openstackrc.sh ?
[04:35] just the --debug when running the validate-images
[04:36] that was the whole output, 17 lines.
[04:36] btw, the generate-images command in 1.16 was a prototype tool for developers, it wasn't intended for end users. but there's no easy way to do private clouds without it. it's much better in the next release
[04:37] sodre: looks like the log is truncated on the right edge though
[04:37] oh wait
[04:37] i missed the scroll bar
[04:37] doh
[04:37] :)
[04:39] sodre: so just to check, can you paste the content of http://m1basic-04.vm.draco.metal.as:80/swift/v1/admin-juju/streams/v1/index.json for me?
[04:41] this is the output after calling swift download admin-juju .... > http://paste.ubuntu.com/6328095/
[04:42] sodre: can you see the problem?
[04:42] i can :-)
[04:42] ohhh
[04:42] :)
[04:42] Region region region :)
[04:42] yeah :-)
[04:42] that's strange..
[04:43] looks like that file was from the earlier wrong command
[04:43] argh...
[04:43] where -r was used
[04:44] alright... getting better
[04:44] sodre@ubuntu:~/.juju$ juju metadata validate-images
[04:44] matching image ids for region "RegionOne":
[04:44] 907ca55d-a2e4-47c5-b26b-4be12bd78ecc
[04:44] yay
[04:44] yay \o/
[04:45] so, should I bootstrap now ?
[04:45] why not. i can't recall if 1.16 had the tools syncing stuff in it
[04:45] cause you could get the tools set up first
[04:45] save the upload
[04:45] i think it did
[04:46] it has some in there.
[04:46] so, you can get the tarball you want
[04:46] save locally to /tools/releases
[04:46] juju sync-tools --source= --destination=
[04:47] then upload the tree to private storage
[04:47] so private storage will have a tools dir in it
[04:48] or you could just bootstrap with --upload-tools :-)
[04:48] I think it has one from the left-over bootstrap we did earlier
[04:48] you could run validate-tools then
[04:48] to see if juju can find them
[04:48] validated :)
[04:48] \o/
[04:49] so bootstrap should work hopefully
[04:49] unless that goose bug gets in the way
[04:50] alright...
[04:50] I forgot to run with debug
[04:50] oh, did it fail?
[04:51] well..
[04:51] it tried to start it with an m1.tiny.
[04:51] and that gave an error.
[04:51] but we had a lot of progress.
[04:52] yeah, there's a potential bug selecting a large enough instance type. use a constraint
[04:52] --constraints mem=1024 for example
[04:52] add that to the bootstrap command
[04:52] okay, if I control-c bootstrap how do I get it to run again ?
[04:52] bootstrap should return quite quickly
[04:53] you could juju destroy-environment, BUT that will also delete tools and image metadata
[04:53] what i would do is
[04:53] kill the bootstrap machine manually using the nova cli
[04:54] then remove the provider-state file which juju put in private storage
[04:54] that will allow you to run bootstrap again
[04:55] sodre, wallyworld_, and kill the jenv again
[04:55] okay.
[04:55] the jenv can stay i think?
[04:55] wallyworld_, it refs the old instance id i think
[04:55] cause it has cached the env stuff, no need to delete it
[04:55] ok, won't hurt
[04:56] jenv gone.
[04:56] in the past, i haven't needed to delete it i don't think, but better be safe
[04:57] wallyworld_, you're right, its fine for this provider type
[04:57] how is juju handling neutron networks? Does it create one by default ?
[04:58] ok. so many gotchas to keep track of :-)
[04:58] wallyworld_, manual provider was the one it caused issues with for me in this same context.
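wallyworld's manual reset recipe — kill the bootstrap instance, then delete provider-state from private storage — sketched with python-novaclient and python-swiftclient instead of the nova/swift CLIs. The instance-matching heuristic and client signatures are assumptions from that era's libraries; verify with `nova list` before deleting anything:

    # reset_bootstrap.py -- illustrative manual cleanup; check `nova list`
    # first, the name heuristic below is an assumption, not juju's naming.
    import os

    from novaclient.v1_1 import client as nova_client
    from swiftclient.client import Connection

    nova = nova_client.Client(
        os.environ["OS_USERNAME"], os.environ["OS_PASSWORD"],
        os.environ["OS_TENANT_NAME"], os.environ["OS_AUTH_URL"])

    for server in nova.servers.list():
        if "juju" in server.name:  # heuristic -- confirm before deleting
            print("deleting bootstrap instance %s" % server.name)
            server.delete()

    swift = Connection(
        authurl=os.environ["OS_AUTH_URL"],
        user=os.environ["OS_USERNAME"],
        key=os.environ["OS_PASSWORD"],
        tenant_name=os.environ["OS_TENANT_NAME"],
        auth_version="2.0")
    # provider-state lives at the root of the control bucket
    swift.delete_object("admin-juju", "provider-state")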
[04:58] sodre: we haven't done anything to support neutron yet afaik. it's a work in progress
[04:58] unless i'm misunderstanding the current state of play
[04:59] sodre, its not really doing anything with them atm, it assumes a private network for the instances to talk among, and a public net (or floating ips)..
[04:59] okay. so every instance will connect directly to the ext-net
[04:59] we've got plans to address that for 14.04 with first class network support (vpc, neutron, vlan, etc)
[04:59] nova can be set up so that neutron is transparent
[05:00] should keep things working for people
[05:00] lifeless: !
[05:00] lifeless, long time :-)
[05:00] hi
[05:00] wallyworld_: hazmat: o/
[05:00] lifeless, see you next week :-)
[05:00] hazmat: most excellent
[05:00] i guess my setup is not that way yet. the instance only got a floating ip
[05:00] what openstack release supports neutron transparently?
[05:00] sodre: no internal address ?
[05:01] wallyworld_: Grizzly and Havana and Icehouse
[05:01] excellent, thanks
[05:01] wallyworld_: it's a config issue though
[05:01] wallyworld_: see the default_floating_pool nova setting
[05:01] lifeless: yes,
[05:01] yeah, in the past we've had to assume lcd
[05:02] if that's wrong, and the default value is 'nova' but the example used in all the admin guides for neutron calls it 'ext-net', then nova will refuse to do floating ip operations
[05:02] wallyworld_: separately there is a nova setting to auto-allocate floating ips to instances
[05:02] that defaults off
[05:02] ok
[05:02] without that, instances by default end up with no floating/public ip
[05:03] here is the pastebin http://paste.ubuntu.com/6328157/
[05:03] sodre: i've not seen that error before. openstack networking is not my strong point
[05:04] sodre: you could try use-floating-ip=false
[05:04] in juju env config
[05:04] yeah, let me try again.
[05:06] No nw_info cache associated with instance <- that's a new one
[05:06] it's being thrown from the nova virt rpcapi manager
[05:06] [or near there, I haven't grepped for it yet]
[05:06] I just needed to have a local network set up before calling bootstrap
[05:10] Guys, thank you so much.
[05:11] I would not have been able to figure all this out on my own.
[05:11] I am having issues with the image booting up but I think they are all on my end
[05:12] sodre: no problem. the tooling and doco associated with setting up a private cloud is very much a work in progress. it works if done correctly, but the doco is not finished yet. the next release will be better
[05:13] wallyworld_: Thanks a lot. Is there a place where the wip document is located?
[05:13] sodre: right now, wip = no doc except for "juju help <command>", sorry
[05:13] np.
[05:14] so the commands have help but there's no end user task oriented doc
[05:14] ic... well .. thanks again !
[05:14] anytime
[05:15] about moving the tools and streams to their own dedicated buckets...
[05:16] once that is done, it is just a matter of changing tools-url and image-metadata-url, right ?
[05:16] yeah
[05:16] is there an easy script to mirror the s4 juju-dist bucket ?
[05:16] set up a publicly readable bucket
[05:16] s/s4/s3/
[05:17] that is going away real soon, and streams.canonical.com will take its place
[05:17] and mirroring will just be an rsync
[05:17] nice.
[05:17] if you can hang on a week or so.....
[05:17] not sure of the exact time, but rsn
[05:18] it will coincide with the release of juju 1.18
[05:18] when is 1.17 coming out ?
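An illustrative nova.conf fragment for the two settings lifeless names; the option names here are best-effort recollections of that era's nova, not quoted from the log:

    # /etc/nova/nova.conf (fragment; option names are recollections)
    # must name the neutron external network, or nova refuses
    # floating-ip operations -- the admin guides call it ext-net:
    default_floating_pool = ext-net
    # off by default; without it instances get no floating/public ip:
    auto_assign_floating_ip = true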
[05:18] hola juju crowd.. I have a problem with 1.14.0-0ubuntu1~ubuntu12.04.1~juju1. I need to set a config-flag for the nova-cloud-controller charm.
[05:19] soon. we will release that as a 1.18 beta if you like
[05:19] i try: juju set nova-cloud-controller config-flags="scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter"
[05:19] but nova.conf on the machine ends up with only scheduler_default_filters=AggregateInstanceExtraSpecsFilter
[05:19] try using "" around the filters value?
[05:20] just a guess
[05:20] i first try \, , and this was a disaster
[05:20] wallyworld_: have a good rest of day back in your TZ.
[05:20] sodre: will do :-) let us know if you need anything else
[05:20] there are 2 nova-cloud-controller (hacluster charm subordinate), and one unit ended up with a config error that i could not solve (no relation id error each time i try a new juju set)
[05:21] thanks I'll stop by again.
[05:21] i had to destroy the unit and redeploy it again.
[05:21] melmoth: i'm not sure of the answer to your question. marcoceppi are you around to help out?
[05:22] marcoceppi, if you are around this is with some folk you already met (remember the land of the rising sun ? :-) )
=== CyberJacob|Away is now known as CyberJacob
=== dosaboy_ is now known as dosaboy
[12:59] sodre, wallyworld hazmat, fwiw, the 'example-sync' that sodre ran specifically creates metadata in a swift bucket.
[12:59] so that juju should just need to be pointed at that.
[13:00] (or the target 'swift' output made to match)
[13:00] also an option is to register that endpoint in keystone (swift path) and then juju will find it there.
[13:00] that is how canonistack works.
=== zz_paulczar is now known as paulczar
[14:04] sinzui: https://code.launchpad.net/~adeuring/charmworld/more-heartbeat-info/+merge/193248
[14:04] thank you adam_g
[14:04] thank you adeuring
=== paulczar is now known as zz_paulczar
[14:13] so in ceph charm, where does the 'charm' command come from (in Makefile).
[14:13] racedo:/win 27
[14:13] jamespage, ^ ?
[14:13] doh
[14:14] smoser, oh
[14:14] smoser, charm-helpers itself - that bit sucks a lot right now
[14:14] smoser, no
[14:14] sorry
[14:14] charm-tools
[14:14] charm proof right?
[14:14] I have a "local" environment but I can't connect to it - and I think it's because I ran out of disk space. How can I clean up if I can't connect?
[14:15] provider-state: dial tcp 10.0.3.1:8040: connection refused
[14:16] smoser: the charm command is from charm-tools
[14:16] freephile: that means that the API service or db service isn't running, what does `initctl list | grep juju` show?
[14:16] marcoceppi, so install that from the archive ?
[14:17] smoser: in saucy it's good, otherwise install from ppa:juju/stable
[14:17] marcoceppi: juju-db-root-local stop/waiting juju-agent-root-local start/running, process 1145
[14:17] saucy... pfft.
[14:17] trusty man.
[14:17] smoser: trusty is good too :)
[14:17] lol smoser
[14:17] smoser: basically you want charm-tools > 1.0
[14:17] charm tools depends on juju core ?
[14:18] smoser: recommends, I believe
[14:18] ah. probably.
[14:18] since it's also a juju plugin, via `juju charm`
[14:18] recommends == depends for all practical purposes
[14:18] * marcoceppi nods
[14:20] marcoceppi: do I start with (as root) 'service juju-db-root-local start'
[14:20] freephile: sorry, yes sudo start juju-db-root-local
[14:21] then run juju status again
[14:21] or whatever command failed
[14:28] if I start the db service, it immediately stops (because I'm out of disk space).
=== zz_paulczar is now known as paulczar
[14:28] I tried 'start juju-db-root-local && juju ssh opengrok/0 initctl stop opengrok-index'
[14:29] to no avail
[14:29] freephile: ohh, you're going to need to free up some disk space
[14:30] freephile: if you can't trim files (say, zeroing out ~/.juju/local/log/*.log files) etc, you can just destroy the opengrok unit with lxc commands
[14:36] adeuring, r=me. coordinate your change to Approved with bac. he is giving trunk to juju-gui-bot now
[14:37] adeuring, I had some suggestions for a follow-up branch
[14:37] sinzui: thanks
[14:37] evilnickveitch, hey, apparently you got a submission on bundles for the docs?
[14:37] marcoceppi: thanks, I zeroed out the log files (was wondering about that) but it wasn't enough apparently to get the db to stay up. I'll check into lxc commands
[14:38] freephile: you'll want `sudo lxc-ls --fancy`, then `sudo lxc-destroy -n <container-name>`
[14:38] sinzui: featured is already covered by "API(2|3) interesting", I think
[14:39] freephile: after you clear up disk space, start the juju db then run juju destroy-environment to get the rest of the deployment cleaned up
[14:39] jcastro, i do, yes
[14:39] adeuring, it is not
[14:39] evilnickveitch, do you have it anywhere I can look at it? branch or something?
[14:39] sinzui: so, you think it should get its own status? Or am I missing something else?
[14:40] adeuring, API2/3 can fail for 3 or more reasons. Knowing specifically that featured is empty on staging is a fast fix.
[14:40] jcastro, i have a google doc
[14:41] adeuring: please do not land to charmworld right now.
[14:41] bac: ok, tell me when i can land it
[14:41] adeuring, juju-gui (staging and production) needs to fail when those collections are empty. When we set up a new env, charmworld is still in a bad state after the first ingest because we are often missing human created data
=== paulczar is now known as zz_paulczar
=== zz_paulczar is now known as paulczar
[14:50] marcoceppi: Success!!! `lxc-ls -l; lxc-stop -n root-local-machine-2; lxc-destroy -n root-local-machine-2; start juju-db-root-local; juju status;`
[14:52] freephile: cool, you'll find that the opengrok service is in a down state (obviously, as you destroyed it) but you should be able to destroy the environment and recreate it
[14:52] etc
[14:57] adeuring: go ahead and land charmworld as before. then hold off any more.
[14:57] bac: ok, thanks
[14:58] bac: done
=== natefinch is now known as natefinch-afk
[15:54] hey.
[15:54] so i just manually manage the 'revision' file ?
[15:54] smoser: no
[15:54] the revision file is only used for local deployments, and it should be incremented automatically by juju
[15:55] smoser: in fact you can add it to .bzrignore
[15:55] jcastro: CHARM SYNC
[15:55] o/
[15:55] yep
[15:55] firing it up
[15:55] wanna seed the pad?
[15:55] woo who
[15:55] jcastro: yup
[15:57] evilnickveitch: https://juju.ubuntu.com/docs/authors-charm-writing.html "The README is a good place to make nots about how the charm works" <-- should I file a bug about that, or is it okay to have just mentioned it here?
[15:57] evilnickveitch, misfire, one sec.
[15:58] https://plus.google.com/hangouts/_/7acpicbshl5mtk1tqjntg4g30k?authuser=0&hl=en
[15:59] evilnickveitch, arosales ^^
[15:59] jcastro: http://pad.ubuntu.com/7mf2jvKXNa
=== rogpeppe2 is now known as rogpeppe
=== paulczar is now known as zz_paulczar
=== mwhudson- is now known as mwhudson
[17:14] marcoceppi, i asked about 'revision' because 'charm proof' complained 'ERROR' in its absence.
[17:37] halp!
[17:37] mhall@mhall-thinkpad:~$ juju status
[17:37] ERROR Unable to connect to environment "local".
[17:37] Please check your credentials or use 'juju bootstrap' to create a new environment.
[17:37] Error details:
[17:37] Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[17:37] sinzui, are there PPA builds of 1.16.1 somewhere?
[17:37] mhall@mhall-thinkpad:~$ sudo juju bootstrap
[17:37] Swipe your right index finger across the fingerprint reader
[17:37] ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[17:39] adam_g, they are not yet. We are looking into an azure issue that first blocked the test, and now it looks like a fix is needed for 1.16.1
[17:39] adam_g, I am off to lunch. I can arrange a package for you if you need one today
[17:45] jcastro: juju is broken
[17:45] what's up?
[17:45] see my errors above
[17:45] I can't even bootstrap a local env
[17:45] can you pastebin the `sudo juju --debug bootstrap`?
[17:46] mhall119, you're in luck, marco and I are working on a troubleshooting document for the local provider
[17:46] and by luck I mean "haha".
[17:46] mhall119: a good first step is to delete any misc .jenv files in ~/.juju/environments and try again
[17:47] mgz: \o/ that seems to have done the trick
[17:47] \o/
[17:49] * mhall119 tries deploying again
=== zz_paulczar is now known as paulczar
[18:04] smoser: that's been fixed in 1.1 which should be released tomorrow
[18:05] mhall119: jcastro: https://bugs.launchpad.net/juju-core/+bug/1246429
[18:05] <_mup_> Bug #1246429: destroy-environment no longer removes .jenv
[18:06] thanks marcoceppi
=== natefinch-afk is now known as natefinch
[18:09] I was about to say "Hey I can't replicate this!" But then I realized I'm on 1.15.1 :)
[18:49] smoser: can I talk to you about simplestreams
[18:53] sodre, i've got a few minutes.
[18:53] what's up?
[18:54] I was trying to run your script last night. The found an issue with the integration with radosgw
[18:54] s/The/I
[18:54] hm.. ok.
[18:54] my question: is there a particular issue whey you call _strip_version?
[18:54] s/whey/why/
[18:55] this is on line openstack.py:98
[18:58] sodre, that is copied from other clients that do it.
[18:58] what is the problem with doing that ?
[18:59] It does not work with the default (juju deployed) ceph-radosgw charm.
[19:00] If I don't strip the version, then your code works fine.
[19:01] hm..
=== paulczar is now known as zz_paulczar
[19:22] smoser: still thinking ?
[19:26] sodre, sorry. on a call now.
[19:26] alright np.
[19:26] let me know when we can chat about that bug.
[19:27] jcastro: is there any easy way to condense 'juju status' to just the really useful details?
[19:27] I want to 'watch "juju status"' to see things changing, but it's more than will fit on my terminal
[19:27] mhall119: same problem here ....
[19:28] mhall119: sounds like you want to write a plugin
[19:28] mhall119: like what, you just want a list of units and their status?
[19:28] yeah
[19:28] mhall119: hold up, let me try something
[19:29] have you tried watch 'juju status | grep state' ?
[19:31] the quotes are important.
[19:32] sodre: that doesn't give me the unit though
[19:33] yeah, we need a better 'grep'
[19:33] mhall119: sodre: it's time to introduce you to plugins, give me just a few more mins I'll have a working example
[19:34] :) nice
[19:35] mhall119, I do `juju status wordpress` or whatever to get each one
[19:35] I have long wanted
[19:35] juju top
[19:35] with an htop looking view of stuff
[19:40] +1
[19:43] mhall119 jcastro sodre drop this in a directory in your path: http://paste.ubuntu.com/6331880/
[19:43] once it's in path, juju prettyprint will produce that, you should be able to watch it from there
[19:44] come onnnnnnn pastebin
[19:45] mhall119: sodre https://gist.github.com/marcoceppi/7238964
[19:46] With more time you could easily make a juju top command which could poll the API and present useful data about services, units and machines much like htop
[19:47] yeah, it's just a manpower issue
[19:47] no one's going to drop working on HA to work on juju top, heh
[19:47] exactly. So I have just empowered two users to use and abuse plugins
=== zz_paulczar is now known as paulczar
[19:48] It'd be a nice low hanging fruit for a new person though
[19:48] now we just need to wait for mhall119 to submit his juju top plugin :)
[19:48] (python-jujuclient exists as a python library for talking to the API, hint hint wink wink)
[19:48] stealing client people won't happen either, I've already tried that
[19:48] * marcoceppi twiddles thumbs
[19:49] I like it :)
[19:50] it crashes at first since I have nothing on open-ports. but I got the gist.
[19:51] sodre: ahh, yeah public-address will mess it up too. You'd just have to add sanity checks in there
[19:51] * marcoceppi does the quick and dirty script
[19:51] "use at your own risk"
[19:51] thanks for pointing out how I can put it together.
[19:52] sodre: yeah, you can run `juju help plugins` to get an idea of plugins you have installed and what not
[19:53] they're an under-publicised feature of juju
[19:53] ahhh
[19:53] that's where the metadata and deployer show up.
[19:54] sodre: same with charm-tools if you have that installed `juju charm`, etc
[19:55] really it's basically juju-<command>: if it doesn't exist in core but that binary exists, juju core just passes everything on to it
[19:55] exactly like git and bazaar plugins
[19:55] ic
[20:00] jcastro, so to double confirm on charm bundles
[20:00] so bundles are pretty cool
[20:00] ok so I have .... 5 bundles right now
[20:01] liferay
[20:01] while policy is being properly defined, bundles will be "featured" in the gui but just under your namespace. Is my understanding correct?
[20:01] jcastro, ^
[20:01] "scalable jenkins" which is one jenkins with 3 slaves
[20:01] scalable mediawiki with a load balancer
[20:01] a simple mediawiki
[20:01] and wordpress
[20:01] arosales, yes, I'm about to push the first one
[20:01] and then we'll see how it gets indexed
[20:01] jcastro, cool thanks for confirming on that
[20:04] jcastro: will you be able to promote non "promulgated" bundles?
[20:05] I am asking bac that now
[20:06] jcastro: dude, can you test charm-tools 1.1 for me?
[20:07] and just `charm proof --bundle` each of the bundles you're writing?
[20:07] oh, yeah!
[20:07] PPA?
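Since paste links rot: an illustrative stand-in (not marcoceppi's actual script) for a `juju prettyprint` plugin like the one linked above. Any executable named juju-<command> on your PATH becomes a subcommand:

    #!/usr/bin/env python
    # juju-prettyprint -- illustrative stand-in, NOT the pasted script.
    # Put it on your PATH, chmod +x, then: watch juju prettyprint
    import subprocess

    import yaml

    status = yaml.safe_load(
        subprocess.check_output(["juju", "status", "--format", "yaml"]))

    for service, sdata in sorted((status.get("services") or {}).items()):
        for unit, udata in sorted((sdata.get("units") or {}).items()):
            # .get() guards against the missing open-ports /
            # public-address keys that crashed the original for sodre
            print("%-24s %-12s %s" % (unit,
                                      udata.get("agent-state", "unknown"),
                                      udata.get("public-address", "")))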
[20:07] jcastro: because you're definitely doing it wrong
[20:07] jcastro: it'll be a manual install, let me update the URL and I'll give you a link
[20:07] ok guys, so featuring a bundle will be the same as a charm
[20:07] we go into manage.jujucharms.com and check the box
[20:07] it'll ingest the first bundle in ~15 or so, then we can mess with it
[20:10] marcoceppi, hey so bac tells me that we'll also need to promulgate the bundles
[20:10] so we'll need charm tools updated
[20:11] it has promulgate support
[20:11] <3
[20:11] for bundles?
[20:11] <3
[20:11] yes
[20:11] sweet
[20:11] oh, you mentioned it during the status call, I remember now
[20:12] marcoceppi, ok so I'll test your proof tool
[20:12] then push
[20:12] we'll wait 15 for them to seed in the store
[20:12] then you can promulgate?
[20:12] yes
[20:12] jcastro:
[20:12] I'll push my discourse one up too, but not promulgate it
[20:13] bzr branch lp:~marcoceppi/charm-tools/bundle-support charm-tools; cd charm-tools; python setup.py install
[20:13] err
[20:13] bzr branch lp:~marcoceppi/charm-tools/bundle-support charm-tools; cd charm-tools; sudo python setup.py install
[20:13] jcastro: then you should be able to juju charm proof --bundle /path/to/bundle/directory
[20:13] well, you can omit the --bundle flag, it'll detect a bundle automatically
[20:14] and then the fury of the proof'er will come down upon you!
[20:14] rick_h__: I was looking over his branch, saying to myself "yeah, this is a great test case for proof"
[20:14] lol
[20:15] marcoceppi: so heads up, we're actually going to work on pulling in the deployer to do bundle proofing. Share the same exact bits as much as possible. So heads up that new stuff should pop up even though you don't update the charm-tools
[20:15] rick_h__: that's fine and perfect
[20:16] marcoceppi, http://pastebin.ubuntu.com/6331983/
[20:16] I tried different permutations
[20:16] lol
[20:16] rick_h__: the only thing I'm really checking for in the deployer file is annotations
[20:16] jcastro: because it's not a valid bundle
[20:16] marcoceppi: rgr, just more an FYI because people will fail proof and probably come chat with you/this channel
[20:16] the error message isn't clear though, it looks for "bundle.json or bundle.yaml"
[20:17] which are the only two files supported
[20:17] rick_h__: I'll make sure it displays warnings as well
[20:17] from the api
[20:17] marcoceppi: cool
[20:17] so is that a bug in the tool or are we expecting everyone to name things bundle.yaml
[20:17] jcastro: I'll update so that when you use the --bundle flag and it detects it's not a bundle it'll say "Not a bundle because no bundle file (.json or .yaml) found"
[20:17] jcastro: expecting them to name it bundle.yaml
[20:18] jcastro: the GUI expects bundle.*
[20:18] it's a bug in that the message to the user is misleading (and an ugly exception traceback)
[20:18] same errors when I rename it to bundle.yaml
[20:19] also, daddy needs autocompletion!
[20:19] jcastro: well daddy can submit a merge req :)
[20:19] jcastro: one second, let me branch your branch
[20:19] I hope the bundle is valid, because I got it from the gui
[20:19] if not, we have other problems, heh
[20:20] jcastro: where is your branch?
[20:20] jcastro: no, we've got chances to excel :)
[20:20] jcastro: rename the envExport to 'wordpress' as well please
[20:21] rick_h__: yeah, I was hoping proof would pick that up
[20:21] jcastro: we've got a bug to change that to ask you for a name on export, but it must not have made it yet
[20:21] jcastro: please don't name it wordpress
[20:21] marcoceppi: we don't, it's valid
[20:21] rick_h__: my proof will
[20:21] marcoceppi: but yea, we want to fix the gui export to not keep reusing the same name
[20:21] marcoceppi: oh, cool then.
[20:21] https://code.launchpad.net/~jorge/charms/bundles/wordpress/bundle
[20:21] branch is here
[20:21] sorry it's so convoluted. I miss _one lousy_ session
[20:21] and this is what you come up with rick
[20:22] might as well add some plusses and whitespace to the url
[20:22] lmao, to which url? the LP branches?
[20:22] yeah, seriously, who is going to remember this url?
[20:22] this is bws-readme all over again
[20:22] jcastro: it needs to be called bundles.yaml
[20:22] rick_h__: correct?
[20:22] bundles with an s?
[20:22] what's the file name plurarl or singular?
[20:23] I'm currently looking for pluarl
[20:23] I'm also looking for a new spell checker
[20:23] plural doesn't work either
[20:23] marcoceppi: bundles.yaml
[20:23] is what we're looking for
[20:23] in ingest
[20:23] jcastro: users should never see the url tbh
[20:24] jcastro: it works for me but I get a weird error from remote proof
[20:24] ok so what will the final cli command look like for deploying a bundle?
[20:24] jcastro: they go to the gui and either get a UI to pick the one, or they get a bundle:~jcastro/wordpress/5/wordpress url
[20:24] also, if not wordpress, what do I name envExport?
[20:24] jcastro: something more descriptive than wordpress, you're creating a solution
[20:25] jcastro: so if it's just wordpress + mysql and default config you've created a solution not many people would want imo
[20:25] wordpress-simple is a good start
[20:25] got it
[20:25] rick_h__, but the gui doesn't support colocation yet
[20:25] the name should describe what you've solved
[20:25] so for a bunch of these one shot bundles they'll need the CLI
[20:25] jcastro: not showing it, no, but it should 'work'
[20:25] marcoceppi, got it
[20:26] marcoceppi, that's what I was naming the yaml files
[20:26] like simple-wordpress.yaml
[20:26] jcastro: yeah
[20:26] this is what I was talking about
[20:26] that bundles.yaml file can have MULTIPLE bundles in it
[20:26] rick_h__, is there a way I can do `juju deploy bundle:~jorge/wordpress` without all the other stuff?
[20:26] so you can have a wordpress branch, with a bundles.yaml that has simple-wordpress, scaled-out-wordpress, etc
[20:27] oh!
[20:27] jcastro: yes, that's what quickstart is for
[20:27] juju quickstart bundle:~jcastro/...
[20:27] but I can't make that in the GUI, I'd have to make them individually and then combine them into one file
[20:27] jcastro: right
[20:27] rick_h__, right, so when does that land in relation to bundles?
[20:27] jcastro: the gui only handles one at a time
[20:27] jcastro: along-side-ish? I'm not 100% sure. It's almost working now
[20:27] ok so what happens as of today if I drag a multiple environment yaml file into the GUI?
[20:28] jcastro: it tries to deploy them
[20:28] until they collide "You've already got a wordpress installed" and then dies
[20:28] no, it fails if there is more than one target in the file
[20:28] it can ask for a named target, but there is no UI around that now
[20:29] ok so for now they'll have to be individual bundles
[20:29] bcsaller: oh, right. I was thinking if you did multiple drags
[20:29] simple-wordpress, HA-wordpress, and so on?
[20:29] jcastro: yes
[20:29] jcastro: fixed charm-tools, bzr pull, run install again
[20:29] marcoceppi, got it
[20:31] W: No readme file found
[20:31] E: envExport is the default export name. Please use a unique name
[20:31] E: envExport: Could not find charm: wordpress
[20:31] E: envExport: Could not find charm: mysql
[20:31] Yeah!
[20:31] now we're getting somewhere
[20:31] jcastro: I'm working with rick_h__ on why it says can not find charm
[20:32] the first two are valid issues
[20:32] on it
[20:33] rick_h__, man, if at some point today I have to add a -HEAD to the end of one of these commands .... *eyes narrow*
[20:33] ok.
[20:33] stupid person here.
[20:33] $ bzr push lp:~smoser/charms/precise/maas-region
[20:33] bzr: ERROR: Permission denied: "~smoser/charms/precise/maas-region/": : Cannot create branch at '/~smoser/charms/precise/maas-region'
[20:33] jcastro: not at all
[20:33] what should that be ?
[20:33] smoser: add /trunk to the end of it
[20:33] jcastro: it's a new feature, the bug is in there.
[20:33] ah.
[20:33] gracias
[20:33] it's user/project/series/package/branch
[20:34] ok wordpress is done, hopping on a call with kirkland, and I'll finish up the rest.
[20:34] * kirkland high fives jcastro
[20:34] rick_h__, what url can I monitor to see when the bundle gets ingested?
[20:35] jcastro: http://manage.jujucharms.com/search?search_text=jcastro&op=
[20:35] jcastro: right now it will have failed due to the file name
[20:35] rick_h__, sorry for being annoying, you know how near and dear simple URLs are to my heart.
[20:36] I just committed a fix with the rename
[20:36] jcastro: so a push up with that fixed should get it ingested
[20:36] jcastro: cool
[20:36] * jcastro nods
[20:36] jcastro: I'm with you, but the urls for branches in the series and crap is waaaay out of my hands.
[20:37] yeah I get it
[20:44] thumper: is there a short flag for --debug?
[20:44] or is it only in long form?
[20:45] only long at this stage
[20:45] ack
[20:47] jcastro: the versionless cs: urls are breaking it atm. We *want* to support it so will have to update charmworld
[20:48] jcastro: if you put the versions back in it'll work ok and proof. I'm starting a branch to figure out a way around the versionless issue now
[20:48] marcoceppi: ^
[20:59] rick_h__: ack, thanks!
[21:00] marcoceppi: have a fix, will get it reviewed and landed tomorrow. Can verify on staging sometime then
[21:00] rick_h__: cool cool, I'll release charm-tools regardless after I iron out a few things here
[21:00] since that's all remote proof
[21:00] marcoceppi: +1 appreciate it
[21:19] rick_h__, ok so is it ok if I push versionless now?
[21:19] jcastro: yea, we're not proofing; it'll just continue to fail marcoceppi's proof tool until this lands on production
[21:19] that's fine
[21:21] hmm, no ingestion yet?
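Pulling the naming rules from this exchange together — the file is bundles.yaml, the GUI's default envExport key gets a descriptive name, charm URLs keep their revisions (versionless cs: urls broke ingestion at the time), and one file may hold several bundles. The revision numbers below are invented for illustration:

    # bundles.yaml -- illustrative; the deployment name replaces the
    # GUI's default "envExport", and the charm revisions are made up.
    wordpress-simple:
      series: precise
      services:
        wordpress:
          charm: cs:precise/wordpress-16
          num_units: 1
        mysql:
          charm: cs:precise/mysql-26
          num_units: 1
      relations:
        - - wordpress:db
          - mysql:db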
=== txwikinger is now known as txwikinger2
=== txwikinger2 is now known as txwikinger
[22:13] thumper's blog post about his logging library for Go is now up on hacker news: https://news.ycombinator.com/item?id=6643805
=== paulczar is now known as zz_paulczar
=== zz_paulczar is now known as paulczar
[22:48] hazmat: How does juju-deployer determine the bootstrap IP addresses for using the jujuclient?
[22:49] marcoceppi, its going to move to api-endpoints in the future
[22:49] marcoceppi, atm its using juju status
[22:49] hazmat: ah, gotchya
[22:49] marcoceppi, you trying to use the api direct?
[22:49] er jujuclient direct
[22:49] hazmat: yeah, was going to try to
[22:49] marcoceppi, cool
[22:50] hazmat: but I don't know how to find the bootstrap IP address without first running juju status
[22:50] is it in the jenv file?
[22:50] marcoceppi, juju api-endpoints
[22:50] magic
[22:50] hazmat: thank you!
[22:50] marcoceppi, i added that explicitly for this purpose ...
[22:50] np
[22:50] <3
[22:54] guys, I am facing an issue with juju bootstrap not setting up my password
[22:54] sodre, you mean your ssh key?
[22:54] yes
[22:54] I can paste the vm boot-log
[22:55] http://paste.ubuntu.com/6332775/
=== paulczar is now known as zz_paulczar
[22:57] sodre, that's a nice one..
[22:57] yeah :)
[22:57] sodre, that's a go panic trying to set the mongodb password. its basically your admin-secret from the environments.yaml
[22:57] ahhh
[22:57] it needs to be complicated, right ?
[22:58] sodre, well not really, it needs to be sized under 30 characters i think
[22:58] (note there's a "-----BEGIN RSA PRIVATE KEY-----" in that paste, I hope it's ephemeral data...)
[22:58] its just a random string in this mostly..
[22:59] sarnold, it is.. if the environment is.
[22:59] yes, the environment was random
[22:59] yay :)
[22:59] sarnold: thanks for pointing it out.
[22:59] its part of the auto-generated ca and server cert juju sets up for the env
[23:00] sodre, i'd re-try with a random 10 digit string
[23:00] yeap. I am doing that right now. I think I had read about that ''feature'' before.
[23:02] sodre, i thought it was size validated.. but maybe not..
[23:02] Most people don't see this issue because the environment is generated.
[23:02] I wrote mine by hand, so that is why it happened.
[23:05] humm... same error
[23:05] sodre, hmm
[23:05] let me do a generate
[23:05] and try again.
[23:06] sodre, it gets stored in the jenv
[23:06] if you're yanking it
[23:06] destroy-env clears that though
[23:07] I deleted that by hand, I think
[23:29] hazmat: same issue even with a random long password
[23:42] sodre: maybe you can update the bug with any additional info?
[23:44] will do
[23:44] what would you need ?
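To close the loop on hazmat's hint: a sketch of talking to the API directly with python-jujuclient, using `juju api-endpoints` to find the server. Class and method names are recalled from that era's jujuclient and should be checked against your installed version:

    # watch_services.py -- illustrative; API names recalled from this
    # era's python-jujuclient, double-check against your version.
    import subprocess

    from jujuclient import Environment

    # first endpoint reported by `juju api-endpoints` (host:port)
    endpoint = subprocess.check_output(
        ["juju", "api-endpoints"]).decode().split()[0]

    env = Environment("wss://%s" % endpoint)
    env.login("your-admin-secret")  # the admin-secret from environments.yaml

    status = env.status()
    # key casing follows the raw API response of this era ("Services")
    for name in sorted(status.get("Services") or {}):
        print(name)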