[00:27] <thumper> marcoceppi: no, I don't recall an answer
[00:27] <thumper> marcoceppi: just weirdness
[00:28] <thumper> marcoceppi, sarnold: we have on our roadmap the ability to use juju where there is no storage api
[00:28] <thumper> but it is down the line, closer to feb/mar next year at least
[00:28] <sarnold> thumper: cool :)
[00:52] <marcoceppi> thumper: for juju-plugins is the -e flag parsed in core or are all flags now passed to the plugin? (and is JUJU_ENV set by juju-core anymore)?
[00:59] <BrianH> Hey guys, I'm really new to this and I'm running into a problem.  I have a MAAS server setup, and when I try to bootstrap juju, it keeps erroring with "ERROR could not access file 'provider-state': Get http://[old ip]/MAAS/api/1.0/files/provider-state/: dial tcp [old ip]:80: no route to host."
[01:00] <thumper> marcoceppi: not sure, someone has "tweaked it"
[01:00] <BrianH> The juju yaml file was changed to the new IP, but it's wanting to look at the old IP for some reason.  Thoughts?
[01:00] <marcoceppi> BrianH: what version of juju?
[01:01] <BrianH> marcoceppi: 3.2
[01:01] <marcoceppi> BrianH: that version of juju does not exist, what does `juju version` say?
[01:02] <BrianH> Oooh, sorry, 1.16.0-saucy-amd64
[01:03] <BrianH> Setting up a Zentyal 3.2 server at the same time :P
[01:03] <marcoceppi> BrianH: no worries! :)
[01:03] <marcoceppi> BrianH: have you run `juju destroy-environment` ?
[01:03] <marcoceppi> rather, try running that again
[01:03] <marcoceppi> then bootstrap with --debug flag
[01:05] <BrianH> Same error, but with the debug spam.
[01:05] <BrianH> I'd have to switch my IRC client to the machine to pastebin the results.
[01:06] <marcoceppi> BrianH: run destroy again, then tell me what the contents of ~/.juju/environments/ directory looks like
[01:07] <BrianH> it has a maas.jenv file
[01:07] <marcoceppi> BrianH: that's where it's getting the settings from. thumper shouldn't destroy-environment delete that? BrianH, delete that jenv file then bootstrap again
[01:08] <thumper> yes destroy-environment should delete the jenv file
[01:08] <marcoceppi> also, thumper, if I should bother someone else let me know. You're just an easy nick to remember ;)
[01:08] <thumper> :)
[01:08]  * thumper is just writing emails ATM
[01:08] <BrianH> It's not deleting it.  Looks like it's filled with tons of old info in there (catted the file before deleting).  1 sec ...
[01:10] <sarnold> BrianH: btw, the pastebinit package and program is really helpful :)
[01:11] <BrianH> it attempted to retrieve tools, then errors out "ERROR cannot start  bootstrap instance: cannot run instances: somaasapi: got error back from server: 409 CONFLICT"
[01:11] <BrianH> err, gomaasapi*, not somaasapi
[01:11] <thumper> hah
[01:12] <thumper> BrianH: we know what you meant
[01:12] <marcoceppi> 409 conflict can mean a ton of things, likely it means there are no instances available for your user
[01:13] <BrianH> instances on the MAAS?
[01:13] <marcoceppi> BrianH: yes
[01:13] <marcoceppi> BrianH: make sure you've got nodes enlisted and available for your user in the dashboard. Make sure you have your ssh keys, user authentication, etc all configured. Run destroy environment again (for good measure) make sure the jenv is deleted (if it's not we'll need to talk about getting that filed as a bug), bootstrap again with --debug and pipe it to pastebinit (which can be installed on all ubuntu distros)
[01:14] <BrianH> I've done plenty of virtualization before (KVM, etc.) but this cloud stuff is so confusing, haha.
[01:14] <marcoceppi> BrianH: the maas stuff can be a bit dodgy to get up and running at first
[01:14] <BrianH> I haven't setup any nodes on the MAAS yet.  Do I need to do that first?
[01:14] <marcoceppi> BrianH: yeah
[01:15] <marcoceppi> BrianH: so, the way this works with MAAS+Juju is it's not like EC2 or openstack where juju will tell the provider to create a machine
[01:15] <marcoceppi> BrianH: maas is designed to solve the problem "I have all this hardware and I want to use juju to drive it"
[01:16] <marcoceppi> BrianH: so you must tell maas about your hardware/machines first, then juju will use the pool of available machines to deploy stuff to it
[01:16] <BrianH> Ah, gotcha.
[01:16] <BrianH> It's so hard to find "for dummies" tutorials on this stuff. :)
[01:16] <marcoceppi> bootstrapping requires a machine in the provider to do the orchestration. So if you don't have a machine enlisted you'll get a conflict from the maas api, aka "YOU ASKED ME TO DO SOMETHING AND I CAN'T"
[01:17] <marcoceppi> BrianH: yeah, maas still has a bit of a learning curve to it unfortunately
[01:17] <hazmat> marcoceppi, how'd the reboot debug turn out?
[01:18] <hazmat> marcoceppi there's a bug in the last release re plugins not receiving JUJU_ENV
[01:19] <marcoceppi> hazmat: I just realized that I have 1.15 bootstrapped, the log is basically empty
[01:19] <hazmat> atm it's a four-layer lookup (cli, env var, home env, env file default)
[01:20] <hazmat> that gets duplicated in every plugin
[01:20] <marcoceppi> hazmat: yeah, I was about to start writing a python plugin helper
[01:21] <marcoceppi> that you can call to inherit an argparse that has the same stuff that the juju cli does, but I wanted to make sure that was expected behavior now and not a regression
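The plugin helper marcoceppi describes here can be a few lines of argparse. Below is a minimal sketch assuming the lookup order hazmat mentions (CLI flag, then JUJU_ENV, then a per-user current-environment file, then the `default:` key in `~/.juju/environments.yaml`); the file paths and function names are illustrative assumptions, not juju-core API:

```python
import argparse
import os


def default_env():
    """Layered environment lookup, per hazmat's description: JUJU_ENV first,
    then ~/.juju/current-environment, then the "default:" key in
    ~/.juju/environments.yaml. The paths are assumed 1.16-era conventions."""
    env = os.environ.get("JUJU_ENV")
    if env:
        return env
    current = os.path.expanduser("~/.juju/current-environment")
    if os.path.isfile(current):
        with open(current) as f:
            return f.read().strip()
    envs = os.path.expanduser("~/.juju/environments.yaml")
    if os.path.isfile(envs):
        with open(envs) as f:
            for line in f:
                if line.startswith("default:"):
                    return line.split(":", 1)[1].strip()
    return None


def plugin_parser(description):
    """Argparse base a plugin can extend; an explicit -e wins over the
    layered default."""
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument("-e", "--environment", default=default_env(),
                        help="juju environment to operate on")
    return parser
```

A plugin would call `plugin_parser(...)`, add its own arguments, and read `args.environment` without caring which layer supplied the value.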
[01:22] <hazmat> olafura, if you need some guidance let me know.. but in truth things would probably be easier with a core plugin, pyjuju dev is basically dead, and support is questionable, but either way i'd be happy to help get you started.
[01:22] <hazmat> marcoceppi, its a documented regression
[01:23] <hazmat> marcoceppi, future plan versions should get JUJU_ENV var passed
[01:23] <marcoceppi> hazmat: awesome
[01:23] <BrianH> Hmm, the node is stuck "Commissioning".  Does this usually take a while?
[01:24] <marcoceppi> BrianH: it can take a bit of time, IIRC this is maas doing some pxe booting stuff
[01:24] <hazmat> marcoceppi, although they also need to support -e in that context,  core isn't parsing their args for them
[01:24] <marcoceppi> hazmat: well it'd be easier to just write an argparse that read -e and default value was os.environ['JUJU_ENV']
[01:25] <hazmat> marcoceppi, definitely
[01:26] <hazmat> marcoceppi, didn't see a bug, just filed 1246156 to ref the issue
[01:26] <marcoceppi> hazmat: bug?
[01:26] <marcoceppi> hazmat: awesome
[01:26] <AskUbuntu> Need help configuing juju on Windows 8 | http://askubuntu.com/q/368232
[01:29] <olafura> hazmat: thank you for that, pyjuju is great for debugging at least for me. I want to get it into a core plugin when I have worked out the kinks. I think a fork of goamz with Cloud Stack specific quirks and a juju provider with some ec2_uri and s3_uri configuration options is the best way to go.
[01:35] <olafura> hazmat: I might commit some logging functions throughout juju-core so I can better see what's going wrong.
[01:36] <marcoceppi> olafura: I think that's what --debug and --show-log are for
[01:41] <BrianH> marcoceppi: I changed the IP address of my MAAS server and the web interface is barfing with an Internal Server error.  Any way I can fix this?
[01:41] <BrianH> I tried restarting avahi-daemon, but still the same.
[01:41] <marcoceppi> BrianH: uh, it's a django application, there's a configuration for it somewhere. It's been a wee bit of time since I've used maas
[01:42] <marcoceppi> BrianH: you can try to run `sudo dpkg-reconfigure maas-cluster-controller`
[01:42] <marcoceppi> BrianH: that should allow you to re-enter the settings
[01:43] <olafura> marcoceppi: I know, and they are very helpful; I was just warning that if I find somewhere it's missing and would help, then I would commit. It looks like the Go code might have better debugging code than the python version.
[01:43] <marcoceppi> BrianH: err, maybe just run dpkg-reconfigure maas
[01:44] <BrianH> Hmm, I ran the first one, entered the IP, still same error.
[01:44] <BrianH> Same after dpkg-reconfigure maas
[01:45] <marcoceppi> BrianH: what about dpkg-reconfigure python-django-maas
[01:45] <BrianH> Same.
[01:46] <BrianH> I'll try rebooting it.
[01:46] <marcoceppi> BrianH: is there an /etc/maas* file/directory?
[01:46] <BrianH> Yes
[01:46] <marcoceppi> You managed to catch me just a few days before trying to build my own small maas cluster :\
[01:46] <marcoceppi> BrianH: try searching through those files for the old IP and replace with the new ip
[01:47] <BrianH> marcoceppi: Will do. I appreciate all the help. :)
[01:47] <sarnold> heh, I'm lazy enough I'd aim for dpkg --purge and just start with a clean slate, hehe
[01:47] <marcoceppi> sarnold: thought about that, but didn't want to bork anything he's got enlisted
[01:47] <marcoceppi> sarnold: not sure how maas would handle that, though I guess it would just re-enlist it
[01:48] <BrianH> marcoceppi: I don't have anything enlisted at the moment.  It's all virtualized, so it's easy to setup something too
[01:48] <sarnold> marcoceppi: yeah, that'd be painful if much were using it.. but I figured renumbering the maas controller wouldn't happen after it'd been in use for a while :)
[01:48] <marcoceppi> BrianH: if that doesn't resolve it, then sarnold's suggestion of purge and start again might not be a bad idea
[01:48] <BrianH> Cool beans.  I might just do that. :)
[01:48] <marcoceppi> BrianH: also, what distro is the maas-master? precise?
[01:48] <sarnold> marcoceppi: your approach has the benefit of -learning- how it works. :)
[01:48] <marcoceppi> s/distro/release/
[01:49] <BrianH> No, it's on saucy
[01:49] <marcoceppi> BrianH: cool, saucy has a "better" version of maas
[01:50] <BrianH> Ah, good to know. :)
[01:50] <BrianH> I read there were lots of improvements from the LTS release, so I figured I'd try getting it running with Saucy first.
[01:50] <hazmat> olafura, sounds good
[01:51] <marcoceppi> BrianH: you can use the cloud-tools archive to get the most recent version of maas/juju on precise, but if you're on saucy that's fine (for now)
[01:51] <BrianH> I'm just a poor college student trying to learn all this cool, amazing cloud stuff, haha.
[01:52] <mhall119> is there any easy trick to make something run only on the first time a db-relation-joined happens for a particular database?
[01:52] <marcoceppi> BrianH: ah, in which case, saucy will do fine for you
[01:52] <hazmat> olafura, there's a sample bare-bones skeleton provider for core in https://code.launchpad.net/~fwereade/juju-core/provider-skeleton/+merge/189638
[01:52] <mhall119> so that if I remove-relation and then add-relation again, it doesn't run again
[01:52] <mhall119> this is for populating a database with initial data, but not over-writing existing data if it happens to rejoin
[01:52] <hazmat> mhall119, store local state
[01:52] <mhall119> or adding another unit
[01:52] <marcoceppi> mhall119: you can use files in the $CHARM_DIR to indicate this, for instance after doing the operations `touch .db-populated` then have a check in for that file
[01:52] <olafura> hazmat: Thank you I'll look at that
[01:53] <BrianH> I probably have the most high-tech home network in my entire town (probably better than most small businesses around here too).
[01:53] <hazmat> mhall119, ie. store some local state the first time x happens, and check if state before doing x again.
[01:53] <mhall119> ok, so that's the usual way of doing it?
[01:53] <mhall119> and what's the hook that is called when remove-relation happens?
[01:54] <hazmat> mhall119, general case yes, specifics vary based on problem at hand.
[01:54] <mhall119> db-relation-removed?
[01:54] <marcoceppi> mhall119: it gets a little trickier in a multi-unit layout. You'll probably need to devise a way to check the database to see if that's been done
[01:54] <hazmat> mhall119, db-relation-broken
[01:54] <mhall119> thanks
[01:54] <mhall119> marcoceppi: yeah, I have at least 2 instances of the django app connecting to one instance of the db
[01:54] <hazmat> mhall119, what marcoceppi said is key.. ie check the db as source of truth / sync between multiple units.
[01:55] <marcoceppi> mhall119: I had this problem in the discourse charm, I ended up just having the charm run a query against postgresql to see if it had done the seed or not
[01:55] <mhall119> only one set to be an admin node though, and only admin nodes setup the database, so that should be okay
[01:55] <marcoceppi> mhall119: ah, then just touching a file to track state should suffice
[01:55] <mhall119> marcoceppi: I can do that, write a custom django management command that checks the db and updates it if needed
[01:56] <marcoceppi> mhall119: that'd be the foolproof, multi-peer way of doing it
[01:56] <marcoceppi> but if you design the charm to only have one admin node ever, then local state should suffice
[01:57] <mhall119> it's designed to *expect* one admin node ever
[01:57] <marcoceppi> the more I think about it, the more I want to recommend you write a task. What if you want to HA your admin nodes?
[01:57] <mhall119> if somebody were to make two, it would behave in undefined but very likely undesirable ways
[01:57] <marcoceppi> mhall119: okay, well that part's up to you then; if admin isn't designed to scale there are probably bigger issues to worry about
[01:57] <mhall119> marcoceppi: If I understand the webops correctly, the admin node doesn't actually get exposed to the outside world, so it would never need HA
[01:58] <marcoceppi> mhall119: well, it might want HA if the instance were to go away, then you'd possibly want failover so the nodes can talk to a new admin. But I'm just speculating, you know the service better than me (want to make sure you have all the info to make an informed decision)
[01:59] <mhall119> marcoceppi: the code is the same, the only thing that makes the admin node the admin node is `juju set admin_node=True`, which tells the charm to run syncdb, migrate, and other DB setup commands
[02:00] <mhall119> the non-admin doesn't talk to the admin, or vice-versa
[02:00] <marcoceppi> mhall119: cool
[02:00] <marcoceppi> mhall119: gotchya
[02:00] <mhall119> so a state file should suffice for now
[02:00] <marcoceppi> sounds like it
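The two state-tracking approaches discussed above can be sketched as small helpers. This is a hedged sketch: `do_seed` is a hypothetical callable that loads the initial data, the marker filename follows the `touch .db-populated` idea from the discussion, and sqlite3 stands in for PostgreSQL purely so the example is self-contained:

```python
import os
import sqlite3


def seed_once_local(charm_dir, do_seed):
    """Single-admin-unit case: track state with a marker file in the
    charm directory, as marcoceppi suggests."""
    marker = os.path.join(charm_dir, ".db-populated")
    if os.path.exists(marker):
        return False  # relation rejoined; initial data already loaded
    do_seed()
    open(marker, "w").close()  # equivalent of `touch .db-populated`
    return True


def seed_once_db(conn, do_seed):
    """Multi-unit case: ask the database itself whether the seed ran,
    like the discourse charm's query against postgresql. The table name
    is made up for illustration."""
    row = conn.execute(
        "SELECT name FROM sqlite_master"
        " WHERE type='table' AND name='initial_data'").fetchone()
    if row:
        return False  # another unit (or a rejoin) already seeded
    conn.execute("CREATE TABLE initial_data (id INTEGER PRIMARY KEY)")
    do_seed(conn)
    conn.commit()
    return True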
[02:05] <BrianH> marcoceppi: heh, I just scratched the VMs for my server and node and am rebuilding from scratch.  I'll set the static IP from the get-go so this doesn't happen again.
[02:32] <BrianH> marcoceppi: Btw, while setting up this new server, I discovered it's dpkg-reconfigure maas-region-controller for address changes.
[02:33] <sarnold> :)
[02:40] <marcoceppi> BrianH: awesome! good to know
[02:41] <marcoceppi> BrianH: I know you mentioned a complicated home networking setup, so you may already know that maas will basically try to own its own network and does addressing for nodes via dhcp
[02:41] <BrianH> marcoceppi: Yep, I have my dhcp server running on a Zentyal server.
[02:42] <marcoceppi> BrianH: right, but maas runs its own and assumes it is the controller of the network
[02:42] <hazmat> maas can use an external dhcp server
[02:42] <marcoceppi> hazmat: oh, cool. I wasn't sure
[02:43] <hazmat> marcoceppi, afaicr it's just: don't install maas-dhcp, and configure next-server for dhcp to point to maas or use avahi
[03:05] <BrianH> marcoceppi: Ok, I have a new server and node setup.  It's still saying "Commissioning" under the status, but the node won't fire up (It's a VirtualBox VM, so I imagine I need to start it manually?).  When I start it and it attempts to PXE boot, I get an error about "Nothing to boot: No such file or directory"
[03:06] <marcoceppi> BrianH: yeah, VirtualBox and maas don't play together because VirtualBox doesn't pxe boot
[03:06] <BrianH> It sees the Next server and gets its own IP.
[03:06] <marcoceppi> BrianH: I tried this over a year ago with poor results: http://marcoceppi.com/2012/05/juju-maas-virtualbox/
[03:06] <BrianH> Isn't there a VirtualBox PXE boot image?  I think it was iPXE?
[03:07] <BrianH> I'm using that tutorial.
[03:07] <marcoceppi> BrianH: right, it has PXE boot but not WOL
[03:07] <marcoceppi> got those two mixed up
[03:07] <BrianH> Ah, gotcha.
[03:08] <marcoceppi> It's getting late over here, brain is slowing down
[03:08] <marcoceppi> BrianH: I need to update this article with how to use vMAAS instead, since MAAS has better virtual support built in (KVM/libvirt support)
[03:12] <BrianH> marcoceppi: Nice, I'll keep an eye on it then.  I gotta crash for the evening (early day of classes tomorrow).  I appreciate all the help you've given.  Thank you. :)
[03:16] <marcoceppi> BrianH: o/ have a good one
[03:20] <sodre> Hi, I am trying to get juju to bootstrap on a fresh private OpenStack. I am having issues at the bootstrap level .
[03:20] <marcoceppi> sodre: what version of juju are you using? `juju version`
[03:20] <hazmat> sodre, could you pastebin your juju bootstrap -v --debug
[03:20] <hazmat> output
[03:22] <sodre> one sec.... its uploading ...
[03:26] <sodre> http://pastebin.com/t0H8DVVG
[03:34] <sodre> hazmat, the pastebin link is up.
[03:34] <hazmat> sodre, thanks
[03:34] <sodre> marcoceppi, it is 1.16
[03:35] <sodre> I am running on saucy and trying to bootstrap a precise image.
[03:37] <hazmat> sodre, and you have a precise image loaded into glance?
[03:37] <sodre> I've used smoser's scripts to load up images into open stack.
[03:37] <sodre> It created a bucket called simplestreams
[03:38] <sodre> and yes, a bunch of images on glance.
[03:39] <sodre> from oneiric to saucy. Both daily and released images.
[03:40] <hazmat> sodre, can you try running $ juju sync-tools
[03:41] <sodre> it failed, can I paste the error here ?
[03:42] <hazmat> sodre, sorry could you re-run with -v  --debug and pastebin it
[03:42] <hazmat> unless its a one liner.. its generally nicer to pastebin blocks
[03:43] <sodre> okay. it goes in pastebin then.
[03:43] <sodre> http://pastebin.com/BJJUrasS
[03:43] <hazmat> sodre, so basically juju needs to find two pieces of info.. tools which it uploads and an image to run them on
[03:43] <marcoceppi> sodre: there's a command pastebinit that you can install and pipe output to
[03:44] <hazmat> both are located in a file format called simplestreams
[03:44] <hazmat> hmm
[03:45] <hazmat> there are two commands.. one to generate the tools simplestreams, and another to generate the image simplestreams.
[03:45] <sodre> let me install pastebinit...
[03:45] <sodre> okay.
[03:46] <sodre> how do these two commands work ?
[03:47] <hazmat> sodre, they basically stick a file into ostack swift with contents from either the upload or listing of tools (in the case of tools) or an explicitly passed-in image id in the case of the image command
[03:47] <hazmat> the image command is done as a plugin.. juju metadata -h
[03:47] <hazmat> but... holding off on that for a moment
[03:47] <sodre> okay.
[03:48] <hazmat> sodre, the inability to list the bucket looks suspect in the last pastebin
[03:48] <hazmat> sodre, how'd you install openstack?
[03:48] <sodre> agreed . I can list them using swift without problems.
[03:49] <sodre> I installed using JUJU/MAAS
[03:49] <hazmat> hmm
[03:49] <sodre> the only difference is that I am using radosgw
[03:50] <hazmat> sodre, so from the first pastebin the issue is the need for cloud images for juju to find
[03:51] <sodre> I agree, I can post the output of glance list-images.
[03:51] <hazmat> the fact that it's uploading tools again on bootstrap is suspect imo, but i think it's just because of the lack of simplestreams metadata for the tools; it's not the fatal issue, just annoying
[03:52] <sodre> yes. I faced the same issue when bootstrapping MAAS
[03:52] <sodre> glance image-list > http://paste.ubuntu.com/6327950/
[03:55] <hazmat> sodre, try this (precise amd64 image) juju metadata generate-image -i 907ca55d-a2e4-47c5-b26b-4be12bd78ecc -r http://m1basic-05.vm.draco.metal.as:5000/v2.0
[03:57] <sodre> okay
[03:57] <sodre> boilerplate image metadata written to .juju...
[03:58] <sodre> what is my "public" bucket ?
[03:58] <hazmat> sodre, 'juju-dist' bucket
[03:59] <sodre> okay.
[03:59] <hazmat> sodre, from the first pastebin it looks like its looking here.. http://m1basic-04.vm.draco.metal.as:80/swift/v1/admin-juju/streams/v1/index.json
[03:59] <sodre> that was the control-bucket
[03:59] <sodre> I can create a juju-dist
[04:00] <hazmat> sodre, it doesn't look like your public-bucket is setup correctly.. i believe juju is introspecting keystone metadata here.. the url its getting back is.. sodre, let's try that first
[04:01] <hazmat> er.. is
[04:01] <hazmat> swift://simplestreams/data/streams/v1/index.json
[04:01] <hazmat> which isn't valid, so its not really  looking in juju-dist in this case
[04:01] <sodre> okay..
[04:02] <hazmat> sodre, we can try and fix that later, but first we can just drop the simplestreams data into your control-bucket at that location
[04:02] <wallyworld_> hazmat: that generate-image command above is wrong
[04:02] <wallyworld_> -r is region
[04:02] <wallyworld_> -u is endpoint
[04:03] <wallyworld_> looks like -r was being used with an endpoint url
[04:03] <sodre> :) should i start again ?
[04:03] <hazmat> wallyworld_, cool, maybe we should document it ;-)
[04:03] <hazmat> wallyworld_, so -r RegionOne and -u http://m1basic-05.vm.draco.metal.as:5000/v2.0
[04:03] <wallyworld_> hazmat: the doco is currently in the command when you do help, but real doco is a wip
[04:04] <wallyworld_> yes
[04:04] <hazmat> wallyworld_, cli help output sadly isn't an example of what a user needs to do.
[04:04] <wallyworld_> yeah i know. doco is on the todo list
[04:05] <hazmat> i'm way past EOD so i'm going to wander into the night.. sodre you're in good hands with wallyworld_
[04:05] <sodre> okay. Thanks hazmat!
[04:07] <sodre> wallyworld_ : I am in the process of rerunning juju bootstrap. After that I'll upload the generated image metadata.
[04:07] <wallyworld_> bootstrap won't work without the correct image metadata
[04:07] <wallyworld_> both tools and image metadata needs to be in place for bootstrap to work
[04:07] <sodre> correct. That is what hazmat was trying to fix for me.
[04:09] <sodre> do you have a different way to go about it ?
[04:09] <wallyworld_> tl;dr: you need to generate image metadata and upload to your private storage. tools will be synced automatically if not present and bootstrap should run
[04:10] <wallyworld_> or you could upload the tools yourself, but best to let juju do it. i assume you are running 1.16?
[04:10] <sodre> correct.
[04:10] <sodre> I have a bucket called simplestreams
[04:10] <wallyworld_> cool. so "juju metadata generate-images -i xxxxx -r region -u endpoint"
[04:10] <wallyworld_> no
[04:10] <wallyworld_> upload streams/v1/* to private storage
[04:11] <sodre> any bucket in particular ?
[04:11] <wallyworld_> the dir structure is analogous to cloud-images.canonical.com
[04:11] <wallyworld_> the root of the private storage i *think*, from memory
[04:12] <wallyworld_> so when you run generate, you will have a streams/v1 dir somewhere
[04:12] <wallyworld_> upload that tree to private storage
[04:12] <sodre> it just gave out the .json files directly
[04:12] <wallyworld_> then use validate-image command to ensure it is correct
[04:12] <sodre> but I know what you mean now.
[04:13] <wallyworld_> it has changed in recent builds so i might be misremembering exactly what 1.16 does
[04:13] <wallyworld_> use validate-images before you bootstrap to make sure it is all ok, save wasting time
[04:13] <sodre> it came back with an error.
[04:14] <sodre> ERROR index file has no data for cloud {RegionOne http://m1basic-05.vm.draco.metal.as:5000/v2.0} not found
[04:14] <sodre> ERROR exit status 1
[04:14] <wallyworld_> is that from juju metadata validate-images?
[04:14] <sodre> yes
[04:14] <wallyworld_> can you paste your index file?
[04:15] <hazmat> power2_mine.
[04:15] <wallyworld_> also run with --debug
[04:15] <wallyworld_> so i can see where it is trying to look
[04:15] <hazmat>  power2_mine.
[04:15] <sodre> okay
[04:15] <hazmat> whoops
[04:15] <hazmat> sorry
[04:16] <wallyworld_> what's power2_mine?
[04:16] <hazmat> atm its an old password ;-)
[04:16] <sodre> index.json > http://paste.ubuntu.com/6328011/
[04:16] <wallyworld_> lol
[04:16] <hazmat> screen saver and multi-monitor fail
[04:17] <wallyworld_> hazmat: i need to validate your account details and current password, can you send to me :-P
[04:18] <wallyworld_> sodre: that index file looks ok, so it seems it is not being uploaded to the right place
[04:18] <wallyworld_> sodre: can you run validate-images with --debug?
[04:19] <sodre> validate-images --debug > http://paste.ubuntu.com/6328025/
[04:20] <wallyworld_> sodre: where did swift://simplestreams.... come from? that's not right
[04:20] <sodre> yeah.. good point.
[04:20] <sodre> maybe I should clean my env again.
[04:20] <hazmat> wallyworld_, that's probably from keystone as a default unconfigured value
[04:21] <sodre> ahhhhh
[04:21] <sodre> I had an old openstack.jenv laying around.
[04:21] <wallyworld_> oh ok. keystone should not be returning anything for product-streams endpoint else juju will use it
[04:21] <wallyworld_> yeah, those jenv files are a bit of a trap
[04:21] <sodre> yeap.
[04:22] <hazmat> wallyworld_, it looks like he could upload directly to the control-bucket 'admin-juju' not optimal but functional
[04:22] <hazmat> ugh.. stale jenvs..
[04:22] <wallyworld_> hazmat: yeah, right now, you do need to upload to control bucket
[04:22] <sodre> alright. should I just get rid of the image-metadata-url from .jenv ?
[04:22] <wallyworld_> yep
[04:23] <jam> wallyworld_: sodre: https://bugs.launchpad.net/goose/+bug/1209003
[04:23] <_mup_> Bug #1209003: juju bootstrap fails with openstack provider (failed unmarshaling the response body) <openstack> <Go OpenStack Exchange:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/1209003>
[04:23] <jam> I'm off for a sec, but will be back in ~ 1 hr
[04:24] <wallyworld_> jam: right now it's a simplestreams config issue. hopefully goose bug won't matter once that gets sorted
[04:25] <sodre> jam: that  looks like some of the errors I am seeing as well.
[04:25] <sodre> alright. Let me try again with a clean jenv.
[04:26] <sodre> it looks like I posted it to the wrong place...
[04:28] <wallyworld_> sodre: you only need image-metadata-url if you want to get image metadata from a place other than 1) your private cloud storage, 2) the configured endpoint in keystone
[04:29] <sodre> latest validate-images --debug http://paste.ubuntu.com/6328061/
[04:29] <sodre> Ideally I would like to host it internally. But right now I just want it to work :)
[04:32] <wallyworld_> sodre: so, it looks like it can find the index file now. but there's a mismatch on region/endpoint. looks like the endpoint in the json is http:// . are you sure it should not be https://
[04:32] <sodre> I don't think the default install used https
[04:32] <wallyworld_> it should be the same as your auth_url
[04:32] <sodre> let me double check.
[04:33] <sodre> it is http
[04:34] <wallyworld_> hmmm. can you paste the whole output without the truncation?
[04:34] <sodre> as per keystone catalog
[04:34] <wallyworld_> it is expecting to match what is in your env file
[04:35] <wallyworld_> are you using auth-url setting in env file?
[04:35] <sodre> Okay. do you want the output of export ?
[04:35] <sodre> or the output from openstackrc.sh ?
[04:35] <wallyworld_> just the --debug when running the validate-images
[04:36] <sodre> that was the whole output, 17 lines.
[04:36] <wallyworld_> btw, the generate-images command in 1.16 was a prototype tool for developers, it wasn't intended for end users. but there's no easy way to do private clouds without it. it's much better in next release
[04:37] <wallyworld_> sodre: looks like the log is truncated on the right edge though
[04:37] <wallyworld_> oh wait
[04:37] <wallyworld_> i missed the scroll bar
[04:37] <wallyworld_> doh
[04:37] <sodre> :)
[04:39] <wallyworld_> sodre: so just to check, can you paste the content of http://m1basic-04.vm.draco.metal.as:80/swift/v1/admin-juju/streams/v1/index.json for me?
[04:41] <sodre> this is the output after calling swift download admin-juju ....  > http://paste.ubuntu.com/6328095/
[04:42] <wallyworld_> sodre: can you see the problem?
[04:42] <wallyworld_> i can :-)
[04:42] <sodre> ohhh
[04:42] <sodre> :)
[04:42] <sodre> Region region region :)
[04:42] <wallyworld_> yeah :-)
[04:42] <sodre> that's strange..
[04:43] <wallyworld_> looks like that file was from the earlier wrong command
[04:43] <sodre> argh...
[04:43] <wallyworld_> where -r <endpoint> was used
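The mismatch diagnosed here (the `-r` flag given the endpoint URL, so the region field in the generated metadata held a URL instead of the region name) can be illustrated with a toy checker. The dict layout below is a simplified sketch of a simplestreams index, not the exact schema, and the function is illustrative of what validate-images reports, not its actual implementation:

```python
def cloud_in_index(index, region, endpoint):
    """Return True if any product entry in a (simplified) simplestreams
    index advertises the given region/endpoint pair. When no entry
    matches, validate-images would report "index file has no data for
    cloud". Field names are a simplified sketch of the real format."""
    for entry in index.get("index", {}).values():
        for cloud in entry.get("clouds", []):
            if (cloud.get("region") == region
                    and cloud.get("endpoint") == endpoint):
                return True
    return False
```

Regenerating the metadata with `-r RegionOne -u <endpoint>` puts the region name where the checker expects it, which is why validation succeeds afterwards.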
[04:44] <sodre> alright... getting better
[04:44] <sodre> sodre@ubuntu:~/.juju$ juju metadata validate-images
[04:44] <sodre> matching image ids for region "RegionOne":
[04:44] <sodre> 907ca55d-a2e4-47c5-b26b-4be12bd78ecc
[04:44] <sodre> yay
[04:44] <wallyworld_> yay \o/
[04:45] <sodre> so, should I bootstrap now ?
[04:45] <wallyworld_> why not. i can't recall if 1.16 had the tools syncing stuff in it
[04:45] <wallyworld_> cause you could get the tools set up first
[04:45] <wallyworld_> save the upload
[04:45] <wallyworld_> i think it did
[04:46] <sodre> it has some in there.
[04:46] <wallyworld_> so, you can get the tarball you want
[04:46] <wallyworld_> save locally to <dir>/tools/releases
[04:46] <wallyworld_> juju sync-tools --source=<dir> --destination=<dir>
[04:47] <wallyworld_> then upload <dir> tree to private storage
[04:47] <wallyworld_> so private storage will have a tools dir in it
[04:48] <wallyworld_> or you could just bootstrap with --upload-tools :-)
[04:48] <sodre> I think it has one from the left-over bootstrap we did earlier
[04:48] <wallyworld_> you could run validate-tools then
[04:48] <wallyworld_> to see if juju can find them
[04:48] <sodre> validated :)
[04:48] <wallyworld_> \o/
[04:49] <wallyworld_> so bootstrap should work hopefully
[04:49] <wallyworld_> unless that goose bug gets in the way
[04:50] <sodre> alright...
[04:50] <sodre> I forgot to run with debug
[04:50] <wallyworld_> oh, did it fail?
[04:51] <sodre> well..
[04:51] <sodre> it tried to start it with a m1.tiny.
[04:51] <sodre> and that gave an error.
[04:51] <sodre> but we had a lot of progress.
[04:52] <wallyworld_> yeah, there's a potential bug selecting a large enough instance type. use a constraint
[04:52] <wallyworld_> --constraints mem=1024 for example
[04:52] <wallyworld_> add that to bootstrap command
[04:52] <sodre> okay, if I control-c  bootstrap how do I get it to run again ?
[04:52] <wallyworld_> bootstrap should return quite quickly
[04:53] <wallyworld_> you could juju destroy-environment, BUT that will also delete tools and image metadata
[04:53] <wallyworld_> what i would do is
[04:53] <wallyworld_> kill the bootstrap machine manually using nova cli
[04:54] <wallyworld_> then remove the provider-state file which juju put in private storage
[04:54] <wallyworld_> that will allow you to run bootstrap again
[04:55] <hazmat> sodre, wallyworld_, and kill the  jenv again
[04:55] <sodre> okay.
[04:55] <wallyworld_> the jenv can stay i think?
[04:55] <hazmat> wallyworld_, it refs the old instance id i think
[04:55] <wallyworld_> cause it has cached the env stuff, no need to delete it
[04:55] <wallyworld_> ok, won't hurt
[04:56] <sodre> jenv gone.
[04:56] <wallyworld_> in the past, i haven't needed to delete it i don't think, but better be safe
[04:57] <hazmat> wallyworld_, you're right, it's fine for this provider type
[04:57] <sodre> how is juju handling neutron networks? Does it create one by default ?
[04:58] <wallyworld_> ok. so many gotchas to keep track of :-)
[04:58] <hazmat> wallyworld_, manual provider was the one that caused issues for me in this same context.
[04:58] <wallyworld_> sodre: we haven't done anything to support neutron yet afaik. it's a work in progress
[04:58] <wallyworld_> unless i'm misunderstanding the current state of play
[04:59] <hazmat> sodre, it's not really doing anything with them atm, it assumes a private network for the instances to talk among themselves, and a public net (or floating ips)..
[04:59] <sodre> okay. so every instance will connect directly to the ext-net
[04:59] <hazmat> we've got plans to address for 14.04 with first class network support (vpc, neutron, vlan, etc)
[04:59] <lifeless> nova can be setup so that neutron is transparent
[05:00] <lifeless> should keep things working for people
[05:00] <wallyworld_> lifeless: !
[05:00] <hazmat> lifeless, long time :-)
[05:00] <wallyworld_> hi
[05:00] <lifeless> wallyworld_: hazmat: o/
[05:00] <hazmat> lifeless, see you next week :-)
[05:00] <lifeless> hazmat: most excellent
[05:00] <sodre> i guess my setup is not that way yet.  the instance only got a floating ip
[05:00] <wallyworld_> what openstack release supports neutron transparently?
[05:00] <lifeless> sodre: no internal address ?
[05:01] <lifeless> wallyworld_: Grizzly and Havana and Icehouse
[05:01] <wallyworld_> excellent, thanks
[05:01] <lifeless> wallyworld_: it's a config issue though
[05:01] <lifeless> wallyworld_: see the default_floating_pool nova setting
[05:01] <sodre> lifeless: yes,
[05:01] <wallyworld_> yeah, in the past we've had to assume lcd
[05:02] <lifeless> if that's wrong (the default value is 'nova', but the example used in all the admin guides for neutron calls it 'ext-net'), then nova will refuse to do floating ip operations
[05:02] <lifeless> wallyworld_: separately there is a nova setting to auto-allocate floating ips to instances
[05:02] <lifeless> that defaults off
[05:02] <wallyworld_> ok
[05:02] <lifeless> without that instances by default end up with no floating/public ip
[05:03] <sodre> here is the pastebin http://paste.ubuntu.com/6328157/
[05:03] <wallyworld_> sodre: i've not seen that error before. openstack networking is not my strong point
[05:04] <wallyworld_> sodre: you could try use-floating-ip=false
[05:04] <wallyworld_> in juju env config
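The setting wallyworld_ points at lives under the environment's entry in ~/.juju/environments.yaml; a minimal sketch (the environment name is illustrative and other required keys are elided):

```yaml
environments:
  my-openstack:
    type: openstack
    use-floating-ip: false   # don't try to allocate/assign a floating IP
```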
[05:04] <sodre> yeah, let me try again.
[05:06] <lifeless> No nw_info cache associated with instance <- thats a new one
[05:06] <lifeless> it's being thrown from the nova virt rpcapi manager
[05:06] <lifeless> [or near there, I haven't grepped for it yet]
[05:06] <sodre> I just needed to have a local network setup before calling bootstrap
[05:10] <sodre> Guys, Thank you so much.
[05:11] <sodre> I would not have been able to figure all this out on my own .
[05:11] <sodre> I am having issues with the image booting up but I think they are all on my end
[05:12] <wallyworld_> sodre: no problem. the tooling and doco associated with setting up a private cloud is very much a work in progress. it works if done correctly, but the doco is not finished yet. next release will be better
[05:13] <sodre> wallyworld_: Thanks a lot. Is there a place where the wip document is located?
[05:13] <wallyworld_> sodre: right now, wip = no doc except for "juju help <foo>" sorry
[05:13] <sodre> np.
[05:14] <wallyworld_> so the commands have help but there's no end user task oriented doc
[05:14] <sodre> ic... well .. thanks again !
[05:14] <wallyworld_> anytime
[05:15] <sodre> about moving the tools and and streams to their own dedicated buckets...
[05:16] <sodre> once that is done, it is just a matter of changing tools-url and image-metadata-url , right ?
[05:16] <wallyworld_> yeah
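Pointing an environment at dedicated, publicly readable buckets is then just two keys in environments.yaml; a sketch with illustrative URLs (the option names match juju 1.16-era config):

```yaml
environments:
  my-openstack:
    type: openstack
    # publicly readable containers you've populated yourself
    tools-url: https://objects.example.com/swift/v1/juju-tools
    image-metadata-url: https://objects.example.com/swift/v1/juju-images
```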
[05:16] <sodre> is there an easy script to mirror the s4 juju-dist bucket ?
[05:16] <wallyworld_> set up a publicly readable bucket
[05:16] <sodre> s/s4/s3/
[05:17] <wallyworld_> that is going away real soon, and streams.canonical.com will take its place
[05:17] <wallyworld_> and mirroring will just be an rsync
[05:17] <sodre> nice.
[05:17] <wallyworld_> if you can hang on a week or so.....
[05:17] <wallyworld_> not sure the exact time, but rsn
[05:18] <wallyworld_> it will coincide with release of juju 1.18
[05:18] <sodre> when is 1.17 coming out ?
[05:18] <melmoth> hola juju crowd.. I have a problem with 1.14.0-0ubuntu1~ubuntu12.04.1~juju1. I need to set a config-flag for the nova-cloud-controller charm.
[05:19] <wallyworld_> soon. we will release that as a 1.18 beta if you like
[05:19] <melmoth> i try: juju set nova-cloud-controller config-flags="scheduler_default_filters=AggregateInstanceExtraSpecsFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter"
[05:19] <melmoth> but nova.conf on the machine end up with only scheduler_default_filters=AggregateInstanceExtraSpecsFilter
[05:19] <wallyworld_> try using "" around the filters value?
[05:20] <wallyworld_> just a guess
[05:20] <melmoth> i first try \, , and this was a disaster
[05:20] <sodre> wallyworld_: have a good rest of day back in your TZ.
[05:20] <wallyworld_> sodre: will do :-) let us know if you need anything else
[05:20] <melmoth> there are 2 nova-cloud-controller (hacluster charm subordinate), and one unit ended up with a config error that i could not solve (a 'no relation id' error each time i try a new juju set)
[05:21] <sodre> thanks I'll stop by again.
[05:21] <melmoth> i had to destroy the unit and redeploy it again.
[05:21] <wallyworld_> melmoth: i'm not sure of the answer to your question. marcoceppi  are you around to help out?
[05:22] <melmoth> marcoceppi, if you are around this is with some folk you already met (remember the land of the rising sun ? :-) )
[12:59] <smoser> sodre, wallyworld hazmat, fwiw, the 'example-sync' that sodre ran specifically creates metadata in swift bucket.
[12:59] <smoser> so that juju should just need to be pointed at that.
[13:00] <smoser> (or the target 'swift' output  made to match)
[13:00] <smoser> also an option is to register that endpoint in keystone (swift path) and then juju will find it there.
[13:00] <smoser> that is how canonistack works.
[14:04] <adeuring> sinzui: https://code.launchpad.net/~adeuring/charmworld/more-heartbeat-info/+merge/193248
[14:04] <sinzui> thank you adam_g
[14:04] <sinzui> thank you adeuring
[14:13] <smoser> so in ceph charm, where does 'charm' command come from (in Makefile).
[14:13] <ehw> racedo:/win 27
[14:13] <smoser> jamespage, ^ ?
[14:13] <ehw> doh
[14:14] <jamespage> smoser, oh
[14:14] <jamespage> smoser, charm-helpers itself - that bit sucks alot right now
[14:14] <jamespage> smoser, no
[14:14] <jamespage> sorry
[14:14] <jamespage> charm-tools
[14:14] <jamespage> charm proof right?
[14:14] <freephile> I have a "local" environment but I can't connect to it - and I think it's because I ran out of disk space.  How can I clean up if I can't connect?
[14:15] <freephile> provider-state: dial tcp 10.0.3.1:8040: connection refused
[14:16] <marcoceppi> smoser: the charm command is from charm-tools
[14:16] <marcoceppi> freephile: that means that the API service or db service isn't running, what does `initctl list | grep juju` show?
[14:16] <smoser> marcoceppi, so install that from archive ?
[14:17] <marcoceppi> smoser: in saucy it's good, otherwise install from ppa:juju/stable
[14:17] <freephile> marcoceppi: juju-db-root-local stop/waiting juju-agent-root-local start/running, process 1145
[14:17] <smoser> saucy... pfft.
[14:17] <smoser> trusty man.
[14:17] <marcoceppi> smoser: trusty is good too :)
[14:17] <rick_h__> lol smoser
[14:17] <marcoceppi> smoser: basically you want charm-tools > 1.0
[14:17] <smoser> charm tools depends on juju core ?
[14:18] <marcoceppi> smoser: recommends, I believe
[14:18] <smoser> ah. probably.
[14:18] <marcoceppi> since it's also a juju plugin, via `juju charm`
[14:18] <smoser> recommends == depends for all practical purposes
[14:18]  * marcoceppi nods
[14:20] <freephile> marcoceppi: do I start with (as root) 'service juju-db-root-local start'
[14:20] <marcoceppi> freephile: sorry, yes sudo start juju-db-root-local
[14:21] <marcoceppi> then run juju status again
[14:21] <marcoceppi> or whatever command failed
[14:28] <freephile> if I start the db service, it immediately stops (because I'm out of disk space).
[14:28] <freephile> I tried 'start juju-db-root-local && juju ssh opengrok/0 initctl stop opengrok-index'
[14:29] <freephile> to no avail
[14:29] <marcoceppi> freephile: ohh, you're going to need to free up some disk space
[14:30] <marcoceppi> freephile: if you can't trim files (say, zeroing out ~/.juju/local/log/*.log files) etc, you can just destroy the opengrok unit with lxc commands
[14:36] <sinzui> adeuring, r=me. coordinate your change to Approved with bac. he is giving trunk to juju-gui-bot now
[14:37] <sinzui> adeuring, I had some suggestions for a follow branch
[14:37] <adeuring> sinzui: thanks
[14:37] <jcastro> evilnickveitch, hey, apparently you got a submission on bundles for the docs?
[14:37] <freephile> marcoceppi: thanks, I zeroed out the log files (was wondering about that) but it wasn't enough apparently to get the db to stay up.  I'll check into lxc commands
[14:38] <marcoceppi> freephile: you'll want `sudo lxc-ls --fancy`, then `sudo lxc-destroy -n <name from ls command>`
[14:38] <adeuring> sinzui: featured is already covered by "API(2|3) interesting", I think
[14:39] <marcoceppi> freephile: after you clear up disk space, start the juju db then run juju destroy-environment to get the rest of the deployment cleaned up
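freephile's recovery path, gathered into one sequence (the job and container names match his root-owned local environment; yours will differ):

```shell
initctl list | grep juju                  # confirm which juju jobs are down
sudo lxc-ls --fancy                       # find the stuck unit's container
sudo lxc-stop -n root-local-machine-2     # container name is illustrative
sudo lxc-destroy -n root-local-machine-2
sudo start juju-db-root-local             # bring the database back
juju destroy-environment                  # then clean up the rest
```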
[14:39] <evilnickveitch> jcastro, i do, yes
[14:39] <sinzui> adeuring, it is not
[14:39] <jcastro> evilnickveitch, do you have it anywhere I can look at it? branch or something?
[14:39] <adeuring> sinzui: so, you think it should get its own status? Or am I missing something else?
[14:40] <sinzui> adeuring, API2/3 can fail for 3 or more reasons. Knowing specifically that featured is empty on staging is fast fix.
[14:40] <evilnickveitch> jcastro, i have a google doc
[14:41] <bac> adeuring: please do not land to charmworld right now.
[14:41] <adeuring> bac: ok, tell me hwen i can land it
[14:41] <sinzui> adeuring, juju-gui (staging and production) needs to fail when those collections are empty. We we setup a new env, charmworld is still in a bad state after the first ingest because we are often missing human created data
[14:50] <freephile> marcoceppi: Success!!! `lxc-ls -l; lxc-stop -n root-local-machine-2; lxc-destroy -n root-local-machine-2; start juju-db-root-local; juju status;`
[14:52] <marcoceppi> freephile: cool, you'll find that opengrok service is in a down state (obviously, as you destroyed it) but you should be able to destroy the environment and recreate it
[14:52] <marcoceppi> etc
[14:57] <bac> adeuring: go ahead and land charmworld as before.  then hold off any more.
[14:57] <adeuring> bac: ok, thanks
[14:58] <adeuring> bac: done
[15:54] <smoser> hey.
[15:54] <smoser> so i just manually manage 'revision' file ?
[15:54] <marcoceppi> smoser: no
[15:54] <marcoceppi> revision file is only used for local deployments, and it should be incremented automatically by juju
[15:55] <marcoceppi> smoser: infact you can add it to .bzrignore
[15:55] <marcoceppi> jcastro: CHARM SYNC
[15:55] <marcoceppi> o/
[15:55] <jcastro> yep
[15:55] <jcastro> firing it up
[15:55] <jcastro> wanna seed the pad?
[15:55] <marcoceppi> woo who
[15:55] <marcoceppi> jcastro: yup
[15:57] <mthaddon> evilnickveitch: https://juju.ubuntu.com/docs/authors-charm-writing.html "The README is a good place to make nots about how the charm works" <-- should I file a bug about that, or is it okay to have just mentioned it here?
[15:57] <jcastro> evilnickveitch, misfire, one sec.
[15:58] <jcastro> https://plus.google.com/hangouts/_/7acpicbshl5mtk1tqjntg4g30k?authuser=0&hl=en
[15:59] <jcastro> evilnickveitch, arosales ^^
[15:59] <marcoceppi> jcastro: http://pad.ubuntu.com/7mf2jvKXNa
[17:14] <smoser> marcoceppi, i asked about 'revision' because 'charm proof' complained with an ERROR in its absence.
[17:37] <mhall119> halp!
[17:37] <mhall119> mhall@mhall-thinkpad:~$ juju status
[17:37] <mhall119> ERROR Unable to connect to environment "local".
[17:37] <mhall119> Please check your credentials or use 'juju bootstrap' to create a new environment.
[17:37] <mhall119> Error details:
[17:37] <mhall119> Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[17:37] <adam_g> sinzui, are there PPA builds of 1.16.1 somewhere?
[17:37] <mhall119> mhall@mhall-thinkpad:~$ sudo juju bootstrap
[17:37] <mhall119> Swipe your right index finger across the fingerprint reader
[17:37] <mhall119> ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
[17:39] <sinzui> adam_g, they are not yet. We are looking into an azure issue that first blocked the test, and now looks like a fix is needed for 1.16.1
[17:39] <sinzui> adam_g, I am off to lunch. I can arrange a package for you if you need one today
[17:45] <mhall119> jcastro: juju is broken
[17:45] <jcastro> what's up?
[17:45] <mhall119> see my errors above
[17:45] <mhall119> I can't even bootstrap a local env
[17:45] <jcastro> can you pastebin the `sudo juju --debug bootstrap`?
[17:46] <jcastro> mhall119, you're in luck, marco and I are working on a troubleshooting the local provider document
[17:46] <jcastro> and by luck I mean "haha".
[17:46] <mgz> mhall119: a good first step is to delete any misc .jenv files in ~/.juju/environments and try again
[17:47] <mhall119> mgz: \o/ that seems to have done the trick
[17:47] <jcastro> \o/
[17:49]  * mhall119 tries deploying again
[18:04] <marcoceppi> smoser: that's been fixed in 1.1 which should be released tomorrow
[18:05] <marcoceppi> mhall119: jcastro: https://bugs.launchpad.net/juju-core/+bug/1246429
[18:05] <_mup_> Bug #1246429: destroy-environment no longer removes .jenv <juju-core:New> <https://launchpad.net/bugs/1246429>
[18:06] <mhall119> thanks marcoceppi
[18:09] <marcoceppi> I was about to say "Hey I can't replicate this!" But then I realized I'm on 1.15.1 :)
[18:49] <sodre> smoser: can I talk to you about simplestreams
[18:53] <smoser> sodre, i've got a few minutes.
[18:53] <smoser> whats up?
[18:54] <sodre> I was trying to run your script last night. The found an issue with the integration with radosgw
[18:54] <sodre> s/The/I
[18:54] <smoser> hm.. ok.
[18:54] <sodre> my question: is there a particular issue whey you call _strip_version?
[18:54] <sodre> s/whey/why/
[18:55] <sodre> this is on line openstack.py:98
[18:58] <smoser> sodre, that is copied from other clients that do it.
[18:58] <smoser> what is the problem with doing that ?
[18:59] <sodre> It does not work with the default ( juju deployed ) ceph-radosgw charm.
[19:00] <sodre> If I don't strip the version, then your code works fine.
[19:01] <smoser> hm..
[19:22] <sodre> smoser: still thinking ?
[19:26] <smoser> sodre, sorry. on a call now.
[19:26] <sodre> alright np.
[19:26] <sodre> let me know when we can chat about that bug.
[19:27] <mhall119> jcastro: is there any easy way to condense 'juju status' to just the really useful details?
[19:27] <mhall119> I want to 'watch "juju status"' to see things changing, but it's more than will fit on my terminal
[19:27] <sodre> mhall119: same problem here ....
[19:28] <marcoceppi> mhall119: sounds like you want to write a plugin
[19:28] <marcoceppi> mhall119: like what, you just want a list of units and their status?
[19:28] <mhall119> yeah
[19:28] <marcoceppi> mhall119: hold up, let me try something
[19:29] <sodre> have you tried watch 'juju status | grep state' ?
[19:31] <sodre> the quotes are important.
[19:32] <mhall119> sodre: that doesn't give me the unit though
[19:33] <sodre> yeah, we need a better 'grep'
[19:33] <marcoceppi> mhall119: sodre: it's time to introduce you to plugins, give me just a few more mins I'll have a working example
[19:34] <sodre> :) nice
[19:35] <jcastro> mhall119, I do `juju status wordpress` or whatever to get each one
[19:35] <jcastro> I have long wanted
[19:35] <jcastro> juju top
[19:35] <jcastro> with an htop looking view of stuff
[19:40] <mhall119> +1
[19:43] <marcoceppi> mhall119 jcastro sodre drop this in a directory in your path: http://paste.ubuntu.com/6331880/
[19:43] <marcoceppi> once it's in path, juju prettyprint will produce that, you should be able to watch it from there
[19:44] <marcoceppi> come onnnnnnn pastebin
[19:45] <marcoceppi> mhall119: sodre https://gist.github.com/marcoceppi/7238964
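marcoceppi's gist isn't reproduced here; as a rough stand-in, here is a minimal shell plugin in the same spirit that boils `juju status` output down to unit names and agent states. The YAML layout it assumes matches juju 1.16-era status output; adjust the patterns for your version:

```shell
#!/bin/sh
# juju-prettyprint (sketch): condense `juju status` to "unit  state" lines.
# Real use: save this as an executable file named "juju-prettyprint" on your
# PATH, then run `juju prettyprint` (or watch 'juju prettyprint').
filter_units() {
  awk '
    # a first field like "wordpress/0:" names a unit
    $1 ~ /^[a-z0-9-]+\/[0-9]+:$/ { unit = $1; sub(/:$/, "", unit); next }
    # the next agent-state line belongs to that unit
    unit != "" && $1 == "agent-state:" { printf "%-24s %s\n", unit, $2; unit = "" }
  '
}

# demo on a canned status fragment, since we may not have a live environment;
# in the plugin itself you would do:  juju status | filter_units
filter_units <<'EOF'
machines:
  "0":
    agent-state: started
services:
  wordpress:
    units:
      wordpress/0:
        agent-state: started
  mysql:
    units:
      mysql/0:
        agent-state: pending
EOF
```

Machine agent states are skipped (the quoted `"0":` key doesn't match the unit pattern), so only the two unit lines come out.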
[19:46] <marcoceppi> With more time you could easily make a juju top command which could poll the API and present useful data about services, units and machines much like htop
[19:47] <jcastro> yeah, it's just a manpower issue
[19:47] <jcastro> no one's going to drop working on HA to work on juju top, heh
[19:47] <marcoceppi> exactly. So I have just empowered two users to use and abuse plugins
[19:48] <jcastro> It'd be a nice low hanging fruit for a new person though
[19:48] <marcoceppi> now we just need to wait for mhall119 to submit his juju top plugin :)
[19:48] <marcoceppi> (python-jujuclient exists as a pyton library for talking to the API, hint hint wink wink)
[19:48] <jcastro> stealing client people won't happen either, I've already tried that
[19:48]  * marcoceppi twiddles thumbs
[19:49] <sodre> I like it :)
[19:50] <sodre> it crashes at first since I have nothing on open-ports. but I got the gist.
[19:51] <marcoceppi> sodre: ahh, yeah public-address will mess it up too. You'd just have to add sanity checks in there
[19:51]  * marcoceppi does the quick and dirty script
[19:51] <marcoceppi> "use at your own risk"
[19:51] <sodre> thanks for pointing out how I can put it together.
[19:52] <marcoceppi> sodre: yeah, you can run `juju help plugins` to get an idea of plugins you have installed and what not
[19:53] <marcoceppi> they're an under publicised feature of juju
[19:53] <sodre> ahhh
[19:53] <sodre> that's where the metadata and deployer show up.
[19:54] <marcoceppi> sodre: same with charm-tools if you have that installed `juju charm`, etc
[19:55] <marcoceppi> really it's basically juju-<subcommand> if it doesn't exist in core but that binary exists, juju core just passes everything on to it
[19:55] <marcoceppi> exactly like git and bazaar plugins
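The dispatch rule marcoceppi describes is easy to see with a throwaway plugin (the path and name are arbitrary):

```shell
# any executable named juju-<something> on PATH becomes `juju <something>`
mkdir -p /tmp/juju-plugins
cat > /tmp/juju-plugins/juju-hello <<'EOF'
#!/bin/sh
echo "hello from a juju plugin"
EOF
chmod +x /tmp/juju-plugins/juju-hello

# with the directory on PATH, `juju hello` would dispatch to it;
# invoke the script directly here to show what juju would run
PATH="/tmp/juju-plugins:$PATH" juju-hello
```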
[19:55] <sodre> ic
[20:00] <arosales> jcastro, so to double confirm on charm bundles
[20:00] <marcoceppi> so bundles are pretty cool
[20:00] <jcastro> ok so I have .... 5 bundles right now
[20:01] <jcastro> liferay
[20:01] <arosales> while policy is properly defined bundles will be "featured" in the gui but just under your name space. Is my understanding correct?
[20:01] <arosales> jcastro, ^
[20:01] <jcastro> "scalable jenkins" which is one jenkins with 3 slaves
[20:01] <jcastro> scalable mediawiki with a load balancer
[20:01] <jcastro> a simple mediawiki
[20:01] <jcastro> and wordpress
[20:01] <jcastro> arosales, yes, I'm about to push the first one
[20:01] <jcastro> and then we'll see how it gets indexed
[20:01] <arosales> jcastro, cool thanks for confirming on that
[20:04] <marcoceppi> jcastro: will you be able to promote non "promulgated" bundles?
[20:05] <jcastro> I am asking bac that now
[20:06] <marcoceppi> jcastro: dude, can you test charm-tools 1.1 for me?
[20:07] <marcoceppi> and just `charm proof --bundle` each of the bundles you're writing?
[20:07] <jcastro> oh, yeah!
[20:07] <jcastro> PPA?
[20:07] <marcoceppi> jcastro: because you're definitely doing it wrong
[20:07] <marcoceppi> jcastro: it'll be a manual install, let me update the URL and I'll give you a link
[20:07] <jcastro> ok guys, so featuring a bundle will be the same as a charm
[20:07] <jcastro> we go into manage.jujucharms.com and check the box
[20:07] <jcastro> it'll ingest the first bundle in ~15 or so, then we can mess with it
[20:10] <jcastro> marcoceppi, hey so bac tells me that we'll also need to promulgate the bundles
[20:10] <jcastro> so we'll need charm tools updated
[20:11] <marcoceppi> it has promulgate support
[20:11] <marcoceppi> <3
[20:11] <jcastro> for bundles?
[20:11] <jcastro> <3
[20:11] <marcoceppi> yes
[20:11] <bac> sweet
[20:11] <jcastro> oh, you mentioned it during the status call, I remember now
[20:12] <jcastro> marcoceppi, ok so I'll test your proof tool
[20:12] <jcastro> then push
[20:12] <jcastro> we'll wait 15 for them to seed in the store
[20:12] <jcastro> then you can promulgate?
[20:12] <marcoceppi> yes
[20:12] <marcoceppi> jcastro:
[20:12] <jcastro> I'll push my discourse one up too, but not promulgate it
[20:13] <marcoceppi> bzr branch lp:~marcoceppi/charm-tools/bundle-support charm-tools; cd charm-tools; python setup.py install
[20:13] <marcoceppi> err
[20:13] <marcoceppi> bzr branch lp:~marcoceppi/charm-tools/bundle-support charm-tools; cd charm-tools; sudo python setup.py install
[20:13] <marcoceppi> jcastro: then you should be able to juju charm proof --bundle /path/to/bundle/directory
[20:13] <marcoceppi> well, you can omit the --bundle flag, it'll detect a bundle automatically
[20:14] <rick_h__> and then the fury of the proof'er will come down upon you!
[20:14] <marcoceppi> rick_h__: I was looking over his branch, saying to myself "yeah, this is a great test case for proof"
[20:14] <rick_h__> lol
[20:15] <rick_h__> marcoceppi: so heads up, we're actually going to work on pulling in the deployer to do bundle proofing. Share the same exact bits as much as possible. So heads up that new stuff should pop up even though you don't update the charm-tools
[20:15] <marcoceppi> rick_h__: that's fine and perfect
[20:16] <jcastro> marcoceppi, http://pastebin.ubuntu.com/6331983/
[20:16] <jcastro> I tried different permutations
[20:16] <rick_h__> lol
[20:16] <marcoceppi> rick_h__: the only thing I'm really checking for in the deployer file is annotations
[20:16] <marcoceppi> jcastro: because it's not a valid bundle
[20:16] <rick_h__> marcoceppi: rgr, just more an FYI because people will fail proof and probably come chat with you/this channel
[20:16] <marcoceppi> the error message isn't clear though, it looks for "bundle.json or bundle.yaml"
[20:17] <marcoceppi> which are the only two files supported
[20:17] <marcoceppi> rick_h__: I'll make sure it displays warnings as well
[20:17] <marcoceppi> from the api
[20:17] <rick_h__> marcoceppi: cool
[20:17] <jcastro> so is that a bug in the tool or are we expecting everyone to name things bundle.yaml
[20:17] <marcoceppi> jcastro: I'll update so that when you use --bundle flag and it detects not a bundle it'll say "Not a bundle because no bundle file (.json or .yaml) found"
[20:17] <rick_h__> jcastro: expecting them to name it bundle.yaml
[20:18] <marcoceppi> jcastro: the GUI expects bundle.*
[20:18] <marcoceppi> it's a bug in that the message to the user is misleading (and ugly exception traceback)
[20:18] <jcastro> same errors when I rename it to bundle.yaml
[20:19] <jcastro> also, daddy needs autocompletion!
[20:19] <marcoceppi> jcastro: well daddy can submit a merge req :)
[20:19] <marcoceppi> jcastro: one second, let me branch your branch
[20:19] <jcastro> I hope the bundle is valid, because I got it from the gui
[20:19] <jcastro> if not, we have other problems, heh
[20:20] <marcoceppi> jcastro: where is your branch?
[20:20] <rick_h__> jcastro: no, we've got chances to excel :)
[20:20] <rick_h__> jcastro: rename the envExport to 'wordpress' as well please
[20:21] <marcoceppi> rick_h__: yeah, I was hoping proof would pick that up
[20:21] <rick_h__> jcastro: we've got a bug to change that to ask you for a name on export, but must not have made it yet
[20:21] <marcoceppi> jcastro: please don't name it wordpress
[20:21] <rick_h__> marcoceppi: we don't, it's valid
[20:21] <marcoceppi> rick_h__: my proof will
[20:21] <rick_h__> marcoceppi: but yea, we want to fix the gui export to not keep reusing the same name
[20:21] <rick_h__> marcoceppi: oh, cool then.
[20:21] <jcastro> https://code.launchpad.net/~jorge/charms/bundles/wordpress/bundle
[20:21] <jcastro> branch is here
[20:21] <jcastro> sorry it's so convoluted. I miss _one lousy_ session
[20:21] <jcastro> and this is what you come up with rick
[20:22] <jcastro> might as well add some plusses and whitespace to the url
[20:22] <rick_h__> lmao, to which url? the LP branches?
[20:22] <jcastro> yeah, seriously, who is going to remember this url?
[20:22] <jcastro> this is bws-readme all over again
[20:22] <marcoceppi> jcastro: it needs be called bundles.yaml
[20:22] <marcoceppi> rick_h__: correct?
[20:22] <jcastro> bundles with an s?
[20:22] <marcoceppi> what's the file name plurarl or singular?
[20:23] <marcoceppi> I'm currently looking for pluarl
[20:23] <marcoceppi> I'm also looking for a new spell checker
[20:23] <jcastro> plural doesn't work either
[20:23] <rick_h__> marcoceppi: bundles.yaml
[20:23] <rick_h__> is that we're looking for
[20:23] <rick_h__> in ingest
[20:23] <rick_h__> jcastro: user's should never see the url tbh
[20:24] <marcoceppi> jcastro: it works for me but I get a weird error from remote proof
[20:24] <jcastro> ok so what will the final cli command look like for deploying a bundle?
[20:24] <rick_h__> jcastro: they go to the gui and either get a UI to pick the one, or they get a bundle:~jcastro/wordpress/5/wordpress url
[20:24] <jcastro> also, if not wordpress, what do I name envExport?
[20:24] <marcoceppi> jcastro: something more descriptive than wordpress, you're creating a solution
[20:25] <marcoceppi> jcastro: so if it's just wordpress + mysql and default config you've created a solution not many people would want imo
[20:25] <marcoceppi> wordpress-simple is a good start
[20:25] <jcastro> got it
[20:25] <jcastro> rick_h__, but the gui doesn't support colocation yet
[20:25] <marcoceppi> the name should describe what you've solved
[20:25] <jcastro> so for a bunch of these one shot bundles they'll need to CLI
[20:25] <rick_h__> jcastro: not showing it no, but it should 'work'
[20:25] <jcastro> marcoceppi, got it
[20:26] <jcastro> marcoceppi, that's what I was naming the yaml files
[20:26] <jcastro> like simple-wordpress.yaml
[20:26] <marcoceppi> jcastro: yeah
[20:26] <marcoceppi> this is what I was talking about
[20:26] <marcoceppi> that bundles.yaml file can have MULTIPLE bundles in it
[20:26] <jcastro> rick_h__, is there a way I can do `juju deploy bundle:~jorge/wordpress` without all the other stuff?
[20:26] <marcoceppi> so you can have a wordpress branch, with a bundles.yaml that has simple-wordpress, scaled-out-wordpress, etc
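A sketch of what such a multi-bundle bundles.yaml might look like, following the deployer format of the time; the charm URLs, unit counts, and the `inherits` usage are illustrative, not taken from jcastro's branch:

```yaml
wordpress-simple:
  series: precise
  services:
    wordpress:
      charm: "cs:precise/wordpress"
      num_units: 1
    mysql:
      charm: "cs:precise/mysql"
      num_units: 1
  relations:
    - ["wordpress:db", "mysql:db"]

wordpress-scaled:
  inherits: wordpress-simple   # deployer-style reuse of the simple stack
  services:
    wordpress:
      num_units: 3
```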
[20:27] <jcastro> oh!
[20:27] <rick_h__> jcastro: yes, that's what quickstart is for
[20:27] <rick_h__> juju quickstart bundle:~jcastro/...
[20:27] <jcastro> but I can't make that in the GUI, I'd have to make them individually and them combine them into one file
[20:27] <rick_h__> jcastro: right
[20:27] <jcastro> rick_h__, right, so when does that land in relation to bundles?
[20:27] <rick_h__> jcastro: the gui only handles one at a time
[20:27] <rick_h__> jcastro: along-side-ish? I'm not 100% sure. It's almost working now
[20:27] <jcastro> ok so what happens as of today if I drag a multiple environment yaml file into the GUI?
[20:28] <rick_h__> jcastro: it tries to deploy them
[20:28] <rick_h__> until they collide "You've already got a wordpress installed" and then dies
[20:28] <bcsaller> no, it fails if there is more than one target in the file
[20:28] <bcsaller> it can ask for a named target, but there is no UI around that now
[20:29] <jcastro> ok so for now they'll have to be individual bundles
[20:29] <rick_h__> bcsaller: oh, right. I was thinking if you did multiple drags
[20:29] <jcastro> simple-wordpress, HA-wordpress, and so on?
[20:29] <rick_h__> jcastro: yes
[20:29] <marcoceppi> jcastro: fixed charm-tools, bzr pull, run install again
[20:29] <jcastro> marcoceppi, got it
[20:31] <jcastro> W: No readme file found
[20:31] <jcastro> E: envExport is the default export name. Please use a unique name
[20:31] <jcastro> E: envExport: Could not find charm: wordpress
[20:31] <jcastro> E: envExport: Could not find charm: mysql
[20:31] <jcastro> Yeah!
[20:31] <jcastro> now we're getting somewhere
[20:31] <marcoceppi> jcastro: I'm working with rick_h__ on why it says can not find charm
[20:32] <marcoceppi> the first two are valid issues
[20:32] <jcastro> on it
[20:33] <jcastro> rick_h__, man, if at some point today I have to add a -HEAD to the end of one of these commands .... *eyes narrow*
[20:33] <smoser> ok.
[20:33] <smoser> stupid person here.
[20:33] <smoser> $ bzr push lp:~smoser/charms/precise/maas-region
[20:33] <smoser> bzr: ERROR: Permission denied: "~smoser/charms/precise/maas-region/": : Cannot create branch at '/~smoser/charms/precise/maas-region'
[20:33] <rick_h__> jcastro: not at all
[20:33] <smoser> what should that be ?
[20:33] <marcoceppi> smoser: add /trunk to the end of it
[20:33] <rick_h__> jcastro: it's a new feature, bug is in there.
[20:33] <smoser> ah.
[20:33] <smoser> gracias
[20:33] <marcoceppi> it's user/project/series/package/branch
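So the failing push just needed the trailing branch segment:

```shell
#        ~user  / project / series /  package   / branch
bzr push lp:~smoser/charms/precise/maas-region/trunk
```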
[20:34] <jcastro> ok wordpress is done, hopping on a call with kirkland, and I'll finish up the rest.
[20:34]  * kirkland high fives jcastro 
[20:34] <jcastro> rick_h__, what url can I monitor to see when the bundle gets ingested?
[20:35] <rick_h__> jcastro: http://manage.jujucharms.com/search?search_text=jcastro&op=
[20:35] <rick_h__> jcastro: right now it will have failed due to the file name
[20:35] <jcastro> rick_h__, sorry for being annoying, you know how near and dear simple URLs are to my heart.
[20:36] <jcastro> I just commited a fix with the rename
[20:36] <rick_h__> jcastro: so a push up with that fixed should get it ingested
[20:36] <rick_h__> jcastro: cool
[20:36]  * jcastro nods
[20:36] <rick_h__> jcastro: I'm with you, but the urls for branches in the series and crap is waaaay out of my hands.
[20:37] <jcastro> yeah I get it
[20:44] <marcoceppi> thumper: is there a short flag for --debug?
[20:44] <marcoceppi> or is it only in long form?
[20:45] <thumper> only long at this stage
[20:45] <marcoceppi> ack
[20:47] <rick_h__> jcastro: the versionless cs: urls are breaking it atm. We *want* to support it so will have to update charmworld
[20:48] <rick_h__> jcastro: if you put the versions back in it'll work ok and proof. I'm starting a branch to figure out a way around the versionless issue now
[20:48] <rick_h__> marcoceppi: ^
[20:59] <marcoceppi> rick_h__: ack, thanks!
[21:00] <rick_h__> marcoceppi: have a fix, Will get it reviewed and landed tomorrow. Can verify on staging sometime then
[21:00] <marcoceppi> rick_h__: cool cool, I'll release charm-tools regardless after I iron out a few things here
[21:00] <marcoceppi> since that's all remote proof
[21:00] <rick_h__> marcoceppi: +1 appreciate it
[21:19] <jcastro> rick_h__, ok so is it ok if I push versionless now?
[21:19] <rick_h__> jcastro: yea, we're not proofing, just it'll continue to fail marcoceppi's proof tool until this lands on production
[21:19] <jcastro> that's fine
[21:21] <jcastro> hmm, no ingestion yet?
[22:13] <mramm> thumper's blog post about his logging library for Go is now up on hacker news: https://news.ycombinator.com/item?id=6643805
[22:48] <marcoceppi> hazmat: How does juju-deployer determine the bootstrap IP addresses for using the jujuclient?
[22:49] <hazmat> marcoceppi, its going to move to api-endpoints in the future
[22:49] <hazmat> marcoceppi, atm its using juju status
[22:49] <marcoceppi> hazmat: ah, gotchya
[22:49] <hazmat> marcoceppi, you trying to use the api direct?
[22:49] <hazmat> er jujuclient direct
[22:49] <marcoceppi> hazmat: yeah, was going to try to
[22:49] <hazmat> marcoceppi, cool
[22:50] <marcoceppi> hazmat: but I don't know how to find the bootstrap IP address without first running juju status
[22:50] <marcoceppi> is it in the jenv file?
[22:50] <hazmat> marcoceppi, juju api-endpoints
[22:50] <marcoceppi> magic
[22:50] <marcoceppi> hazmat: thank you!
[22:50] <hazmat> marcoceppi, i added that for explicitly this purpose ...
[22:50] <hazmat> np
[22:50] <marcoceppi> <3
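So a script can pick up the state server's address without scraping `juju status`; a sketch (the address, port, and websocket scheme shown are illustrative, not verified here):

```shell
# grab the first state-server API address juju knows about
endpoint=$(juju api-endpoints | head -n1)   # e.g. 10.0.3.1:17070
echo "wss://$endpoint"                      # the URL an API client would dial
```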
[22:54] <sodre> guys, I am facing an issue with juju bootstrap  not setting up my password
[22:54] <hazmat> sodre, you mean your ssh key?
[22:54] <sodre> yes
[22:54] <sodre> I can paste the vm boot-log
[22:55] <sodre> http://paste.ubuntu.com/6332775/
[22:57] <hazmat> sodre, that's a nice one..
[22:57] <sodre> yeah :)
[22:57] <hazmat> sodre, that's a go panic trying to set the mongodb password. its basically your admin-secret from the environments.yaml
[22:57] <sodre> ahhh
[22:57] <sodre> it needs to be complicated, right ?
[22:58] <hazmat> sodre, well not really, it needs to be sized under 30 characters i think
[22:58] <sarnold> (note there's a "-----BEGIN RSA PRIVATE KEY-----" in that paste, I hope it's ephemeral data...)
[22:58] <hazmat> its just a random string in this mostly..
[22:59] <hazmat> sarnold, it is.. if the environment is.
[22:59] <sodre> yes, the environment was random
[22:59] <sarnold> yay :)
[22:59] <sodre> sarnold: thanks for pointing it out.
[22:59] <hazmat> its part of the auto generated ca and server cert juju sets up for the env
[23:00] <hazmat> sodre, i'd re-try with a random 10 digit string
[23:00] <sodre> yeap. I am doing that right now. I think I had read about that ''feature'' before.
[23:02] <hazmat> sodre, i thought it was size validated.. but maybe not..
[23:02] <sodre> Most people don't see this issue because the environment is generated.
[23:02] <sodre> I wrote mine by hand, so that is why it happened.
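A quick way to hand-write an admin-secret that stays well under the ~30-character limit hazmat recalls (the limit itself is his recollection, not verified here):

```shell
# 12 random bytes -> 24 lowercase hex characters, safe for environments.yaml
admin_secret=$(head -c 12 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "admin-secret: $admin_secret"
```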
[23:05] <sodre> humm... same error
[23:05] <hazmat> sodre, hmm
[23:05] <sodre> let me do a generate
[23:05] <sodre> and try again.
[23:06] <hazmat> sodre, it gets stored in the jenv
[23:06] <hazmat> if you're yanking it
[23:06] <hazmat> destroy-env clears that though
[23:07] <sodre> I deleted that by hand, I think
[23:29] <sodre> hazmat: same issue even with a random long password
[23:42] <wallyworld__> sodre: maybe you can update the bug with any additional info?
[23:44] <sodre> will do
[23:44] <sodre> what would you need ?