[00:28] <MACscr> is there a static config i can use to set a default for amazon instance sizes? I only want to use tiny instances to start
[00:28] <davecheney> MACscr: use bootstrap constraints
[00:28] <davecheney> or deploy constraints
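A minimal sketch of both approaches; the values are illustrative (613M is the t1.micro memory size, used here because provider-specific instance types could not be named directly at the time):

```shell
# Constrain the bootstrap node when the environment is first created
juju bootstrap --constraints="cpu-power=0 mem=613M"

# Or constrain a single service at deploy time
juju deploy --constraints="cpu-power=0" mysql
```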
[00:28] <MACscr> also, just to verify, if i want to have 4 amazon instances in 4 regions, i have to have a controller in each region?
[00:30] <davecheney> MACscr: not at this time
[00:30] <davecheney> what we call 'provider specific' constraints are being worked on at this time
[00:30] <davecheney> there is currently no way to say 'this unit must be in a specific availability zone'
[00:31] <davecheney> we know it's a problem
[00:31] <davecheney> we're working on fixing it
[00:31] <sarnold> I think to have four regions under control at once, you have four separate environments in your environments.yaml -- and none of them know anything about any of the others, right?
[00:31] <davecheney> sarnold: regions are easy
[00:31] <sarnold> https://bugs.launchpad.net/juju-core/+bug/1160667
[00:31] <_mup_> Bug #1160667: Expose regions and availability zones to users <juju-core:Triaged> <https://launchpad.net/bugs/1160667>
[00:31] <davecheney> availability zones within a region are harder
[00:33] <MACscr> well to be honest, i dont really need them to be aware of each other as they are just going to be dns servers with a little ping monitoring on them as well (smokeping)
[00:34] <MACscr> juju would just be used to deploy ubuntu on them. I guess i could manually install mysql on them
[00:35] <sarnold> MACscr: you either use --to to deploy a mysql charm onto them, or write a subordinate charm that installs mysql alongside the dns servers..
[00:38] <MACscr> ok, so you and dave seemed to either have conflicting views or i just got a bit confused. Is having multiple regions easy or not? Basically I need to set up an instance in Asia, Europe and two in the US
[00:39] <MACscr> and the two in the US are going to be in two different amazon regions (east coast and west coast)
[00:41] <sarnold> MACscr: I think we both said the same thing from different angles :)
[00:44] <sarnold> MACscr: you can do multiple environments and then juju -e asia deploy dns, juju -e europe deploy dns, etc... but they can't do relations between asia and europe
[00:46] <MACscr> sarnold: understandable. This might be an option in the future though? cross region relations?
[00:47] <MACscr> also, how do i setup multiple regions/environments for a single cloud? comma separate list?
[00:48] <sarnold> MACscr: different environment declarations in your environments.yaml
[00:48] <marcoceppi> MACscr: there is talk on eventually having cross environment relations, not sure where on the roadmap that is ATM
[00:48] <marcoceppi> MACscr: you'll need to create a new environment for each region in your environments.yaml
[00:49] <marcoceppi> Just name them uniquely, and make sure each has a unique bucket-name. They can share the same credentials
[00:50] <MACscr> so do i just copy and paste the amazon group of configs and then just change the region? Im not sure how to setup multiple environments for the same provider
[01:00] <davecheney> MACscr: yes, we call these cross environment relations
[01:01] <davecheney> basically relations between services in different environments
[01:01] <davecheney> it's on the roadmap
[01:01] <davecheney> i can't give you a solid idea when it will happen
[01:01] <MACscr> so not next monday?
[01:01] <MACscr> jk
[01:03] <marcoceppi> MACscr: here's an example
[01:03] <marcoceppi> of multiple juju environments
[01:10]  * marcoceppi scrubs his environments yaml for MACscr
[01:12] <marcoceppi> MACscr: http://paste.ubuntu.com/6030850/
[01:12] <marcoceppi> You just need to change the environment's name (us-east, blah-blah, whatever you want), the region, and the control bucket
[01:12] <marcoceppi> Everything else can be the same
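In case the paste above expires, a minimal sketch of such an environments.yaml; the environment names, bucket names, and secret are placeholders:

```yaml
environments:
  us-east:
    type: ec2
    region: us-east-1
    control-bucket: juju-something-unique-east   # must be unique per environment
    admin-secret: some-secret
  us-west:
    type: ec2
    region: us-west-1
    control-bucket: juju-something-unique-west
    admin-secret: some-secret
```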
[01:14] <marcoceppi> MACscr: and the region names, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions
[01:26] <davecheney> MACscr: i have an environment in my environments.yaml for every ec2 region
[01:27] <davecheney> so I can do things like
[01:27] <davecheney> juju bootstrap -e ap-southeast-1
[03:08] <MACscr> davecheney: thanks!
[03:41] <MACscr> ok, so i know juju-ui can be used for creating relations, etc, but what about managing them all after they are deployed? id like a single panel for all systems, vm or physical
[03:43] <MACscr> landscape pricing is just too high imho
[05:12] <x-warrior> what are the parameters to create a micro instance? sorry to ask this again
[05:12] <x-warrior> :/
[05:14] <davecheney> x-warrior: for the ec2 provider, you'd use constraints
[05:14]  * davecheney does a test before recommending anything
[05:15] <x-warrior> yeap it is constraints
[05:15] <x-warrior> but I cant remember which ones
[05:15] <x-warrior> :(
[05:15] <x-warrior> I asked this early today but I couldn't find a channel log :/
[05:17] <davecheney> x-warrior: i'm just doing a demo now
[05:17] <davecheney> two secs
[05:17] <x-warrior> ty
[05:19] <x-warrior> I tried with cpu-power=0
[05:19] <x-warrior> but I get this error: "ERROR juju supercommand.go:235 command failed: cannot start bootstrap instance: cannot set up groups: cannot revoke security group: Source group ID missing. (MissingParameter)"
[05:19] <davecheney> x-warrior: ewww
[05:19] <davecheney> nice error
[05:19] <davecheney> will try to reproduce that one
[05:20] <davecheney> still working on your first question
[05:20] <x-warrior> I think that came from starting a regular bootstrap... deleting s3, deleting the ec2 machine and trying to create it again with constraints...
[05:20] <x-warrior> maybe it lost the ids from the groups and such when trying to 'revoke' and 'recreate' or something
[05:22] <davecheney> x-warrior: hmm, if you are rough with juju it probably won't respect you in the morning
[05:24] <x-warrior> :(
[05:25] <x-warrior> removing the juju-amazon groups
[05:26] <x-warrior> seems to solve the problem
[05:26] <davecheney> x-warrior: I think it would be
[05:26] <davecheney> juju bootstrap --constraints="arch=i386 mem=640M"
[05:26] <davecheney> but I haven't been able to check yet
[05:26] <davecheney> having some other problems at the moment
[05:26] <davecheney> you can't say type=t1.micro
[05:26] <davecheney> because we don't support what we call `provider specific constraints`
[05:27] <x-warrior> using 'juju -v bootstrap --constraints="cpu-power=0"'
[05:27] <davecheney> so you need to describe something that looks like a t1 micro
[05:27] <x-warrior> created the micro instance
[05:27] <davecheney> yup, because there is nothing that is cpu-power=0
[05:27] <davecheney> so the next largest is a t1.micro
[05:27] <davecheney> same thing
[05:27] <davecheney> we just did it a different way
[05:28] <x-warrior> sweet
[05:28] <x-warrior> now I see the instance running on my panel, but when I check juju status it gives me 'no instance running'
[05:28] <x-warrior> :s
[05:29] <davecheney> x-warrior: i can only recommend juju destroy-environment
[05:29] <davecheney> because you've probably damaged some invariants that juju was expecting
[05:29] <x-warrior> any other information I could provide? I'm able to connect to instance via ssh
[05:30] <davecheney> juju status -v
[05:30] <x-warrior> I can see '15088 ?        Ssl    0:00 /var/lib/juju/tools/machine-0/jujud machine --log-file /var/log/juju/machine-0.log --data-dir /var/lib/juju --machine-'  on server
[05:30] <x-warrior> on instance*
[05:30] <davecheney> what has probably happened is the control bucket provider-state file does not match the instance id of your bootstrap node
[05:31] <x-warrior> 2013-08-27 05:30:37 INFO juju ec2.go:128 environs/ec2: opening environment "amazon", 2013-08-27 05:30:43 ERROR juju supercommand.go:235 command failed: no instances found
[05:31] <davecheney> pop open the s3 console and get the contents of the provider-state file from your control bucket
[05:31] <davecheney> i suspect it is missing or empty
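For reference, a healthy provider-state file at this point was a tiny YAML document along these lines (the instance id is illustrative, reusing the one from later in this conversation):

```yaml
state-instances:
- i-2a4ed636
```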
[05:32] <x-warrior> should i paste it to pastebin?
[05:32] <x-warrior> http://pastebin.com/NvTmTBDi
[05:32] <davecheney> right, so does that instance number match the machine that is running ?
[05:33] <x-warrior> yes it does
[05:39] <x-warrior> rebooting the instance and checking juju status again, gives me the same result
[05:41] <davecheney> x-warrior: can you paste the output of juju status -v
[05:41] <davecheney> i suspect it will error very early
[05:42] <x-warrior> http://pastebin.com/4fDUyAmA
[05:42] <x-warrior> that is it
[05:42] <x-warrior> too short I guess
[05:44] <davecheney> x-warrior: so juju looks in the control bucket, gets the instance id of the machine
[05:44] <davecheney> converts it to an ip address
[05:44] <davecheney> uses that ip to talk to mongodb running on the bootstrap node
[05:45] <davecheney> for whatever reason that instance id, or the yaml is invalid
[05:45] <davecheney> so that is all she wrote
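Those steps can be checked by hand, roughly like this; the bucket name and hostname are placeholders, and s3cmd / the EC2 API tools are just one choice of client:

```shell
# 1. what instance id does juju think is the bootstrap node?
s3cmd get s3://your-control-bucket/provider-state ./provider-state
cat ./provider-state

# 2. does that instance exist, and what is its address?
ec2-describe-instances i-2a4ed636

# 3. is mongodb reachable on the state port juju uses?
nc -zv ec2-xx-xx-xx-xx.compute-1.amazonaws.com 37017
```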
[05:45] <davecheney> x-warrior: two secs, checking something
[05:45] <x-warrior> I'm not sure that I'm following, but ok
[05:45] <x-warrior> :D
[05:46] <x-warrior> I'm trying to start without constraint option, to check if it is something related to micro instance
[05:46] <x-warrior> or something
[05:47] <davecheney> x-warrior: short version is
[05:47] <davecheney> that environment is broken
[05:47] <davecheney> you will probably have to delete it via the aws console and start again
[05:48] <x-warrior> delete ec2, security groups and s3 is enough?
[05:48] <x-warrior> or it writes to some other place?
[05:48] <MACscr> ok, a controller is needed for each region/environment, right?  If so, is there any type of panel to keep track of everything as a whole?
[05:48] <davecheney> delete the control bucket
[05:48] <davecheney> and the instances you have lying around
[05:49] <davecheney> MACscr: you need a bootstrap node per environment
[05:49] <davecheney> and an environment can only cover one provider
[05:49] <davecheney> so in effect you need one bootstrap node per ec2 region
[05:49] <x-warrior> what do you mean by control bucket? bootstrap instance?
[05:50] <davecheney> control bucket is listed in your ~/.juju/environments.yaml
[05:50] <davecheney> it is where juju records persistent state
[05:50] <davecheney> bootstrap instance is that machine that you have left running
[05:50] <davecheney> it is the machine that juju spawns to host the mongodb
[05:50] <MACscr> davecheney: right, a bootstrap, make sense. So while juju helps get things deployed and related, is there not a tool for managing things after the fact?
[05:51] <davecheney> MACscr: you'd have to define after the fact
[05:51] <davecheney> we have the gui where you can view your juju environment
[05:51] <x-warrior> there is juju-gui
[05:51] <davecheney> and commands like add-unit/remove-unit help you scale up and scale down the number of units of a particular service
[05:52] <davecheney> but juju does not compete in the nagios/zabbix/zenoss space as a process monitoring tool
[05:52] <x-warrior> and you can deploy more than one service to the same machine now
[05:52] <davecheney> x-warrior: --to should be used with care
[05:52] <davecheney> really
[05:52] <davecheney> it takes all the safety guards off
[05:52] <MACscr> ok, but can juju-gui work with more than one environment? it's restricted to a single one, just like how a bootstrap is needed for each one. Correct?
[05:53] <davecheney> MACscr: yes, correct
[05:53] <davecheney> each environment is separate and unrelated
[05:53] <davecheney> the juju client can switch between environments with the -e flag, or the juju switch command
[05:53] <davecheney> but the juju-gui, being a charm itself, is deployed into an environment
[05:53] <MACscr> right, so something is needed to manage everything
[05:53] <davecheney> so only controls that environment
[05:53] <MACscr> not just a single environment at a time
[05:54] <davecheney> MACscr: we have no product for that at this time
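The per-environment commands mentioned above look like this in practice; the environment names are whatever you chose in your environments.yaml:

```shell
juju status -e us-east    # run one command against a named environment
juju switch us-west       # change the default environment
juju status               # subsequent commands now talk to us-west
```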
[05:54] <x-warrior> s3 cleaned up, instance terminated, .juju deleted... starting from scratch now
[05:54] <MACscr> not when it comes to relational stuff, but general instance management, etc
[05:54] <davecheney> MACscr: juju talks about services and units of services
[05:54] <MACscr> davecheney: yeah, seems like landscape is the only option of yours and its ridiculously priced
[05:54] <davecheney> the fact that it creates machines to host them is sort of co-incidental
[05:55] <davecheney> MACscr: i cannot comment on the price, but you have certainly interpreted our marketing message correctly
[05:55] <MACscr> hmm, well foreman with puppet can manage everything, but im trying not to have a bunch of overlap with tools
[05:56] <MACscr> and i really havent figured out puppet yet =P
[05:57] <x-warrior> after bootstrapping should I wait a while for zookeeper and stuff like that to go 'up'?
[05:57] <x-warrior> or get installed or something?
[05:58] <davecheney> x-warrior: we don't use zookeeper anymore
[05:58] <davecheney> we use mongodb
[05:58] <davecheney> but the result is the same
[05:58] <x-warrior> ah sweet
[05:59] <davecheney> juju status will block until the bootstrap node is up and running
[05:59] <davecheney> you can see that with
[05:59] <davecheney> juju status -v
[05:59] <davecheney> in fact, you should pass -v to everything that you do with juju
[05:59] <davecheney> otherwise you'll have to rerun the command with -v anyway
[05:59] <x-warrior> yeap I learned that
[05:59] <x-warrior> x)
[05:59] <davecheney> we are working on our logging
[05:59] <davecheney> it needs fixing
[06:00] <davecheney> we're not done yet
[06:00] <x-warrior> no problem :D
[06:00] <x-warrior> http://pastebin.com/N9eFC8ur
[06:00] <x-warrior> that is all the outputs from a 'fresh' start... (deleted s3, groups, instance, .juju files)
[06:00] <davecheney> x-warrior: something is very wrong with your setup
[06:00] <davecheney> which juju
[06:01] <davecheney> it smells like you have both 0.7 and 1.12 installed
[06:01] <x-warrior> 1.12.0-raring-amd64
[06:01] <davecheney> ok
[06:01] <x-warrior> at least juju version gives me that
[06:01] <x-warrior> and this is the first time I install juju (like 1 hour ago)
[06:01] <davecheney> hmm, i'm a bit stumped
[06:02] <x-warrior> (on this computer ofc, I was trying on a mac before, but I had the same issue... so I thought it was a mac os related problem... then I moved to ubuntu...)
[06:02] <davecheney> can you confirm that i-2a4ed636 is running in the aws console
[06:03] <x-warrior> yes I can see it
[06:04] <x-warrior> I can connect to it as well
[06:04] <davecheney> i cannot explain why this is not working for you
[06:04] <davecheney> the logic is
[06:04] <davecheney> get the instance id from the provider-state file in the control bucket
[06:04] <davecheney> look up the ip that the instance id points to
[06:04] <x-warrior> is there a -vv option or something?
[06:04] <davecheney> then connect to mongodb on that ip
[06:05] <x-warrior> which gets more verbose?
[06:05] <davecheney> x-warrior: there is --debug
[06:05] <davecheney> but I don't think it will make it much more verbose
[06:05] <davecheney> and it is failing at the first step
[06:05] <davecheney> it's rejecting your provider-state file in the control bucket
[06:05] <davecheney> i do not know why
[06:05] <davecheney> i have not seen this failure mode before
[06:06] <davecheney> x-warrior: just for shits and giggles
[06:06] <x-warrior> 17070/37017 are the correct ports?
[06:06] <davecheney> could you change the region: key in your environments.yaml to another region
[06:06] <x-warrior> yes I can try that
[06:06] <davecheney> (after deleting the current environment of course)
[06:06] <davecheney> 37017 is the correct port
[06:06] <davecheney> but you don't get that far
[06:07] <x-warrior> uhmm
[06:07] <x-warrior> so that seems very weird I guess
[06:07] <x-warrior> x)
[06:08] <davecheney> i have not seen that failure mode before
[06:09] <x-warrior>  deploying to another region
[06:19] <x-warrior> davecheney, changing the region 'fixed' it a little, it gets a little bit further
[06:19] <davecheney> fix ?
[06:19] <kurt_> davecheney: do you know this error? error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST
[06:20] <davecheney> kurt_: i'm not a maas expert
[06:20] <davecheney> i mainly do the public clouds
[06:20] <davecheney> let me check
[06:21] <kurt_> googling it now...
[06:21] <davecheney> rough guess there is a permission problem creating or reading your control bucket
[06:22] <kurt_> I believe it is this: https://bugs.launchpad.net/maas/+bug/1204507
[06:22] <_mup_> Bug #1204507: MAAS rejects empty files <verification-done-precise> <verification-needed> <MAAS:Fix Committed by rvb> <MAAS 1.2:Fix Committed by rvb> <MAAS 1.3:Fix Committed by rvb> <maas (Ubuntu):Fix Released> <maas (Ubuntu Precise):Fix Committed> <maas (Ubuntu Quantal):Confirmed> <maas (Ubuntu Raring):Fix Committed> <https://launchpad.net/bugs/1204507>
[06:22] <x-warrior> http://pastebin.com/MiXJ4y9Z
[06:22] <kurt_> I think jcastro warned me about this this morning
[06:22] <x-warrior> well it is not a 'bug fix' but it is going a little further... I have no idea what is different besides the region...
[06:23] <kurt_> but some of the other folks thought it was fixed
[06:23] <x-warrior> I was using sa-east-1 region which is São Paulo, BR region... maybe some inconsistency between regions? :S
[06:24] <davecheney> kurt_: it's a known bug in maas
[06:24] <davecheney> if you switch your maas install to the daily build
[06:24] <davecheney> it is fixed there
[06:24] <davecheney> i do not have a timeframe for when the fix will be available in general
[06:25] <kurt_> ok, was just chatting with bigjools too
[06:25] <kurt_> lol
[06:25] <davecheney> x-warrior: yup, that is normal operation for ec2
[06:25] <davecheney> it takes 3-5 mins for each instance to start up
[06:26] <davecheney> once it is ready status will succeed, that is why it is retrying
[06:26] <x-warrior> yeap
[06:26] <x-warrior> now it listed the bootstrap machine
[06:27] <x-warrior> should the destroy-environment option
[06:27] <x-warrior> destroy the instance and do a correct cleanup?
[06:27] <davecheney> x-warrior: which region did not work
[06:27] <davecheney> and which region did work
[06:27] <davecheney> destroy-environment will remove all the instances and the control bucket
[06:27] <davecheney> it might leave the security groups around
[06:27] <davecheney> that is fine
[06:27] <x-warrior> ok
[06:28] <davecheney> they are not expected to be deleted and can cope with being reused
[06:28] <x-warrior> so, now I'm using the default region option
[06:28] <x-warrior> without setting it on environment.yaml file
[06:28] <x-warrior> and using region: sa-east-1
[06:28] <x-warrior> it does not work
[06:28] <davecheney> ok, so there is something wrong with the sao paulo region atm
[06:28] <davecheney> it happens
[06:29] <davecheney> ap-southeast-2 was broken for several months for me
[06:29] <davecheney> x-warrior: if you would care to, you should log a bug about this on juju-core
[06:29] <davecheney> although ec2 will deny it
[06:29] <davecheney> each region is subtly different
[06:29] <x-warrior> where should I log it?
[06:29] <x-warrior> launchpad?
[06:30] <davecheney> launchpad.net/juju-core/
[06:30] <x-warrior> ok, I will log that later today
[06:30] <x-warrior> :D
[06:32] <x-warrior> btw I will keep joining this channel
[06:32] <x-warrior> if you guys need more help to trace it
[06:32] <x-warrior> I will be glad to help you
[06:32] <davecheney> x-warrior: thanks for the offer
[06:32] <davecheney> pointing the finger at sa-east-1 is good enough for now
[06:39] <x-warrior> davecheney, ok, I will do that when I wake up later... need to get some sleep now, almost 4am here
[06:39] <x-warrior> thanks for all the help
[06:39] <x-warrior> :D
[06:39] <x-warrior> have a good one
[06:40] <davecheney> x-warrior: ok, thanks
[06:40] <davecheney> ttys
[08:57] <vds> stub, hello, can I ask you how to use the persistent volume support of the postgres charm? That's how I changed the config http://paste.ubuntu.com/6030130/
[08:58] <vds> the volume exists already, of course
[08:59] <stub> vds: The bit of the charm I'm not familiar with :) I can try, or invoke our devops if needed.
[08:59] <vds> stub, who's the one to blame? :)
[09:00] <stub> vds: You are modifying config.yaml, instead of passing configuration parameters to the charm?
[09:00] <vds> stub, yes
[09:04] <stub> vds: I think you are supposed to use a config file looking like http://paste.ubuntu.com/5751886/, and then do 'juju deploy --config=myconfig.yaml cs:postgresql'
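The shape stub is describing is a YAML file keyed by the service name. The option names below are illustrative only (check the charm's own config.yaml for the real keys and value formats):

```yaml
postgresql:
  version: "9.1"
  volume-map: '{"postgresql/0": "vol-00000000"}'
```

It would then be passed at deploy time with `juju deploy --config=myconfig.yaml cs:postgresql`, as stub says.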
[09:07] <gnuoy> hi vds, whats the issue ?
[09:07] <vds> stub, thanks I'll try
[09:08] <vds> gnuoy, when I try to deploy postgresql changing the config this way http://paste.ubuntu.com/6030130/
[09:08] <vds> gnuoy, I get this error http://paste.ubuntu.com/6030125/
[09:09] <mthaddon> er, you're changing the config directly in the charm rather than passing it as an option to the charm?
[09:09] <gnuoy> looks like the version param is missing ./
[09:09] <gnuoy> ?
[09:09] <gnuoy> HOOK KeyError: 'version'
[09:20] <vds> mthaddon, is that bad?
[09:21] <mthaddon> vds: yes, you shouldn't change the charm itself before deploying
[09:23] <vds> mthaddon, ok, thanks.
[11:12] <alejandro> hello, spanish?
[11:12] <alejandro> spanish?
[11:17] <alejandro> what is this for?
[11:18] <varud> Anybody have thoughts on why an experimental 12.04 image I'm creating with juju has '    agent-state: down' after restart?
[11:19] <varud> Found this old stackoverflow - http://askubuntu.com/questions/218645/juju-instances-in-agent-state-down-after-turning-them-off-and-back-on-on-ec2
[11:20] <varud> but it's not relevant anymore (there's no juju-machine-agent.conf file)
[14:21] <mthaddon> jcastro: I can't find any docs on the upgrade-charm hook any more - is that expected?
[14:23] <marcoceppi> mthaddon: no, not expected. Let me see if I can find them. If not what's your question, I'd be happy to answer it
[14:25] <mthaddon> marcoceppi: no question besides wondering where the docs were - thx
[14:25]  * marcoceppi makes notes about having documentation for all hooks in the author docs
[14:36] <jcastro> marcoceppi: can you add it to the doc sprint spreadsheet?
[14:36] <jcastro> so we don't forget?
[14:36] <jcastro> evilnickveitch: ^^^
[14:36] <marcoceppi> jcastro: already did that
[14:36] <jcastro> <3
[14:36] <marcoceppi> E>
[14:37] <m_3> what the heck is that?
[14:37] <m_3> heart in a box?
[14:37] <rick_h> that's 'marco'
[14:37] <m_3> unhuh
[14:37] <rick_h> we just smile and move on
[14:37] <rick_h> :P
[14:53] <jcastro> http://insights.ubuntu.com/news/juju-charm-championship-expands-with-more-categories-more-prizes/
[14:53] <jcastro> help me spread the word folks!
[14:54] <m_3> hmmm... getting intermittent 503's on manage.jujucharms.com again today
[14:59] <jcastro> 5 minute warning on the first UDS session
[15:00] <jcastro> http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/
[15:00] <jcastro> we're starting with a charm policy review
[15:01] <kurt_> jcastro: juju 1.12 did NOT work
[15:01] <jcastro> ah, where did it fall over?
[15:02] <kurt_> jcastro: I ran in to this bug: http://pastebin.ubuntu.com/6032955/
[15:02] <kurt_> error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST
[15:02] <kurt_> on bootstrap
[15:03] <jcastro> hrpmh
[15:03] <kurt_> and, after talking with bigjools, there are no near term plans to put the fix in to maas in quantal
[15:04] <kurt_> fix has already made its way to precise and raring (8/15)
[15:04] <mattyw> marcoceppi, I live here :)
[15:04] <marcoceppi> of course you do, just broadcasting for those who don't already live around here
[15:06] <kurt_> jcastro: for what I am trying to get done, do you suggest I start over on precise?  Raring is not LTS, right?
[15:06] <marcoceppi> mojo706: anything juju goes here, even "off-topic" juju, whatever that might be
[15:07] <mojo706> thanks
[15:09] <marcoceppi> Charm Policy review ongoing: http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/
[15:10] <mattyw> marcoceppi, have you and jcastro already started working towards the netflix cloud prize? or is it just planning at the moment?
[15:10] <marcoceppi> mattyw: I'm not sure, I think m_3 was spearheading that, IIRC
[15:39] <kurt_> jcastro: would raring be a more viable option as long as my maas nodes are precise?
[15:39] <jcastro> I think going precise all the way is the way to go personally
[15:39] <jcastro> sorry I am unresponsive, we're doing UDS today
[15:40] <kurt_> no worries
[15:40] <kurt_> another acronym I don't know lol
[15:40] <kurt_> doesn't matter
[15:40] <kurt_> OK, I can try that.
[15:43] <marcoceppi> kurt_: Ubuntu Developer Summit, http://summit.ubuntu.com/
[15:43] <kurt_> marcoceppi: ah thanks! awesome!
[15:43] <marcoceppi> kurt_: actually, I think this gives more information http://uds.ubuntu.com/
[15:46] <jcastro> kurt_: you might want to sit in on the openstack ones!
[15:46] <kurt_> I was just looking for those
[15:47] <m_3> mattyw: re netflix cloud-prize... how can I help?
[15:47] <mattyw> m_3, just wondering if there's a way I can help with it?
[15:47] <m_3> mattyw: I'm doing some netflixoss charms, but not for the cloud-prize
[15:47] <m_3> mattyw: I'm disqualified for the netflix cloud-prize proper
[15:48] <mattyw> m_3, how come?
[15:48] <x-warrior> davecheney, just filled that bug report :D
[15:49] <m_3> mattyw: there're reciprocal prizes/judging between canonical and netflix
[15:49] <m_3> so canonical employees are excluded from both prizes
[15:50] <m_3> mattyw: but there's lots to do... I'm getting recipes-rss working atm
[15:50] <m_3> lots of subs to be created
[15:50] <m_3> and I'm throwing around ideas about gradle/groovy hook impls
[16:03] <marcoceppi> Starting in about 2 minutes, http://summit.ubuntu.com/uds-1308/meeting/21896/servercloud-s-eco-messaging
[16:47] <jcastro> hey sinzui
[16:47] <jcastro> http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/
[16:48] <jcastro> wanna attend or send someone so we can talk charm review queue stuff?
[16:49] <kurt_> silly question: is local provider support all of the lxc stuff?
[16:50] <kurt_> or is it *any* method in which juju is getting deployed locally?
[16:50] <jcastro> yeah
[16:50] <jcastro> when we say local provider we mean LXC support
[16:50] <jcastro> currently. :p
[16:50] <kurt_> ok
[16:51] <kurt_> that appears to be hot topic
[16:51] <AskUbuntu> Checking my juju instance through Amazon's AWS console? | http://askubuntu.com/q/337987
[16:51] <sidnei> hazmat: i think we need a new release of juju-deployer, and then to ping jamespage to upload to saucy. the one currently in saucy is missing some important fixes.
[16:53] <jamespage> sidnei, hazmat: let me know when and what
[16:57] <kurt_> jamespage: when you consolidate services on to fewer nodes (juju --to) , do you have a suggestion on which charms/services stack best together or some kind of dev blueprint you guys use?
[16:59] <sinzui> jcastro, thank you for the reminder. I thought today was the 26th.
[16:59] <jcastro> it's the 27th!
[17:00] <jcastro> your name is Curtis and we have a session today.
[17:00] <jcastro> :)
[17:00] <kurt_> I was looking at this, but I don't believe this follows the tenant client you were suggesting I research https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
[17:13] <hazmat> sidnei, it is? i thought it got cut from the branch directly
[17:14] <sidnei> hazmat: it's 0.2.1 :/
[17:17] <hazmat> sidnei, its possible we have a packaging version divergence against the same source.
[17:17] <hazmat> jamespage, so latest rev 114 / version 0.2.3 (just incremented to be sure) is stable
[17:17] <sidnei> hazmat: it's missing the fix from r110 at least
[17:18] <hazmat> jamespage, if you could upload that we should be good for saucy. there are other changes inbound but they can land into the ppa for now.
[17:22] <marcoceppi> hazmat: could you put that version in the stable ppa as well?
[17:56] <jcastro> arosales: ok details updated for the next session
[17:56] <jcastro> so we should be able to just start on time this time, heh
[17:57] <arosales> jcastro, sorry about the conflicts last time
[17:57] <arosales> jcastro, I am leaving getting session started in your capable hands :-)
[17:57] <jcastro> no worries, first day is always rough
[17:57] <jcastro> that seems to have been our only problem today, I'll take it!
[17:57] <arosales> jcastro, for sure
[18:00] <jamespage> hazmat, I'm cutting from the tarballs on pypi
[18:00] <marcoceppi> jcastro: link?
[18:01] <jcastro> https://plus.google.com/hangouts/_/db564284829c94ffbbbd54b843fc04d071554fb5?authuser=0&hl=en
[18:01] <jcastro> http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/
[18:04] <hazmat> jamespage, aha, thanks
[18:04] <hazmat> jamespage, updated on pypi
[18:19] <jamespage> hazmat, sidnei: uploaded to saucy
[18:20] <jamespage> hazmat, is that compatible with juju-core 1.12.0 ?
[18:20] <hazmat> jamespage, yes
[18:21] <jamespage> kurt_, its probably easier to say what won't go together right now
[18:21] <jamespage> nova-compute, quantum-gateway, nova-cloud-controller will all conflict with each other in config files
[18:21] <jamespage> likewise ceph, cinder, glance and nova-compute (around /etc/ceph)
[18:47] <kurt_> jamespage: thanks.  do you use a two or three node deployment for testing?  I would be curious to see what people typically cluster together on the same node
[18:48] <jamespage> kurt_, if you just want compute (i.e. no cinder) then you can get away with three nodes
[18:48] <kurt_> jcastro: is it normal I would have to "sync-tools" after fresh install of juju 1.12?
[18:48] <jamespage> kurt_, right now its tricky and kinda unsupported because of the conflicts in the filesystem
[18:48] <jamespage> kurt_, juju container support will help with that
[18:48] <jamespage> a charm assuming it has control over the filesystem is not unreasonable
[18:49] <kurt_> juju container support = lxc = local support you guys were just talking about?
[18:49] <jamespage> kurt_, kinda
[18:49] <jamespage> lxc is used in the local provider right now
[18:49] <kurt_> ok
[18:49] <jamespage> but a feature is being worked on to allow you to add LXC machines in other providers
[18:50] <jamespage> so you can slice up a server using LXC for deploying servers into
[18:50] <jamespage> servers/charms
[18:50] <kurt_> right, but just forcing with --to is going to cause problems?
[18:50] <kurt_> that was the path I was going down
[18:52] <sarnold> I wouldn't expect --to to work with every possible combination of charms
[18:52] <sarnold> but _many_ combinations might work fine
[18:53] <kurt_> sarnold: right.  If someone could share their working blueprint for a successful deployment, that would be awesome
[18:54] <kurt_> whether its 2, 3 or more nodes - I am just wondering what works
[18:54] <kurt_> juju 1.12 -> YARGH! LOL http://pastebin.ubuntu.com/6033776/
[18:55] <sarnold> lol
[18:56]  * kurt_ beating fists, stomping feet, and rolling eyes
[18:56] <marcoceppi> kurt_: destroying environment removes the bucket :\
[18:56] <marcoceppi> davecheney: ^ Might want to change that.
[18:56] <kurt_> marcoceppi: does syncing tools bootstrap too?
[18:56] <kurt_> that doesn't seem logical
[18:57] <marcoceppi> So, juju destroy-environment; juju sync-tools; juju bootstrap
[18:57] <sarnold> marcoceppi: dunno. I could see wanting the billing to end when the environment is destroyed...
[18:57] <marcoceppi> kurt_: you need to sync-tools prior to bootstrap but after destroy
[18:58] <kurt_> marcoceppi: ok, the sync-tools is new for 1.12 for me
[18:58] <kurt_> never had to do that before
[19:00] <kurt_> marcoceppi: why does it say this then? "error: environment is already bootstrapped"
[19:00]  * kurt_ confused
[19:02] <marcoceppi> kurt_: because when you run juju bootstrap it creates a file in the bucket that says "this is bootstrapped", even if an instance doesn't launch. It's to prevent two bootstraps from happening if the cloud provider takes a long time to start up the bootstrap node
[19:02] <marcoceppi> So there's a bug in there, in that if no tools are matched, or if there is a general bootstrap error, it should clean up that file
[19:04] <kurt_> marcoceppi: but the tools are there after download, is the error being generated prior to the tools downloading?  If I destroy my environment, there is no bootstrapped node, so that error seems misleading
[19:04] <marcoceppi> kurt_: there's a juju bootstrap --upload-tools option, however people keep telling me that it's more for development versions of Juju and that sync-tools is the right way to go.
[19:04] <marcoceppi> kurt_: I'm not sure of the nuances with sync-tools, I've not had the pleasure of using it much
[19:05] <marcoceppi> hazmat: sync-tools and when to use it? ^
[19:05] <kurt_> marcoceppi: I don't think one has a choice when bootstrapping. you must have the tools
[19:06] <marcoceppi> kurt_: Yes, most public clouds have the tools already sync'd somewhere, so it's effortless
[19:07] <marcoceppi> for private clouds and maas, I'm not sure of the proper procedure
[19:07] <marcoceppi> I'm pretty sure you want to follow: juju destroy-environment; juju sync-tools; juju bootstrap
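marcoceppi's sequence, written out as a session (a sketch against the juju 1.x CLI; the -v flag, suggested further down in the log, gives verbose output):

```shell
# Private-cloud / MAAS reset cycle on juju 1.x: tear the environment
# down, copy the released tools into the environment's object storage,
# then bootstrap. Add -v to each command when debugging.
juju destroy-environment
juju sync-tools
juju bootstrap
```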
[19:07] <kurt_> marcoceppi: I would hope it would function in nearly the same way. :)
[19:07] <marcoceppi> I think the sync-tools is a pre-step to the process
[19:07] <kurt_> that's exactly what I did and that error showed up
[19:07] <marcoceppi> the bootstrap error?
[19:07] <kurt_> yeah
[19:08] <marcoceppi> kurt_: that's not right
[19:08] <marcoceppi> what happens if you destroy-environment then bootstrap again?
[19:08] <kurt_> and in looking at my maas, I have no bootstrapped node
[19:08] <kurt_> same thing
[19:08] <kurt_> let me try once more - I will paste to paste bin for you
[19:09] <marcoceppi> so there's something else going on
[19:09] <marcoceppi> kurt_:  use -v for both commands
[19:10] <kurt_> doing that...
[19:12] <kurt_> marcoceppi: ah… heh, in reviewing what I cut and pasted above, there was a slight error
[19:12] <kurt_> marcoceppi: notice: juju bootstrapdestroy-environment; juju sync-tools; juju bootstrap
[19:13] <marcoceppi> sarnold: I understand the wanting to kill the billing, possibly a juju destroy-environment --preserve-tools would resolve that
[19:13] <kurt_> should be: juju destroy-environment;  juju sync-tools; juju bootstrap <- My bad for not picking that up
[19:13] <marcoceppi> kurt_: np, give that a go
[19:14] <kurt_> working :)
[19:14] <hazmat> marcoceppi, when you're in a private cloud and don't have a compile env, ie you just want to use juju
[19:14] <hazmat> marcoceppi, it will copy the latest release from public ec2 bucket into private cloud object storage (for the env/user)
[19:14] <marcoceppi> hazmat: so it's really for those who don't have go-land installed and don't want to compile the tools themselves? IE majority of users?
[19:15] <hazmat> marcoceppi, yup majority of users in a private cloud.. public clouds should already have tools installed
[19:15] <marcoceppi> hazmat: ack, gotchya
[19:15] <marcoceppi> We'll need to update our docs to let people know about sync-tools and maas/private clouds
[19:15] <hazmat> there's another critical /high bug for private cloud usage, re allowing invalid ssl certs that causes issues for juju-core
[19:16] <hazmat> atm, requires updating the client's os level certs to accept the private cloud ssl cert ca as trusted.
[19:17] <hazmat> bug 1202163 fwiw
[19:17] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1202163>
[19:17]  * marcoceppi +1s
[19:25] <kurt_> hazmat: yes I saw something about a cert error
[19:25] <kurt_> kurt@maas-cntrl:~$ juju status
[19:25] <kurt_> error: no CA certificate in environment configuration
[19:25] <hazmat> that's a little different
[19:26] <kurt_> rebooting the node, destroying, and restarting seems to have fixed that anyways
[19:26] <hazmat> juju also internally uses ca certs to secure communications with mongodb and the api server
[19:26] <hazmat> those certs are kept in the JUJU_HOME (default ~/.juju)
[19:26] <hazmat> the certs referenced in the bug are the underlying iaas provider ssl certs
[19:26] <jcastro> marcoceppi: kirkland has a bunch of bugs that your sentry interface autodocumenter should fix
[19:26] <jcastro> where can he file bugs?
[19:27] <kurt_> ah ok - but that's mostly for external provider scenarios, right?
[19:27] <marcoceppi> jcastro: what kind of bugs?
[19:27] <hazmat> kurt_, it also applies to openstack using ssl
[19:27] <hazmat> in a private cloud
[19:27] <marcoceppi> lp:amulet should suffice
[19:27] <marcoceppi> jcastro: ^
[19:27] <kurt_> hazmat: ok, thnx
[19:27] <kirkland> marcoceppi: okay, thanks
[19:28] <X-warrior`> so I'm creating a charm, and now I used deploy on it, but some stuff went wrong and I'm fixing it, how can I remove this one to try the new one? or is it possible to update?
[19:29] <marcoceppi> X-warrior`: you can either use juju upgrade-charm to upgrade the charm in place, you can deploy the charm again under a different alias (IE: `juju deploy --upgrade --repository /path/to/charm/repo local:charm-name your-alias`) or you can destroy the environment and re-bootstrap
[19:30] <kirkland> marcoceppi: thanks!  https://bugs.launchpad.net/amulet/+bug/1217540
[19:30] <_mup_> Bug #1217540: Every interface defined by a charm should be documented with examples <Amulet:New> <https://launchpad.net/bugs/1217540>
[19:31] <marcoceppi> kirkland: so I can do the first half of that, the documentation. The examples I might defer to another bug on another project. I'll let you know.
[19:31] <marcoceppi> well, I can attempt to do the first part*
[19:32] <kirkland> jcastro: marcoceppi: okay, and now I'd like to file a bug, requesting that the squid-reverseproxy charm add support for https_port -- where do I file that?  against launchpad.net/charms?
[19:32] <jcastro> yeah
[19:32] <jcastro> that would be against the charm itself
[19:34] <web-brandon> If I include a folder in my charm with files.  Where is it placed on the service instance server?
[19:34] <X-warrior`> marcoceppi, ty, will upgrade-charm update an already deployed instance?
[19:35] <X-warrior`> or do I need to upgrade-charm and then call deploy with --upgrade flag
[19:35] <web-brandon> Or is it only on the juju bootstrap instance
[19:36] <marcoceppi> X-warrior`: So, upgrade charm will update the charm contents, but that's it. If you want it to run hooks again (like run hooks/install or hooks/config-changed) you'll need to create a new hook in hooks/ called upgrade-charm and put that logic in there, for example: http://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/hooks/upgrade-charm
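A minimal upgrade-charm hook in the spirit of the wordpress example linked above (a sketch, not the wordpress hook verbatim; re-running install like this is only safe if your hooks are idempotent):

```shell
#!/bin/sh
# hooks/upgrade-charm: juju runs this after unpacking the new charm
# version. A common pattern is to re-run the existing hooks so an
# upgrade applies the same logic as a fresh deploy.
set -e
hooks/install
hooks/config-changed
```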
[19:36] <marcoceppi> X-warrior`: nope, those are two different commands
[19:36] <marcoceppi> one is just an upgrade charm, the other will deploy a fresh copy of the charm under a new service name, so it will follow the typical install and deploy as if it was freshly deployed
[19:36] <X-warrior`> ok
[19:37] <X-warrior`> and what about removing already deployed services?
[19:37] <marcoceppi> X-warrior`: you can run juju destroy-service to remove deployed services, but you can't, IIRC, deploy a service again with the same name in an environment
[19:38] <marcoceppi> So you'll need to use the `juju deploy --upgrade --repository ... local:charm <alias>` syntax
[19:39] <marcoceppi> X-warrior`: So, in the event of mysql, if you've already run `juju deploy --repository ... local:mysql` and then you want to deploy fixes. You could run `juju upgrade-charm --repository ... mysql` OR if you wanted to start fresh, `juju destroy-service mysql` then run `juju deploy --upgrade --repository ... local:mysql db` that'll deploy mysql under the alias of "db" so you don't have it deployed again as "mysql"
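The two paths marcoceppi describes, as commands (the repository path and the "db" alias are placeholders from his example):

```shell
# Option 1: upgrade the deployed mysql service in place
juju upgrade-charm --repository /path/to/charm/repo mysql

# Option 2: start fresh, redeploying under the alias "db", since the
# name "mysql" can't be reused in this environment
juju destroy-service mysql
juju deploy --upgrade --repository /path/to/charm/repo local:mysql db
```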
[19:40] <X-warrior`> yeap, but if I would like to deploy fixes, my charm must have a upgrade-charm
[19:40] <X-warrior`> hook
[19:40] <X-warrior`> right?
[19:41] <marcoceppi> X-warrior`: It's not required, using upgrade-charm without an upgrade-charm hook works, a new version of the charm will be deployed
[19:41] <marcoceppi> However, that's all juju will do, is unpack the new version. It won't run any other hooks
[19:42] <X-warrior`> oh I get it
[19:42] <marcoceppi> X-warrior`: So, you could write an upgrade-charm hook right now, and then run upgrade-charm. Juju will unpack the new version then execute the new hook
[19:42] <X-warrior`> should the destroy-service command destroy the instance as well?
[19:42] <marcoceppi> But all hooks are considered "optional" for juju, if it doesn't exist juju just skips it and moves on
[19:42] <X-warrior`> yeap
[19:42] <marcoceppi> X-warrior`: no, instances will remain. It's juju's way of protecting data. You can remove them with `juju terminate-machine <machine-number>` if you're done with them
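So a full cleanup, per marcoceppi, is two steps ("test" is the service from this conversation; the machine number comes from juju status):

```shell
juju destroy-service test      # removes the service, keeps the machine
juju status                    # find the now-unassigned machine number
juju terminate-machine 1       # reclaim the instance once it's free
```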
[19:43] <X-warrior`> ok
[19:43] <X-warrior`> let me try
[19:43] <marcoceppi> cool
[19:46] <X-warrior`> ERROR juju supercommand.go:235 command failed: no machines were destroyed: machine 1 has unit "test/0" assigned
[19:47] <marcoceppi> X-warrior`: is test/0 in an error state?
[19:47] <marcoceppi> X-warrior`: it probably says something like life: dying and agent-state is in error?
[19:48] <X-warrior`> I guess test/0 is an error state since the install hook failed and on log I can see "awaiting error resolution for "install" hook"
[19:49] <marcoceppi> X-warrior`: so, when a charm is in an error state it stops all other events and leaves them in an events queue
[19:49] <marcoceppi> In this case, config-changed and a bunch of other hooks are all queued up, with the last event being the service destruction.
[19:49] <X-warrior`> ok
[19:49] <marcoceppi> X-warrior`: so you'll need to "resolved" the errors for the charm before it can get to those other events
[19:50] <marcoceppi> X-warrior`: `juju resolved test/0` should suffice
[19:50] <marcoceppi> X-warrior`: you may need to run that command multiple times if it hits any other errors
[19:50] <X-warrior`> ok
[19:50] <marcoceppi> X-warrior`: Also, for future reference, you can have juju retry a hook, using `juju resolved --retry test/0`
[19:51] <X-warrior`> what does the 0 stands for? install?
[19:51] <marcoceppi> X-warrior`: that's the unit number. So each service you deploy gets at least one unit. If you wanted to scale out you could run juju add-unit test and you'd get test/0 and test/1
[19:51] <marcoceppi> if you notice in juju status they're listed under the "units" heading for the service
[19:52] <X-warrior`> so if I send the --retry flag it will rerun the failed hook
[19:53] <X-warrior`> so let's say in my case I had a problem with install hook, so it gets locked
[19:53] <X-warrior`> if I use upgrade-charm, it will get enqueued
[19:53] <X-warrior`> and if I use the --retry it will fail again
[19:53] <marcoceppi> X-warrior`: yes, so you could, for instance, juju ssh test/0 (to ssh in to the node), switch to the root user, go to /var/lib/juju/agents/unit-test-0/charm/, edit hooks/install, fix whatever the problem was, log out and then run juju resolved --retry test/0 to try again
[19:53] <X-warrior`> and then I need to manually update the hook on the server
[19:54] <marcoceppi> Right, so at this point you're editing things on the server, it's just one possible workflow for writing charms. It's dangerous, because if you don't copy the fix to your local charm and destroy the service, you lose your changes
[19:54] <X-warrior`> yeap
[19:54] <marcoceppi> the alternative is to destroy the service, fix it locally, re deploy, or to destroy-environment, fix it locally, re-bootstrap, then deploy
[19:55] <marcoceppi> It's up to whatever way works best for you as an author
[19:55] <marcoceppi> each has their pros and cons
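The edit-on-the-server loop described above, as a sketch (unit name and path are from marcoceppi's example; remember his warning to copy fixes back to your local charm):

```shell
juju ssh test/0                       # log in to the failing unit
# as root on the unit:
#   cd /var/lib/juju/agents/unit-test-0/charm
#   fix hooks/install, then exit
juju resolved --retry test/0          # re-run the fixed hook
# repeat `juju resolved` if further hooks error, until the queue drains
```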
[19:56] <X-warrior`> yeap
[19:56] <web-brandon> If I include a folder in my charm with files.  Where is it placed? I cannot find it for the life of me
[19:57] <X-warrior`> let me try it again
[20:23] <X-warrior`> marcoceppi, I have to go now, will keep trying later, ty for your help
[20:23] <X-warrior`> have a good one
[20:29] <jcastro> FunnyLookinHat: is there a charm for beansbooks?
[20:32] <marcoceppi> web-brandon: It's placed inside the $CHARM_DIR, typically /var/lib/juju/agents/unit-<service-name>*/charm
[20:34] <web-brandon> marcoceppi: thank you so much.
[20:50] <marcoceppi> web-brandon: each hook is executed at the $CHARM_DIR (and that variable is available to hooks)
[20:51] <marcoceppi> We recommend not hardcoding paths when possible
[20:56] <web-brandon> just ran through a test to spit out the path into juju-log.  I am about to perform some 'cp' actions with that as the base dir.
[20:57] <marcoceppi> web-brandon: if you want to be safe, you can just do $CHARM_DIR as the prefix to the path
[20:57] <web-brandon> i understand. it can change from server to server
[20:57] <marcoceppi> web-brandon: not only server to server, but version to version. However, all hooks will _always_ be executed from the $CHARM_DIR
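web-brandon's cp case with $CHARM_DIR as the prefix (a sketch; the bundled file and destination names are hypothetical):

```shell
#!/bin/sh
# hooks/install (sketch): copy a file shipped inside the charm.
# juju sets $CHARM_DIR for every hook and also starts hooks there,
# but using the variable keeps the path explicit.
set -e
cp "$CHARM_DIR/files/myapp.conf" /etc/myapp.conf   # hypothetical names
```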
[20:58] <web-brandon> marcoceppi: good to know.
[21:00] <web-brandon> gonna make a irc-bot charm
[21:48] <_mup_> Bug #1217591 was filed: Had to manually add access to bootstrap node on 17070 and 37017 to the secgroup <juju:New> <https://launchpad.net/bugs/1217591>