[00:28] is there a static config I can use to set a default for amazon instance sizes? I only want to use tiny instances to start [00:28] MACscr: use bootstrap constraints [00:28] or deploy constraints [00:28] also, just to verify, if I want to have 4 amazon instances in 4 regions, I have to have a controller in each region? [00:30] MACscr: not at this time [00:30] what we call 'provider specific' constraints are being worked on at this time [00:30] there is currently no way to say 'this unit must be in a specific availability zone' [00:31] we know it's a problem [00:31] we're working on fixing it [00:31] I think to have four regions under control at once, you have four separate environments in your environments.yaml -- and none of them know anything about any of the others, right? [00:31] sarnold: regions are easy [00:31] https://bugs.launchpad.net/juju-core/+bug/1160667 [00:31] <_mup_> Bug #1160667: Expose regions and availability zones to users [00:31] availability zones within a region are harder === defunctzombie is now known as defunctzombie_zz [00:33] well to be honest, I don't really need them to be aware of each other as they are just going to be dns servers with a little ping monitoring on them as well (smokeping) [00:34] juju would just be used to deploy ubuntu on them. I guess I could manually install mysql on them [00:35] MACscr: you either use --to to deploy a mysql charm onto them, or write a subordinate charm that installs mysql alongside the dns servers.. [00:38] ok, so you and dave seemed to either have conflicting views or I just got a bit confused. Is having multiple regions easy or not? Basically I need to set up an instance in Asia, Europe and two in the US [00:39] and the two in the US are going to be in two different amazon regions (east coast and west coast) === weblife is now known as web-brandon [00:41] MACscr: I think we both said the same thing from different angles :) [00:44] MACscr: you can do multiple environments and then juju -e asia deploy dns, juju -e europe deploy dns, etc... but they can't do relations between asia and europe [00:46] sarnold: understandable. This might be an option in the future though? cross region relations? [00:47] also, how do I set up multiple regions/environments for a single cloud? comma separated list? [00:48] MACscr: different environment declarations in your environments.yaml [00:48] MACscr: there is talk on eventually having cross environment relations, not sure where on the roadmap that is ATM [00:48] MACscr: you'll need to create a new environment for each region in your environments.yaml [00:49] Just name them uniquely, and make sure each has a unique bucket-name. They can share the same credentials [00:50] so do I just copy and paste the amazon group of configs and then just change the region? I'm not sure how to set up multiple environments for the same provider [01:00] MACscr: yes, we call these cross environment relations [01:01] basically relations between services in different environments [01:01] it's on the roadmap [01:01] I can't give you a solid idea when it will happen [01:01] so not next monday? [01:01] jk
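What that per-region setup looks like in practice, as a minimal sketch assuming the juju-core 1.12-era EC2 provider keys (type, region, control-bucket, admin-secret, access-key, secret-key); the environment names, bucket names and credential values are hypothetical placeholders, and marcoceppi's scrubbed paste below is the real-world version:

    # appended under the existing top-level 'environments:' key of ~/.juju/environments.yaml
    cat >> ~/.juju/environments.yaml <<'EOF'
      us-east:
        type: ec2
        region: us-east-1
        control-bucket: juju-macscr-us-east    # must be unique per environment
        admin-secret: some-long-random-secret
        access-key: YOUR-ACCESS-KEY            # credentials can be shared across entries
        secret-key: YOUR-SECRET-KEY
      us-west:
        type: ec2
        region: us-west-1
        control-bucket: juju-macscr-us-west
        admin-secret: some-long-random-secret
        access-key: YOUR-ACCESS-KEY
        secret-key: YOUR-SECRET-KEY
    EOF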
[01:03] MACscr: here's an example [01:03] of multiple juju environments [01:10] * marcoceppi scrubs his environments yaml for MACscr [01:12] MACscr: http://paste.ubuntu.com/6030850/ [01:12] You just need to change the environment's name (us-east, blah-blah, whatever you want), the region, and the control bucket [01:12] Everything else can be the same [01:14] MACscr: and the region names, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions [01:26] MACscr: i have an environment in my environments.yaml for every ec2 region [01:27] so I can do things like [01:27] juju bootstrap -e ap-southeast-1 [03:08] davecheney: thanks! [03:41] ok, so I know juju-ui can be used for creating relations, etc, but what about managing them all after they are deployed? I'd like a single panel for all systems, vm or physical [03:43] landscape pricing is just too high imho [05:12] what are the parameters to create a micro instance? sorry to ask this again [05:12] :/ [05:14] x-warrior: for the ec2 provider, you'd use constraints === freeflying is now known as freeflying_away [05:14] * davecheney does a test before recommending anything [05:15] yeap it is constraints [05:15] but I can't remember which ones [05:15] :( [05:15] I asked this earlier today but I couldn't find a channel log :/ [05:17] x-warrior: i'm just doing a demo now [05:17] two secs [05:17] ty [05:19] I tried with cpu-power=0 === freeflying_away is now known as freeflying [05:19] but I get this error: "ERROR juju supercommand.go:235 command failed: cannot start bootstrap instance: cannot set up groups: cannot revoke security group: Source group ID missing. (MissingParameter)" [05:19] x-warrior: ewww [05:19] nice error [05:19] will try to reproduce that one [05:20] still working on your first question [05:20] I think that came from starting a regular bootstrap... deleting s3, deleting the ec2 machine and trying to create it again with constraints... [05:20] maybe it lost the ids from groups and stuff like that to 'revoke' and 'recreate' or something [05:22] x-warrior: hmm, if you are rough with juju it probably won't respect you in the morning [05:24] :( [05:25] removing the juju-amazon groups [05:26] seems to solve the problem [05:26] x-warrior: I think it would be [05:26] juju bootstrap --constraints="arch=i386 mem=640M" [05:26] but I haven't been able to check yet [05:26] having some other problems at the moment [05:26] you can't say type=t1.micro [05:26] because we don't support what we call `provider specific constraints` [05:27] using 'juju -v bootstrap --constraints="cpu-power=0"' [05:27] so you need to describe something that looks like a t1.micro [05:27] created the micro instance [05:27] yup, because there is nothing that is cpu-power=0 [05:27] so the next largest is a t1.micro [05:27] same thing [05:27] we just did it a different way [05:28] sweet
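The two invocations that worked, pulled together as a sketch (pick one; on ec2 both resolve to the same smallest matching type, a t1.micro):

    # nothing real has cpu-power=0, so ec2 rounds up to the smallest
    # instance type that satisfies the constraint: a t1.micro
    juju -v bootstrap --constraints="cpu-power=0"

    # equivalently, describe a t1.micro-shaped machine explicitly
    juju -v bootstrap --constraints="arch=i386 mem=640M"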
[05:28] now I see the instance running on my panel, but when I check juju status it gives me 'no instance running' [05:28] :s [05:29] x-warrior: i can only recommend juju destroy-environment [05:29] because you've probably damaged some invariants that juju was expecting [05:29] any other information I could provide? I'm able to connect to the instance via ssh [05:30] juju status -v [05:30] I can see '15088 ? Ssl 0:00 /var/lib/juju/tools/machine-0/jujud machine --log-file /var/log/juju/machine-0.log --data-dir /var/lib/juju --machine-' on server [05:30] on instance* [05:30] what has probably happened is the control bucket provider-state file does not match the instance id of your bootstrap node [05:31] 2013-08-27 05:30:37 INFO juju ec2.go:128 environs/ec2: opening environment "amazon", 2013-08-27 05:30:43 ERROR juju supercommand.go:235 command failed: no instances found [05:31] pop open the s3 console and get the contents of the provider-state file from your control bucket [05:31] i suspect it is missing or empty [05:32] should I paste it to pastebin? [05:32] http://pastebin.com/NvTmTBDi [05:32] right, so does that instance number match the machine that is running? [05:33] yes it does [05:39] rebooting the instance and checking juju status again gives me the same result [05:41] x-warrior: can you paste the output of juju status -v [05:41] i suspect it will error very early [05:42] http://pastebin.com/4fDUyAmA [05:42] that is it [05:42] too short I guess [05:44] x-warrior: so juju looks in the control bucket, gets the instance id of the machine [05:44] converts it to an ip address [05:44] uses that ip to talk to mongodb running on the bootstrap node [05:45] for whatever reason that instance id, or the yaml, is invalid [05:45] so that is all she wrote [05:45] x-warrior: two secs, checking something [05:45] I'm not sure that I'm following, but ok [05:45] :D [05:46] I'm trying to start without the constraint option, to check if it is something related to the micro instance [05:46] or something [05:47] x-warrior: short version is [05:47] that environment is broken [05:47] you will probably have to delete it via the aws console and start again [05:48] deleting the ec2 instance, security groups and s3 is enough? [05:48] or does it write to some other place? [05:48] ok, a controller is needed for each region/environment, right? If so, is there any type of panel to keep track of everything as a whole? [05:48] delete the control bucket [05:48] and the instances you have lying around [05:49] MACscr: you need a bootstrap node per environment [05:49] and an environment can only cover one provider [05:49] so in effect you need one bootstrap node per ec2 region [05:49] what do you mean by control bucket? bootstrap instance? [05:50] the control bucket is listed in your ~/.juju/environments.yaml [05:50] it is where juju records persistent state [05:50] the bootstrap instance is that machine that you have left running [05:50] it is the machine that juju spawns to host the mongodb [05:50] davecheney: right, a bootstrap, makes sense. So while juju helps get things deployed and related, is there not a tool for managing things after the fact? [05:51] MACscr: you'd have to define after the fact [05:51] we have the gui where you can view your juju environment [05:51] there is juju-gui [05:51] and commands like add-unit/remove-unit help you scale up and scale down the number of units of a particular service [05:52] but juju does not compete in the nagios/zabbix/zenoss space as a process monitoring tool [05:52] and you can deploy more than one service to the same machine now [05:52] x-warrior: --to should be used with care [05:52] really [05:52] it takes all the safety guards off [05:52] ok, but can juju-gui work with more than one environment? it's restricted to a single one just like how a bootstrap is needed for each one. Correct? [05:53] MACscr: yes, correct [05:53] each environment is separate and unrelated [05:53] the juju client can switch between environments with the -e flag, or the juju switch command
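A short sketch of those two switching styles; the environment names and the dns charm are hypothetical:

    # address one environment explicitly per command
    juju -e us-east status
    juju -e eu-west deploy dns    # services in different environments cannot be related

    # or change the default environment for everything that follows
    juju switch us-east
    juju status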
[05:53] but the juju-gui, being a charm itself, is deployed into an environment [05:53] right, so something is needed to manage everything [05:53] so only controls that environment [05:53] not just a single environment at a time [05:54] MACscr: we have no product for that at this time [05:54] s3 cleaned up, instance terminated, .juju deleted... starting from scratch now [05:54] not when it comes to relational stuff, but general instance management, etc [05:54] MACscr: juju talks about services and units of services [05:54] davecheney: yeah, seems like landscape is the only option of yours and it's ridiculously priced [05:54] the fact that it creates machines to host them is sort of co-incidental [05:55] MACscr: i cannot comment on the price, but you have certainly interpreted our marketing message correctly [05:55] hmm, well foreman with puppet can manage everything, but I'm trying not to have a bunch of overlap with tools [05:56] and I really haven't figured out puppet yet =P [05:57] after bootstrapping should I wait a while, for zookeeper and stuff like that to go 'up'? [05:57] or get installed or something? [05:58] x-warrior: we don't use zookeeper anymore [05:58] we use mongodb [05:58] but the result is the same [05:58] ah sweet [05:59] juju status will block until the bootstrap node is up and running [05:59] you can see that with [05:59] juju status -v [05:59] in fact, you should pass -v to everything that you do with juju [05:59] otherwise you'll have to rerun the command with -v anyway [05:59] yeap I learned that [05:59] x) [05:59] we are working on our logging [05:59] it needs fixing [06:00] we're not done yet [06:00] no problem :D [06:00] http://pastebin.com/N9eFC8ur [06:00] that is all the output from a 'fresh' start... (deleted s3, groups, instance, .juju files) [06:00] x-warrior: something is very wrong with your setup [06:00] which juju? [06:01] it smells like you have both 0.7 and 1.12 installed [06:01] 1.12.0-raring-amd64 [06:01] ok [06:01] at least juju version gives me that [06:01] and this is the first time I've installed juju (like 1 hour ago) [06:01] hmm, i'm a bit stumped [06:02] (on this computer ofc, I was trying on a mac before, but I had the same issue... so I thought it was a mac os related problem... then I moved to ubuntu...) [06:02] can you confirm that i-2a4ed636 is running in the aws console [06:03] yes I can see it [06:04] I can connect to it as well [06:04] i cannot explain why this is not working for you [06:04] the logic is [06:04] get the instance id from the provider-state file in the control bucket [06:04] look up the ip that the instance points to [06:04] is there a -vv option or something? [06:04] then connect to mongodb on that ip [06:05] which gets more verbose? [06:05] x-warrior: there is --debug [06:05] but I don't think it will make it much more verbose [06:05] and it is failing at the first step [06:05] it's rejecting your provider-state file in the control bucket [06:05] i do not know why [06:05] i have not seen this failure mode before
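That lookup can be reproduced by hand. A sketch, assuming s3cmd is configured with the same AWS credentials; 'juju-abc123' stands in for the control-bucket name from environments.yaml, and the exact file layout is recalled from this era of juju-core, so verify it against your own bucket:

    # fetch the provider-state file juju consults first
    s3cmd get s3://juju-abc123/provider-state -

    # expected: a small YAML document naming the bootstrap instance, roughly
    #   state-instances:
    #   - i-2a4ed636
    # juju resolves that id to an ip, then talks to mongodb there (port 37017)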
[06:06] x-warrior: just for shits and giggles [06:06] 17070/37017 are the correct ports? [06:06] could you change the region: key in your environments.yaml to another region [06:06] yes I can try that [06:06] (after deleting the current environment of course) [06:06] 37017 is the correct port [06:06] but you don't get that far [06:07] uhmm [06:07] so that seems very weird I guess [06:07] x) [06:08] i have not seen that failure mode before [06:09] deploying to another region === tasdomas_afk is now known as tasdomas [06:19] davecheney, changing the region 'fixes' it a little, it goes a little bit further [06:19] fix? [06:19] davecheney: do you know this error? error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST [06:20] kurt_: i'm not a maas expert [06:20] i mainly do the public clouds [06:20] let me check [06:21] googling it now... [06:21] rough guess: there is a permission problem creating or reading your control bucket [06:22] I believe it is this: https://bugs.launchpad.net/maas/+bug/1204507 [06:22] <_mup_> Bug #1204507: MAAS rejects empty files [06:22] http://pastebin.com/MiXJ4y9Z [06:22] I think jcastro warned me about this this morning [06:22] well it is not a 'bug fix' but it is going a little further... I have no idea what is different besides the region... [06:23] but some of the other folks thought it was fixed [06:23] I was using the sa-east-1 region which is the São Paulo, BR region... maybe some inconsistency between regions? :S [06:24] kurt_: it's a known bug in maas [06:24] if you switch your maas install to the daily build [06:24] it is fixed there [06:24] i do not have a timeframe when the fix will be available in general [06:25] ok, was just chatting with bigjools too [06:25] lol [06:25] x-warrior: yup, that is normal operation for ec2 [06:25] it takes 3-5 mins for each instance to start up [06:26] once it is ready status will succeed, that is why it is retrying [06:26] yeap [06:26] now it listed the bootstrap machine [06:27] should the destroy-environment option [06:27] destroy the instance and do a correct cleanup? [06:27] x-warrior: which region did not work [06:27] and which region did work [06:27] destroy-environment will remove all the machines and the control bucket [06:27] it might leave the security groups around [06:27] that is fine [06:27] ok [06:28] they are not expected to be deleted and can cope with being reused [06:28] so, now I'm using the default region option [06:28] without setting it in the environments.yaml file [06:28] and using region: sa-east-1 [06:28] it does not work [06:28] ok, so there is something wrong with the sao paulo region atm [06:28] it happens [06:29] ap-southeast-2 was broken for several months for me [06:29] x-warrior: if you would care to, you should log a bug about this on juju-core [06:29] although ec2 will deny it [06:29] each region is subtly different [06:29] where should I log it? [06:29] launchpad? [06:30] launchpad.net/juju-core/ [06:30] ok, I will log that later today [06:30] :D [06:32] btw I will keep joining this channel [06:32] if you guys need more help to trace it [06:32] I will be glad to help you [06:32] x-warrior: thanks for the offer [06:32] pointing the finger at sa-east-1 is good enough for now [06:39] davecheney, ok, I will do that when I wake up later... need to get some sleep now, almost 4am here [06:39] thanks for all the help [06:39] :D [06:39] have a good one [06:40] x-warrior: ok, thanks [06:40] ttys
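For reference, the retry that worked above as one sequence; the replacement region is arbitrary:

    # start clean, then point the same environment at a different region
    juju destroy-environment        # removes the machines and the control bucket
    # edit ~/.juju/environments.yaml:  region: sa-east-1  ->  region: us-east-1
    juju -v bootstrap --constraints="cpu-power=0"
    juju status -v                  # retries until the bootstrap node answers (3-5 mins on ec2)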
[08:57] stub, hello, can I ask you how to use the persistent volume support of the postgres charm? That's how I changed the config http://paste.ubuntu.com/6030130/ [08:58] the volume exists already, of course [08:59] vds: The bit of the charm I'm not familiar with :) I can try, or invoke our devops if needed. [08:59] stub, who's the one to blame? :) [09:00] vds: You are modifying config.yaml, instead of passing configuration parameters to the charm? [09:00] stub, yes [09:04] vds: I think you are supposed to use a config file looking like http://paste.ubuntu.com/5751886/, and then do 'juju deploy --config=myconfig.yaml cs:postgresql' [09:07] hi vds, what's the issue? [09:07] stub, thanks I'll try [09:08] gnuoy, when I try to deploy postgresql changing the config this way http://paste.ubuntu.com/6030130/ [09:08] gnuoy, I get this error http://paste.ubuntu.com/6030125/ [09:09] er, you're changing the config directly in the charm rather than passing it as an option to the charm? [09:09] looks like the version param is missing [09:09] ? [09:09] HOOK KeyError: 'version' [09:20] mthaddon, is that bad? [09:21] vds: yes, you shouldn't change the charm itself before deploying [09:23] mthaddon, ok, thanks.
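A sketch of the recommended shape of such a config file; the option names and values here are hypothetical, and the authoritative option names live in the postgresql charm's own config.yaml:

    # the top-level key must match the name of the service being deployed
    cat > myconfig.yaml <<'EOF'
    postgresql:
      version: "9.1"                          # option names recalled from that era's
      volume-map: "{postgresql/0: /dev/vdb}"  # charm; verify against its config.yaml
    EOF
    juju deploy --config=myconfig.yaml cs:postgresql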
[11:12] hello, spanish? [11:12] spanish? [11:17] what is this for? [11:18] Anybody have thoughts on why an experimental 12.04 image I'm creating with juju has 'agent-state: down' after restart? [11:19] Found this old askubuntu question - http://askubuntu.com/questions/218645/juju-instances-in-agent-state-down-after-turning-them-off-and-back-on-on-ec2 [11:20] but it's not relevant anymore (there's no juju-machine-agent.conf file) === natefinch is now known as natefinch-afk [14:21] jcastro: I can't find any docs on the upgrade-charm hook any more - is that expected? [14:23] mthaddon: no, not expected. Let me see if I can find them. If not, what's your question? I'd be happy to answer it [14:25] marcoceppi: no question besides wondering where the docs were - thx [14:25] * marcoceppi makes notes about having documentation for all hooks in the author docs [14:36] marcoceppi: can you add it to the doc sprint spreadsheet? [14:36] so we don't forget? [14:36] evilnickveitch: ^^^ [14:36] jcastro: already did that [14:36] <3 [14:37] E> [14:37] what the heck is that? [14:37] heart in a box? [14:37] that's 'marco' [14:37] unhuh [14:37] we just smile and move on [14:37] :P [14:53] http://insights.ubuntu.com/news/juju-charm-championship-expands-with-more-categories-more-prizes/ [14:53] help me spread the word folks! [14:54] hmmm... getting intermittent 503's on manage.jujucharms.com again today === freeflying is now known as freeflying_away [14:59] 5 minute warning on the first UDS session [15:00] http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/ [15:00] we're starting with a charm policy review [15:01] jcastro: juju 1.12 did NOT work [15:01] ah, where did it fall over? [15:02] jcastro: I ran into this bug: http://pastebin.ubuntu.com/6032955/ [15:02] error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST [15:02] on bootstrap [15:03] hrmph [15:03] and, after talking with bigjools, there are no near term plans to put the fix into maas in quantal [15:04] the fix has already made its way to precise and raring (8/15) [15:04] marcoceppi, I live here :) [15:04] of course you do, just broadcasting for those who don't already live around here [15:06] jcastro: for what I am trying to get done, do you suggest I start over on precise? Raring is not LTS, right? [15:06] mojo706: anything juju goes here, even "off-topic" juju, whatever that might be [15:07] thanks [15:09] Charm Policy review ongoing: http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/ [15:10] marcoceppi, have you and jcastro already started working towards the netflix cloud prize? or is it just planning at the moment? [15:10] mattyw: I'm not sure, I think m_3 was spearheading that, IIRC [15:39] jcastro: would raring be a more viable option as long as my maas nodes are precise? [15:39] I think going precise all the way is the way to go personally [15:39] sorry I am unresponsive, we're doing UDS today [15:40] no worries [15:40] another acronym I don't know lol [15:40] doesn't matter [15:40] OK, I can try that. [15:43] kurt_: Ubuntu Developer Summit, http://summit.ubuntu.com/ [15:43] marcoceppi: ah thanks! awesome! [15:43] kurt_: actually, I think this gives more information http://uds.ubuntu.com/ [15:46] kurt_: you might want to sit in on the openstack ones! [15:46] I was just looking for those [15:47] mattyw: re netflix cloud-prize... how can I help? [15:47] m_3, just wondering if there's a way I can help with it? [15:47] mattyw: I'm doing some netflixoss charms, but not for the cloud-prize [15:47] mattyw: I'm disqualified for the netflix cloud-prize proper [15:48] m_3, how come? [15:48] davecheney, just filed that bug report :D === x-warrior is now known as Guest28629 [15:49] mattyw: there're reciprocal prizes/judging between canonical and netflix [15:49] so canonical employees are excluded from both prizes === Guest28629 is now known as X-warrior` [15:50] mattyw: but there's lots to do... I'm getting recipes-rss working atm [15:50] lots of subs to be created [15:50] and I'm throwing around ideas about gradle/groovy hook impls === tasdomas is now known as tasdomas_afk [16:03] Starting in about 2 minutes, http://summit.ubuntu.com/uds-1308/meeting/21896/servercloud-s-eco-messaging === defunctzombie_zz is now known as defunctzombie [16:47] hey sinzui [16:47] http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/ [16:48] wanna attend or send someone so we can talk charm review queue stuff? [16:49] silly question: is local provider support all of the lxc stuff? [16:50] or is it *any* method in which juju is getting deployed locally? [16:50] yeah [16:50] when we say local provider we mean LXC support [16:50] currently. :p [16:50] ok [16:51] that appears to be a hot topic [16:51] Checking my juju instance through Amazon's AWS console? | http://askubuntu.com/q/337987 [16:51] hazmat: i think we need a new release of juju-deployer, and then to ping jamespage to upload to saucy. the one currently in saucy is missing some important fixes.
[16:53] sidnei, hazmat: let me know when and what [16:57] jamespage: when you consolidate services onto fewer nodes (juju --to), do you have a suggestion on which charms/services stack best together, or some kind of dev blueprint you guys use? [16:59] jcastro, thank you for the reminder. I thought today was the 26th. [16:59] it's the 27th! [17:00] your name is Curtis and we have a session today. [17:00] :) [17:00] I was looking at this, but I don't believe this follows the tenant client you were suggesting I research https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst === weblife is now known as web-brandon [17:13] sidnei, it is? i thought it got cut from the branch directly [17:14] hazmat: it's 0.2.1 :/ [17:17] sidnei, it's possible we have a packaging version divergence against the same source. [17:17] jamespage, so latest rev 114 / version 0.2.3 (just incremented to be sure) is stable [17:17] hazmat: it's missing the fix from r110 at least [17:18] jamespage, if you could upload that we should be good for saucy. there are other changes inbound but they can land in the ppa for now. [17:22] hazmat: could you put that version in the stable ppa as well? [17:56] arosales: ok, details updated for the next session [17:56] so we should be able to just start on time this time, heh [17:57] jcastro, sorry about the conflicts last time [17:57] jcastro, I am leaving getting the session started in your capable hands :-) [17:57] no worries, first day is always rough [17:57] that seems to have been our only problem today, I'll take it! [17:57] jcastro, for sure [18:00] hazmat, I'm cutting from the tarballs on pypi [18:00] jcastro: link? [18:01] https://plus.google.com/hangouts/_/db564284829c94ffbbbd54b843fc04d071554fb5?authuser=0&hl=en [18:01] http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/ [18:04] jamespage, aha, thanks [18:04] jamespage, updated on pypi [18:19] hazmat, sidnei: uploaded to saucy [18:20] hazmat, is that compatible with juju-core 1.12.0? [18:20] jamespage, yes [18:21] kurt_, it's probably easier to say what won't go together right now [18:21] nova-compute, quantum-gateway, nova-cloud-controller will all conflict with each other in config files [18:21] likewise ceph, cinder, glance and nova-compute (around /etc/ceph) === natefinch-afk is now known as natefinch [18:47] jamespage: thanks. do you use a two or three node deployment for testing? I would be curious to see what people typically cluster together on the same node [18:48] kurt_, if you just want compute (i.e. no cinder) then you can get away with three nodes [18:48] jcastro: is it normal I would have to "sync-tools" after a fresh install of juju 1.12? [18:48] kurt_, right now it's tricky and kinda unsupported because of the conflicts in the filesystem [18:48] kurt_, juju container support will help with that [18:48] a charm assuming it has control over the filesystem is not unreasonable [18:49] juju container support = lxc = local support you guys were just talking about? [18:49] kurt_, kinda [18:49] lxc is used in the local provider right now [18:49] ok [18:49] but a feature is being worked on to allow you to add LXC machines in other providers [18:50] so you can slice up a server using LXC for deploying servers/charms into [18:50] right, but just forcing with --to is going to cause problems? [18:50] that was the path I was going down [18:52] I wouldn't expect --to to work with every possible combination of charms [18:52] but _many_ combinations might work fine
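A sketch of that colocation; the charm names and machine number are hypothetical, and the combinations called out above (nova-compute / quantum-gateway / nova-cloud-controller, plus anything sharing /etc/ceph) are the ones to keep apart:

    # deploy one service normally; suppose its unit lands on machine 1
    juju deploy mysql
    # force a second service onto the same machine; --to removes the safety
    # guards, so only colocate charms that don't fight over the filesystem
    juju deploy --to 1 wordpress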
[18:53] sarnold: right. If someone could share their working blueprint for a successful deployment, that would be awesome [18:54] whether it's 2, 3 or more nodes - I am just wondering what works [18:54] juju 1.12 -> YARGH! LOL http://pastebin.ubuntu.com/6033776/ [18:55] lol [18:56] * kurt_ beating fists, stomping feet, and rolling eyes [18:56] kurt_: destroying the environment removes the bucket :\ [18:56] davecheney: ^ Might want to change that. [18:56] marcoceppi: does syncing tools bootstrap too? [18:56] that doesn't seem logical [18:57] So, juju destroy-environment; juju sync-tools; juju bootstrap [18:57] marcoceppi: dunno. I could see wanting the billing to end when the environment is destroyed... [18:57] kurt_: you need to sync-tools prior to bootstrap but after destroy [18:58] marcoceppi: ok, the sync-tools is new for 1.12 for me [18:58] never had to do that before [19:00] marcoceppi: why does it say this then? "error: environment is already bootstrapped" [19:00] * kurt_ confused [19:02] kurt_: because when you run juju bootstrap it creates a file in the bucket that says "this is bootstrapped", even if an instance doesn't launch. It's to prevent two bootstraps from happening if the cloud provider takes a long time to start up the bootstrap node [19:02] So there's a bug in there, in that if no tools are matched, or if there is a general bootstrap error, it should clean up that file [19:04] marcoceppi: but the tools are there after download, is the error being generated prior to the tools downloading? If I destroy my environment, there is no bootstrapped node, so that error seems misleading [19:04] kurt_: there's a juju bootstrap --upload-tools option, however people keep telling me that it's more for development versions of Juju and that sync-tools is the right way to go. [19:04] kurt_: I'm not sure of the nuances with the sync-tools, I've not had the pleasure of using it much [19:05] hazmat: sync-tools and when to use it? ^ [19:05] marcoceppi: I don't think one has a choice when bootstrapping. you must have the tools [19:06] kurt_: Yes, most public clouds have the tools already sync'd somewhere, so it's effortless [19:07] for private clouds and maas, I'm not sure of the proper procedure [19:07] I'm pretty sure you want to follow: juju destroy-environment; juju sync-tools; juju bootstrap [19:07] marcoceppi: I would hope it would function in nearly the same way. :) [19:07] I think the sync-tools is a pre-step to the process [19:07] that's exactly what I did and that error showed up [19:07] the bootstrap error? [19:07] yeah [19:08] kurt_: that's not right [19:08] what happens if you destroy-environment then bootstrap again? [19:08] and in looking at my maas, I have no bootstrapped node [19:08] same thing [19:08] let me try once more - I will paste to pastebin for you [19:09] so there's something else going on [19:09] kurt_: use -v for both commands [19:10] doing that... [19:12] marcoceppi: ah… heh, in reviewing what I cut and pasted above, there was a slight error [19:12] marcoceppi: notice: juju bootstrapdestroy-environment; juju sync-tools; juju bootstrap [19:13] sarnold: I understand the wanting to kill the billing, possibly a juju destroy-environment --preserve-tools would resolve that [19:13] should be: juju destroy-environment; juju sync-tools; juju bootstrap <- My bad for not picking that up [19:13] kurt_: np, give that a go [19:14] working :)
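The sequence that finally worked, spelled out as a sketch for a private cloud or maas environment where no tools have been published yet:

    # tear down any half-bootstrapped state (instances plus the control bucket)
    juju destroy-environment
    # copy the released tools into the environment's storage;
    # public clouds usually have them published already
    juju sync-tools
    # now bootstrap, with -v for useful output
    juju -v bootstrap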
[19:14] hazmat: marcoceppi, when you're in a private cloud, and don't have a compile env, ie just want to use juju [19:14] marcoceppi, it will copy the latest release from the public ec2 bucket into the private cloud's object storage (for the env/user) [19:14] hazmat: so it's really for those who don't have go-land installed and don't want to compile the tools themselves? IE the majority of users? [19:15] marcoceppi, yup, the majority of users in a private cloud.. public clouds should already have tools installed [19:15] hazmat: ack, gotchya [19:15] We'll need to update our docs to let people know about sync-tools and maas/private clouds [19:15] there's another critical/high bug for private cloud usage, re allowing invalid ssl certs, that causes issues for juju-core [19:16] atm, it requires updating the client's os level certs to accept the private cloud ssl cert ca as trusted. [19:17] bug 1202163 fwiw [19:17] <_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs [19:17] * marcoceppi +1s [19:25] hazmat: yes I saw something about a cert error [19:25] kurt@maas-cntrl:~$ juju status [19:25] error: no CA certificate in environment configuration [19:25] that's a little different [19:26] rebooting the node, destroying, and restarting seems to have fixed that anyways [19:26] juju also internally uses ca certs to secure communications with mongodb and the api server [19:26] those certs are kept in the JUJU_HOME (default ~/.juju) [19:26] the certs referenced in the bug are the underlying iaas provider ssl certs [19:26] marcoceppi: kirkland has a bunch of bugs that your sentry interface autodocumenter should fix [19:26] where can he file bugs? [19:27] ah ok - but that's mostly for external provider scenarios, right? [19:27] jcastro: what kind of bugs? [19:27] kurt_, it also applies to openstack using ssl [19:27] in a private cloud [19:27] lp:amulet should suffice [19:27] jcastro: ^ [19:27] hazmat: ok, thnx [19:27] marcoceppi: okay, thanks [19:28] so I'm creating a charm, and now I used deploy on it, but some stuff went wrong and I'm fixing it, how can I remove this one to try the new one? or is it possible to update? [19:29] X-warrior`: you can either use juju upgrade-charm to upgrade the charm in place, you can deploy the charm again under a different alias (IE: `juju deploy --upgrade --repository /path/to/charm/repo local:charm-name now-your-alias`), or you can destroy the environment and re-bootstrap [19:30] marcoceppi: thanks! https://bugs.launchpad.net/amulet/+bug/1217540 [19:30] <_mup_> Bug #1217540: Every interface defined by a charm should be documented with examples [19:31] kirkland: so I can do the first half of that, the documentation. The examples I might defer to another bug on another project. I'll let you know. [19:31] well, I can attempt to do the first part* [19:32] jcastro: marcoceppi: okay, and now I'd like to file a bug, requesting that the squid-reverseproxy charm add support for https_port -- where do I file that? against launchpad.net/charms? [19:32] yeah [19:32] that would be against the charm itself
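The local-charm iteration loop just described, in one place; the charm name, series and repository path are hypothetical:

    # first deploy from a local repository (~/charms/precise/mycharm)
    juju deploy --repository ~/charms local:precise/mycharm

    # after fixing the charm locally, upgrade the deployed copy in place...
    juju upgrade-charm --repository ~/charms mycharm

    # ...or start over under a new service name, since a destroyed
    # service's name cannot be reused within the environment
    juju destroy-service mycharm
    juju deploy --upgrade --repository ~/charms local:precise/mycharm mycharm2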
[19:34] If I include a folder with files in my charm, where is it placed on the service instance server? [19:34] marcoceppi, ty, does upgrade-charm update an already deployed instance? [19:35] or do I need to upgrade the charm and then call deploy with the --upgrade flag [19:35] Or is it only on the juju bootstrap instance [19:36] X-warrior`: So, upgrade-charm will update the charm contents, but that's it. If you want it to run hooks again (like run hooks/install or hooks/config-changed) you'll need to create a new hook in hooks/ called upgrade-charm and put that logic in there, for example: http://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/hooks/upgrade-charm [19:36] X-warrior`: nope, those are two different commands [19:36] one is just an upgrade of the charm, the other will deploy a fresh copy of the charm under a new service name, so it will follow the typical install and deploy as if it was freshly deployed [19:36] ok [19:37] and what about removing already deployed services? [19:37] X-warrior`: you can run juju destroy-service to remove deployed services, but you can't, IIRC, deploy a service again with the same name in an environment [19:38] So you'll need to use the `juju deploy --upgrade --repository ... local:charm` syntax [19:39] X-warrior`: So, in the event of mysql, if you've already run `juju deploy --repository ... local:mysql` and then you want to deploy fixes, you could run `juju upgrade-charm --repository ... mysql` OR, if you wanted to start fresh, `juju destroy-service mysql` then run `juju deploy --upgrade --repository ... local:mysql db` - that'll deploy mysql under the alias of "db" so you don't have it deployed again as "mysql" [19:40] yeap, but if I would like to deploy fixes, my charm must have an upgrade-charm [19:40] hook [19:40] right? [19:41] X-warrior`: It's not required, using upgrade-charm without an upgrade-charm hook works, a new version of the charm will be deployed [19:41] However, that's all juju will do, is unpack the new version. It won't run any other hooks [19:42] oh I get it [19:42] X-warrior`: So, you could write an upgrade-charm hook right now, and then run upgrade-charm. Juju will unpack the new version then execute the new hook [19:42] should the destroy-service command destroy the instance as well? [19:42] But all hooks are considered "optional" for juju, if it doesn't exist juju just skips it and moves on [19:42] yeap [19:42] X-warrior`: no, instances will remain. It's juju's way of protecting data. You can remove them with `juju terminate-machine ` if you're done with them [19:43] ok [19:43] let me try [19:43] cool [19:46] ERROR juju supercommand.go:235 command failed: no machines were destroyed: machine 1 has unit "test/0" assigned [19:47] X-warrior`: is test/0 in an error state? [19:47] X-warrior`: it probably says something like life: dying and agent-state is in error? [19:48] I guess test/0 is in an error state since the install hook failed and in the log I can see "awaiting error resolution for "install" hook" [19:49] X-warrior`: so, when a charm is in an error state it stops all other events and leaves them in an events queue [19:49] In this case, config-changed and a bunch of other hooks are all queued up, with the last event being the service destruction. [19:49] ok [19:49] X-warrior`: so you'll need to "resolved" the errors for the charm before it can get to those other events [19:50] X-warrior`: `juju resolved test/0` should suffice [19:50] X-warrior`: you may need to run that command multiple times if it hits any other errors [19:50] ok [19:50] X-warrior`: Also, for future reference, you can have juju retry a hook, using `juju resolved --retry test/0`
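A sketch of working that queue down, using the unit and machine names from the error above:

    # mark the failed install hook resolved without re-running it...
    juju resolved test/0
    # ...or re-run the failed hook after fixing it on the unit
    juju resolved --retry test/0

    # once the unit leaves the error state the queued destruction proceeds,
    # and the now-empty machine can be reclaimed
    juju destroy-service test
    juju terminate-machine 1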
[19:51] what does the 0 stand for? install? [19:51] X-warrior`: that's the unit number. So each service you deploy gets at least one unit. If you wanted to scale out you could run juju add-unit test and you'd get test/0 and test/1 [19:51] if you notice in juju status they're listed under the "units" heading for the service [19:52] so if I send the --retry flag it will rerun the failed hook [19:53] so let's say in my case I had a problem with the install hook, so it gets locked [19:53] if I use upgrade-charm, it will get enqueued [19:53] and if I use the --retry it will fail again [19:53] X-warrior`: yes, so you could, for instance, juju ssh test/0 (to ssh into the node), switch to the root user, go to /var/lib/juju/agents/unit-test-0/charm/, edit hooks/install, fix whatever the problem was, log out and then run juju resolved --retry test/0 to try again [19:53] and then I need to manually update the hook on the server [19:54] Right, so at this point you're editing things on the server, it's just one possible workflow for writing charms. It's dangerous, because if you don't copy the fix to your local charm and destroy the service, you lose your changes [19:54] yeap [19:54] the alternative is to destroy the service, fix it locally, redeploy, or to destroy-environment, fix it locally, re-bootstrap, then deploy [19:55] It's up to whatever way works best for you as an author [19:55] each has their pros and cons [19:56] yeap [19:56] If I include a folder in my charm with files, where is it placed? I cannot find it for the life of me [19:57] let me try it again [20:23] marcoceppi, I have to go now, will keep trying later, ty for your help [20:23] have a good one [20:29] FunnyLookinHat: is there a charm for beansbooks? [20:32] web-brandon: It's placed inside the $CHARM_DIR, typically /var/lib/juju/agents/unit-*/charm [20:34] marcoceppi: thank you so much. [20:50] web-brandon: each hook is executed at the $CHARM_DIR (and that variable is available to hooks) [20:51] We recommend not hardcoding paths when possible [20:56] just ran through a test to spit out the path into juju-log. I am about to perform some 'cp' actions with that as the base dir. [20:57] web-brandon: if you want to be safe, you can just use $CHARM_DIR as the prefix to the path [20:57] i understand. it can change from server to server [20:57] web-brandon: not only server to server, but version to version. However, all hooks will _always_ be executed from the $CHARM_DIR [20:58] marcoceppi: good to know. [21:00] gonna make an irc-bot charm [21:48] <_mup_> Bug #1217591 was filed: Had to manually add access to bootstrap node on 17070 and 37017 to the secgroup === freeflying_away is now known as freeflying
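Tying the $CHARM_DIR thread together, a minimal install hook sketch; the files/ folder, target directory and charm layout are hypothetical:

    #!/bin/bash
    # hooks/install - hooks always run with $CHARM_DIR as the working directory
    set -e
    juju-log "deploying bundled files from $CHARM_DIR/files"
    install -d /srv/myapp                     # hypothetical target path
    cp -r "$CHARM_DIR/files/." /srv/myapp/    # copy the folder shipped inside the charm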