[00:41] <lifeless> SpamapS: hazmat: ^ ?
[00:51] <hazmat> lifeless, no
[00:51] <lifeless> thanks
[00:53] <hazmat> lifeless, the charm/service unit does have a concrete lifecycle though which includes an install phase
[00:55] <hazmat> and subordinate charms will allow installation into existing units..
[00:55] <lifeless> hazmat: concrete use case - I want to configure egress firewall rules both on the node and on the network gateway when the node is brought up as a juju node; that is arguably a cloud API, but no clouds I know of (today) support egress fw rules
[00:58] <hazmat> lifeless, so two thoughts.. you can manage a firewall within a charm (although as far as local providers with network ns.. i'm not sure about the interaction).. i believe SpamapS had the memcache charm using fw rules to only allow access to related service units.
[00:59] <lifeless> right, thats the reverse :)
[00:59] <lifeless> this is egress rules, not ingress
[00:59] <lifeless> allowing nodes to only start communications that they are meant to be able to.
[00:59] <hazmat> ah
[00:59] <hazmat> same principle applies though
[01:00] <hazmat> in terms of modeling it with a relation, and manipulating the firewall by related units.
[01:00] <lifeless> right, though for the on-node rules you need zk to supply its own needed ports.
[01:00] <lifeless> unless you model the zk connection as a relation? Seems a bit black-hole bootstrap to me though.
[01:00] <hazmat> yeah.. you have to establish a default for the zk node access for juju to run
[01:00] <hazmat> but you can do that in the charm installation
[01:00] <hazmat> install hook
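A rough sketch of what such an install hook could do follows. This is a hypothetical example, not an actual charm: ZK_HOST/ZK_PORT are placeholders (a real hook would get the ZooKeeper address from its environment), and the rules are only printed here, not applied.

```shell
#!/bin/sh
# Hypothetical install-hook fragment: permit egress to the juju
# ZooKeeper node, then default-deny all other outbound traffic.
# ZK_HOST is a placeholder, not a variable juju provides.
ZK_HOST=${ZK_HOST:-10.0.0.1}
ZK_PORT=2181

# Emit the rules rather than applying them, so the plan can be
# inspected first (pipe through sh as root to apply for real).
rules() {
    echo "iptables -A OUTPUT -p tcp -d $ZK_HOST --dport $ZK_PORT -j ACCEPT"
    echo "iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT"
    echo "iptables -P OUTPUT DROP"
}

rules
# rules | sh   # uncomment to actually install the rules (needs root)
```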
[01:01] <lifeless> hazmat: do we have multiple charms on one node yet ?
[01:01] <lifeless> or will we have to hack all the charms we use ?
[01:01] <hazmat> lifeless, not with isolation, but the subordinate work is testable now from a branch
[01:02] <lifeless> hazmat: well, this is about *nodes* - so wouldn't isolation be a problem ?
[01:02] <hazmat> true, although i'm curious about the interaction of lxc with firewall manipulation, if it properly only applies to the container network ns.
[01:03] <lifeless> if you're creating N networks one per container then yes
[01:04] <lifeless> you get a loopback network, bridged internally to the host + NAT rules for outbound stuff
[01:04] <lifeless> lxc is leaky in lots of other ways though :)
[01:04] <hazmat> indeed
[04:26] <_mup_> txzookeeper/managed-watch-and-ephemeral r48 committed by kapil.foss@gmail.com
[04:26] <_mup_> expiration handler integrated with retry client
[10:03] <d0od_> Yay I am a frankend0od
[12:09] <marcoceppi> d0od: the new OMG charm should be ready today. Which should take care of all the issues we've seen over the past two days.
[12:09] <d0od> \o/
[12:09] <d0od> Is the site on Smalls now?
[12:09] <marcoceppi> no, it's still on larges
[12:09] <d0od> Ouch!
[12:09] <d0od> $_$
[12:09] <marcoceppi> won't be moved to smalls until the new charm is ready
[12:10] <d0od> Okay
[12:10] <d0od> Who do I owe money to so far, you or Jorge?
[12:10] <marcoceppi> Talk to Jorge about that
[12:11] <d0od> Ok
[12:50] <niemeyer> Morning everybody
[12:56] <marcoceppi> d0od: everything still loading "quickly" for you?
[14:12] <imbrandon> heya jcastro /me returns
[14:12] <jcastro> imbrandon, meet marcoceppi
[14:12] <marcoceppi> o/ imbrandon
[14:12]  * imbrandon waves
[14:12] <jcastro> imbrandon, ok so we got him back up but in the charm
[14:12] <jcastro> we want to "do it right"
[14:12] <jcastro> so marco's pushing up his work
[14:12] <imbrandon> right on
[14:13] <jcastro> and we're having some php/caching issues (marco will explain)
[14:13] <imbrandon> ok so marco is ... ? owner ?
[14:13] <jcastro> so we'd figured you can have a look, and then do some ninja
[14:13] <imbrandon> ahh php cache issues are my thing :)
[14:13] <marcoceppi> imbrandon:  just a charmer
[14:13] <imbrandon> ahh cool ok :)
[14:13] <jcastro> no, he's just another everyday hero. :)
[14:13] <jcastro> imbrandon, the idea is to charm up the most slick way to run OMG
[14:13] <imbrandon> <-- is just a php bender
[14:14] <jcastro> and then deploy it
[14:14] <jcastro> and then point the elastic IP to the new stuff
[14:14] <imbrandon> jcastro: point the eip to the elb and have the elb to the charm
[14:14] <imbrandon> right ?
[14:14] <jcastro> right now the site is running fine, but I made a mistake and deployed on xlarges, so the only real rush is my wallet. :)
[14:14] <jcastro> we haven't done an elb yet.
[14:15] <jcastro> in fact, we'll have to do this twice, once on my account to make sure it's sweet.
[14:15] <marcoceppi> imbrandon: we were using APC + mod_php5 but a bug with the php5 cron and session+fuser was causing some weird child process issues. So I moved them to php-cgi and suPHP since that's what I use on my shared servers and know most intimately. However, suPHP and APC don't play nice. For this charm, suPHP is overkill + 1, fastcgi would be fine but that won't play well with APC either. So I just started
[14:15] <marcoceppi>  looking in to php-fpm then you arrived. Tag, you're it :)
[14:15] <imbrandon> downsizing should be ok, we up and down size boxen all day :) i think our "Drops" are at about 350 instances constantly :)
[14:15] <jcastro> and then redeploy to joey's account when we're ready to hand it over to him
[14:15] <jcastro> imbrandon, but the idea is to capture all the tweaks in the charm
[14:16] <imbrandon> jcastro: yup, we call those Drops , play on the drupal name, but they contain all the special sauce and can run on any number of instances
[14:16] <imbrandon> as one system
[14:17] <imbrandon> marcoceppi: definatly php-fpm with nginx , actually 2x nginx instances runing
[14:17] <imbrandon> on the same box
[14:17] <imbrandon> is the way to go
[14:17] <imbrandon> with APC, and memcached behind that, then finally the DB
[14:17] <marcoceppi> 2 nginx services running the same box bound to port 80? or two services, same box, different ports, with a proxy in front?
[14:18] <imbrandon> 2 ports, php-fpm on a unix socket, and then 8080 for nginx that talks to the php, then the nginx on 80 running a microcache AKA varnish killer
[14:18] <marcoceppi> object caching would probably be overkill for this, it's the wp+plugin generation that's killing us, the db is happy as a clam
[14:18] <imbrandon> ahhh that's easy, that's what the first one is for
[14:19] <imbrandon> and yea we can drop the memcache then
[14:19] <imbrandon> no worries there
[14:19] <marcoceppi> *nod*
[14:19] <imbrandon> basically the nginx on 80 will cache things for 800ms, none the wiser BUT
[14:20] <imbrandon> that means php will only be hit 1 request per second MAX
[14:20] <imbrandon> no matter how many visits you get
[14:20] <imbrandon> and no one knows if their content is 800ms old
[14:20] <imbrandon> nice? nice .... :)
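The stack imbrandon sketches out could look something like the following nginx fragment. It is purely illustrative: the cache path, socket path, and document root are assumptions, and the two roles are shown as two server blocks rather than two literal nginx instances, for brevity.

```nginx
# Illustrative microcache sketch; every path/name is a placeholder.
proxy_cache_path /var/cache/nginx/micro levels=1:2 keys_zone=micro:10m max_size=64m;

server {
    # front instance: port 80, very-short-lived "microcache"
    listen 80;
    location / {
        proxy_cache       micro;
        proxy_cache_valid 200 1s;   # chat suggests ~800ms freshness
        proxy_pass        http://127.0.0.1:8080;
    }
}

server {
    # back instance: port 8080, hands PHP to php-fpm over a unix socket
    listen 8080;
    root  /var/www;
    index index.php;
    location ~ \.php$ {
        include       fastcgi_params;
        fastcgi_pass  unix:/var/run/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```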
[14:21] <marcoceppi> imbrandon: I'm in
[14:21] <marcoceppi> the current charm is a few bits behind, it's setup for suPHP at the moment
[14:21] <imbrandon> ok so this is on an 11.10 install? sorry but we don't use juju so i'm a little blind to that aspect
[14:22] <imbrandon> no worries i can wrangle it if you want
[14:22] <imbrandon> setup of a clean nginx + php-fpm 5.3.10 + apc and other extensions is the easy part
[14:22] <imbrandon> :)
[14:23] <imbrandon> btw you want igbinary in addition to apc
[14:23] <imbrandon> for binary serialization of sessions and apc data cache
[14:23] <marcoceppi> imbrandon: charms are just bash scripts
[14:23] <imbrandon> but thats an easy "pecl i igbinary"
[14:23] <marcoceppi> well, these are just bash scripts, charms can be coded in any language
[14:24] <jcastro> (or in any language you want!)
[14:24] <imbrandon> marcoceppi: ahh ok, then i'm down, they are probably very similar to our drops
[14:24] <jcastro> hey can we move to #juju-wordpress for now?
[14:24] <imbrandon> the drop scripts are in python and spin up down instances and control tons of things
[14:24] <imbrandon> sure
[14:24] <jcastro> we can go into more detail there without flooding this channel
[14:24] <jcastro> rock
[14:38] <m_3> ec2 is nice and snappy this time of day
[14:38] <m_3> marcoceppi imbrandon jcastro nice job on omgubuntu!!
[14:40] <marcoceppi> m_3: <3
[14:45] <SpamapS> imbrandon: note that pecl is vulnerable to man-in-the-middle attacks
[14:52] <SpamapS> marcoceppi: that php cron bug is fixed in precise btw
[14:52] <marcoceppi> SpamapS: whew, good
[14:54] <SpamapS> Probably worth SRU to 11.10
[15:12] <marcoceppi> So, I'm trying to retroactively open a port, jumped in to debug hooks, ran open-port 8080/tcp
[15:12] <marcoceppi> getting an error about juju_agent_socket
[15:13] <gmb> SpamapS, Just a quick ping to let you know that I've updated https://code.launchpad.net/~gmb/charm-tools/add-charm-helpers/+merge/96204 with tests; python-shell-toolbox is available in ppa:yellow/ppa, too.
[15:21] <marcoceppi> how would I determine the "client-id" of an instance?
[15:26] <jcastro> SpamapS, hazmat ^
[15:27] <hazmat> marcoceppi, you mean the instance id of the machine? or the zk 'client-id' for an agent?
[15:27] <marcoceppi> not sure, whichever open-port is expecting: No JUJU_CLIENT_ID/--client_id option found
[15:27] <marcoceppi> I imagine it's zk 'client-id'
[15:27] <hazmat> ah.. that's the hook cli api..
[15:28] <hazmat> marcoceppi, so the first window of debug-hooks isn't actually a live environment.. live hook windows will pop in the session as hooks are executed
[15:28] <marcoceppi> ah, and if all the hooks have been executed already?
[15:29] <hazmat> marcoceppi, the hook windows have all the necessary env variables set to use the hook cli api.
[15:30] <hazmat> marcoceppi, it's not live then.. there's no remote api context attached to the socket.. the value for the cli client id is 'constant'
[15:31] <marcoceppi> Maybe I'm attaching to the wrong socket then.
[15:31] <marcoceppi> rather, maybe I'm asking the wrong question.
[15:32] <marcoceppi> Can you run commands like open-port without performing a charm upgrade?
[15:32] <hazmat> marcoceppi, on a live instance outside of a hook, no.
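In other words, open-port is a hook command: it only works where juju has set up the hook environment (JUJU_AGENT_SOCKET and friends). A sketch of how a hook might guard it — maybe_open_port is a made-up helper for illustration, not juju API:

```shell
#!/bin/sh
# Sketch: open-port is only usable where the JUJU_* hook environment
# is set (e.g. inside hooks/start), which is why it fails from a bare
# debug-hooks window, per the discussion above.
maybe_open_port() {
    if command -v open-port >/dev/null 2>&1 && [ -n "$JUJU_AGENT_SOCKET" ]; then
        open-port "$1"    # real hook context: open the port
    else
        echo "skipped $1: not inside a live hook context"
    fi
}

maybe_open_port 8080/tcp
```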
[15:33] <marcoceppi> damnit
[15:33] <marcoceppi> ok
[16:00] <hazmat> marcoceppi,  one off cli script to open a port on service from the client.. http://paste.ubuntu.com/890841/
[16:01] <marcoceppi> Oh! this runs from the machine
[16:01] <hazmat> marcoceppi, yeah. its run from the client not the env
[16:04] <marcoceppi> example usage hazmat ?
[16:05] <hazmat> marcoceppi, it's got cli help.. python jport.py -h .. but an example: python jport.py myblog 8080
[16:05] <hazmat> you can target individual units or the whole service
[16:05] <marcoceppi> that's what I'm doing wrong, thanks
[16:05] <hazmat> or spec udp via  631/udp
[16:07] <marcoceppi> I didn't notice it was missing the hashbang, so ./oneoff wasn't quite working
[16:15] <ninjix> is there an api for driving juju with python?
[16:25] <hazmat> ninjix, its written in python as a library
[16:25] <hazmat> but there is no supported public api from that library
[16:26] <ninjix> hazmat: tnx
[16:38] <jcastro> negronjl, lynxman, jamespage: all normal ~charmers are smoked today, any of you want to do a review of the incoming openerp charm?
[16:38] <negronjl> jcastro: ahh ... I see .. .so we are your Plan B right ?  :D
[16:38] <ninjix> I'm looking for some documentation covering the workflow of juju's provisioning process when working with orchestra server.
[16:38] <negronjl> jcastro:  I can review .... .give me a bug #
[16:40] <jcastro> https://bugs.launchpad.net/charms/+bug/912050
[16:41] <ninjix> Does juju try to completely reprovision instances via cobbler for each bootstrap?
[16:42] <jcastro> https://bugs.launchpad.net/charms/+bug/956259
[16:42] <jcastro> znc is available too if someone wants to snag it
[16:43] <lynxman> jcastro: negronjl jumped the shark before I could ;)
[16:46] <jcastro> lynxman, znc bro!
[16:47] <lynxman> jcastro: no prob hermano ;)
[16:50] <jcastro> also I don't know if someone reviewed negronjl's gearman stuff yet
[16:54] <negronjl> jcastro:  the gearman stuff is not ready to be in the official store yet
[16:54] <jcastro> ah ok
[16:54] <negronjl> jcastro:  Do you have a filter so I can see what needs to be reviewed ( as I get a chance, I can look them over )
[16:55] <jcastro> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
[16:55] <marcoceppi> https://bugs.launchpad.net/charms/+bugs?field.tag=new-charm
[16:55] <marcoceppi> ninja'd
[16:55] <jcastro> you want NEW and fix committed
[16:55] <jcastro> negronjl, flip gearman to incomplete if it's not ready please
[16:57] <negronjl> jcastro:  changed it to in progress.
[16:57] <jcastro> rock
[17:01] <SpamapS> gmb: *awesome*
[17:13] <ninjix> what are the differences between orchestra-juju-available and orchestra-juju-acquired mgmt classes?
[17:16] <jseutter> Question: When I try to deploy a service locally, I get "Invalid value for multicast_port: 34569".  Does anyone know what causes this?
[17:16] <jseutter> I'm fairly certain it was working last week, before a reboot.
[17:19] <SpamapS> ninjix: its all in the name. :) available means juju isn't using it but you want juju to use it if it needs a machine. acquired means juju has taken control.
[17:20] <SpamapS> jseutter: update all your charms (if you did a 'charm getall' just do 'charm getall /path/to/charms')
[17:20] <SpamapS> jseutter: a change landed about 2 weeks ago that is breaking on some older versions of charms.
[17:20] <ninjix> SpamapS: got it. thanks
[17:22] <ninjix> is Juju or Cobbler responsible for the machine start command?
[17:22] <SpamapS> ninjix: juju will tell cobbler to power off/on the machine after acquiring it. If you don't have power control, you'll have to manually reboot it.
[17:23] <ninjix> SpamapS: so it is up Cobbler to have the correct power mgmt script setup
[17:23] <SpamapS> ninjix: the power control stuff in cobbler is a bit of black magic (no juju pun intended.. ;) .. so good luck. :)
[17:23] <ninjix> :)
[17:23] <SpamapS> ninjix: we're using it in our test lab constantly.
[17:24] <ninjix> looks like they suck in bash scripts from /etc/cobbler/power through Cheetah
[17:31] <ninjix> SpamapS: is juju/providers/orchestra the place I should be looking for cobbler control commands?
[17:32] <SpamapS> ninjix: probably
[17:32] <SpamapS> ninjix: __init__.py should have a MachineProvider.start_machine() method that will lead you to where the actual code is
[17:32] <ninjix> SpamapS: ahh... thank you
[17:49] <negronjl> jamespage: ping
[17:49] <jamespage> negronjl, hey
[17:50]  * jamespage reads scrollback
[17:52] <negronjl> jamespage: bbcmicrocomputer tells me that you are interested in merging the tomcat6 and tomcat7 charms
[17:52] <negronjl> jamespage: I can work that
[17:52] <jamespage> negronjl, great - that would be fantastic
[17:53] <jamespage> I last touched tomcat7 - I did some work on config management and managing in multicast environments
[17:53] <negronjl> jamespage: I saw that.
[17:53] <negronjl> jamespage:  I'll add some config options to ask which tomcat to deploy and will default to 7.
[17:54] <jamespage> negronjl, great - that would be nice
[17:54] <negronjl> jamespage: I'll keep you posted
[17:54] <jamespage> I think the only diff I could spot was the format of tomcat-users.xml
[17:55] <negronjl> jamespage: shouldn't be that difficult
[17:55] <jamespage> nah
[17:55] <jamespage> easy peasy
[17:56] <negronjl> jamespage:  cool.  I'll keep you posted
[17:56] <jamespage> ta
[17:56] <jamespage> bbcmicrocomputer, are you looking at the hive charm?
[17:56] <bbcmicrocomputer> jamespage: yeah
[17:57] <bbcmicrocomputer> jamespage: I'm somewhat slow atm, but I'm getting there
[17:57] <jamespage> bbcmicrocomputer, cool - I'll assign the bug to you so its clear
[17:57] <bbcmicrocomputer> jamespage: ok
[17:58] <jamespage> bbcmicrocomputer, bug 803531
[17:58] <_mup_> Bug #803531: Charm Needed: Hive <hot> <Juju Charms Collection:In Progress by robert-ayres> < https://launchpad.net/bugs/803531 >
[17:58] <jamespage> jcastro thinks its hot
[17:58] <bbcmicrocomputer> jamespage: ok, cool
[17:59] <jamespage> negronjl, do we still need a pig charm?  I've added it as a config option to the new hbase and hadoop charms
[17:59] <negronjl> jamespage: no, I don't think we do.
[17:59] <jamespage> negronjl, OK - I'll close of the bug report then
[17:59] <negronjl> jamespage: thx
[18:00] <jcastro> jamespage, anything tagged "hot" is from the original list of "man this would be epic" list.
[18:00] <jamespage> no pressure bbcmicrocomputer!
[18:01] <negronjl> bbcmicrocomputer: no pressure... jcastro will just hound you every day about it :D
[18:01] <jcastro> I am a loud and festive person
[18:01] <bbcmicrocomputer> ha ha awesome
[18:01] <negronjl> jcastro: you must be puertorrican ( loud ) :)
[18:01] <bbcmicrocomputer> I'm just happy to see my name on my first bug
[18:01] <jcastro> oh dude, you want more?
[18:02] <bbcmicrocomputer> jcastro: ah...
[18:02] <negronjl> bbcmicrocomputer: careful with what you ask xD
[18:02] <jcastro> jamespage, since I have you here I have a fun request, how's your jetty? And do we have anything using it that serves pages?
[18:03] <jamespage> jcastro, its OK - some of the hadoop stuff uses it - what are you looking todo?
[18:03] <jamespage> embedded tho so non-obvious
[18:08] <jamespage> jcastro, biab
[18:08] <jcastro> jamespage, I think it'd be cute to have jetty with it's spdy module turned on serving something
[18:08] <jcastro> but to browsers, not internal stuff nothing hits
[18:24] <jcastro> robbiew, any updates on osx client? imbrandon's looking to charm but is on OSX
[18:24] <robbiew> that guy went silent...so I don't see one coming for 12.04
[18:24] <robbiew> we'll have one for 12.10
[18:26] <SpamapS> 12.04/12.10 shouldn't matter to OSX users ;)
[18:26] <SpamapS> They can just grab bzr and install from trunk.
[18:27]  * SpamapS wonders if Cobbler would run on OSX
[18:29] <marcoceppi> So, juju deploy, then juju ssh that machine and I get a permission denied.
[18:30] <marcoceppi> I have ssh keys defined in my environments.yaml, and I can juju ssh to older machines I provisioned
[18:33] <marcoceppi> SpamapS hazmat any opinions :?
[18:37] <hazmat> odd
[18:38] <hazmat> marcoceppi, can you pastebin the console output from one of the newer machines
[18:38] <hazmat> ec2-get-console-output that is
[18:38] <marcoceppi> jcastro: ^^ You've got the AWS account
[18:39] <hazmat> jcastro, can you login into the machines with juju ssh
[18:39] <jcastro> ok that wants a -k
[18:39] <jcastro> not the new ones
[18:39] <marcoceppi> hazmat: neither of us can
[18:40] <marcoceppi> jcastro: weird, I can ssh in to the one you just deployed
[18:42] <_mup_> juju/relation-hook-commands-spec r7 committed by jim.baker@canonical.com
[18:42] <_mup_> Updated to use new relation id format, added better examples, and worked on some clarification
[18:43] <hazmat> jcastro, for one of the problematic instances, getting the output of ec2-get-console-output $instance_id AND ec2-describe-instance-attribute --user-data $instance_id should be helpful
[18:43] <jcastro> ok so I don't know what to put after the -k for those commands
[18:44] <marcoceppi> -K is the aws_secret_key
[18:45] <marcoceppi> -C is the .pem cert
[18:45] <jcastro> ah ok
[18:45] <marcoceppi> but we don't use .pem certs
[18:45] <marcoceppi> so not sure about that
[18:45] <marcoceppi> i-97fc86f3
[18:45] <_mup_> juju/relation-info-command-spec r7 committed by jim.baker@canonical.com
[18:45] <_mup_> Updated to use new relation id format and added back bug reference
[18:45] <marcoceppi> that's a troublesome instance id
[18:48] <jcastro> does juju put pems someplace?
[18:48] <jcastro> I only have my AWS secret key
[18:49] <jcastro> also, we could just redeploy a new environment right?
[18:49] <marcoceppi> not sure, but more information. I just did a destroy-service for the stackmobile charm, got this:
[18:49] <marcoceppi> 2012-03-19 14:48:19,365 ERROR no node
[18:49] <marcoceppi> I wonder if I'm missing something on this computer
[19:00] <SpamapS> jcastro: forget pems :)
[19:01] <SpamapS> jcastro: I put my aws creds in shell files
[19:01] <jcastro> yeah so, I have no idea how to use AWS without juju
[19:01] <SpamapS> awesome :)
[19:01] <jcastro> the concept of using AWS without juju was never an option for me, I am the new new devop. :p
[19:01] <SpamapS> jcastro: access-key should be in EC2_ACCESS_KEY and secret-key in EC2_SECRET_KEY
[19:02] <jcastro> (seriously using it via juju was my first introduction)
[19:02] <jcastro> ok
[19:02] <SpamapS> jcastro: also, use euca-* instead of ec2-*  .. they're written in python instead of java, so they start up about 2-3s faster. :)
[19:04] <jcastro> SpamapS, and for EC2_URL?
[19:05] <SpamapS> jcastro: "https://ec2.amazonaws.com"
[19:05] <jcastro> ok all set
[19:06] <jcastro> ok so now what I need to do to get hazmat what he needs
[19:07] <SpamapS> jcastro: euca-get-console-output i-xxxxxx
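Pulling the setup above together, the shell prep looks roughly like this (the key values are placeholders; use your real AWS credentials):

```shell
#!/bin/sh
# Credentials for the ec2-*/euca-* tools, per the discussion above.
# Both key values are placeholders.
export EC2_ACCESS_KEY="AKIA-your-access-key"
export EC2_SECRET_KEY="your-secret-key"
export EC2_URL="https://ec2.amazonaws.com"

# euca2ools read these from the environment, e.g.:
#   euca-get-console-output i-97fc86f3
echo "EC2 endpoint: $EC2_URL"
```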
[19:08]  * SpamapS goes to lunch
[19:08] <jcastro> hazmat, http://paste.ubuntu.com/891103/
[19:09] <jcastro> hah man, what if it's that apt/ec2 mirror bug
[19:09]  * jcastro assumes it stopped at the Get:
[19:09] <hazmat> jcastro, that's not the whole console output
[19:10] <jcastro> i've run it a few times and that's all I get
[19:13] <marcoceppi> jcastro: I'm about to deploy to a new machine
[19:13] <marcoceppi> if you want to try against that one
[19:13] <SpamapS> jcastro: no it takes a few minutes
[19:13] <SpamapS> jcastro: its very lazy in updating
[19:13] <jseutter> SpamapS: Thanks.  I now get a different error: "Invalid value for force_https: False".  Any ideas?
[19:13] <jcastro> ok
[19:13] <SpamapS> jcastro: though I'd expect it to be up to date within 5 minutes
[19:14] <SpamapS> jseutter: more charms that are out of date. :-/
[19:14] <SpamapS> hazmat: ^^
[19:14] <SpamapS> hazmat: any news on that fix?
[19:14] <SpamapS> hazmat: I saw a merge proposal.. would it help if I review it?
[19:14] <hazmat> SpamapS, definitely
[19:14] <SpamapS> jseutter: getall might not have worked. If you have a recent version of charm-tools, try 'charm update --fix path/to/charms'
[19:14] <SpamapS> hazmat: ok, will do after lunch
[19:16] <jseutter> SpamapS: nod.  Trying update without the --fix as my client must be too old for it
[19:16] <SpamapS> jseutter: should be in the PPA
[19:16] <jseutter> nope, no luck.
[19:16] <SpamapS> hmm actually looks like the PPA build isn't picking it up
[19:17] <jcastro> ok got it hazmat http://paste.ubuntu.com/891125/
[19:18] <SpamapS> jcastro: halted?
[19:19] <SpamapS> juju-machine-agent start/running, process 4432
[19:19] <SpamapS> thats a normal start... hrm
[19:19] <marcoceppi> Could it be something with my config?
[19:19] <marcoceppi> If jcastro launches a service, I can ssh to it
[19:19] <marcoceppi> If I deploy I can't and he can't
[19:20] <SpamapS> diff your environments.yaml
[19:20] <SpamapS> could be something out of sync there
[19:20] <SpamapS> like, perhaps, ssh-authorized-keys ;)
[19:20] <jcastro> do they need to be exactly the same? or just the specific stanza?
[19:20] <SpamapS> I'd recommend them to be exactly the same at least for that one environment
[19:20] <jcastro> hmm, but this was working fine
[19:21] <jcastro> odd
[19:21] <marcoceppi> jcastro: I am on a different machine now
[19:21] <SpamapS> also same juju versions?
[19:21] <marcoceppi> was on laptop, now at work desktop in the office
[19:21] <jcastro> aha!
[19:21] <jcastro> want me to resend it?
[19:21] <marcoceppi> jcastro: I copied it from my laptop
[19:21] <SpamapS> dpkg -l juju
[19:21] <marcoceppi> FUUUUUUU
[19:21] <marcoceppi> FUUUUUUU
[19:23] <SpamapS> http://cdn.memegenerator.net/instances/400x/16609403.jpg
[19:23] <jcastro> hahaha
[19:24] <marcoceppi> I had a slightly older version of juju, testing.
[19:24] <hazmat> jcastro,  can you pastebin  ec2-describe-instance-attribute --user-data $instance_id
[19:25] <hazmat> oh.. client drift
[19:25] <marcoceppi> not sure if that was the problem, but it could be
[19:25] <SpamapS> http://cdn.memegenerator.net/instances/400x/16609550.jpg
[19:25] <marcoceppi> <3
[19:25] <marcoceppi> DIDN'T FIX :(
[19:26] <marcoceppi> jcastro: i-e34a0f87 is the latest instance id with this issue
[19:26] <SpamapS> I'd be interested in seeing the userdata for a working and non-working instance too
[19:26] <jcastro> does ec2-blah look for different EC2 creds than the variables I set?
[19:26]  * SpamapS is about to pass out from starvation
[19:26]  * SpamapS needs food.. BADLY
[19:27]  * SpamapS shot the food
[19:30] <marcoceppi> no changes between stanzas
[19:32] <marcoceppi> dang this is getting really annoying
[19:39] <_mup_> juju/status-changes r482 committed by kapil.thangavelu@canonical.com
[19:39] <_mup_> all unit/machine agent states reported in key 'agent-state'
[19:41] <jcastro> marcoceppi, how about just relaunching a new environment?
[19:41] <marcoceppi> jcastro: we'd have to destroy the current env, which would mean no more omgubuntu
[19:41] <jcastro> wait huh?
[19:42] <marcoceppi> you talking about destroy-environment?
[19:42] <jcastro> no
[19:42] <jcastro> just creating a new one
[19:42] <jcastro> and just "juju -enewthing deploy blah"
[19:42] <jcastro> we can do that right?
[19:42] <marcoceppi> Oh, like we talked about last night?
[19:42] <jcastro> nod
[19:42] <marcoceppi> yeah, we can try that
[19:44] <jcastro> ok so maybe create a new one, but have hazmat and spamaps in there too
[19:44] <_mup_> juju/status-changes r483 committed by kapil.thangavelu@canonical.com
[19:44] <_mup_> all unit/machine agent states reported in key 'agent-state'
[19:46] <hazmat> oi.. a clone env command sounds like fun
[19:46] <marcoceppi> hazmat: that would be sweet
[19:46] <marcoceppi> actually, hazmat, anything bad that might happen if we change the control-bucket and re-bootstrap an already "bootstrapped" aws account?
[19:47] <hazmat> marcoceppi, ever seen ghost busters.. don't cross the streams ;-)
[19:47] <hazmat> marcoceppi, if you change the control-bucket you're changing the identity for all intents and purposes
[19:48] <marcoceppi> so..that's a don't go there? I mean, they crossed the streams and took out the marshmallow man.
[19:48] <marcoceppi> that was a good thing :)
[19:48] <jcastro> Alright, this bucket is TOAST.
[19:48] <hazmat> currently juju does environment identity by env name + control bucket data, changing those effectively orphans an existing environment
[19:48] <marcoceppi> so, what I've done is copied the stanza and made omgthisisreal with a different control bucket
[19:48] <marcoceppi> so the old stanza is still there
[19:49] <hazmat> and switching to an existing one, that's already bootstrapped.. and then bootstrapping again, it should error at you that the env is already bootstrapped.
[19:50] <marcoceppi> darn
[19:50] <marcoceppi> jcastro: We can just launch smalls on my AWS instead
[19:50] <marcoceppi> nbd
[19:50] <jcastro> hazmat, right so my concern is the current setup has the live site
[19:50] <jcastro> indeed
[19:50] <jcastro> marcoceppi, wait one, let me ask about reimbursement
[19:50] <marcoceppi> *shrug* k
[19:51] <jcastro> you've done enough work for the past 3 days, $ is where I draw the line. :)
[19:52] <jcastro> oh hey, would generating a new pair work?
[19:53] <jcastro> like just generating a new pair of access creds for AWS?
[19:55] <_mup_> juju/status-changes r483 committed by kapil.thangavelu@canonical.com
[19:55] <_mup_> hyphenate unit agent error states
[19:57] <jcastro> hazmat, would that work?
[19:58] <hazmat> jcastro, ECONTEXT, you mean giving out an IAM/subaccount off AWS?
[19:58] <hazmat> or you mean getting access to the instances that are running that are not shell accessible?
[19:58] <jcastro> yeah, if I generate a new access and private key to marco
[19:58] <jcastro> and he puts that in his environments.yaml, and then bootstraps
[19:58] <hazmat> jcastro, yeah.. that's viable
[19:58] <jcastro> he can do what he needs without colliding with the existing running thing?
[20:01] <hazmat> jcastro, as long as its a separate env yes
[20:01] <hazmat> time to pick up the kiddo from day care, bbiab
[20:09] <ninjix> SpamapS: can juju ask cobbler to provision more available instances?
[20:10] <ninjix> looking at the cobbler.py, I only see functions that use only what cobbler has provisioned
[20:55] <SpamapS> ninjix: not sure I fully understand the question. Cobbler is for provisioning servers..
[20:55] <SpamapS> ninjix: it can treat VMs like servers.. but.. if you want to create new VM's, thats more koan's territory
[21:00] <ninjix> SpamapS: i see.
[21:01]  * SpamapS reads backscroll with agony
[21:01] <SpamapS> marcoceppi: I think hazmat may have misunderstood your question
[21:01] <SpamapS> jcastro: ^^
[21:02] <SpamapS> If you guys want to use the same AWS account, but with a new environment.. just create a new environment name, with a different control bucket. That is all.
[21:02]  * hazmat back tracks
[21:02] <marcoceppi> sweet
[21:02] <hazmat> oh.. yeah.. totally
[21:03] <hazmat> i thought it was switching a bootstrapped env bucket to another bootstrapped env bucket, thus losing the original in the process
[21:03] <SpamapS> Yeah I know
[21:03] <SpamapS> No they're just making a new env
[21:05] <robbiew> I'd also recommend setting your default environment to the non-production one...so if you forget to use "-e", you don't hose the production deployment
[21:05]  * robbiew learned that the hard way...lol
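robbiew's safeguard, as an environments.yaml sketch — all names, buckets, and secrets below are illustrative placeholders:

```yaml
# Keep `default` pointed at the sandbox so a forgotten -e can never
# touch production. Every value here is a placeholder.
default: sandbox
environments:
  sandbox:
    type: ec2
    control-bucket: juju-sandbox-example-bucket
    admin-secret: sandbox-secret
  production:
    type: ec2
    control-bucket: juju-production-example-bucket
    admin-secret: production-secret
```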
[21:06] <SpamapS> YES
[21:06] <_mup_> juju/juju-status-changes-spec r7 committed by jim.baker@canonical.com
[21:06] <_mup_> Addressed review points, along with an expanded example
[21:06] <SpamapS> in fact, I was thinking the other day that production envs should be named impossible to type things like x848310f0f0assdf9a9sf-dont-delete-me
[21:06] <robbiew> heh
[21:07] <imbrandon> SpamapS: and you have to enter it in reverse so no c/p
[21:08] <SpamapS> and the terminal has to be in Cyrillic to read it
[21:08] <imbrandon> heh, wingdings
[21:08] <imbrandon> or whatever that gawd awful font is named
[21:09] <imbrandon> actually I've been using a font called Anonymous Pro lately for terms and IDE's and such and grown quite attached to it, despite the name its free ( as in beer ) dunno bout oss
[21:10] <imbrandon> but quite nice
[21:10] <imbrandon> created especially for terms and ide code use
[21:13] <SpamapS> imbrandon: the Ubuntu fixed font was also created with that in mind
[21:14] <imbrandon> SpamapS: yea but i could not find a way to get that other than ON ubuntu, and even as a core dev i dont spend much time on a ubuntu desktop if any actually, mostly all via ssh or remote forwarded x sessions rarely
[21:15] <imbrandon> but if there is a ttf avail for me to use on OSX i'd be more than willing to give it a shot
[21:15] <imbrandon> :)
[21:17] <imbrandon> hrm i might look and see if the lic for it will allow me to package it in a brew app and make it easy for any osx users to use if thats the case
[21:18]  * imbrandon assumes its on LP somehwere and goes to dig
[21:19] <SpamapS> I'm quite confident the license is permissive :)
[21:19] <imbrandon> i assumed it would be, and i found font.ubuntu.com :)
[21:20] <imbrandon> ahh even a Cyrillic version , there we go SpamapS thats what the prod name must be in
[21:20] <imbrandon> :)
[21:44] <negronjl> jamespage: ping
[22:03] <jamespage> negronjl, pong
[22:04] <SpamapS> jamespage: did you disable the reboot test? I don't see it anymore on jenkins.qa.ubuntu.com
[22:04] <negronjl> jamespage: had a question about the tomcat7 charm but, i figured it out.
[22:05] <jamespage> SpamapS, its still running - just not appearing on the dash
[22:05] <jamespage> I need to land the final fixes to usit tomorrow
[22:05] <SpamapS> AHH ok
[22:14] <jcastro> robbiew, yeah I know about default to not be prod, I also hosed myself in the past with that
[22:16]  * SpamapS too
[23:49] <cowmix> the juju docs say the project is 'early access' and don't use it for deployments. will that change for 12.04?