[00:10] <InformatiQ> it seems that when destroying a mysql relation and adding it again the same database name is reused, is that expected?
[00:12] <_bjorne> and how do I fix that?
[00:18] <InformatiQ> charms are not as simple as I thought they were
[00:20] <_bjorne> and do I need to make charms to get node2-XXX functioning, so the node shows as good with juju status?
[00:21] <_bjorne> and to see all the other nodes with juju status?
[00:21] <_bjorne> this happened when I installed for the first time; to me that looks like a bug in maas.
[00:22] <_bjorne> the first node runs dist-upgrade and installs lxc and mongodb, and the rest of the nodes have problems reading user-data.
[00:37] <arosales> InformatiQ, _bjorne: US folks are about at end of day but some folks in later time zones should be coming online shortly
[00:38] <InformatiQ> arosales: my eyes are getting blurry i'll end my day too
[01:22] <davecheney> _bjorne: hello
[01:22] <davecheney> lets start from the top
[01:22] <davecheney> 11:20 < _bjorne> and do I need to make charms to get node2-XXX functioning, so the node shows as good with juju status?
[01:22] <davecheney> could you explain a little more what you want to do
[02:14] <davecheney> _bjorne: ping
[03:03] <lazypower> Can someone point me in the direction of a charm that consumes MongoDB so i can evaluate the relationship change hooks? I need some examples
[03:04] <lazypower> Hey this handy pane in the charm store solves that, nevermind.
[09:18] <noodles775> marcoceppi: So I've updated to amulet 1.1.2 and I still get the same error I mentioned last week (No such file or directory: 'precise/relation-sentry/metadata.yaml'). I'll fork and play around.
[09:18] <noodles775> http://paste.ubuntu.com/6587980/
[09:23] <aquarius> I have two subscriptions in my Azure account. I deployed to the first subscription with juju, and now the Azure team have transferred all my VMs etc into the second subscription.
[09:23] <aquarius> However, I now can't connect to my VMs with the juju command line util
[09:23] <aquarius> it says "2013-12-17 09:22:01 ERROR juju supercommand.go:282 cannot obtain storage account keys: GET request failed: ForbiddenError - The server failed to authenticate the request. Verify that the certificate is valid and is associated with this subscription. (http code 403: Forbidden)"
[09:24] <aquarius> I tried changing the "management-subscription-id" in ~/.juju/environments.yaml to be the new subscription ID but that hasn't helped.
[09:24] <aquarius> what do I need to do to fix this?
[09:25] <aquarius> aha! I need to edit .juju/environments/azure.jenv
[09:25] <aquarius> and now it works, hooray :)
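The fix aquarius landed on comes down to this: at bootstrap time juju caches the environment's settings in a per-environment .jenv file, so changing environments.yaml alone has no effect on an already-bootstrapped environment. An illustrative fragment (the field name matches the azure provider config quoted above; treat the exact .jenv layout as an assumption):

```yaml
# ~/.juju/environments/azure.jenv -- cached bootstrap config.
# This copy, not ~/.juju/environments.yaml, is what the bootstrapped
# environment actually reads, so the new ID must be updated here.
management-subscription-id: <new-subscription-id>
```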
[11:25] <noodles775> marcoceppi: https://github.com/marcoceppi/amulet/pull/3
[13:02] <marcoceppi> noodles775: DOH, thanks
[13:03] <noodles775> np :)
[13:07] <marcoceppi> noodles775: I'll merge this and bump to 1.1.3
[13:09] <noodles775> marcoceppi: there's no rush releasing for me - I can work from trunk (I'll probably have other changes later).
[13:10] <marcoceppi> noodles775: cool, I'll wait till end of week then
[13:02] <Informat1Q> what does a website relation need to do in the case where the app is not scalable?
[13:53] <ashipika> hi.. can somebody please explain which hook is run when i run destroy-service? or when i remove a unit?
[14:16] <Informat1Q> ashipika: you could always run juju debug-hooks unit and see what it wants to open there
[14:19] <Informat1Q> ashipika: destroy-unit runs: database-relation-departed, database-relation-broken, and stop
[14:19] <Informat1Q> i assume all the other destroy commands do the same
[14:20] <ashipika> thanks!
[14:20] <Informat1Q> marcoceppi: if you feel like it you could look at my trac charm update
[14:23] <marcoceppi> ashipika: destroy-service will break all relations, then execute the stop hook
[14:23] <marcoceppi> Informat1Q: I see it's in the queue, I'll review it shortly
[14:23] <ashipika> excellent.. so if i want to un-install anything this is the place to do it
[14:26] <jhf> hey jcastro just fyi I am rotating back to liferay/juju goodness. gonna update the charm today with the latest LR release and see about implementing (or at least POC'ing) your idea from oscon
[15:01] <X-warrior> I have a subordinate charm (logstash-agent) i deployed it, and later I tried to add relation between it and one of my services... the subordinate service status on this service was pending for a long time, then I tried to remove the relation... but the subordinate keeps showing when I use juju status :S
[15:06] <noodles775> marcoceppi: If I have a tests/00-initial-setup amulet test, and want tests/01-config-changes, can amulet pick up/recreate its deployment based on juju status, or do they need to be in the same file for now?
[15:07] <marcoceppi> noodles775: the typical pattern for juju-test is to destroy and re-create before each test is run. juju-test can be configured to not do that, but they should be in the same file for now
[15:08] <marcoceppi> noodles775: if you wanted, you could express your setup in like a helper/setup.py file within the tests dir, to keep from copying a lot of code around
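marcoceppi's suggestion can be sketched as a small shared module that each numbered test file imports. This is an illustrative sketch: the file name tests/helper.py, the service names, and the relation endpoints are made up, and the amulet calls follow the amulet 1.x API as I understand it.

```python
# tests/helper.py -- shared deployment setup so tests/00-*, tests/01-*
# etc. don't each copy the same boilerplate (hypothetical sketch).

# Topology kept as plain data so it can be inspected without juju present.
SERVICES = {
    "relation-sentry": {"series": "precise"},
    "mysql": {"series": "precise"},
}

def get_deployment(timeout=900):
    """Build and stand up the shared deployment (needs a bootstrapped env)."""
    import amulet  # imported lazily so this module loads without juju installed
    d = amulet.Deployment(series="precise")
    for name in SERVICES:
        d.add(name)
    d.relate("relation-sentry:db", "mysql:db")  # endpoints are an assumption
    d.setup(timeout=timeout)
    return d
```

Each test file would then do `from helper import get_deployment` and share one setup path.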
[15:09] <X-warrior> Checking the juju log file (the one named logstash-agent) I see a lot of: exec: /var/lib/juju/tools/unit-logstash-agent-0/jujud: not found
[15:09] <noodles775> marcoceppi: the helper/setup.py is a good idea - let me try that. Thanks!
[15:10] <jcastro> jhf, heya, I have some updates for you
[15:10] <jcastro> jhf, if you have an updated "charm-tools" doing a "charm add readme" will add a new template.
[15:10] <jcastro> I added places for you to put links to your webpage and a bunch of other stuff.
[15:11] <jcastro> jhf, also, the designer of the juju store has told me that your icon is too square. :) But I have an inkscape template for you if you want to give it a shot.
[15:11] <jhf> hahahahaha
[15:11] <jhf> too square. we can put your designers and our designers in a boxing ring and get some popcorn
[15:12] <jhf> crap. the boxing ring would be square too
[15:12] <jcastro> dude ours makes up words ... He says it needs to be a squircle.
[15:12] <jhf> hahahahahahahahahaha
[15:12] <jcastro> https://juju.ubuntu.com/docs/authors-charm-icon.html has the template
[15:12] <jhf> I remixed our logo one time and got admonished bigtime
[15:12] <jhf> they were all "this does not conform to branding" with a sad face
[15:12] <jhf> ok thx I'll get cracking
[15:13] <jhf> and blogging once I make progress on a nice clustered/haproxy'd POC I've been meaning to do
[15:14] <jhf> wow. I thought you were kidding about squircles
[15:15] <jcastro> I never kid when it comes to squircles
[15:36] <X-warrior> marcoceppi: sorry to call you directly, but I'm stuck on this for a while. Are u busy? Could u give me some help?
[15:37] <marcoceppi> X-warrior: if you're getting jujud not found, something went wrong during deployment
[15:37] <marcoceppi> X-warrior: also, np on pinging me directly, I sometimes miss messages in here
[15:37] <marcoceppi> X-warrior: do you have the latest juju 1.16.5?
[15:38] <X-warrior> nope I'm still at 1.16.3
[15:38] <marcoceppi> I think this was fixed in 1.16.5, but I can't be certain
[15:44] <X-warrior> marcoceppi: should I do same way as last time? upgrade from 1.16.3 to 1.16.4 and then to 1.16.5?
[15:45] <marcoceppi> X-warrior: you should be able to jump directly to 1.16.5, since it's all patch changes. However, to play it safe you may want to go .4 => .5
[15:45] <marcoceppi> sinzui_: opinions on upgrading within the 1.16 line? ^
[15:55] <X-warrior> marcoceppi: already updated it
[15:57] <X-warrior> is that --force option on 1.16.5?
[15:58] <X-warrior> I have this logstash-agent subordinate as dying and never goes away, and the same to the 'deployed' logstash-agent...
[15:58] <X-warrior> :S
[16:00] <marcoceppi> X-warrior: can you --force destroy those?
[16:00] <X-warrior> juju destroy-service logstash-agent --force?
[16:00] <X-warrior> error: flag provided but not defined: --force
[16:04] <marcoceppi> X-warrior: is your local juju 1.16.5?
[16:04] <X-warrior> juju version say it is at 1.16.5
[16:05] <marcoceppi> huh
[16:08] <X-warrior> :S
[16:14] <rick_h__> sinzui_: ping, got time to chat sometime today?
[16:14] <sinzui_> rick_h__, in a meeting at the moment
[16:14] <X-warrior> and if I try to redeploy to 'override' it, it shows that the service already exists :S
[16:15] <rick_h__> sinzui_: np, any time. No hurry at all. Even later in the week if that works.
[16:17] <rick_h__> arosales: also, can I nab a quick call if you get time about this azure stuff.
[16:19] <arosales> rick_h__, sure. I got time this afternoon if that works for you.
[16:20] <rick_h__> arosales: sure thing, thanks.
[16:20] <arosales> rick_h__, cool gcal invite sent
[16:21] <rick_h__> arosales: thanks
[16:22] <X-warrior> marcoceppi: is the only solution to destroy the environment?
[16:49] <sinzui_> X-warrior, marcoceppi, juju 1.16.x is compatible between all patch-levels (0 - 5). Juju will always pick the newest patch-level (1.16.5 as of today) when bootstrapping or deploying
[16:50] <sinzui_> X-warrior, marcoceppi in CI we ensure the version under test by calling "juju upgrade-juju --version=1.16.<level>" which can downgrade as well as upgrade
[16:53] <X-warrior> sinzui_: well I did upgrade from .3 to .4 and then to .5 it worked
[16:53] <X-warrior> :D
[16:53] <X-warrior> anyway thanks
[16:53] <X-warrior> :D
[18:24] <ekristen> I see linux-lxc containers are supported
[18:24] <ekristen> are there plans to support docker?
[18:26] <lazypower> ekristen: there is already a docker charm if that helps. But hooking into docker from what I can tell is under active dev
[18:26] <ekristen> cool and cool
[18:27] <ekristen> I’m trying to understand how juju works with respect to how servers get stood up
[18:27] <lazypower> I have a question relating to using chef during my runtime of the charm hooks. Chef solo is getting executed beautifully, however the hooks are somewhat limited from what i can tell
[18:27] <ekristen> I see you can integrate with AWS
[18:27] <ekristen> is each service deployed a server? or does juju re-use a server for multiple charms?
[18:27] <lazypower> I've got an issue where gems that are present in the global gemset aren't available from within the juju-executed chef-solo instance, and I haven't been able to put my finger on why. I have debug output in a bug filed against the charm skeleton I used.
[18:28] <lazypower> ekristen: you can deploy multiple services to the same machine with the --to flag
[18:28] <lazypower> eg: juju deploy wordpress --to 1
[18:28] <lazypower> juju deploy mysql --to 1
[18:28] <lazypower> however by default, it will deploy to separate machines
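The placement behavior lazypower describes can be sketched with a tiny helper that builds the CLI invocation. The helper itself is illustrative and not part of juju; the flags it emits (`--to` and `--constraints`) are the real juju deploy flags discussed in this channel.

```python
def juju_deploy(charm, to=None, constraints=None):
    """Build a 'juju deploy' command line (illustrative helper, not juju itself)."""
    parts = ["juju", "deploy", charm]
    if to is not None:
        # --to accepts a machine number ("1") or a container on a machine ("lxc:1")
        parts += ["--to", str(to)]
    if constraints:
        parts += ["--constraints", constraints]
    return " ".join(parts)

print(juju_deploy("wordpress", to=1))  # juju deploy wordpress --to 1
print(juju_deploy("mysql", to=1))      # colocated on the same machine 1
print(juju_deploy("mediawiki"))        # default: gets its own new machine
```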
[18:29] <lazypower> https://github.com/Altoros/juju-charm-chef/issues/2
[18:29] <ekristen> lazypower: how do new servers get set up, and/or when do new ones get added?
[18:29] <lazypower> is the open debug output for chef
[18:29] <lazypower> ekristen: it's all provisioned by juju during the deployment phase. hang on, i have a quick 1:1 with a coworker and i'll come back to this.
[18:30] <ekristen> lazypower: ruby is a pain
[18:31] <ekristen> lazypower: depending on how it is set up, the gem might not be available to the environment that chef is running in
[18:31] <ekristen> but it is available to your user
[18:31] <lazypower> Well i want to point a finger at bundler
[18:31]  * lazypower snaps
[18:32] <lazypower> i bet thats it, i bet its shelling out chef-solo using bundler. so its not in the gemset
[18:32] <lazypower> Thank you!
[18:34] <lazypower> ekristen: so, there's a few methods you can add machines to your juju environment
[18:34] <ekristen> I found the initial part I was missing
[18:34] <ekristen> juju bootstrap
[18:35] <lazypower> you can manually bootstrap them, or let the juju provisioner handle it for you. Which environment will you be running?
[18:35] <ekristen> AWS
[18:35] <ekristen> how do I control instance sizes?
[18:35] <lazypower> ok, is that after you've tested using the local provider? or do you want to do all your testing in AWS?
[18:35] <lazypower> using the constraints, i found a good post on it, 1 sec and i'll fish it up for you
[18:36] <lazypower> http://askubuntu.com/questions/52021/how-do-i-adjust-the-instance-size-that-juju-uses
[18:36] <lazypower> ekristen: ^ thats for you
[18:37] <lazypower> for example if you want to deploy a micro, its --constraints "cpu-cores=0"
[18:38] <ekristen> well I’m on mac, but I could spin up a ubuntu image to do local testing on
[18:39] <lazypower> My suggestion is to either use vagrant, or if you have access to a ubuntu server on bare metal, go that route. LXC and the local provider makes testing a breeze
[18:39] <lazypower> and the low cost alternative to using EC2 as your testbed
[18:39] <ekristen> lazypower: how does it handle scaling a web app and routing between multiple endpoints?
[18:40] <ekristen> or maybe it doesn’t?
[18:40] <lazypower> routing to multiple endpoints is dependent on your configuration, like if you're using the haproxy charm.
[18:40] <lazypower> but scaling is as easy as juju add-unit service-name
[18:41] <ekristen> so there are charms that I can use to build a relationship between multiple instances of a service?
[18:41] <lazypower> and it handles scaling down in a similar fashion with juju remove-unit service-name
[18:41] <lazypower> Correct
[18:46] <ekristen> :)
[18:51] <lazypower> ekristen: i want to make sure i'm not leading you astray. Relationships define that, and juju relationships are added at the service level
[18:52] <lazypower> I'm fairly certain that machine level relationships are supported but may take some configuration. Perhaps someone else would like to step in on that statement for validation?
[18:54] <ekristen> right, now is there a master juju server that gets created at bootstrap time?
[18:55] <lazypower> There is, the juju controller occupies machine 0
[19:01] <maxcan> hey marcoceppi, lazypower, you both have a paper trail charm in the store and as far as I can tell, you're partners.  is one of them preferred?
[19:14] <ekristen> I keep getting “no public ssh keys found” when I try to bootstrap local
[19:14] <ekristen> but I have a key
[19:19] <_bjorne> Hello, why can I only see one node in juju status? That node runs dist-upgrade and installs lxc and mongodb, but the other nodes do NOT run dist-upgrade or install lxc and mongodb; those nodes can't find user-data, only the first one can. Am I doing something wrong, or is that a bug in maas or juju?
[19:20] <ekristen> once I expose something how do I get access to it? ie juju gui?
[19:20] <bac> ekristen: 'juju status' will show its public ip address when it is started.
[19:22] <ekristen> bac: I’m using local for testing
[19:22] <ekristen> so it has a 10 ip
[19:22] <bac> ekristen: ok, can you go to that address in your browser?
[19:23] <ekristen> what's the best way to get access to that, since my network is not on the 10 network? static routes?
[19:23] <bac> ekristen: lxc will have created routes for the 10. network from the host machine
[19:24] <ekristen> k
[19:25] <ekristen> I’m not using the host system to connect though
[19:25] <ekristen> so I’ll have to setup some other routing
[19:25] <_bjorne> why do I always get this error: GET /MAAS/metadata//2012-03-01/user-data HTTP/1.1" 404 200 "-" "Python-urllib/2.7 from the second node and up?!
[19:29] <ekristen> hrm it won’t route my traffic right :/
[19:30] <rick_h__> arosales: in call
[19:30] <arosales> rick_h__, joining
[19:34] <lazypower> maxcan: my charm is a subordinate service and it configures the remote_syslog gem for arbitrary logging and rsyslog/syslog-ng out of the box
[19:34] <lazypower> maxcan: so feel free to use my flavor of the charm and file any bugs you run into. It's working like a dream here, but I don't know what your particular setup is, so all issues / comments welcome
[19:35] <maxcan> awesome
[19:35] <maxcan> also, did you get my note the other day about getting a very rough MMS charm up on github?
[19:35] <lazypower> I did not, i seem to have missed it
[19:35] <lazypower> if you shoot me the link i'll watch the repository :)
[19:36] <lazypower> and try to get it hooked into my juju managed mongodb stack here at work
[19:36] <maxcan> https://github.com/docmunch/mongodb-management-service-charm
[19:36] <ekristen> how do I search for charms via the cli? or can I?
[19:38] <lazypower> maxcan: hey i like the groudwork here
[19:39] <_bjorne> is there someone who knows this: when you install maas and juju and begin to install nodes, should the nodes show up one after another, so I can see them in juju status? or is it only one node that ends up there? or do I need to do anything more to add the other nodes?
[19:41] <_bjorne> no one who knows the answer to my question?
[19:46] <maxcan> lazypower: thank you sir
[19:46] <maxcan> are you referring to the DRY SVC_NAME variables and things like that?
[19:46] <maxcan> its because I wrote my internal charm that way so making new charms would be easy
[19:47] <ekristen> juju seems to take a long time to stand up new charms with no feedback? system is sitting at 0.01 load average, nothing seems to be happening, is there a console or log that can be watched?
[19:50] <ekristen> can someone explain to me why I have 3 machines now locally, one being raring, the other being precise, and now one for oneiric? is that because I’ve chosen to deploy some apps from different repositories that require different distros?
[19:50] <ekristen> or is that an artifact of using the local lxc model vs AWS
[19:51] <sarnold> _oneiric_??
[19:52] <ekristen> test nodejs app?
[19:52] <ekristen> I’m just going by juju status
[19:52] <marcoceppi> ekristen: lets take it one at a time. It seems you're using the local provider. You'll find logs for each of the units/machines in ~/.juju/local/log
[19:53] <marcoceppi> ekristen: machine 0 will always be whatever version of Ubuntu your computer is, in this case raring. If you deploy a charm with an oneiric series, you'll get an oneiric machine. We recommend you not use oneiric charms unless you have to. Most all charms should be coded to use precise
[19:53] <ekristen> it error’d out anyways on the oneiric
[19:55] <ekristen> so I’ll stick to precise ;)
[19:55] <ekristen> is there docs on how relationships are defined? do they set environment variables in other apps?
[19:56] <ekristen> and thanks marcoceppi
[19:56] <marcoceppi> ekristen: no, relationships are defined in the metadata.yaml file, and data is sent/recv using special juju commands
[19:59] <marcoceppi> ekristen: https://juju.ubuntu.com/docs/authors-charm-interfaces.html
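As a concrete illustration of marcoceppi's point about metadata.yaml, a charm that consumes MongoDB would declare a requires stanza naming the interface. This is a minimal sketch; the relation name `database` is an arbitrary example, while `interface: mongodb` matches the mongodb charm's interface as discussed earlier in the channel.

```yaml
# metadata.yaml (fragment) -- declares what this charm can relate to
requires:
  database:
    interface: mongodb
```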
[19:59] <ekristen> but doesn’t the relationship expose connection infromation?
[20:00] <ekristen> where does the node.js app look for the mongodb connection information once the relationship is established? maybe thats a better question?
[20:03] <marcoceppi> ekristen: using relation-get to pull the data
[20:03] <ekristen> ah
[20:03] <ekristen> so the app needs to know how to use relation-get
[20:03] <ekristen> gotcha
[20:03] <marcoceppi> ekristen: no, the charm needs to use relation-get
[20:04] <ekristen> ok
[20:04] <marcoceppi> ekristen: then it saves it to disk
[20:06] <ekristen> ok
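The flow marcoceppi describes (the charm, not the app, calls relation-get and persists the result to disk) can be sketched as a hook. This is an illustrative sketch: `relation-get` is a real juju hook tool, but it only exists inside a running hook context, and the config path and field names here are made up.

```python
#!/usr/bin/env python
# Sketch of a db-relation-changed hook for a node app (hypothetical charm).
import json
import subprocess

def relation_get():
    """Fetch the remote unit's relation settings (works only in a hook context)."""
    out = subprocess.check_output(["relation-get", "--format=json"])
    return json.loads(out.decode())

def write_app_config(settings, path="/tmp/mongodb.json"):
    """Persist connection info to disk where the app reads it at startup."""
    conf = {"host": settings.get("hostname"), "port": settings.get("port")}
    with open(path, "w") as f:
        json.dump(conf, f)
    return conf
```

The app itself never calls relation-get; it just reads the file the hook wrote.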
[20:09] <ekristen> marcoceppi: so if had a webapp to deploy then I’d need to create a charm around it
[20:17] <lazypower> maxcan: yeah your variable structure and your thought patterns in the hooks make sense to me
[20:18] <maxcan> sweeter words, a developer will never hear
[20:18] <lazypower> Flattery, one of the many services i offer. I'll be here all week.
[20:23] <ekristen> another question — if I am using AWS and I deploy my node-app or rails app and I scale it to 5, how are the apps not interfering with each other
[20:24] <ekristen> are they being put into lxc-containers? likewise if I stand up multiple mysql services
[20:28] <ekristen> lazypower: if I have a node app do I need to create my own charm, or can I leverage the node-app charm that already exists? it seems like I should be able to leverage the existing one
[20:28] <lazypower> you can leverage the node-app charm as the framework to deploy your application
[20:29] <ekristen> if I want my devs and stuff to be able to set up their own environments in their own local setup, would I need to create my own custom charm and host it somewhere then?
[20:29] <lazypower> ekristen: if you deploy to aws and use unit-add to scale up your application, they all get their own application server unless you specify which unit to deploy to with --to, this may cause problems with some charms, it all depends on the underlying charm "recipe"
[20:30] <lazypower> yeah, you can either have them deploy the charms from your local repository or push them to launchpad for deployment from there, it depends on your requirements.
[20:30] <ekristen> really each gets their own ec2 instance :/
[20:30] <lazypower> I hesitate to give you blanket answers ekristen because juju is very flexible.
[20:31] <ekristen> fair enough, I’m in the learning state right now
[20:30] <lazypower> Again, it depends on your needs and how the charm recipe operates. I'm not 100% familiar with the node charm. I'd be more than happy to review it with you later this evening if you will be around.
[20:40] <ekristen> ok, well I’m going to dive into it more and learn about making charms
[21:10] <ekristen> anyone in here a node.js + juju kung fu expert?
[21:10] <lazypower> You'll yield better luck asking specific questions rather than looking for someone that's an expert in the field :)
[21:11] <ekristen> fair enough, just looking for advice or docs, or lessons learned on deploying apps using juju
[21:14] <marcoceppi> ekristen: we have a node-app charm
[21:15] <ekristen> I did see that, sorry, I’m coming from a PaaS mentality like Cloud Foundry, or OpenShift, this is definitely different, so I’m trying to understand how the whole thing works
[21:15] <ekristen> the docs really do not talk about how charms are deployed, ie server per charm or do they share automatically, things like that
[21:18] <marcoceppi> ekristen: so juju is not a paas. You deploy a charm as a service, then you can scale that service. If you want to deploy multiple node-apps you can, but they'll be on their own servers
[21:19] <marcoceppi> ekristen: also, if you want to deploy cloud foundry, you can do that with juju too :)
[21:21] <ekristen> I get that, I’m honestly trying to figure out the best way forward for my company in terms of app deployment and scaling our apps in the cloud, I’ve used CF in the past, but it's a pain to set up and supports a lot more than what we need right now
[21:24] <marcoceppi> ekristen: well, let me know the specific questions you have, would be happy to help you sort them out
[21:25] <ekristen> marcoceppi: is there any way to make juju deploy a service in lxc-containers on a machine, allowing you to more effectively utilize the hardware being provisioned?
[21:27] <marcoceppi> ekristen: yes, it's still a little rough, but you can do it https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-machines
[21:29] <marcoceppi> thumper: how is container support in juju w/ networking now?
[21:29] <ekristen> marcoceppi: the considerations at the bottom regarding working on containerizing everything is that just using lxc for everything or using docker?
[21:30] <marcoceppi> ekristen: everything is with lxc at the moment. Support for other platforms is being added/considered
[21:31] <marcoceppi> ekristen: but as it stands, the default action is always one machine per unit, unless you explicitly state containers
[21:31] <ekristen> marcoceppi: ok, thanks, might I assume there is a way to specify instance size when deploying a unit?
[21:31] <ekristen> regarding aws specifically?
[21:32] <marcoceppi> ekristen: yes, using constraints, https://juju.ubuntu.com/docs/charms-constraints.html
[21:32]  * ekristen goes to read
[21:33] <marcoceppi> ekristen: at the moment, most constraints are generic, so you say how many cpu-cores and memory you want, juju will find an instance type that matches
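The matching marcoceppi describes (generic constraints in, a concrete instance type out) can be sketched as picking the cheapest type that satisfies the request. The catalog below is a made-up sample for illustration, not real AWS data, and juju's actual selection logic is more involved.

```python
# Hypothetical instance catalog: name -> (cpu_cores, mem_gb, hourly_cost)
CATALOG = {
    "t1.micro":  (1, 0.6,  0.02),
    "m1.small":  (1, 1.7,  0.06),
    "m1.medium": (1, 3.75, 0.12),
    "m1.large":  (2, 7.5,  0.24),
}

def match_instance(cpu_cores=0, mem_gb=0.0):
    """Return the cheapest catalog entry meeting the constraints, or None."""
    fits = [(cost, name) for name, (c, m, cost) in CATALOG.items()
            if c >= cpu_cores and m >= mem_gb]
    return min(fits)[1] if fits else None

print(match_instance(cpu_cores=2))  # m1.large
print(match_instance(mem_gb=1.0))   # m1.small
```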
[21:33] <ekristen> oh interesting
[21:33] <ekristen> ok
[21:47] <thumper> marcoceppi: oh hai
[21:47] <marcoceppi> o/
[21:47] <thumper> marcoceppi: pretty much as it was, container networking still has work pending
[21:47] <thumper> marcoceppi: works mostly on maas, but not too well on anything else
[21:47] <marcoceppi> thumper: ah, gotchya. Same story with KVM re: networking?
[21:47] <thumper> yep
[21:48] <thumper> KVMs are now possible
[21:48] <thumper> but communication is still limited
[21:54] <marcoceppi> thumper: thanks for the update
[21:55] <ekristen> yes thank you
[22:55] <ekristen> does juju support aws vpc?