[00:12] <pmatulis> i had a hard time with the mysql charm and discovered the following in the logs:
[00:12] <pmatulis> http://paste.ubuntu.com/7671996/
[00:13] <pmatulis> i ssh'd in to the respective lxc container and installed git manually. then, voila, everything started
[00:14] <pmatulis> (side question: why do i need git?)
[00:17] <jose> pmatulis: weird thing. I know Juju uses git in the background for some stuff, but not sure why it's giving the error
[00:54] <danharibo> juju local environment seems to be broken ootb, on my trusty install: http://pastebin.com/Aspq0zJd
[00:55] <danharibo> that's the result after following the getting started guide, no public addresses
[00:57] <galebba> could anyone tell me why my proxy doesn't seem to work on the juju bootstrap node? env | grep proxy shows both http and https proxies, but i'm getting: ERROR juju.cmd supercommand.go:300 cannot upload charm to provider storage: gomaasapi: got error back from server: 504 Gateway Timeout
[01:03] <jose> danharibo: is this the first time you are booting in Local?
[01:04] <jose> galebba: maybe it's an issue with your proxy blocking the storage provider?
[01:05] <galebba> so if i set my env variable manually on the bootstrap node it works perfectly well, just not through juju
[01:06] <danharibo> jose: I've run the commands listed on https://juju.ubuntu.com/docs/getting-started.html up to juju-status, after following the lxc configuration
[01:09] <sarnold> danharibo: how long have you waited?
[01:09] <danharibo> a few minutes
[01:09] <sarnold> danharibo: iirc step 1 is "download a cloud ubuntu".. it can take a while
[01:10] <danharibo> is there any way to get more detailed information? like what it is waiting on?
[01:10] <lazypower-travel> danharibo: it's a one-time operation - you're fetching a 300mb cloud image
[01:10] <lazypower-travel> danharibo: if you pass -v (or --debug) you'll see output on what it's doing
[01:11] <lazypower-travel> danharibo: once that cloud image is cached, it'll create a template container which makes the local provider crazy fast. Seconds to spin up an LXC container, vs the 5 to 10 minutes you'll wait on that cloud image to be fetched.
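For anyone following along later, the first-bootstrap workflow lazypower-travel describes can be sketched as a couple of commands (a hedged sketch; flag spellings may vary between juju releases):

```shell
# First local-provider bootstrap: expect a one-time ~300MB cloud-image
# download. --debug shows what bootstrap is currently waiting on.
juju bootstrap --debug
# Once cached, the image becomes a template container that later
# containers clone in seconds:
sudo lxc-ls --fancy    # should list e.g. juju-precise-template
```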
[01:15] <galebba> is it possible to set a no-proxy env variable on juju? i have set http-proxy and https-proxy and it seems my maas server api access is being sent through the proxy, which i want to avoid
[01:15] <lazypower-travel> galebba: are you referring to the proxy being defined in /etc/apt/90-curtain?
[01:16] <lazypower-travel> or thereabouts
[01:16] <galebba> proxy set with juju set-env like  juju set-env https-proxy=http://x.x.x.x:80/
[01:16] <lazypower-travel> oh i read that wrong. sorry
[01:17] <galebba> np, found a old thread saying this should be fixed but cant find the syntax
[01:17] <lazypower-travel> thumper: you kicking around over there?
[01:20] <galebba> got it , juju set-env no-proxy=127.0.0.1,172.18.112.140,localhost
[01:21] <thumper> lazypower-travel: whazzup?
[01:21] <thumper> ah proxy thing
[01:22] <lazypower-travel> thumper: is there a way to route only the juju traffic through a proxy so we dont blanket out the maas traffic?
[01:22] <thumper> yeah, galebba's got it
[01:22] <thumper> lazypower-travel: when you set the proxies through juju, they only get set in the hooks, and for juju run
[01:22] <thumper> lazypower-travel: it doesn't do any magic system things
[01:22] <lazypower-travel> ahhh ok. I thought it set system level proxy envs
[01:22] <thumper> no
[01:22] <lazypower-travel> well good to have that cleared up. Ty thumper.
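Collecting the proxy settings from this exchange in one place (the addresses are the ones galebba used; note thumper's point that these apply only inside hooks and `juju run`, not system-wide):

```shell
# Route charm-hook traffic through a proxy...
juju set-env https-proxy=http://x.x.x.x:80/
# ...but exempt localhost and the MAAS server so API calls
# aren't sent through the proxy:
juju set-env no-proxy=127.0.0.1,172.18.112.140,localhost
```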
[01:23] <sarnold> thumper: http://askubuntu.com/a/431192/33812  :)
[01:23] <thumper> I suppose I should say it is fixed now?
[01:24] <sarnold> that was the first google hit for 'juju proxy' so it'd be as good a place as any for the docs :) hehe
[01:25] <galebba> i know right
[01:30] <thumper> updated.
[01:30] <lazypower-travel> thanks thumper
[01:30] <sarnold> thanks thumper :)
[01:35] <jose> lazypower-travel: who can I confirm with that the LXC charm school is taking place tomorrow?
[01:36] <lazypower-travel> jose: i don't think anyone's around atm since the steam sale's on
[01:36] <jose> oh that's right
[01:36] <lazypower-travel> jose: but i'd follow up in the am with jcastro.
[01:36] <jose> should be good
[01:36] <lazypower-travel> jose: or you can just email them... might be the better way to go
[01:37] <jose> will just email the list :P
[01:37] <jose> thanks!
[01:37] <danharibo> checking network traffic, nothing... machine-all log also doesn't seem to say it's downloading anything
[01:40] <lazypower-travel> danharibo: if you run sudo lxc-ls --fancy do you see a template container?
[01:41] <danharibo> I see juju-precise-template, stopped
[01:41] <lazypower-travel> danharibo: ok looks like the image was downloaded.
[01:41] <lazypower-travel> and the template was created as well
[01:42] <lazypower-travel> this is your first time running juju bootstrap on the local provider correct?
[01:42] <danharibo> I ran it a few times before, but ran destroy-environment afterwards
[01:44] <lazypower-travel> danharibo: you may need to nuke your local installation from orbit if its in an inconsistent state
[01:45] <lazypower-travel> this is available as a plugin - but if you're not savvy on that - https://github.com/juju/plugins/blob/master/juju-clean - the commands are listed here
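The authoritative steps are in the juju-clean script linked above; as a rough, unverified sketch of what resetting a wedged local environment involves:

```shell
# Unverified sketch -- prefer the juju-clean plugin itself.
juju destroy-environment --force          # tear down the environment
sudo lxc-ls --fancy                       # look for leftover juju-* containers
# sudo lxc-destroy -n juju-precise-template   # remove leftovers one by one
```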
[01:45] <marcoceppi> lazypower-travel: you able to do a quick review?
[01:45] <lazypower-travel> marcoceppi: link me
[01:46] <lazypower-travel> i need to plug in 1 sec
[01:46] <marcoceppi> lazypower-travel: https://code.launchpad.net/~marcoceppi/charm-tools/fix-setup-py/+merge/223850
[01:46] <marcoceppi> lazypower-travel: just review, I'll do the merge
[01:47] <lazypower-travel> ack
[01:47] <lazypower-travel> looking now
[01:48] <lazypower-travel> dude cool #TIL
[01:48] <lazypower-travel> find_packages
[01:48] <marcoceppi> lazypower-travel: yeah, thank tvansteenburgh for that one
[01:49] <lazypower-travel> marcoceppi: runs on my box
[01:49] <lazypower-travel> +1
[01:49] <marcoceppi> for some reason the debian python helpers are a PITA
[01:49] <marcoceppi> this fixes that
[01:49] <marcoceppi> thanks man
[01:53] <lazypower-travel> :)
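The `find_packages` trick from that merge proposal is easy to demonstrate in isolation; the scratch package names below are made up:

```python
import os
import tempfile

from setuptools import find_packages

# Build a scratch source tree containing two importable packages.
root = tempfile.mkdtemp()
for rel in ("pkg", os.path.join("pkg", "sub")):
    os.makedirs(os.path.join(root, rel))
    open(os.path.join(root, rel, "__init__.py"), "w").close()

# find_packages() walks the tree and returns every package it finds,
# so setup(packages=find_packages()) needs no hand-maintained list.
print(sorted(find_packages(root)))  # ['pkg', 'pkg.sub']
```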
[01:56] <lazypower-travel> danharibo: if that doesn't fix it, can you attach a copy of your all-machines.log && machine-0.log to a pastebin for me?
[02:01] <danharibo> running juju clean seems to have declogged it, mysql and wordpress charms are "started"
[02:01] <danharibo> I am getting a 502 from nginx on wordpress' public-address though
[02:02] <danharibo> oh no it was just a cached page, all works. thanks
[02:07] <marcoceppi> CLEAN ALL THE THINGS
[02:09] <jose> \o/
[02:21] <jose> marcoceppi: are we having the charm school tomorrow?
[02:22] <marcoceppi> oh god, we are aren't we
[02:22] <marcoceppi> what's the topic?
[02:25] <jose> marcoceppi: LXC Troubleshooting
[02:25] <jose> 19 UTC
[02:30] <marcoceppi> cool
[04:21] <mwhudson> ubuntu@mytrusty:~$ juju status
[04:21] <mwhudson> ERROR failed getting all instances: exit status 1
[04:21] <mwhudson> what might that mean?
[04:22] <mwhudson> oh, i logged in before my user was added to the libvirt group
[10:48] <galebba> Has anyone used juju to install openstack  with Neutron ?
[11:30] <lazypower-travel> Galebba: lots of talk around that. I'm aware of our OpenStack charmers working with neutron
[11:33] <Mosibi> galebba: yes, we installed neutron/openstack with juju
[11:33] <Mosibi> If your question is about vlans, yes that's not supported yet..
[12:08] <nottrobin> is there any way to permanently set the location of my local charms in a config file?
[12:08] <nottrobin> it's a bit annoying typing "--repository=~/charms" all the time
[12:14] <marcoceppi> nottrobin: set the JUJU_REPOSITORY environment variable
[12:14] <nottrobin> ooh!
[12:14] <nottrobin> handy
[12:14] <nottrobin> thanks marcoceppi
[12:14] <marcoceppi> np
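The environment variable marcoceppi mentions replaces the `--repository` flag entirely; a quick sketch (the charm name is a placeholder):

```shell
# Put this in ~/.bashrc to make it permanent:
export JUJU_REPOSITORY=~/charms
# Now this works without --repository=~/charms:
juju deploy local:trusty/mycharm
```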
[12:30] <pmatulis> are 'charm-tools' and 'juju.plugins' what everybody uses?  anything else i should be looking at?
[12:31] <lazypower-travel> pmatulis: juju-quickstart
[12:31] <pmatulis> lazypower-travel: ok, will look thanks
[13:19] <automatemecolema> can anyone point me in the direction on how to deploy haproxy with HA using an amazon EIP?
[13:20] <automatemecolema> with the haproxy charm naturally
[13:32] <automatemecolema> anyone familiar with the haproxy charm?
[13:38] <automate_> Can someone point out how to find the addressupdater worker to force EIP changes on services?
[13:47] <marcoceppi> You can't provision an elastic ip from within a charm, you need to assign it in the control plane
[13:55] <automate_> So here's a question, say I provision two haproxy servers, I assign an EIP to one of the haproxy servers, it fails, and the other picks up. Is there a way I can make that instance recognize the EIP
[13:56] <automate_> and how can I force juju to recognize the EIP? I've read there is an addressupdater worker that checks every 15 minutes, but that's not good enough for production
[14:03] <automate_> I have deployed to haproxy units with juju, and I notice they realize their peer, but how do I front the Haproxy service with only one endpoint?
[14:03] <automate_> two*
[14:11] <automate_> https://gist.github.com/anonymous/29cc347bd9ce1015036e I need these to know about each other and fronted by an EIP Thoughts?
[14:17] <jcastro> hey marcoceppi
[14:17] <jcastro> frankban has a point wrt. mac users, I think juju-quickstart is fine imo
[14:17] <jcastro> bigger battles to fight, etc. etc.
[14:32] <marcoceppi> jcastro: what about windows users?
[14:32] <marcoceppi> It's still a poor argument
[14:33] <marcoceppi> it obscures juju and bothers me a bit with promoting this as juju
[14:33] <jcastro> well, it's not my fault juju needs quickstart in order to be usable. :()
[14:34] <marcoceppi> it's great it does all this, but if users get to juju-quickstart before getting to juju we've failed in documentation
[14:35] <jcastro> it's not really any worse
[14:35] <jcastro> it's just `juju-quickstart -i` instead of `juju generate-config`
[14:35] <marcoceppi> juju init*
[14:36] <jcastro> which was not mentioned in the docs
[14:36] <marcoceppi> I think quickstart is super important, and deserves to be on the getting started page, but people are here to learn about juju
[14:36] <jcastro> but that's an evilnick-ism
[14:36] <marcoceppi> and we've just hidden the  entire core of juju
[14:36] <jcastro> you're the only one who knows that
[14:36] <marcoceppi> bootstrapping, deploying, relating, etc
[14:37] <jcastro> as far as they know/care, quickstart is part of juju
[14:37] <marcoceppi> but it's not, it's juju-quickstart, not juju quickstart
[14:37] <marcoceppi> no other juju commands are hyphenated
[14:37] <marcoceppi> so either we tell people, On Mac OSX run brew install juju juju-quickstart then do juju quickstart -i
[14:37] <jcastro> IMO getting people into the curses menu to put their keys in is like the #1 thing we need to do
[14:37] <marcoceppi> sure, lets do that
[14:38] <marcoceppi> but lets not give users the wrong expectation of the command chain
[14:38] <jcastro> ok so give me details
[14:38] <jcastro> what exactly do you want me to change in the commands
[14:38] <marcoceppi> Everytime you tell a user how to install juju-quickstart, just also include the installation of juju
[14:38] <marcoceppi> brew install juju juju-quickstart
[14:38] <marcoceppi> apt-get install juju juju-quickstart
[14:38] <jcastro> oh ok
[14:38] <jcastro> is that all?
[14:38] <marcoceppi> windows: Sorry, can't do that
[14:39] <jcastro> man why didn't you say that. :)
[14:39] <marcoceppi> yes, then all you need to do is have `juju quickstart` instead of the -
[14:39] <marcoceppi> and the command UX is intact, quickstart keeps being amazing
[14:39] <marcoceppi> and the world rotates on
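Collecting marcoceppi's proposed install recipe in one place (package names follow the discussion here):

```shell
# Install juju itself alongside quickstart, so users see juju first
# and `juju quickstart` reads like a normal juju subcommand.
# Mac OS X:
brew install juju juju-quickstart
# Ubuntu:
sudo apt-get install juju juju-quickstart
# Then, interactive setup:
juju quickstart -i
```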
[14:39] <jcastro> rick_h_, any issues? I'm tired of my PR becoming mailing list discussions. :)
[14:39] <marcoceppi> rick_h_: also, whats the TLDR on Windows + Quickstart?
[14:40] <rick_h_> marcoceppi: your team can add it next cycle
[14:40] <marcoceppi> what/
[14:40] <rick_h_> marcoceppi: :)
[14:40] <marcoceppi> so quickstart won't install juju on windows?
[14:41] <rick_h_> marcoceppi: quickstart doesn't work at all on windows
[14:41] <rick_h_> and it's not planned for this cycle, but is brought up to be important for next
[14:41] <jcastro> marcoceppi, rick_h_ didn't even know python on windows was a thing. :)
[14:41] <rick_h_> jcastro: oh we know, but ncurses on windows isn't
[14:41] <marcoceppi> jcastro: hey man, charm-tools is on there ;)
[14:41] <rick_h_> marcoceppi: let me know when you put a gui on charm tools
[14:42] <marcoceppi> haha, no thanks. It was enough "fun" getting charm-tools packaged for windows
[14:42] <jcastro> I hate to be that guy, but a proper juju snapin for like sccm is the way to go
[14:42] <rick_h_> marcoceppi: we're handing off quickstart to eco next cycle so you guys can build a webui to it and enable it for windows then
[14:42] <jcastro> ok so if no one has any issues, I will amend the commands in the docs
[14:42] <rick_h_> jcastro: ok, it's fine. I was just letting you know you can make things simpler if you want
[14:42] <rick_h_> but I'm not anti install juju with quickstart
[14:43] <rick_h_> sometimes comments are just fyi :)
[14:43] <jcastro> rick_h_, you convinced me, it's Captain Pedantic over there ....
[14:43] <marcoceppi> hey man, you're the one that had to go document things, that's on you :P
[14:43] <jcastro> though seriously, quickstart should eventually be in core someday like deployer, etc.
[14:43] <rick_h_> lol
[14:44] <marcoceppi> seriously, yes. It simplifies the user experience 10 fold and makes everyone happy
[14:44] <rick_h_> jcastro: yea, it can't until deployer is and such.
[14:44] <jcastro> is the packagename "juju" or juju-core in brew?
[14:44] <rick_h_> jcastro: but true, at some point makes sense. The original goal of quickstart though was to help you avoid finding the ppa, etc.
[14:44] <marcoceppi> I'd love to see how core handles the ncurses issue on windows
[14:44] <marcoceppi> jcastro: it's juju
[14:44] <rick_h_> it started before trusty and there was a decent juju in universe
[14:45] <jcastro> http://www.projectpluto.com/win32a.htm
[14:46] <jcastro> ok guys, git pushed
[14:46] <rick_h_> jcastro: thanks for the docs. Appreciate it
[14:46]  * marcoceppi finds something else to be pedantic over
[14:46] <jcastro> marcoceppi, like say ... mysql
[14:48] <purpledog3> ASkUbuntu: I was wondering if you could help me with a very fundamental question.
[14:49] <purpledog3> Askubuntu: Its about juju with ubuntu trusty. Its too basic and unfortunatley I don't find documentation on it
[14:49] <jcastro> askubuntu is a bot
[14:50] <jcastro> but we're humans if you want to ask your question
[14:52] <jcastro> mbruzek, tvansteenburgh1, have you guys had a chance to review yet?
[14:53] <mbruzek> jcastro, review what?
[14:56] <jcastro> you guys are on review duty this week
[14:59] <purpledog3> jcastro: Ahhh thanks...
[15:00] <purpledog3> jcastro: Do you know the resource I could use to work with juju/maas/openstack? I'm a total novice and depending on documentation which I either can't seem to find or which isn't appropriate.
[15:00] <jcastro> https://insights.ubuntu.com/2014/05/21/ubuntu-cloud-documentation-14-04lts/
[15:00] <jcastro> it's currently a PDF
[15:00] <jcastro> but we're working on HTMLizing it
[15:00] <jcastro> that's the place to start
[15:00] <purpledog3> thanks.. let me dive into it ..
[15:01] <purpledog3> I have my own dat servers on my table here.
[15:01] <purpledog3> dat->data
[15:01] <jcastro> jamespage, that Boyd guy on the juju list was having that problem with ceph-radosgw if either you or zul can respond that would be swell.
[15:01] <purpledog3> Do you know if it might have an example of how to use 2 VM nodes defined by virsh ... as a maas cluster and work stunts with them ? like setup Openstack etc ?
[15:07] <jamespage> jcastro, I need to make time to read that doc to figure out what he's asking
[15:50] <ahasenack> hi guys, any clues about what happened here?
[15:50] <ahasenack> http://pastebin.ubuntu.com/7675284/
[15:50] <ahasenack> juju 1.19.3
[16:19] <automate_> anyone available for questions around the HAproxy charm?
[16:22] <pmatulis> ask and see
[16:25] <automate_> I need some insight on how to deploy an HA pair of HAproxy servers in Amazon using an EIP. I'm up for both scenarios, either Active/Passive or Active/Active
[16:27] <automate_> We are playing with the idea of rolling our own haproxy charm with the haproxy beta package that supports SSL using puppet as the deployer.
[16:28] <automate_> Has anyone had experience using puppet as the means of deploying your app in a charm, as opposed to building it into a shell script to install packages?
[16:52] <wrale_> using juju on 14.04... my nodes are multihomed with public unmanaged.. juju bootstrap fails because name resolution fails upon http call.. must a public interface (or NAT) be up where bootstrap happens?
[16:53] <wrale_> maas is the underpinning of my cluster
[16:59] <hazmat> automate_, why not just go with an elb charm?
[17:03] <automate_> hazmat elb isn't near as flexible, and ELB's change addresses all the time, we want to keep our load balancers tied to one endpoint
[17:04] <lazypower-travel> ahasenack: that's really strange. you got locked out of your mongodb instance?
[17:05] <ahasenack> lazypower-travel: it was just a normal bootstrap, like many others
[17:05] <lazypower-travel> ahasenack: when you ran destroy and rebootstrapped did it work?
[17:05] <ahasenack> yes
[17:05] <lazypower-travel> hmm interesting
[17:06] <ahasenack> well, if you look at the logs you can see that it ran destroy on its own
[17:06] <lazypower-travel> ahasenack: thats the first time i've seen that happen...
[17:06] <ahasenack> so I was left with nothing running
[17:06] <lazypower-travel> yeah, it should do that if it encounters an issue while trying to bootstrap
[17:06] <lazypower-travel> that way its tidy about a startup failure
[17:06] <ahasenack> sinzui: pointed me at an older bug in the 1.19 series but it was during destroy
[17:07] <automate_> I'm trying to figure out a way to have different instances to be provisioned to separate availability zones within the same juju environment. Has anyone figured out a clean way to do this? I saw a bug report on this trying to be fixed..
[17:39] <automate_> Scenario 1: What happens if my bootstrap node dies?
[17:40] <automate_> Scenario 2: I bootstrapped my environment, everything was working, then a few hours later the environment eats itself and everything is gone.
[17:52] <jcastro_> automate_, in scenario 1, you lose orchestration
[17:53] <jcastro_> but like the instances all stay running, you're just down to ssh
[17:53] <jcastro_> when HA lands you'd just have one of the other bootstrap nodes take over
[17:54] <automate_> HA for bootstrap nodes is on the roadmap; do you know how long it will be before it's available?
[17:54] <jcastro_> I do not think we yet have a way to do multizone in one environment
[17:54] <jcastro_> it's supposed to be landing like RSN now if it hasn't already
[17:54] <jcastro_> jam, do you know the status of HA?
[17:54] <jcastro_> I lose track of which team is working on what
[17:55] <automate_> So, today if my bootstrap node completely dies there is no way to recover it?
[17:55] <jcastro_> you're pretty much doomed
[17:55] <sarnold> jcastro_: https://bugs.launchpad.net/juju-core/+bug/1183831
[17:55] <_mup_> Bug #1183831: unable to specify availability zone <charmers> <constraints> <ec2-provider> <landscape> <reliability> <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1183831>
[17:55] <automate_> :(
[17:55] <jcastro_> they've been focusing on HA for the node for a while though, it's like the last major thing we need for prod
[17:56] <rick_h_> jcastro_: juju ensure-availability --help is there now. I think it's fleshing out final bits but is testable
[17:56] <rick_h_> so don't do it with your prod environment yet
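A sketch of the command rick_h_ mentions, for the curious; it was still being fleshed out at the time, so treat this as illustrative only:

```shell
# Ask juju to maintain multiple state servers; -n sets how many.
# Experimental at the time of this discussion -- not for production.
juju ensure-availability -n 3
juju status    # the extra state-server machines should appear here
```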
[17:57] <automate_> Can you explain if there is a way to force juju to know about address changes? I know there is an addressupdater worker, but there might be times where I want juju to know about it on the fly
[17:58] <automate_> Scenario would be when I assign a node an EIP
[17:58] <automate_> I want it to know about the EIP right away
[17:58] <marcoceppi> automate_: juju should pick that up automatically now, if not now then soon
[17:58] <marcoceppi> besides, the EIP only affects outside access to it
[17:58] <marcoceppi> the machine will always show the current address, and the internal address in amazon will be used anyway
[17:58] <marcoceppi> so it doesn't affect other services connected to a service with an EIP
[17:59] <automate_> Yea, when I said address changes, I was referring strictly to public addresses
[17:59] <automate_> I'm currently running 1.18.1
[18:00] <marcoceppi> automate_: well you're already a few juju releases behind, latest stable is 1.18.4 iirc and latest dev is 1.19.3
[18:00] <automate_> will juju upgrade-juju get me on the latest stable?
[18:00] <marcoceppi> what are you relating that requires the public address?
[18:00] <marcoceppi> automate_: yes
[18:00] <marcoceppi> except for humans, almost all services should just be using the private addresses
[18:01] <automate_> Scenario: Two haproxy servers, need them in ha pair, I have one assigned an EIP, it fails, I want the other one to get the EIP reassignment
[18:01] <marcoceppi> that's something you have to do outside of juju though
[18:01] <marcoceppi> juju won't reassign EIPs
[18:02] <automate_> Yes, but I need juju to know about the EIP as soon as I reassign it
[18:02] <marcoceppi> why? Elastic search will just continue to listen to 0.0.0.0
[18:02] <marcoceppi> traffic will flow, haproxy* will continue to runn
[18:02] <marcoceppi> not elastic search, sorry about that
[18:03] <automate_> I'm not sure I'm following you properly
[18:03] <marcoceppi> I'm not sure I'm following you. Why does juju need to know about the EIP
[18:03] <marcoceppi> EIPs are transparent to the underlying machine
[18:03] <marcoceppi> they're done on the software level
[18:04] <marcoceppi> in Amazon
[18:04] <automate_> Ok let me explain it a little better.
[18:04] <marcoceppi> Amazon simply maps public ip to the private ip and routes traffic
[18:04] <marcoceppi> so the unit knowing it has a new public ip doesn't matter
[18:05] <automate_> I have two haproxy servers provisioned by juju, naturally they get a public address assigned to them by Amazon, I reassign node 1 with an EIP, juju knows about the auto assigned public address and I need it to refresh and know about the EIP i just assigned it.
[18:05] <marcoceppi> automate_: okay, I see what you're saying. My question is why does juju need to know that.
[18:06] <marcoceppi> how does that change how haproxy is running
[18:06] <automate_> Then I find that node1 just failed, and I need node 2 to pick up the load balancing, so I reassign my EIP to node 2, and I need juju to know when I reassign it
[18:06] <automate_> Am I not exposing the service via juju?
[18:06] <marcoceppi> yes, you are
[18:06] <marcoceppi> now let me explain briefly what happens amazon side of things
[18:10] <marcoceppi> You launch an instance in amazon. That instance gets a private address on eth0 - 10.0.0.2; your other instance gets eth0 10.0.0.3. You have security groups that define ports and access to that instance. Amazon gives you two random public IP addresses and has their switch software map 72.0.0.2 and .3 to those respective private IP addresses. The instance has no idea what its public IP address is. Juju knows because it's querying the data from
[18:10] <marcoceppi> amazon directly using the API. Now you create EIP 74.0.0.15 and assign it to the first node. 72.0.0.2 goes away and instead 74.0.0.15 routes to the private IP of 10.0.0.2. The instance has no idea this change happened; Juju knows because it's chatting with the API and updates the metadata for that instance in juju. Now you failover, move the EIP to the second instance; now 74.0.0.15 maps to 10.0.0.3 and it has no idea the public IP address has changed.
[18:10] <marcoceppi> It doesn't need to, it simply continues listening to traffic on eth0 as it was designed to do
[18:11] <marcoceppi> there isn't a pressing need to have the unit data updated immediately for a public address change in juju. For private addresses it matters completely, because that's what the unit uses when listening and what other services are talking to it on
[18:12] <sarnold> marcoceppi: very interesting, thanks :)
[18:12] <marcoceppi> The public IP addresses, EIP or not, are mapped outside of the instance and outside of juju in Amazon's switches. The instance is blissfully unaware of what's going on
[18:12] <marcoceppi> OpenStack, with quantum/neutron, IIRC works very much the same way
[18:13] <marcoceppi> providers like Digital Ocean, do it differently, there's two nics on the machine private and public, and thats where magic happens
[18:13]  * marcoceppi spins up an amazon instance to verify that giant text dump is true
[18:15] <automate_> marcoceppi: thanks for that explanation, that was my understanding of EIP. If www.mywebaddress.com points to 70.0.0.15 and node 1 was assigned that EIP, node 1 fails, traffic stops until the EIP is reassigned to node 2? Just checking to make sure at least I'm right on the first part
[18:20] <automate_> marcoceppi, I completely am on the same page now...I understand that it doesn't matter if juju knows the public address change right away...I had to fight that battle in my small brain
[18:20] <marcoceppi> automate_: yeah, so if the node is no longer running
[18:20] <marcoceppi> traffic will halt; after you reassign it to another node traffic will flow again after a few mins
[18:21] <automate_> So... moving on then, I have more of an haproxy specifc charm question around keepalived EIP reassign...
[18:22] <automate_> Maybe this gets into me having to customize the haproxy charm... I need a keepalived script embedded so if node one goes down then it can talk to the Amazon API to reassign the EIP to the failover node
[18:22] <marcoceppi> automate_: so, this sounds like a good case for a subordinate charm
[18:23] <marcoceppi> where you don't have to hack the existing charm, but instead build a charm that only works once attached to another running charm
[18:23] <marcoceppi> then you can amend the functionality of the service running with additional scripts, configuration, or software
[18:23] <automate_> hmmmm that sounds like the plan for us then
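A possible shape for the subordinate's failover piece, assuming keepalived plus the AWS CLI; the script itself and the instance/allocation IDs are hypothetical:

```shell
#!/bin/sh
# Hypothetical keepalived notify script: when this node transitions
# to MASTER, claim the shared EIP through the EC2 API.
# The instance and allocation IDs below are placeholders.
STATE="$3"    # keepalived passes: type, name, state
if [ "$STATE" = "MASTER" ]; then
    aws ec2 associate-address \
        --instance-id i-0123456789abcdef0 \
        --allocation-id eipalloc-0a1b2c3d \
        --allow-reassociation
fi
```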
[18:25] <automate_> So.......one more crazy question....Any charmers using puppet to deploy their apps in a charm? We use puppet enterprise, and want the ability to modify instances with the puppet master
[18:25] <marcoceppi> so, people have used chef, ansible, saltstack, and there's one or two puppet charms
[18:26] <marcoceppi> basically it's using puppet standalone to drive machine setup
[18:26] <marcoceppi> I can't think of any examples off the top of my head at the moment though
[18:26] <marcoceppi> I'd have to hunt down which charms do that
[18:27] <automate_> Use case 1: We want to control haproxy configuration with puppet...Well deploying the charm store haproxy charm doesn't allow us to really modify it with puppet
[18:36] <jose> marcoceppi: in amulet, is it possible to configure a service after deployment?
[18:37] <marcoceppi> jose: yes
[18:37] <jose> cool, thanks
[19:18] <purpledog3> jcastro: I read a good portion of the document you pointed me to: http://insights.ubuntu.com/wp-content/uploads/UCD-latest.pdf?utm_source=Ubuntu%20Cloud%20documentation%20%E2%80%93%2014.04%20LTS&utm_medium=download+link&utm_content= But it does not talk about how to setup a maas controller when you create a cluster with VM nodes that are created with virsh.
[19:25] <jose> marcoceppi: 'this video is private' :(
[19:35] <purpledog3> Following the guide http://insights.ubuntu.com/wp-content/uploads/UCD-latest.pdf?utm_source=Ubuntu%20Cloud%20documentation%20%E2%80%93%2014.04%20LTS&utm_medium=download+link&utm_content= //
[19:36] <purpledog3> juju quickstart does not exist ?
[19:37] <marcoceppi> jose: I'm uploading a new one
[19:37] <jose> oh cool
[19:37] <jose> purpledog3: afaik it's juju-quickstart
[19:38] <purpledog3> ahaha!! a bug to note and correct in the documentation!! Thanks a bunch jose!!
[19:38] <arosales> purpledog3: yes you need to have juju installed first
[19:38] <jose> np :)
[19:39] <arosales> purpledog3: if you don't have juju installed first then you will need to issue juju-quickstart, and not the juju plugin command "juju quickstart"
[19:39] <marcoceppi> purpledog3: if you're getting that error, that means you didn't install juju
[19:40] <jose> there you go :)
[19:41] <purpledog4> marcoceppi: juju is installed for sure!!
[19:42] <purpledog4> juju <enter> shows me all the help commands etc
[19:42] <marcoceppi> purpledog3: what version of juju do you have installed? `juju version`
[19:43] <purpledog4> marcoceppi: I have a maas controller I setup with 2 nodes added and each node is a VM with ubuntu running on it/
[19:43] <marcoceppi> juju quickstart should most definitely work
[19:43] <purpledog4> 1.18.4-trusty-i386
[19:43] <marcoceppi> purpledog4: can you run `juju help plugins` and report the output to paste.ubuntu.com ?
[19:43] <purpledog4> ok
[19:44] <purpledog4> http://pastebin.com/gZf9tvFU
[19:46] <purpledog4> marcoceppi: I need the maas controller to not try and commission these nodes since they already have ubuntu etc on them. What can I do?
[19:48] <marcoceppi> purpledog4: nothing really, maas will always attempt to reboot the machines and pxe boot ubuntu to install a fresh image
[19:48] <marcoceppi> that's how maas works
[19:49] <purpledog4> The VM nodes already have ubuntu running.. if it tears down those nodes I don't see how it can create them again? Those VMs are hosted on another server and created with virsh.
[19:49] <purpledog4> Does this mean adding a virsh node is no good and can't work with maas ?
[19:50] <marcoceppi> purpledog4: it's not going to tear them down
[19:51] <marcoceppi> it's going to attempt to boot them up
[19:51] <marcoceppi> and then, when it registers with maas DHCP, it will get a PXE boot image
[19:52] <purpledog4> marcoceppi: Not sure I understand, a VM node is already created, How can it reboot the VM node ? PXE is not natively supported on the hardware I work with (I think I need to get uboot to support PXE)
[19:52] <marcoceppi> purpledog4: virsh can reboot VMs
[19:52] <marcoceppi> if it can't PXE it'll just keep the same image
[19:53] <purpledog4> marcoceppi: Ok.. I have seen virsh do that.. If it can, then it is settled; I will use virsh locally to see if it can reboot the VM to get assurance it can. I know PXE is non-existent.
[19:53] <purpledog4> Thanks for your help..
[19:55] <purpledog4> BTW is maas dhcp the same as dnsmasq?
[19:56] <marcoceppi> nope, dnsmasq is for domain name resolution
[19:56] <marcoceppi> DHCP is a networking thing
[19:57] <purpledog4> So I do need to setup the maas nw interface for it I think
[19:58] <purpledog4> all this time the VM has been getting an IP address on its own network, so those have been able to ping out but nothing can reach that VM node.. so I guess that is the purpose of the MAAS dhcp I think
[20:01] <sarnold> marcoceppi: are you sure there? I wouldn't be surprised if dnsmasq is being used for dhcp
[20:01] <marcoceppi> sarnold: I thought maas was doing something differently with dhcp, I could be wrong though
[20:01] <sarnold> marcoceppi: it has been a while since I've looked..
[20:05] <purpledog4> sarnold, marcoceppi: Hmm.. after a lot of struggle to get virsh running and creating nodes, I found that if the local NW was 10.0.0.X the nodes on the VM were able to access the NW just fine.. but had IP addresses of 192.168.x.y.
[20:05] <purpledog4> the nodes could access anything outside but not vice versa, and documentation indicated that was to be expected.
[20:05] <purpledog4> It was using dns-masq etc.
[20:06] <purpledog4> sarnold: Now with maas dhcp in the mix I will need to research how the VMs get IP addresses, and I am a novice at application work.
[20:07] <purpledog4> More comfortable at the low level flipping bits/waveforms/RTL/etc etc. Thanks for the help.. really need it and appreciate it!!
[20:27] <galebba> could anyone tell me how to clear  "agent-state-info: 'hook failed: "shared-db-relation-changed"' " left over from keystone charm ?
[20:30] <tvansteenburgh> galebba: if you just want to clear it, `juju resolved`
[20:33] <galebba> awesome, thank you.. was trying this without luck: juju resolved --retry keystone/0
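For reference, the two forms tvansteenburgh is pointing at (unit name taken from galebba's environment):

```shell
# Just clear the error state on the unit:
juju resolved keystone/0
# Or clear it and re-run the failed hook:
juju resolved --retry keystone/0
```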
[21:13] <jcastro_> hey marcoceppi
[21:13] <jcastro_> so kirkland and I are doing a charm
[21:13] <jcastro_> and we fixed the bug we had in a hook
[21:13] <jcastro_> and then did `juju upgrade-charm`
[21:14] <jcastro_> and then `juju resolved --retry`
[21:14] <jcastro_> but the version of the charm on the deployed unit didn't upgrade
[21:14] <jcastro_> are we missing a step?
[21:19] <AskUbuntu> Can Ping but Cannot SSH to Openstack VM Instace | http://askubuntu.com/q/486151
[21:23] <dpb1> tvansteenburgh: hey -- around still?  Anyone I can poke to get this promulgated? https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
[21:45] <jose> dpb1: someone from the ~charmers team will have to take a look, but bear in mind that there's a queue of other things that are waiting too :)
[21:46] <dpb1> thx jose. :)
[21:50] <kirkland> marcoceppi: I'm struggling with the varnish charm...do you know anything about it?
[22:04] <jose> kirkland: what's the specific prob?
[22:04] <kirkland> jose: /etc/varnish/* isn't getting automatically updated with the related web service's hostname when making the relation
[22:05] <jose> hmm, what I see on the hooks is that /etc/varnish/default.vcl is the file that needs to be updated, you say there are no changes there?
[22:12] <kirkland> jose: right, my hostname is not inserted in the top bit
[22:12] <kirkland> jose: my service is
[22:12] <kirkland> jose: so it's at least partially run
[22:12] <kirkland> jose: hmm, maybe I need to do something more in my charm?
[22:12] <jose> kirkland: possibly. are you setting on the interface-relation-joined hook "hostname=`unit-get public-address`"?
[22:13] <jose> if that's not set, then varnish won't know the address of the unit
[22:13] <kirkland> jose: hm, no, I'm setting it in website-relation-joined
[22:13] <jose> interface is where the name of the interface goes
[22:13] <jose> and I assume you're doing `juju add-relation service:website varnish:reverseproxy`?
[22:15] <jose> kirkland: ^
[22:16] <kirkland> jose: ah, I think that's it
[22:16] <jose> let me know how that goes :)
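A minimal sketch of the hook jose describes; the hook name matches the charm's `website` relation, and the port value is an assumption:

```shell
#!/bin/sh
# website-relation-joined (sketch): publish this unit's address on the
# relation so varnish can render it into /etc/varnish/default.vcl.
# The port value here is an assumption.
relation-set hostname=$(unit-get public-address) port=80
```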
[22:56] <marcoceppi> kirkland: who wrote the charm?
[22:56] <marcoceppi> jcastro_: yeah, you can't do resolved --retry
[22:56] <marcoceppi> the upgrade-charm event is still queued
[22:56] <marcoceppi> so you don't have the new payload yet
[22:58] <jose> it's event-based, correct