[00:59] <hazmat> jose, pong
[01:04] <jose> hazmat: hey, you wrote that juju docean plugin, right?
[01:18] <hazmat> jose, yes
[01:19] <hazmat> jose, anything in particular?
[01:19] <hazmat> they've got an api v2 in beta atm, haven't looked at it
[01:20] <jose> hazmat: I have a user who is having troubles when deploying to zone nyc2
[01:20] <hazmat> jose, could you be more specific?
[01:20] <jose> and also deploying mysql on precise fails with some yaml error
[01:20] <jose> sure
[01:20] <jose> when they try and bootstrap with a constraint to create the machine in nyc2
[01:20] <hazmat> jose, hmm.. the yaml error is something i should fix.. they respun their existing images to no longer have py-yaml
[01:20] <jose> it just fails for any reason, maybe it's not getting an IP, not completing, or anything else
[01:21] <hazmat> hmm.. odd that's the default region in the plugin
[01:21]  * hazmat gives it a whirl
[01:21] <jose> lemme see if the user is around so he can join the discussion
[01:21] <jose> (and deploying mysql on trusty works fine)
[01:21] <hazmat> jose, or have him file a bug on the github issue tracker
[01:21] <jose> cool then
[02:28] <mwhudson> mwhudson@narsil:lava-dispatcher$ juju bootstrap -e manual
[02:28] <mwhudson> [sudo] password for mwh:
[02:28] <mwhudson> ERROR initialising SSH storage failed: failed to create storage dir: rc: 255 (Permission denied (publickey).)
[02:28] <mwhudson> what does that mean?
[02:35] <marcoceppi> mwhudson: you need to be root or have sudo access on the box
[02:35] <mwhudson> i do
[02:35] <mwhudson> (have sudo access)
[02:35] <marcoceppi> mwhudson: actually, it's an ssh login error
[02:35] <marcoceppi> mwhudson: is your key on the box?
[02:35] <mwhudson> yes
[02:35] <mwhudson> oh
[02:36] <mwhudson> there is already an ubuntu user on the box
[02:42] <axw> mwhudson: if you're not able to login as "ubuntu", you can set bootstrap-user
[02:42] <axw> that will be used for the initial login, and the ubuntu user's authorized_keys will be updated
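For reference, a minimal sketch of what a manual-provider entry in environments.yaml using bootstrap-user might look like (the hostname and username below are placeholders, not values from this conversation):

```yaml
environments:
  manual:
    type: manual
    # machine to bootstrap over SSH (placeholder hostname)
    bootstrap-host: somehost.example.com
    # account used for the *initial* login when you can't log in as
    # "ubuntu"; juju then updates the ubuntu user's authorized_keys
    bootstrap-user: myuser
```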
[02:43] <mwhudson> axw: i think that sort of half-happened somehow
[02:45] <axw> mwhudson: hmm. do you perhaps have an environments.yaml lying around?
[02:45] <axw> mwhudson: err, .jenv
[02:45] <axw> mwhudson: the keys should be set up when the .jenv file is created
[02:46] <mwhudson> ah
[02:46] <mwhudson> this machine has an unexpectedly hardcore ssh config
[02:46] <mwhudson> ("AllowGroup sshuser")
[02:46] <mwhudson> so juju created ubuntu but then couldn't log in
[02:47] <axw> ah
[02:48] <axw> we probably should at least add a check after creating the user that it can log in
[02:48]  * axw files a bug
[02:51] <axw> mwhudson: if you've got anything else to add, https://bugs.launchpad.net/juju-core/+bug/1333496
[02:51] <_mup_> Bug #1333496: environs/manual: ubuntu user creation should check login succeeds <manual-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1333496>
[02:52] <mwhudson> axw: i think that covers it
[02:52] <mwhudson> axw: thanks
[02:52] <axw> nps, thanks for digging
[03:04] <mwhudson> oh er
[03:05] <mwhudson> how does the manual provider decide whether to use the amd64 or x86 tools?
[03:05] <mwhudson> i have a x86 userspace but amd64 kernel
[03:05] <mwhudson> and juju chose wrong
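The mismatch mwhudson hit can be sketched as follows: if the tools architecture is derived from what the kernel reports (`uname -m`), an i386 userspace running under an amd64 kernel gets amd64 tools. The mapping below is purely illustrative, not juju's actual detection code:

```python
# Illustrative sketch: choosing a tools arch from the *kernel*
# architecture goes wrong on mixed userspace/kernel machines.
import platform

# hypothetical mapping from `uname -m` values to tools arch names
KERNEL_TO_TOOLS = {'x86_64': 'amd64', 'i686': 'i386', 'i386': 'i386'}

def tools_arch(kernel_machine=None):
    # defaults to this host's kernel arch, as `uname -m` would report it
    machine = kernel_machine or platform.machine()
    return KERNEL_TO_TOOLS.get(machine, machine)

# An amd64 kernel yields amd64 tools even if userspace is i386.
print(tools_arch('x86_64'))  # amd64
```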
[08:51] <rbasak> sinzui: please could you take another look at bug 1328958?
[10:25] <jamespage> gnuoy, is https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor/+merge/224111 actually merged?
[10:26] <gnuoy> jamespage, if you look at charm-helpers you'll see it was merged in error and reverted
[10:26] <jamespage> gnuoy, so I still need to action that right?
[10:26] <jamespage> gnuoy, can you re-propose pls
[10:27] <gnuoy> jamespage, yes pls
[10:32] <jamespage> gnuoy, I can't merge it right now - bzr just tells me its already done :-)
[10:32] <gnuoy> jamespage, ok, let me create a new mp
[10:32] <jamespage> gnuoy, you might have to push a different branch - not sure
[10:35] <jamespage> gnuoy, is this the diff - http://paste.ubuntu.com/7694495/ ?
[10:35] <gnuoy> jamespage, thats the one
[10:37] <jamespage> gnuoy, including the bit in utils?
[10:39] <gnuoy> jamespage, yes. New mp https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267
[10:43] <jamespage> gnuoy, can I have a unit test with those please :-)
[10:43] <jamespage> but other than that looks OK
[10:43] <gnuoy> sure
[10:50] <mfa298> with juju and maas is there a way to tell juju which zone within maas to either use or not use when deploying nodes?
[10:53] <jamespage> gnuoy, re determine_packages in neutron-openvswitch - I think that needs to be a list of lists of packages
[10:53] <jamespage> the reason being that if dkms is a requirement, it must be installed first, otherwise openvswitch-switch might load an ovs module that's not openstack compatible
[10:54] <jamespage> gnuoy, review line 504 of the openstack context helper
[10:54] <jamespage>         [ensure_packages(pkgs) for pkgs in self.packages]
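The "list of lists" idea above can be sketched as ordered install batches. The names `determine_packages` and `apt_install` mirror the charm-helpers conventions discussed in the channel, but this is a self-contained illustration, not the actual charm code:

```python
# Sketch of ordered package installation via install batches.
def determine_packages():
    # Each inner list is an install batch; a batch is fully installed
    # before the next one starts, so the dkms module is built before
    # openvswitch-switch tries to load it.
    return [
        ['openvswitch-datapath-dkms'],  # must come first on older kernels
        ['openvswitch-switch', 'neutron-plugin-openvswitch-agent'],
    ]

def install_in_order(batches, apt_install):
    # apt_install is injected so the ordering logic is testable
    for batch in batches:
        apt_install(batch)

if __name__ == '__main__':
    installed = []
    install_in_order(determine_packages(), installed.extend)
    print(installed)
```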
[11:32] <AskUbuntu> Unable to add nodes to ubuntu MAAS server on static vlan | http://askubuntu.com/q/487549
[13:17] <automatemecolema> found it, a charm done up all spiffy in puppet albeit very old http://bazaar.launchpad.net/~michael.nelson/charms/oneiric/apache-django-wsgi/trunk/files
[13:17] <rbasak> sinzui: re: bug 1328958, it also affects desktop users. This is a key use case for users of the local provider.
[13:18] <rbasak> sinzui: why do you restrict the bug to just cloud users?
[13:18] <sinzui> rbasak, you are the only one reporting that
[13:18] <sinzui> rbasak, I don't have an ubuntu user on my machine, neither does the developer or the qa staff or the solutions staff
[13:19] <rbasak> sinzui: so you can't reproduce? Please could you note that in the bug? Then I'll test reproducing on a desktop running VM.
[13:19] <sinzui> rbasak, the only version of ubuntu that has an ubuntu user is a cloud/server image
[13:19] <rbasak> sinzui: that's my point. I think you understand me backwards.
[13:19] <rbasak> If the ubuntu user is not present, then juju bootstrap fails.
[13:19] <rbasak> I demonstrate this by taking a cloud image and removing the ubuntu user.
[13:20] <rbasak> I originally hit this on my desktop which also has no ubuntu user.
[13:20] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267 now with added unit tests
[13:22] <sinzui> rbasak, The juju client does not require an ubuntu user. The client requires it on the machines it deploys. The client might be confused if the host machine is a server image and has an ubuntu user on it too
[13:23] <rbasak> sinzui: "The juju client does not require an ubuntu user." - that's the bug I'm reporting. It does. It happened on my desktop machine.
[13:23] <rbasak> If you think it affects only cloud images in a reproducible way, then I'll find you a reproducer for a desktop install I guess.
[13:23] <jamespage> gnuoy, merged
[13:23] <gnuoy> thanks
[13:24] <sinzui> rbasak, you are alone in your experience, so I marked the bug medium. If it affected other users and blocked them, I would mark the bug high
[13:25] <jamespage> gnuoy, hmm - I merged and then thought again
[13:25] <gnuoy> ?
[13:25] <jamespage> gnuoy, inserting rel_name='amqp', relation_prefix=None before ssl_dir is probably not a good idea
[13:26] <jamespage> if code exists which is not using ssl_dir= (i.e. positional parameters) it will break
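The hazard jamespage describes can be shown with a toy example; the function and parameter names here are invented for illustration, not the actual charm-helpers API:

```python
# Before the change: callers may pass ssl_dir positionally.
def amqp_context_old(ssl_dir=None):
    return {'ssl_dir': ssl_dir}

# After inserting rel_name/relation_prefix *before* ssl_dir, an old
# positional call silently binds its value to rel_name instead.
def amqp_context_new(rel_name='amqp', relation_prefix=None, ssl_dir=None):
    return {'rel_name': rel_name, 'ssl_dir': ssl_dir}

print(amqp_context_old('/etc/ssl')['ssl_dir'])  # /etc/ssl
print(amqp_context_new('/etc/ssl')['ssl_dir'])  # None - the caller broke
```

Keeping new keyword parameters after the existing ones (as the follow-up MP does) avoids breaking positional callers.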
[13:27] <gnuoy> jamespage, ok, I'll create a new mp with the args reordered
[13:29] <jamespage> gnuoy, url?
[13:31] <gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/fix-AMQPContext-back-compat/+merge/224294
[13:33] <jamespage> gnuoy, merged - thanks!
[13:33] <gnuoy> ta
[13:35] <natewarr> Is there some undocumented trick to getting juju+vagrant+LXC to work? I'm using precise and keep getting 'error: error executing "lxc-create":' when it tries to deploy juju-gui
[13:36] <natewarr> ...or documented, but hard to find.
[13:42] <jamespage> gnuoy, I still see one test failure on https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/neutron-refactor/+merge/224069
[13:42] <jamespage> gnuoy, and did you see my note on irc re installation order of packages for openvswitch (its above) in the context of the neutron-openvswitch charm?
[13:43] <gnuoy> jamespage, that test failure is fixed (as of about 10s ago)
[13:44] <gnuoy> jamespage, I'll take a look at neutron-openvswitch charm comment
[13:45] <jamespage> gnuoy, nova-compute and quantum-gateway merges - thanks!
[13:45] <gnuoy> \o/
[13:49] <dpb1> hi marcoceppi, jcsackett -- could I get some more review love? https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
[13:49] <marcoceppi> dpb1: it's on my list
[13:49] <marcoceppi> my short list
[13:50] <dpb1> marcoceppi: great! :)
[13:55] <natewarr> Has anyone had trouble with juju LXC on precise?
[13:55] <gnuoy> jamespage, determine_packages should return a list of lists which the install hook should iterate over passing each inner list to apt_install in turn ?
[13:55] <jamespage> gnuoy, yes
[13:55] <gnuoy> ack, ta
[13:55] <jamespage> gnuoy, ensures the dkms module gets installed prior to openvswitch-switch trying to load it
[13:55] <jamespage> required on older kernels
[13:56] <jamespage> 3.2 for example
[13:56] <jamespage> otherwise no GRE
[14:03] <automatemecolema> I have a bug problem on precise with the local provider https://gist.github.com/anonymous/9ecb23a51844627028b0 Anyone able to point out something here? Here's a bug that may be somewhat related https://bugs.launchpad.net/juju-core/+bug/1330406 ??
[14:03] <_mup_> Bug #1330406: juju deployed services to lxc containers error executing "lxc-create" with bad template: ubuntu-cloud <bootstrap> <local-provider> <lxc> <juju-core:Incomplete> <https://launchpad.net/bugs/1330406>
[14:06] <jamespage> gnuoy, just neutron-openvswitch remaining right?
[14:12] <gnuoy> jamespage, shouldn't neutron-api be in next rather than trunk ?
[14:13] <jamespage> gnuoy, as it does not have a stable, in-store charm yet, trunk is fine for now
[14:13] <gnuoy> jamespage, but its not compatible with the other trunk charms
[14:13] <jamespage> gnuoy, but its not in the store so meh
[14:14] <gnuoy> ok
[14:14] <jamespage> means its a straight promulgation at the end of cycle
[14:17] <gnuoy> jamespage, neutron-openvswitch updated and tested
[14:21] <jamespage> gnuoy, +1 pushed
[14:21] <jamespage> gnuoy, maybe they should be /next
[14:30] <automatemecolema> Anyone out there who can help figure out why I can't use the local environment on precise because of an lxc-create error? Followed the instructions to a tee.
[14:49] <jamespage> gnuoy, gah - neutron-api has a bug
[14:49] <gnuoy> jamespage, I refuse to believe that
[14:50] <jamespage> the superclass of NeutronCCContext calls "_save_flag_file"
[14:50] <jamespage> which writes out to /etc/nova
[14:50] <tvansteenburgh> automatemecolema: you might try asking in #juju-dev
[14:53] <gnuoy> jamespage, hmm, I see what you mean but I haven't hit that oddly. I'll do some digging
[15:04] <vladimiroff> Is there a way to just validate index.json and products.json files? Without running a custom cloud and stuff?
[15:05] <jamespage> gnuoy, for some reason my flake8 is ignoring the issues yours throws up on nova-compute-vmware
[15:06] <gnuoy> jamespage, what flake version do you have?
[15:07] <jamespage> gnuoy, 2.1.0
[15:07] <gnuoy> hmm, me too
[15:07] <jamespage> gnuoy, s-ok figured it out
[15:07] <gnuoy> tip top
[15:11] <jamespage> gnuoy, pip installed pep8
[15:11] <jamespage> \o/
[15:11] <jamespage> gnuoy, anyway - I tied
[15:26] <gnuoy> jamespage, +1cinder-vmware and nova-compute-vmware . Want me to push them to next ?
[15:26] <jamespage> gnuoy, nah - these go straight to stable
[15:26] <jamespage> no other changes required
[15:26] <gnuoy> sounds good
[15:28] <jamespage> gnuoy, I'll sort them now
[15:47] <jamespage> coreycb, just a couple of minor things on the keystone MP - other than that happy to merge once you have those fixed.
[15:47] <coreycb> jamespage, ok thanks!
[16:12] <loki27_> Hi all
[16:12] <loki27_> I have a question about charm deployment and juju. If I deploy the same charm on different machines (let's say mysql and rabbitmq), how will the deployment handle their relations? Will the mysql instances be standalone, or will they replicate the same data across all instances?
[16:20] <jamespage> loki27_, the honest answer is that it depends on the charm
[16:20] <jamespage> loki27_, rabbitmq will configure itself as a native RabbitMQ cluster
[16:21] <jamespage> loki27_, mysql won't
[16:22] <automatemecolema> Can you debug-hooks install?
[16:23] <loki27_> jamespage, really I could not find any HA documentation for MAAS/Juju
[16:23] <jamespage> loki27_, well its not really MAAS/Juju that's doing the HA - its the charm
[16:23] <jamespage> loki27_, and that depends on the charm author having done the right things
[16:23] <loki27_> Trying to figure the best architecture to have HA but it's hard to spot SPOF with all the magic going on..
[16:23] <jamespage> including documented stuff
[16:24] <jamespage> loki27_, with regards juju itself, the next stable (1.20/2.0) will have HA
[16:24] <jamespage> you can try this in the 1.19.x interim dev series now
[16:25] <jamespage> loki27_, MAAS - not 100% sure
[16:26] <jamespage> loki27_, I know its planned - but not certain on current status
[16:26] <jamespage> loki27_, re mysql - mysql is not natively active/active HA'able - however it's possible to use it with ceph and the hacluster charm to implement active/passive HA
[16:26] <jamespage> loki27_,
[16:26] <jamespage> loki27_, https://jujucharms.com/~openstack-charmers/trusty/percona-cluster-5/?text=percona-cluster
[16:27] <loki27_> https://wiki.ubuntu.com/ServerTeam/OpenStackHA talking about 28 servers
[16:27] <jamespage> is in the review queue - it's a drop-in replacement for mysql which is active/active
[16:27] <loki27_> Really i want to achieve 4 node HA
[16:27] <loki27_> something that is not that hard to do using Mirantis
[16:27] <jamespage> loki27_, it's possible to deploy a lot of those in LXC containers now that juju supports that
[16:28] <jamespage> loki27_, that documentation is a little out-of-date - I have it on my list to refresh
[16:31] <loki27_> ok , do you know the ETA for 1.20 ?
[16:32] <jamespage> alexisb, is there an eta for the next juju stable yet?
[16:33] <alexisb> jamespage, mostly that is dependent on when 1.19.4 (aka 1.20 release candidate) gets pushed out
[16:33] <alexisb> we are still working through critical bugs
[16:34] <loki27_> Ok , sounds like "Not that far" to me
[16:34] <alexisb> the goal is to have a weeks of exposure on 1.19.4 before we call it good for 1.20
[16:35] <alexisb> so no earlier then tuesday of next week at this point for 1.20
[16:35] <loki27_> that's pretty fast :)
[16:39] <loki27_> How do the machine numbers (#num in juju deploy charm --to #num) relate to the MAAS machine names/IPs?
[16:41] <lazypower> loki27_: the principle behind that is the machine is already under juju's control
[16:41] <lazypower> if you want to specify a specific machine in your MAAS pool that juju hasn't already registered, you'll need to look at machine tagging and using those tags as a constraint.
[16:41] <loki27_> Ok so considering (I guess) my current only node i have is the bootstrapped node, and i need to deploy to other available node from MAAS
[16:42] <lazypower> loki27_: so, given a scenario where you did juju deploy mysql,   and it allocated node3.maas as unit #1 - you would then do juju deploy cs:phpmyadmin --to 1
[16:42] <lazypower> to colocate with MySQL
[16:42] <lazypower> loki27_: if you need specific machines, use tagging. Otherwise let juju pick from the 'cloud pool' provided by maas.
[16:43] <lazypower> loki27_: i use a VMAAS setup using VM's provisioned by MAAS -- here's my setup. http://i.imgur.com/d71Nedd.png  - Node9 is the bootstrap node. Ignore the 3 running at the top as they are part of a manual provider setup.
[16:44] <lazypower> if i wanted to allocate node5 because its got 2GB of ram assigned - i use constraints to do that, or maas tagging.  (i keep saying this, but i dont really know how to use the maas tagging feature. I should read up on that)
[16:45] <loki27_> hum
[16:45] <loki27_> see http://pastebin.com/U6FFAxDu
[16:45] <loki27_> That's the deployment i was planning
[16:45] <lazypower> the --to's need to be the machine #'s in the listing
[16:46] <lazypower> i dont think juju will work if you specify the hostname.
[16:46] <loki27_> no, it won't
[16:46] <lazypower> s/juju/--to/
[16:46] <lazypower> loki27_: so what you can do is juju add-machine
[16:46] <loki27_> Can i just add the node to juju (assign to juju) without deploying anything?
[16:47] <lazypower> which will spin up the machine, and allocate it without any services. Then use that machine # in the listing.
[16:47] <loki27_> and so it would get indexed in and assigned from MAAS with its UID..
[16:47] <lazypower> yep. juju add-machine will tell maas to power on a node and install the agent.
[16:47] <loki27_> hehe
[16:47] <loki27_> ok
[16:47] <loki27_> add-machine sounds like what i need
[16:48] <lazypower> my poor maas cluster. so abused, it gets no joy of long running instances outside of my manual provider setup
[16:53] <loki27_> add-machine works just fine
[16:54] <lazypower> loki27_: so for that script to work just make sure you've got the add-machine statements above to register X - Y and sub those node names with unit #'s and you're g2g
[16:54] <loki27_> yes.. it's deploying right now :)
[16:55] <loki27_> ERROR cannot add service "nova-cloud-controller": service already exists
[16:56] <lazypower> are you trying to scale a service?
[16:56] <loki27_> well i'm trying to have that whole openstack with fail-over / clustering
[16:56] <lazypower> i see you have it listed twice, which is why it's complaining. You can only deploy a service name once - if you want 2 then you add-unit -n 1 after the deploy.
[16:57] <lazypower> or if you want a secondary independent service, give it a new name
[16:57] <lazypower> juju deploy nova-cloud-controller nova-cloud-failover -- as an example.
[16:57] <loki27_> Will that node get replicated with nova-cloud-controller at node 0 ?
[16:57] <lazypower> it would not get replicated no.
[16:58] <lazypower> sounds like you want juju add-unit <service> -n #
[16:59] <loki27_> Ok so i would deploy the node to 0, and then add-unit to 1 so it gets replicated and works in HA
[17:28] <lazypower> node0 should be your bootstrap node
[17:28] <lazypower> is that what you're intending to do?
[17:33] <marcoceppi> loki27_: what you can do is juju add-machine
[17:34] <marcoceppi> then target the machine specifically, which correlates to the maas name you want to use
[17:34] <marcoceppi> loki27_: alternatively, tag each node with a unique tag
[17:34] <marcoceppi> and use the --constraints="tag=<tag-name>" on each deploy command
[17:34] <marcoceppi> to assign that service to that machine
[17:35] <lazypower> aha!
[17:35] <lazypower> thats what i was looking for when i was talking about tagging. ty marcoceppi
[17:35] <schegi> anyone can help with the ceph charm? after deploying 3 ceph nodes ceph only return the monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication stuff.
[17:35] <lazypower> but we got him lined out with add-machine
[17:39] <marcoceppi> loki27_: I keep getting weird buffer lag in quassel, you seeing that too?
[17:40] <marcoceppi> lazypower: ^^*
[17:41] <lazypower> marcoceppi: nope. How long has it been since you cleaned up your database?
[17:42] <lazypower> i drop the logs > 2 months old on the first of every month to keep the database nice and compact.  but i'm also using SQLITE not PGSQL
[17:43] <marcoceppi> lazypower: I have never cleaned my db
[17:43] <marcoceppi> I've got like a year of history in here across a bunch of networks
[17:43]  * marcoceppi considers cleaning
[17:43] <lazypower> marcoceppi: i'm willing to bet thats why
[17:44] <lazypower> marcoceppi: maybe take a dump and stick it in your archives so you've got it, and then just start fresh?
[17:45]  * marcoceppi shrugs, maybe
[17:47] <lazypower> marcoceppi: have you had any issues with nodes getting stuck in the pending state in maas? they look like they are reaching the end of the provisioner and handing off to juju, but not completing the run
[17:47] <lazypower> i'm rounding node #5 with this issue today
[17:50] <lazypower> wait, nvm. it appears it finished. the gui was lagging behind the actual status of the environment
[19:34] <automatemecolema> Anyone with experience leveraging cloud-init with juju ??? Maybe a concept of how one might use cloud-init
[19:37] <automatemecolema> I know behind the covers juju uses cloud-init, but was wondering how I could send directives to cloud-init whilst provisioning with juju
[19:46] <lazypower> automatemecolema: thats more the responsibility of the install / config-changed hook. What are you trying to do with cloud init?
[19:46] <lazypower> wait, we spoke about this earlier didn't we - it has to do with using a repository mirror right?
[19:57] <automatemecolema> lazypower: maybe that was another guy, or my headache is ruining my day where I can't remember anything
[19:59] <lazypower> :( boo
[19:59] <automatemecolema> I'm working adamantly with another guy building a continuous delivery pipeline that involves puppet enterprise, github, jenkins, and juju tying into multiple cloud providers
[19:59] <lazypower> Ok so far so good.
[19:59] <automatemecolema> and we are thinking about rolling our own images for a couple of reasons
[20:00] <automatemecolema> one being compliance
[20:00] <automatemecolema> two being able to have a certain feature set of tools readily available at time of provisioning without over complicating the charm
[20:01] <automatemecolema> Some of our hang up is going to be around node classification, and how we might specify custom facts for each charm we roll
[20:03] <automatemecolema> two is more of a problem with inexperience of haproxy, and how we can roll new production instances during revision pushes without noticeable downtime
[20:04] <automatemecolema> scenario one being having two tomcat servers behind a set of haproxy servers, when it's time to push new code to production, we need to provision two more instances, tell haproxy to send traffic to the new instances, and then have juju decommission the old instances
[20:05] <automatemecolema> it's a pretty fantastical approach, but learning curve seems a little steep up front
[20:05] <lazypower> yeah i get what you're telling me
[20:05] <lazypower> you know, i went a different approach and put my tools in a subordinate that i deploy with all of my services on my MAAS cluster
[20:06] <lazypower> its not ideal if you need the tools up front, so i can see where you're looking at cloud-init
[20:06] <lazypower> and prebaked images
[20:06] <lazypower> automatemecolema: so as i understand cloud-init, you can populate these with user-data scripts
[20:07] <automatemecolema> correct
[20:07] <lazypower> https://help.ubuntu.com/community/CloudInit - this is the document i reference when talking to people who are new to cloud-init. I'm fairly certain there are more exhaustive rsources - but this is a good place to get you started.
[20:08] <automatemecolema> yep I've read through that
[20:08] <lazypower> instead of moving to the 'golden image' approach - cloud init is a great way to get you moving with those tools provided at time of cloud boot instead of pressing the image and crossing fingers you dont need to provide rolling updates across your network.
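As a concrete, hypothetical example of the user-data approach being discussed, a `#cloud-config` document can preinstall a tool set at first boot instead of baking it into a golden image (package names and paths below are illustrative):

```yaml
#cloud-config
# Hypothetical user-data: install the team's tools at first boot
# rather than pressing a custom image.
package_update: true
packages:
  - git
  - puppet
  - htop
runcmd:
  - [sh, -c, 'echo "tooling installed" >> /var/log/first-boot.log']
```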
[20:08] <lazypower> this is why i caution against running home rolled images. it's not a problem per se if you're aware of the shortcomings. You mentioned compliance so you may *need* to go that route, in which case, there's no problem doing that. I'm fairly certain you can specify AMI's
[20:09] <lazypower> i'd need to double check for you, but i think its a constraint you can pass juju
[20:10] <automatemecolema> hmmm lazypower: yea I think rolling images is maybe a last resort for compliance
[20:11] <lazypower> then my suggested thought is to build a user data script. From there I need to see if you can stuff those in constraints / the provider configuration
[20:12] <lazypower> asking for you now. it may take a bit - its getting late in the day for core devs
[20:12] <lazypower> basically middle of the transition of people coming on/off watch :)
[20:18] <automatemecolema> hmmm let me mull this over with my counterpart and see what his take is
[20:19] <lazypower> automatemecolema: ok. if your status changes just ping me and i'll adjust the questions as required.
[20:19] <lazypower> alternatively you can join #juju-dev and follow along.
[20:19] <automatemecolema> lazypower: sounds like a plan, thanks for your help
[20:19] <automatemecolema> I'm in the juju-dev chat as well
[20:22] <lazypower> automatemecolema: looks like i'm remembering our python days - we dont support custom AMI's as of the Go port unless something has changed and this answer needs updating - http://askubuntu.com/questions/84333/how-do-i-use-a-specific-ami-for-juju-instances
[20:23] <lazypower> automatemecolema: so looks like my subordinate to populate my tool chain was the way to go in the current implementation of juju.
[20:23] <lazypower> automatemecolema: its not ideal, especially if you depend on the tools being present for the charm's installation. The alternative would be to version that routine out into its own script, and then import/clone that in your charm's installation hooks.
[20:23] <lazypower> so you make sure they are present when the install hook is done, then do your application heavy lifting in config-changed.
[20:24] <sarnold> 'course those answers are over a year old. it feels like something that ought to have been done by now, no?
[20:24] <automatemecolema> Yea...reviewing that kb...curious if anything has changed on that front
[20:28] <lazypower> sarnold: nate confirmed its unimplemented. i would imagine it caused some headache somewhere?
[20:28] <sarnold> lazypower: odd.
[20:29] <automatemecolema> lazypower: so here's a question my counterpart posed about rolling our own images, one being a dev to production issue with having a different image in dev than in prod
[20:30] <lazypower> sarnold: it may be one of those things, that unless it gets a sizeable amount of community want, it will remain unimplemented. I know we are currently hacking on HA as that's one of the highest requested features.
[20:30] <lazypower> automatemecolema: yeah i'm not a huge fan of the golden image approach
[20:31] <automatemecolema> lazypower: if we want the same testing results in dev, test, stage, and then to production, how can we guarantee that what worked in dev will work in prod?
[20:31] <sarnold> lazypower: yeah, and it could just be that simplestreams makes doing something significantly better significantly easier, but I just don't know how to phrase the question in that case :)
[20:31] <lazypower> automatemecolema: at the end of the day, you have root in charms. so you can do whatever unholyness you need to accomplish the goal. My suggestion would be to build a sub and attach it to your deployed units. You can be responsive to what happens based on the relationship created.
[20:31] <lazypower> that way you get a consistent output in dev/staging/production
[20:31] <lazypower> you're not mucking with the base cloud image before juju touches it.
[20:32] <lazypower> and if you need something as a dependency. it should probably live in the install hook of the charm anyway
[20:32] <lazypower> but may it never be said that i stifle innovation. If you guys wanted to hack on cloud-init and we gave you those details, by all means - go forth and conquer
[20:34] <lazypower> sarnold: oh yeah - simplestreams. i forgot all about the fact we have a tutorial as of UOS on how to run your own simplestreams.
[20:34]  * lazypower makes a note to go back and watch that session
[20:35] <automatemecolema> lazypower: maybe I'm confused about how you suggest using subordinates. I'm just curious how one would handle it if today juju provisioned an instance with trusty 14.04.1 and tomorrow it rolls 14.04.2. Dev lives on 14.04.1, but we can't be sure it'll run on 14.04.2 ??
[20:36] <lazypower> automatemecolema: when you deploy on 14.04.1 its running 14.04.1 unless you run the dist-upgrade on your prod stack.
[20:37] <lazypower> automatemecolema: so the idea would be to not deploy 14.04.2 in staging if you're on 14.04.1 in your production env. When that occurs, you run rolling upgrades. In terms of services and how we orchestrate, servers are more like cattle than they are pets.
[20:37] <automatemecolema> yes, but if I do a juju deploy haproxy and it rolls out haproxy on 14.04.1, how can I control if tomorrow 14.04.2 is released and I do a juju deploy haproxy that I won't get 14.04.2 ??
[20:37] <lazypower> hmm. thats a good question. default-series only ensures you're within the 12.x and 14.x series presently
[20:38] <lazypower> we dont typically push breaking changes like what we're suggesting in an LTS though. thats reserved for the non-LTS releases.
[20:38] <lazypower> eg: apache2 between precise => trusty - config file structure changed. it takes a series jump to run into that. not a minor revision.
[20:43] <lazypower> automatemecolema: i dont have a good answer for you about lockstepping minor revisions.
[20:57] <automatemecolem_> dang thing kicked me  off of freenode...
[20:58] <lazypower> boo. did you get any of our conversation?
[20:59] <automatemecolema> awry irc
[21:03] <automatemecolema> so here's something to throw out there, how would one build a paas solution leveraging juju. Just thinking about concepts etc etc.
[21:04] <lazypower> automatemecolema: Juju can be wrestled into a paas, but its intended to orchestrate solutions. We are actively working through charming up Cloud Foundry to provide a PAAS solution
[21:07] <lazypower> but if you want to do continuous delivery. There's 2 methods i am aware of
[21:07] <lazypower> 1 is git based delivery. You can either place git checkouts in your config-changed hook and execute it from your CI when the build passes, or when juju actions lands - build an action for it.
[21:08] <lazypower> the other is charm building in CI, and increment - charm-upgrade your deployment.
[21:08] <lazypower> The path is to add a files/ directory in your charm for example, and do local installation of your artifacts by copying them where they need to go if they exist, or checkout from version control / upstream.
[21:10] <lazypower> I'm doing method A with some rails deployments I have in the wild, and it works reasonably well. I gate in CI, and only master is ever deployed - to staging. Production is gated manually and has a revision flag on the charm that I populate.  (but now that i think about it, i  can just juju-set from CI and achieve the same goal in staging...)
[21:11] <automatemecolema> given a few things to think about.
[21:11] <automatemecolema> me*
[21:21] <lazypower> automatemecolema: feel free to ping if you need anything. I'm going to EOD
[21:22] <automatemecolema> lazypower: thanks for your time sir
[22:46] <Delair> Hi All, .. Can somebody please tell me if quantum-gateway is the same as neutron-gateway when installing using juju
[23:13] <Delair> how do i know what is the default openstack origin type when installing openstack using juju