[00:59] jose, pong
[01:04] hazmat: hey, you wrote that juju docean plugin, right?
[01:18] jose, yes
[01:19] jose, anything in particular?
[01:19] they've got an api v2 in beta atm, haven't looked at it
[01:20] hazmat: I have a user who is having trouble when deploying to zone nyc2
[01:20] jose, could you be more specific?
[01:20] and also deploying mysql on precise fails with some yaml error
[01:20] sure
[01:20] when they try to bootstrap with a constraint to create the machine in nyc2
[01:20] jose, hmm.. the yaml error is something i should fix.. they respun their existing images to no longer have py-yaml
[01:20] it just fails for various reasons, maybe not getting an IP, not completing, or anything else
[01:21] hmm.. odd, that's the default region in the plugin
[01:21] * hazmat gives it a whirl
[01:21] lemme see if the user is around so he can join the discussion
[01:21] (and deploying mysql on trusty works fine)
[01:21] jose, or have him file a bug on the github issue tracker
[01:21] cool then
=== wallyworld_ is now known as wallyworld
[02:28] mwhudson@narsil:lava-dispatcher$ juju bootstrap -e manual
[02:28] [sudo] password for mwh:
[02:28] ERROR initialising SSH storage failed: failed to create storage dir: rc: 255 (Permission denied (publickey).)
[02:28] what does that mean?
[02:35] mwhudson: you need to be root or have sudo access on the box
[02:35] i do
[02:35] (have sudo access)
[02:35] mwhudson: actually, it's an ssh login error
[02:35] mwhudson: is your key on the box?
[02:35] yes
[02:35] oh
[02:36] there is already an ubuntu user on the box
[02:42] mwhudson: if you're not able to log in as "ubuntu", you can set bootstrap-user
[02:42] that will be used for the initial login, and the ubuntu user's authorized_keys will be updated
[02:43] axw: i think that sort of half-happened somehow
[02:45] mwhudson: hmm. do you perhaps have an environments.yaml lying around?
[02:45] mwhudson: err, .jenv
[02:45] mwhudson: the keys should be set up when the .jenv file is created
[02:46] ah
[02:46] this machine has an unexpectedly hardcore ssh config
[02:46] ("AllowGroup sshuser")
[02:46] so juju created ubuntu but then couldn't log in
[02:47] ah
[02:48] we probably should at least add a check after creating the user that it can log in
[02:48] * axw files a bug
[02:51] mwhudson: if you've got anything else to add, https://bugs.launchpad.net/juju-core/+bug/1333496
[02:51] <_mup_> Bug #1333496: environs/manual: ubuntu user creation should check login succeeds
[02:52] axw: i think that covers it
[02:52] axw: thanks
[02:52] nps, thanks for digging
[03:04] oh er
[03:05] how does the manual provider decide whether to use the amd64 or x86 tools?
[03:05] i have an x86 userspace but an amd64 kernel
[03:05] and juju chose wrong
=== scuttle|afk is now known as scuttlemonkey
=== scuttlemonkey is now known as scuttle|afk
=== CyberJacob|Away is now known as CyberJacob
=== vladk|offline is now known as vladk
=== CyberJacob is now known as CyberJacob|Away
=== roadmr is now known as roadmr_afk
=== mwhudson is now known as mwhudson-
=== mwhudson is now known as mwhudson-bip
=== mwhudson-bip is now known as mwhudson
[08:51] sinzui: please could you take another look at bug 1328958?
=== roadmr_afk is now known as roadmr
=== vladk is now known as vladk|offline
=== mwhudson is now known as mwhudson-bip
[10:25] gnuoy, is https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor/+merge/224111 actually merged?
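The bootstrap-user setting axw mentions above lives in the manual-provider section of environments.yaml. A minimal sketch, assuming an illustrative host address and login name:

    environments:
      manual:
        type: manual
        bootstrap-host: 192.0.2.10   # illustrative address of the target machine
        bootstrap-user: mwh          # illustrative existing account for the first login
    # after that first login, juju creates the ubuntu user and updates its
    # authorized_keys, as described in the exchange above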
[10:26] jamespage, if you look at charm-helpers you'll see it was merged in error and reverted
[10:26] gnuoy, so I still need to action that right?
[10:26] gnuoy, can you re-propose pls
[10:27] jamespage, yes pls
=== roadmr is now known as roadmr_afk
[10:32] gnuoy, I can't merge it right now - bzr just tells me it's already done :-)
[10:32] jamespage, ok, let me create a new MP
[10:32] gnuoy, you might have to push a different branch - not sure
=== vladk|offline is now known as vladk
[10:35] gnuoy, is this the diff - http://paste.ubuntu.com/7694495/ ?
[10:35] jamespage, that's the one
[10:37] gnuoy, including the bit in utils?
[10:39] jamespage, yes. New MP https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267
[10:43] gnuoy, can I have a unit test with those please :-)
[10:43] but other than that looks OK
[10:43] sure
[10:50] with juju and maas is there a way to tell juju which zone within maas to either use or not use when deploying nodes?
[10:53] gnuoy, re determine_packages in neutron-openvswitch - I think that needs to be a list of lists of packages
[10:53] the reason being that if dkms is a requirement, that must be installed first, otherwise openvswitch-switch might load an ovs module that's not openstack compatible
[10:54] gnuoy, review line 504 of the openstack context helper
[10:54] [ensure_packages(pkgs) for pkgs in self.packages]
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
=== vladk is now known as vladk|offline
=== vladk|offline is now known as vladk
[11:32] Unable to add nodes to ubuntu MAAS server on static vlan | http://askubuntu.com/q/487549
=== roadmr_afk is now known as roadmr
=== scuttle|afk is now known as scuttlemonkey
=== roadmr is now known as roadmr_afk
=== vladk is now known as vladk|offline
=== roadmr_afk is now known as roadmr
[13:17] found it, a charm done up all spiffy in puppet albeit very old http://bazaar.launchpad.net/~michael.nelson/charms/oneiric/apache-django-wsgi/trunk/files
[13:17] sinzui: re: bug 1328958, it also affects desktop users. This is a key use case for users of the local provider.
=== vladk|offline is now known as vladk
[13:18] sinzui: why do you restrict the bug to just cloud users?
[13:18] rbasak, you are the only one reporting that
[13:18] rbasak, I don't have an ubuntu user on my machine, neither does the developer or the qa staff or the solutions staff
[13:19] sinzui: so you can't reproduce? Please could you note that in the bug? Then I'll test reproducing on a desktop VM.
[13:19] rbasak, the only version of ubuntu that has an ubuntu user is a cloud/server image
[13:19] sinzui: that's my point. I think you understand me backwards.
[13:19] If the ubuntu user is not present, then juju bootstrap fails.
[13:19] I demonstrate this by taking a cloud image and removing the ubuntu user.
[13:20] I originally hit this on my desktop which also has no ubuntu user.
[13:20] jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267 now with added unit tests
[13:22] rbasak, The juju client does not require an ubuntu user. The client requires it on the machines it deploys. The client might be confused if the host machine is a server image and has ubuntu as the user on it too
[13:23] sinzui: "The juju client does not require an ubuntu user." - that's the bug I'm reporting. It does. It happened on my desktop machine.
[13:23] If you think it affects only cloud images in a reproducible way, then I'll find you a reproducer for a desktop install I guess.
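The list-of-lists scheme jamespage asks for above could look roughly like this on the charm side. A sketch only: the package names are illustrative, apt_install is the charm-helpers call the install hook is later agreed to use, and the loop mirrors the context-helper comprehension quoted at [10:54]:

    from charmhelpers.fetch import apt_install

    def determine_packages():
        # each inner list is one apt transaction, installed in order, so the
        # dkms module is built before openvswitch-switch can load an ovs
        # module that isn't openstack compatible
        return [
            ['openvswitch-datapath-dkms'],
            ['openvswitch-switch', 'neutron-plugin-openvswitch-agent'],
        ]

    # in the install hook: iterate, passing each inner list in turn
    for pkgs in determine_packages():
        apt_install(pkgs, fatal=True)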
[13:23] gnuoy, merged
[13:23] thanks
[13:24] rbasak, you are alone in your experience, so I marked the bug medium. If it affected other users and blocked them, I would mark the bug high
[13:25] gnuoy, hmm - I merged and then thought again
[13:25] ?
[13:25] gnuoy, inserting rel_name='amqp', relation_prefix=None before ssl_dir is probably not a good idea
[13:26] if code exists which is not using ssl_dir= (i.e. positional parameters) it will break
[13:27] jamespage, ok, I'll create a new MP with the args reordered
[13:29] gnuoy, url?
[13:31] jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/fix-AMQPContext-back-compat/+merge/224294
[13:33] gnuoy, merged - thanks!
[13:33] ta
[13:35] Is there some undocumented trick to getting juju+vagrant+LXC to work? I'm using precise and keep getting 'error: error executing "lxc-create":' when it tries to deploy juju-gui
[13:36] ...or documented, but hard to find.
[13:42] gnuoy, I still see one test failure on https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/neutron-refactor/+merge/224069
[13:42] gnuoy, and did you see my note on irc re installation order of packages for openvswitch (it's above) in the context of the neutron-openvswitch charm?
[13:43] jamespage, that test failure is fixed (as of about 10s ago)
[13:44] jamespage, I'll take a look at the neutron-openvswitch charm comment
[13:45] gnuoy, nova-compute and quantum-gateway merges - thanks!
[13:45] \o/
[13:49] hi marcoceppi, jcsackett -- could I get some more review love? https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
[13:49] dpb1: it's on my list
[13:49] my short list
[13:50] marcoceppi: great! :)
[13:55] Has anyone had trouble with juju LXC on precise?
[13:55] jamespage, determine_packages should return a list of lists which the install hook should iterate over, passing each inner list to apt_install in turn?
[13:55] gnuoy, yes
[13:55] ack, ta
[13:55] gnuoy, ensures the dkms module gets installed prior to openvswitch-switch trying to load it
[13:55] required on older kernels
[13:56] 3.2 for example
[13:56] otherwise no GRE
[14:03] I have a problem on precise with the local provider https://gist.github.com/anonymous/9ecb23a51844627028b0 Anyone able to point out something here? Here's a bug that may be somewhat related https://bugs.launchpad.net/juju-core/+bug/1330406 ??
[14:03] <_mup_> Bug #1330406: juju deployed services to lxc containers error executing "lxc-create" with bad template: ubuntu-cloud
=== scuttlemonkey is now known as scuttle|afk
=== scuttle|afk is now known as scuttlemonkey
[14:06] gnuoy, just neutron-openvswitch remaining right?
[14:12] jamespage, shouldn't neutron-api be in next rather than trunk?
[14:13] gnuoy, as it does not have a stable, in-store charm yet, trunk is fine for now
[14:13] jamespage, but it's not compatible with the other trunk charms
[14:13] gnuoy, but it's not in the store so meh
[14:14] ok
[14:14] means it's a straight promulgation at the end of the cycle
[14:17] jamespage, neutron-openvswitch updated and tested
[14:21] gnuoy, +1 pushed
[14:21] gnuoy, maybe they should be /next
[14:30] Anyone out there that can help figure out why I can't use the local environment on precise because of an lxc-create error? Followed the instructions to a tee.
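The back-compat concern jamespage raises at [13:25] is the classic positional-argument trap. A minimal sketch of the failure mode, with class bodies simplified to just the signatures (this is not the actual charm-helpers code):

    # original signature: existing callers may pass ssl_dir positionally
    class AMQPContextOld(object):
        def __init__(self, ssl_dir=None):
            self.ssl_dir = ssl_dir

    # new arguments inserted *before* ssl_dir shift the positional order
    class AMQPContextBroken(object):
        def __init__(self, rel_name='amqp', relation_prefix=None, ssl_dir=None):
            self.rel_name, self.ssl_dir = rel_name, ssl_dir

    # the merged fix appends the new arguments after ssl_dir instead
    class AMQPContextFixed(object):
        def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
            self.rel_name, self.ssl_dir = rel_name, ssl_dir

    print(AMQPContextOld('/etc/ssl/rabbit').ssl_dir)     # /etc/ssl/rabbit
    print(AMQPContextBroken('/etc/ssl/rabbit').ssl_dir)  # None - caller silently broken
    print(AMQPContextFixed('/etc/ssl/rabbit').ssl_dir)   # /etc/ssl/rabbit again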
=== natewarr_ is now known as natewarr
[14:49] gnuoy, gah - neutron-api has a bug
[14:49] jamespage, I refuse to believe that
[14:50] the superclass of NeutronCCContext calls "_save_flag_file"
[14:50] which writes out to /etc/nova
[14:50] automatemecolema: you might try asking in #juju-dev
[14:53] jamespage, hmm, I see what you mean but oddly I haven't hit that. I'll do some digging
[15:04] Is there a way to just validate index.json and products.json files? Without running a custom cloud and stuff?
[15:05] gnuoy, for some reason my flake8 is ignoring the issues yours throws up on nova-compute-vmware
[15:06] jamespage, what flake8 version do you have?
[15:07] gnuoy, 2.1.0
[15:07] hmm, me too
[15:07] gnuoy, s'ok, figured it out
[15:07] tip top
[15:11] gnuoy, pip installed pep8
[15:11] \o/
[15:11] gnuoy, anyway - I tied
=== boci^ is now known as b0c1
[15:26] jamespage, +1 cinder-vmware and nova-compute-vmware. Want me to push them to next?
[15:26] gnuoy, nah - these go straight to stable
[15:26] no other changes required
[15:26] sounds good
[15:28] gnuoy, I'll sort them now
[15:47] coreycb, just a couple of minor things on the keystone MP - other than that, happy to merge once you have those fixed.
[15:47] jamespage, ok thanks!
=== vladk is now known as vladk|offline
=== roadmr is now known as roadmr_afk
[16:12] Hi all
[16:12] I have a question about charm deployment and juju.. If i deploy the same charm on different machines (let's say mysql and rabbitMQ), how will the deployment handle their relation - will the mysql instances be standalone, or are they going to replicate the same data through all instances?
[16:20] loki27_, the honest answer is that it depends on the charm
[16:20] loki27_, rabbitmq will configure itself as a native RabbitMQ cluster
[16:21] loki27_, mysql won't
[16:22] Can you debug-hooks install?
[16:23] jamespage really i could not find any HA documentation for MAAS/Juju
[16:23] loki27_, well it's not really MAAS/Juju that's doing the HA - it's the charm
[16:23] loki27_, and that depends on the charm author having done the right things
[16:23] Trying to figure out the best architecture to have HA, but it's hard to spot SPOFs with all the magic going on..
[16:23] including documented stuff
[16:24] loki27_, with regard to juju itself, the next stable (1.20/2.0) will have HA
[16:24] you can try this in the 1.19.x interim dev series now
[16:25] loki27_, MAAS - not 100% sure
[16:26] loki27_, I know it's planned - but not certain on current status
[16:26] loki27_, re mysql - mysql is not natively active/active HA'able - however it's possible to use it with ceph and the hacluster charm to implement active/passive HA
[16:26] loki27_,
[16:26] loki27_, https://jujucharms.com/~openstack-charmers/trusty/percona-cluster-5/?text=percona-cluster
[16:27] https://wiki.ubuntu.com/ServerTeam/OpenStackHA talking about 28 servers
[16:27] is in the review queue - it's a drop-in replacement for mysql which is active/active
[16:27] Really i want to achieve 4-node HA
[16:27] something that is not that hard to do using Mirantis
[16:27] loki27_, it's possible to deploy a lot of those in LXC containers now that juju supports that
[16:28] loki27_, that documentation is a little out of date - I have it on my list to refresh
[16:31] ok, do you know the ETA for 1.20?
[16:32] alexisb, is there an eta for the next juju stable yet?
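For a concrete picture of the container placement jamespage mentions, a percona-cluster deployment might be spread across LXC containers on existing machines along these lines. A sketch only: machine numbers are illustrative, and since the charm was still in the review queue at the time, the exact store URL may differ:

    juju deploy cs:~openstack-charmers/trusty/percona-cluster --to lxc:1
    juju add-unit percona-cluster --to lxc:2
    juju add-unit percona-cluster --to lxc:3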
[16:33] jamespage, mostly that is dependent on when 1.19.4 (aka the 1.20 release candidate) gets pushed out
[16:33] we are still working through critical bugs
[16:34] Ok, sounds like "not that far" to me
[16:34] the goal is to have a week of exposure on 1.19.4 before we call it good for 1.20
[16:35] so no earlier than tuesday of next week at this point for 1.20
[16:35] that's pretty fast :)
[16:39] How do the machine numbers (the #num in juju deploy charm --to #num) relate to the MAAS machine names/IPs?
[16:41] loki27_: the principle behind that is the machine is already under juju's control
[16:41] if you want to specify a specific machine in your MAAS pool that juju hasn't already registered, you'll need to look at machine tagging and using those tags as a constraint.
[16:41] Ok so considering (I guess) the only node i currently have is the bootstrapped node, i need to deploy to the other available nodes from MAAS
[16:42] loki27_: so, given a scenario where you did juju deploy mysql, and it allocated node3.maas as unit #1 - you would then do juju deploy cs:phpmyadmin --to 1
[16:42] to colocate with MySQL
[16:42] loki27_: if you need specific machines, use tagging. Otherwise let juju pick from the 'cloud pool' provided by maas.
[16:43] loki27_: i use a VMAAS setup using VMs provisioned by MAAS -- here's my setup. http://i.imgur.com/d71Nedd.png - Node9 is the bootstrap node. Ignore the 3 running at the top as they are part of a manual provider setup.
[16:44] if i wanted to allocate node5 because it's got 2GB of ram assigned - i use constraints to do that, or maas tagging. (i keep saying this, but i don't really know how to use the maas tagging feature. I should read up on that)
[16:45] hum
[16:45] see http://pastebin.com/U6FFAxDu
[16:45] That's the deployment i was planning
[16:45] the --to's need to be the machine #'s in the listing
[16:46] i don't think juju will work if you specify the hostname.
[16:46] no, it won't
[16:46] s/juju/--to/
[16:46] loki27_: so what you can do is juju add-machine
[16:46] Can i just add the node to juju (assign it to juju) without deploying anything?
[16:47] which will spin up the machine, and allocate it without any services. Then use that machine # in the listing.
[16:47] and so it would get indexed in and assigned from MAAS with its UID..
[16:47] yep. juju add-machine will tell maas to power on a node and install the agent.
[16:47] hehe
[16:47] ok
[16:47] add-machine sounds like what i need
[16:48] my poor maas cluster. so abused, it gets no joy of long-running instances outside of my manual provider setup
[16:53] add-machine works just fine
[16:54] loki27_: so for that script to work just make sure you've got the add-machine statements above to register X - Y, and sub those node names with unit #'s and you're g2g
[16:54] yes.. it's deploying right now :)
[16:55] ERROR cannot add service "nova-cloud-controller": service already exists
[16:56] are you trying to scale a service?
[16:56] well i'm trying to have that whole openstack with fail-over / clustering
[16:56] i see you have it listed twice, which is why it's complaining. You can only deploy a service name once - if you want 2 then you add-unit -n 1 after the deploy.
[16:57] or if you want a secondary independent service, give it a new name
[16:57] juju deploy nova-cloud-controller nova-cloud-failover -- as an example.
[16:57] Will that node get replicated with nova-cloud-controller at node 0?
[16:57] it would not get replicated, no.
[16:58] sounds like you want juju add-unit -n #
[16:59] Ok so i would deploy the node to 0, and then add-unit to 1 so it gets replicated and works in HA
[17:28] node0 should be your bootstrap node
[17:28] is that what you're intending to do?
=== CyberJacob|Away is now known as CyberJacob
[17:33] loki27_: what you can do is juju add-machine
[17:34] then target the machine specifically which correlates to the maas name you want to use
[17:34] loki27_: alternatively, tag each node with a unique tag
[17:34] and use the --constraints="tag= to assign that service to that machine
[17:35] aha!
[17:35] that's what i was looking for when i was talking about tagging. ty marcoceppi
[17:35] can anyone help with the ceph charm? after deploying 3 ceph nodes ceph only returns the monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication stuff.
[17:35] but we got him lined out with add-machine
[17:39] loki27_: I keep getting weird buffer lag in quassel, you seeing that too?
[17:40] lazypower: ^^*
[17:41] marcoceppi: nope. How long has it been since you cleaned up your database?
[17:42] i drop the logs > 2 months old on the first of every month to keep the database nice and compact. but i'm also using SQLITE not PGSQL
[17:43] lazypower: I have never cleaned my db
[17:43] I've got like a year of history in here across a bunch of networks
[17:43] * marcoceppi considers cleaning
[17:43] marcoceppi: i'm willing to bet that's why
[17:44] marcoceppi: maybe take a dump and stick it in your archives so you've got it, and then just start fresh?
[17:45] * marcoceppi shrugs, maybe
[17:47] marcoceppi: have you had any issues with nodes getting stuck in the pending state in maas? they look like they are reaching the end of the provisioner and handing off to juju, but not completing the run
[17:47] i'm rounding node #5 with this issue today
[17:50] wait, nvm. it appears it finished. the gui was lagging behind the actual status of the environment
=== CyberJacob is now known as CyberJacob|Away
[19:34] Anyone with experience leveraging cloud-init with juju??? Maybe a concept of how one might use cloud-init
[19:37] I know behind the covers juju uses cloud-init, but was wondering how I could send directives to cloud-init whilst provisioning with juju
[19:46] automatemecolema: that's more the responsibility of the install / config-changed hook. What are you trying to do with cloud-init?
[19:46] wait, we spoke about this earlier didn't we - it has to do with using a repository mirror right?
[19:57] lazypower: maybe that was another guy, or my headache is ruining my day where I can't remember anything
[19:59] :( boo
[19:59] I'm working adamantly with another guy building a continuous delivery pipeline that involves puppet enterprise, github, jenkins, and juju tying into multiple cloud providers
[19:59] Ok so far so good.
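For background on the cloud-init question: directives are normally handed to cloud-init as a user-data document, most often cloud-config YAML read once at first boot. A minimal sketch - the package and command are purely illustrative, and note that juju generates its own user-data during provisioning, which is why the discussion that follows steers toward hooks instead:

    #cloud-config
    package_update: true
    packages:
      - puppet        # illustrative: pre-seed tooling at first boot
    runcmd:
      - [sh, -c, "echo provisioned via cloud-init >> /var/log/provision.log"]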
[19:59] and we are thinking about rolling our own images
[20:00] one being compliance
[20:00] two being able to have a certain feature set of tools readily available at time of provisioning without over-complicating the charm
[20:01] Some of our hang-up is going to be around node classification, and how we might specify custom facts for each charm we roll
[20:03] two is more of a problem with inexperience with haproxy, and how we can roll new production instances during revision pushes without noticeable downtime
[20:04] scenario one being having two tomcat servers behind a set of haproxy servers; when it's time to push new code to production, we need to provision two more instances, tell haproxy to send traffic to the new instances, and then have juju decommission the old instances
[20:05] it's a pretty fantastical approach, but the learning curve seems a little steep up front
[20:05] yeah i get what you're telling me
[20:05] you know, i went a different approach and put my tools in a subordinate that i deploy with all of my services on my MAAS cluster
[20:06] it's not ideal if you need the tools up front, so i can see where you're looking at cloud-init
[20:06] and prebaked images
[20:06] automatemecolema: so as i understand cloud-init, you can populate these with user-data scripts
[20:07] correct
[20:07] https://help.ubuntu.com/community/CloudInit - this is the document i reference when talking to people who are new to cloud-init. I'm fairly certain there are more exhaustive resources - but this is a good place to get you started.
[20:08] yep I've read through that
[20:08] instead of moving to the 'golden image' approach - cloud-init is a great way to get you moving with those tools provided at time of cloud boot, instead of pressing the image and crossing fingers you don't need to provide rolling updates across your network.
[20:08] this is why i caution against running home-rolled images. it's not a problem per se if you're aware of the shortcomings. You mentioned compliance so you may *need* to go that route, in which case, there's no problem doing that. I'm fairly certain you can specify AMIs
[20:09] i'd need to double-check for you, but i think it's a constraint you can pass juju
[20:10] hmmm lazypower: yea I think rolling images is maybe a last resort for compliance
[20:11] then my suggested thought is to build a user-data script. From there I need to see if you can stuff those in constraints / the provider configuration
[20:12] asking for you now. it may take a bit - it's getting late in the day for core devs
[20:12] basically middle of the transition of people coming on/off watch :)
[20:18] hmmm let me mull this over with my counterpart and see what his take is
[20:19] automatemecolema: ok. if your status changes just ping me and i'll adjust the questions as required.
[20:19] alternatively you can join #juju-dev and follow along.
[20:19] lazypower: sounds like a plan, thanks for your help
[20:19] I'm in the juju-dev chat as well
[20:22] automatemecolema: looks like i'm remembering our python days - we don't support custom AMIs as of the Go port, unless something has changed and this answer needs updating - http://askubuntu.com/questions/84333/how-do-i-use-a-specific-ami-for-juju-instances
[20:23] automatemecolema: so it looks like my subordinate to populate my tool chain was the way to go in the current implementation of juju.
[20:23] automatemecolema: it's not ideal, especially if you depend on the tools to be present for the charm's installation.
The alternative would be to version that routine out into its own script, and then import/clone that in your charm's installation hooks.
[20:23] so you make sure they are present when the install hook is done, then do your application heavy lifting in config-changed.
[20:24] 'course those answers are over a year old. it feels like something that ought to have been done by now, no?
[20:24] Yea...reviewing that kb...curious if anything has changed on that front
[20:28] sarnold: nate confirmed it's unimplemented. i would imagine it caused some headache somewhere?
[20:28] lazypower: odd.
[20:29] lazypower: so here's a question my counterpart posed about rolling our own images, one being a dev-to-production issue with having a different image in dev than in prod
[20:30] sarnold: it may be one of those things that, unless it gets a sizeable amount of community want, will remain unimplemented. I know we are currently hacking on HA as that's one of the most requested features.
[20:30] automatemecolema: yeah i'm not a huge fan of the golden image approach
[20:31] lazypower: if we want the same testing results in dev, test, stage, and then production, how can we guarantee that what worked in dev will work in prod?
[20:31] lazypower: yeah, and it could just be that simplestreams makes doing something significantly better significantly easier, but I just don't know how to phrase the question in that case :)
[20:31] automatemecolema: at the end of the day, you have root in charms. so you can do whatever unholiness you need to accomplish the goal. My suggestion would be to build a sub and attach it to your deployed units. You can be responsive to what happens based on the relationship created.
[20:31] that way you get a consistent output in dev/staging/production
[20:31] you're not muxing with the base cloud image before juju touches it.
[20:32] and if you need something as a dependency, it should probably live in the install hook of the charm anyway
[20:32] but may it never be said that i stifle innovation. If you guys wanted to hack on cloud-init and we gave you those details, by all means - go forth and conquer
[20:34] sarnold: oh yeah - simplestreams. i forgot all about the fact we have a tutorial as of UOS on how to run your own simplestreams.
[20:34] * lazypower makes a note to go back and watch that session
[20:35] lazypower: maybe I'm confused about how you suggest using subordinates. I'm just curious what happens if today juju provisioned an instance with trusty 14.04.1 and tomorrow it rolls 14.04.2. Dev lives on 14.04.1, but we can't be sure it'll run on 14.04.2??
[20:36] automatemecolema: when you deploy on 14.04.1 it's running 14.04.1 unless you run the dist-upgrade on your prod stack.
[20:37] automatemecolema: so the idea would be to not deploy 14.04.2 in staging if you're on 14.04.1 in your production env. When that occurs, you run rolling upgrades. In terms of services and how we orchestrate, servers are more like cattle than they are pets.
[20:37] yes, but if I do a juju deploy haproxy and it rolls out haproxy for me on 14.04.1, how can I control, if tomorrow 14.04.2 is released and I do a juju deploy haproxy, that I won't get 14.04.2??
[20:37] hmm. that's a good question. default-series only ensures you're within the 12.x and 14.x series presently
[20:38] we don't typically push breaking changes like what we're suggesting in an LTS though. that's reserved for the non-LTS releases.
[20:38] eg: apache2 between precise => trusty - config file structure changed.
it takes a series jump to run into that. not a minor revision.
[20:43] automatemecolema: i don't have a good answer for you about lockstepping minor revisions.
=== automate_ is now known as automatemecolem_
[20:57] dang thing kicked me off of freenode...
[20:58] boo. did you get any of our conversation?
[20:59] awry irc
[21:03] so here's something to throw out there: how would one build a paas solution leveraging juju? Just thinking about concepts etc etc.
[21:04] automatemecolema: Juju can be wrestled into a paas, but it's intended to orchestrate solutions. We are actively working through charming up Cloud Foundry to provide a PaaS solution
[21:07] but if you want to do continuous delivery, there's 2 methods i am aware of
[21:07] 1 is git-based delivery. You can either place git checkouts in your config-changed and execute the hook from your CI if the build passes to run your config-changed hook, or when juju actions lands - build an action for it.
[21:08] the other is charm building in CI, and incrementally charm-upgrade your deployment.
[21:08] The path is to add a files/ directory in your charm for example, and do local installation of your artifacts by copying them where they need to go if they exist, or check out from version control / upstream.
[21:10] I'm doing method A with some rails deployments I have in the wild, and it works reasonably well. I gate in CI, and only master is ever deployed - to staging. Production is gated manually and has a revision flag on the charm that I populate. (but now that i think about it, i can just juju-set from CI and achieve the same goal in staging...)
[21:11] given a few things to think about.
[21:11] me*
[21:21] automatemecolema: feel free to ping if you need anything. I'm going to EOD
[21:22] lazypower: thanks for your time sir
=== alexisb is now known as alexisb_bbl
=== mwhudson-bip is now known as mwhudson
=== hatch__ is now known as hatch
=== makyo_ is now known as Makyo
=== CyberJacob is now known as CyberJacob|Away
=== tvansteenburgh1 is now known as tvansteenburgh
[22:46] Hi All, .. Can somebody please tell me if quantum-gateway is the same as neutron-gateway when installing using juju
=== scuttlemonkey is now known as scuttle|afk
[23:13] how do i know what the default openstack origin type is when installing openstack using juju
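On that last question: the OpenStack charms expose the package source as an openstack-origin config option, whose default ("distro", i.e. the Ubuntu archive for the deployed series) can be inspected rather than guessed. A sketch, with an illustrative service name (and on the earlier question: quantum-gateway is the older, Quantum-era name of the charm that was subsequently renamed neutron-gateway):

    # once a service is deployed, dump each option with its default and
    # current value:
    juju get nova-cloud-controller
    # the default is also declared under openstack-origin in the charm's
    # config.yaml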