=== jhf_ is now known as jhf
=== defunctzombie_zz is now known as defunctzombie
=== lifeless_ is now known as lifeless
[01:20] jhf: sorry I was at dinner.
[01:20] jhf: mind if I put your questions from your mail into the bug report?
=== thumper is now known as thumper-cooking
=== defunctzombie is now known as defunctzombie_zz
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== thumper-cooking is now known as thumper
=== TheRealMue is now known as TheMue
[09:29] hello
=== danilos_ is now known as danilos
[12:41] blues-man: hello o/
[12:48] hi marcoceppi
[12:48] blues-man: anything we can help you with?
[12:50] marcoceppi, I would like to investigate orchestration in OpenStack and analyze the differences between Heat and Juju for my thesis work
[12:51] beyond the Ubuntu dependencies, the theory behind the project
[12:53] I'd like to understand how to deploy a multi-node Terracotta setup with juju, with some HA techniques or components
[13:06] jcastro: yup, fine with me (inserting questions into bug report)
[13:07] jhf: good morning! OK, m_3 is on mountain time so when he's around we'll start the review!
[13:07] marcoceppi: I sent you a mail where I replied to jhf wrt. charm tools for sha checking
[13:07] marcoceppi: We still have that convenience function in there somewhere don't we?
[13:08] jcastro: yeah ch_get_file
[13:08] ok great, thanks for such quick responses :) I'll be sure to update it today/tomorrow as needed.
[13:08] jhf: o/
=== Guest60046 is now known as mars
=== BradCrittenden is now known as bac
=== gary_poster|away is now known as gary_poster
[13:47] hello
[13:47] ;-)
[13:48] "juju deploy ubuntu" on local MAAS --->> machine 1: instance-id: pending forever
[13:49] maas doesn't allocate nodes, where can I start to look for a failure?
=== rogpeppe3 is now known as rogpeppe
=== wedgwood_away is now known as wedgwood
[14:04] any?
[14:04] Can't allocate more than one node from my MAAS pool
[14:43] bicyus: hey, not very good with maas but I'll see if I can help you out
[14:43] Were you able to bootstrap?
[14:44] thanks marcoceppi
[14:44] I was able to, yes
[14:44] but it always gets one node
[14:44] So I'm guessing you have more than one node "ready", etc?
[14:44] yes, 4 nodes
[14:45] juju deploy mysql ----> OK on machine: 0
[14:45] juju deploy ceph ---> machine 1 --- pending
[14:45] forever
[14:46] Do the nodes show up as provisioned, commissioned, (or whatever the status for 'used' is) in maas?
[14:46] on maas they are ready
[14:46] machine 0 gets allocated too ...
[14:47] but machine 1 pending.... doesn't change anything in the maas pool
[14:47] it should allocate another node
[14:47] but it doesn't
[14:48] it would be great to have juju without maas
[14:48] registering normally installed servers
[14:48] i think maas is too buggy.... :-(
[14:48] I know a few of the #maas guys were answering questions about juju and maas in that room. It's been so long since I've used the two together I'm not sure what to say :\
[14:49] ;-)
[14:49] thanks marcoceppi, I appreciate it
[14:49] don't worry
[14:49] No problem, best of luck
[14:50] i'm going the good old men's way! ;-)
[14:50] xD
[14:52] jcastro: yo... what're we reviewing? liferay?
[14:53] m_3: yeah, I sent you a mail, it would be awesome if you could review it today
[14:53] m_3: jhf is the upstream, he's in the channel
[14:54] * m_3 waves to jhf
[14:54] * jhf waves
=== defunctzombie_zz is now known as defunctzombie
[14:54] please be gentle :) I'm new at this.
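For reference, the ch_get_file convenience function mentioned above (from charm-tools / charm-helper-sh) is about fetching a payload and verifying its hash before installing it. Below is a minimal sketch of that idea as a charm might use it; the URL, target path, and hash are hypothetical placeholders, and this is not the actual ch_get_file implementation.

    #!/bin/bash
    # Sketch only: fetch a payload and refuse to unpack it unless its
    # SHA-256 matches a known value. URL, path, and hash are placeholders.
    set -e

    DOWNLOAD_URL="http://example.com/payload.tar.gz"       # placeholder
    EXPECTED_SHA256="<expected sha256 of the tarball>"      # placeholder
    TARGET="/tmp/$(basename "$DOWNLOAD_URL")"

    wget -q -O "$TARGET" "$DOWNLOAD_URL"

    ACTUAL_SHA256=$(sha256sum "$TARGET" | awk '{print $1}')
    if [ "$ACTUAL_SHA256" != "$EXPECTED_SHA256" ]; then
        echo "checksum mismatch for $TARGET" >&2
        exit 1
    fi

    # Only unpack after the hash has been verified.
    tar -xzf "$TARGET" -C /opt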
[14:55] jcastro: k, lemme bump mongodb merge to later since everybody's here for liferay now
[14:56] jcastro: we've really gotta get the time-in-the-queue display fixed
[14:56] jcastro: can I help with that?
[14:56] i.e., figure out what it has to show and I'll play with charm-tools later
=== BradCrittenden is now known as bac_
[14:57] time in queue looks like it's working to me
[14:58] 108 minutes since he attached the branch
[15:00] * m_3 sees 372 days
[15:00] m_3: CLI tool?
[15:00] `charm review-queue`
[15:00] yup
[15:00] m_3: oh dude, they fixed the queue in the web UI, I don't think we fixed it in the CLI Juan wrote
[15:00] http://jujucharms.com/review-queue
[15:00] well, the cli is what we actually use
[15:00] :)
[15:00] is correct
[15:01] or just make it slurp the web and print it all pretty-like
[15:04] marcoceppi: where's the best place to point somebody for charm-helper bits to verify download hashes?
[15:05] charm-tools/helpers/sh/net.sh
[15:05] marcoceppi: so is that near landing in lp:charm-helpers?
[15:05] m_3: nowhere near. It needs to be ported to python first, then the "bash" interface needs to be written
[15:05] marcoceppi: ack, thanks!
[15:06] m_3: err, I think the ppa packaging for old helpers is still broken...
[15:07] * marcoceppi double checks
[15:09] m_3: nope, charm-helper-sh is still in ppa:juju/pkgs. He'll have to add it to his charm, then install the charm-helper-sh package to make use of /usr/share/charm-helper/sh/net.sh
[15:09] m_3: finally, cc me on your reply since jorge forgot to :P
=== racedo` is now known as racedo
[15:12] heh
[15:12] hey so maybe just have him check the sha by hand until we sort out charm tools?
=== bac_ is now known as bac
[15:15] He can just copy the ch_get_file code into his lib/common for now
[15:15] it'd probably be the easiest way to ensure it just works (tm)
[15:22] marcoceppi: m_3: last call for updating the etherpad from yesterday's meeting
[15:22] before I push it to the list
[15:22] jcastro: I'm good
[15:33] jcastro: post away
[15:34] jhf: jcastro: first review pass done: https://bugs.launchpad.net/charms/+bug/1006064
[15:34] looks great... just a couple of little things to clean up
[15:34] upgrades, cryptographically verifying downloads, etc
[15:35] he had some questions in the emails wrt. the db hooks
[15:35] which you can either post in the bug or respond to via email
[15:35] jhf: please ping for any questions
[15:35] emails
[15:35] I'll dig
[15:35] heh, it's that thing we're supposed to check
[15:36] what century are we living in?
[15:36] * m_3 loves email
[15:36] especially mailing lists
[15:37] ok, thanks guys for the review. I'll get on the fixes asap. and yeah, we struggle with email vs. non-email all the time in liferay. I end up posting to forums, then emailing the same set of people I want to communicate with, because half of them refuse to subscribe to forums.
[15:38] jhf: ack
[15:38] jhf: so your basic idea of delaying startup until the db is related is sound... there're plenty of services that behave that way
[15:39] ok, my only concern is if the relation is set up / torn down quickly, Liferay (and probably many other services) can't react that quickly.
[15:39] jhf: your relation guard looks good too... bail (gracefully... exit 0) if the other side isn't up yet
[15:39] jhf: it'll get called again once the other side's up
[15:40] jhf: hooks don't execute asynchronously inside a charm. So if your hooks/start (startup.sh) blocks until complete, it won't get to any other hooks queued for that charm until it exits
[15:40] jhf: 'set up' is not a problem.... 'torn down' is
[15:41] jhf: but let's get the primary flow working, then work through removals
[15:41] tomcat's startup.sh doesn't block, unfortunately.
[15:41] jhf: that should be fine as long as it's not called until the juju part of the relation is good (and the configuration is written)
[15:41] which is what the relation guard does
[15:42] i.e., your [ -z "$user" ] && exit 0 bit
[15:43] the rest of the script (w/ the tomcat startup) won't be run until the db is there and happy
[15:43] jhf: this one issue is the real juju differentiator
=== defunctzombie is now known as defunctzombie_zz
[15:44] ok, sounds good.. so I'll fix it up then talk about removals (the concern being: if liferay+tomcat is happily running and a db relation is broken, then in order for it to "really" be broken, liferay has to immediately exit, reconfigure, and restart, otherwise it will fail all over the place trying to access the db that was just taken away).
[15:44] ack
[15:45] arosales: sigh, I suck, have the pad URL handy for yesterday's meeting?
[15:47] jhf: and we should add data um... continuity... to that workflow too :)... but first things first
[15:49] jcastro, http://pad.ubuntu.com/7mf2jvKXNa
[16:01] sinzui: ping... do you have any other mongodb tarballs handy for testing restore? (besides the embedded fixtures?)
[16:01] I do!
[16:01] awesome... chinstrap?
[16:03] sinzui: also, the next part I'm gonna dig through is how this handles ordering wrt replset formation
[16:04] I'm a bit confused by that atm, but still poking around
[16:04] m_3: this is a real-world case http://people.canonical.com/~curtis/charmworld-20130425-184952.tar.gz
[16:05] sinzui: danke
[16:06] downloaded, if you want to remove/clean that up
=== BradCrittenden is now known as bac
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[16:38] sinzui: hit a bug with the 'nothing to restore' case
[16:38] in the MP
[16:40] oh dear
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
=== medberry is now known as med_
[20:08] Anyone seen an issue whereby they bootstrap juju on MAAS and everything seems to be going well in the install process, but at some point, on the console of the machine being bootstrapped, you get an "Installation Step Failed" message with a UI, and the console fails to respond to any keystrokes etc?
=== BradCrittenden is now known as bac
=== defunctzombie_zz is now known as defunctzombie
=== defunctzombie is now known as defunctzombie_zz
[22:33] I got to the bottom of my PXE install problem. The Dell M620s I am using have 2GB flash drives on them as well as the drives hanging off the RAID controller. Ubuntu was trying to install to the flash drives. The resolution was to disable the flash, after which it installs to the HDDs.
=== wedgwood is now known as wedgwood_away
=== wedgwood_away is now known as wedgwood
=== wedgwood is now known as wedgwood_away
[23:42] hazmat: I noticed you keep referring to juju-deploy -W -T, but I can't find what capital W refers to, is it just the same as -w ?
[23:42] doh
[23:42] yah.. probably
[23:42] cool, just checking
=== defunctzombie_zz is now known as defunctzombie
=== thumper is now known as thumper-afk
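To make the relation-guard pattern from the review discussion concrete, here is a minimal sketch of what a db-relation-changed hook following that advice could look like. It assumes a mysql-style relation; the relation keys, config file path, and startup command are illustrative placeholders, not taken from jhf's actual liferay charm.

    #!/bin/bash
    # hooks/db-relation-changed -- sketch of the "relation guard" pattern.
    # Relation keys, paths, and commands below are illustrative placeholders.
    set -e

    user=$(relation-get user)
    password=$(relation-get password)
    host=$(relation-get private-address)
    database=$(relation-get database)

    # Guard: the other side hasn't published its settings yet. Exit 0 so the
    # hook succeeds; juju will run it again once the remote data is set.
    [ -z "$user" ] && exit 0

    # Only reached once the relation data is complete: write config, then start.
    {
        echo "jdbc.default.url=jdbc:mysql://${host}/${database}"
        echo "jdbc.default.username=${user}"
        echo "jdbc.default.password=${password}"
    } > /opt/liferay/portal-ext.properties

    /opt/liferay/tomcat/bin/startup.sh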