[01:20] <jcastro> jhf: sorry I was at dinner.
[01:20] <jcastro> jhf: mind if I put your questions from your mail into the bug report?
[09:29] <blues-man> hello
[12:41] <marcoceppi> blues-man: hello o/
[12:48] <blues-man> hi marcoceppi
[12:48] <marcoceppi> blues-man: anything we can help you with?
[12:50] <blues-man> marcoceppi, I'd like to investigate orchestration in OpenStack and analyze the differences between Heat and Juju for a thesis
[12:51] <blues-man> beyond the Ubuntu dependencies, the theory behind the project
[12:53] <blues-man> I'd like to understand how to deploy a multiple-node Terracotta setup with juju using some HA techniques or components
[13:06] <jhf> jcastro: yup, fine with me (inserting questions into bug report)
[13:07] <jcastro> jhf: good morning! OK, m_3 is on mountain time so when he's around we'll start the review!
[13:07] <jcastro> marcoceppi: I sent you a mail where I replied to jhf wrt. charm tools for sha checking
[13:07] <jcastro> marcoceppi: We still have that convenience function in there somewhere don't we?
[13:08] <marcoceppi> jcastro: yeah ch_get_file
[13:08] <jhf> ok great, thanks for such quick responses :)  I'll be sure to update it today/tomorrow as needed.
[13:08] <marcoceppi> jhf: o/
[13:47] <bicyus> hello
[13:47] <bicyus> ;-)
[13:48] <bicyus> "juju deploy ubuntu"  on local MAAS--->> machine 1: instance-id: pending   forever
[13:49] <bicyus> maas doesn't allocate nodes, where can I start to look for a failure?
[14:04] <bicyus> any?
[14:04] <bicyus> Can't allocate more than one node of my MAAS pool
[14:43] <marcoceppi> bicyus: hey, not very good with maas but I'll see if I can help you out
[14:43] <marcoceppi> Were you able to bootstrap?
[14:44] <bicyus> thanks marcoceppi
[14:44] <bicyus> yes, I was able to
[14:44] <bicyus> but it only ever gets one node
[14:44] <marcoceppi> So I'm guessing you have more than one node "ready", etc?
[14:44] <bicyus> yes 4 nodes
[14:45] <bicyus> juju deploy mysql  ----> OK on machine: 0
[14:45] <bicyus> juju deploy ceph ---> machine 1 --- pending
[14:45] <bicyus> forever
[14:46] <marcoceppi> Do the nodes show up as provisioned, commissioned, (or whatever the status for 'used' is) in maas?
[14:46] <bicyus> on maas they are ready
[14:46] <bicyus> machine 0 gets allocated too ...
[14:47] <bicyus> but the machine 1 pending.... doesn't change anything on maas pool
[14:47] <bicyus> it should allocate another node
[14:47] <bicyus> but it doesn't
[14:48] <bicyus> it would be great to have juju without maas
[14:48] <bicyus> registering normally installed servers
[14:48] <bicyus> i think maas is too buggy.... :-(
[14:48] <marcoceppi> I know a few of the #maas guys were answering questions about juju and maas in that room. It's been so long since I've used the two together I'm not sure what to say :\
[14:49] <bicyus> ;-)
[14:49] <bicyus> thanks marcoceppi, I appreciate
[14:49] <bicyus> don't worry
[14:49] <marcoceppi> No problem, best of luck
[14:50] <bicyus> i'm going the good old men's way! ;-)
[14:50] <bicyus> xD
[14:52] <m_3> jcastro: yo... what're we reviewing?  liferay?
[14:53] <jcastro> m_3: yeah, I sent you a mail, it would be awesome if you could review it today
[14:53] <jcastro> m_3: jhf is the upstream, he's in the channel
[14:54]  * m_3 waves to jhf
[14:54]  * jhf waves
[14:54] <jhf> please be gentle :) I'm new at this.
[14:55] <m_3> jcastro: k, lemme bump the mongodb merge to later since everybody's here for liferay now
[14:56] <m_3> jcastro: we've really gotta get the time-in-the-queue display fixed
[14:56] <m_3> jcastro: can I help with that?
[14:56] <m_3> i.e., figure out what it has to show and I'll play with charm-tools later
[14:57] <jcastro> time in queue looks like it's working to me
[14:58] <jcastro> 108 minutes since he attached the branch
[15:00]  * m_3 sees 372 days
[15:00] <jcastro> m_3: CLI tool?
[15:00] <m_3> `charm review-queue`
[15:00] <m_3> yup
[15:00] <jcastro> m_3: oh dude, they fixed the queue in the web UI, I don't think we fixed it in the CLI Juan wrote
[15:00] <jcastro> http://jujucharms.com/review-queue
[15:00] <m_3> well, the cli is what we actually use
[15:00] <m_3> :)
[15:00] <jcastro> is correct
[15:01] <m_3> or just make it slurp the web and print it all pretty-like
[15:04] <m_3> marcoceppi: where's the best place to point somebody for charm-helper bits to verify download hashes?
[15:05] <marcoceppi> charm-tools/helpers/sh/net.sh
[15:05] <m_3> marcoceppi: so is that near landing in lp:charm-helpers?
[15:05] <marcoceppi> m_3: nowhere near. It needs to be ported to python first, then the "bash" interface needs to be written
[15:05] <m_3> marcoceppi: ack, thanks!
[15:06] <marcoceppi> m_3: err, I think the ppa packaging for old helpers is still broken...
[15:07]  * marcoceppi double checks
[15:09] <marcoceppi> m_3: nope, charm-helper-sh is still in ppa:juju/pkgs. He'll have to add it to his charm, then install the charm-helper-sh package to make use of /usr/share/charm-helper/sh/net.sh
[15:09] <marcoceppi> m_3: finally, cc me on your reply since jorge forgot to :P
[15:12] <jcastro> heh
[15:12] <jcastro> hey so maybe just have him check the sha by hand until we sort out charm tools?
[15:15] <marcoceppi> He can just copy the ch_get_file code in to his lib/common for now
[15:15] <marcoceppi> it'd probably be the easiest way to ensure it just works (tm)
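The by-hand checksum approach marcoceppi and jcastro are suggesting can be sketched roughly as below. This is only an illustration of the pattern, not the actual ch_get_file code from charm-helper-sh; the function name, paths, and variables are all hypothetical.

```shell
#!/bin/sh
# Sketch of verifying a download's SHA-256 sum "by hand" in a charm hook,
# until charm-tools' ch_get_file helper is usable. Names and paths here
# are illustrative, not the charm-helper-sh API.
verify_sha256() {
    _file="$1"
    _expected="$2"
    _actual=$(sha256sum "$_file" | cut -d' ' -f1)
    # Succeeds (exit 0) only when the computed sum matches the expected one.
    [ "$_actual" = "$_expected" ]
}

# Hypothetical usage in an install hook ($DOWNLOAD_URL and
# $EXPECTED_SHA256 would come from the charm's config or lib/common):
#   wget -O /tmp/liferay.tar.gz "$DOWNLOAD_URL"
#   verify_sha256 /tmp/liferay.tar.gz "$EXPECTED_SHA256" || exit 1
```

The guard-style return value lets the hook abort cleanly (`|| exit 1`) on a corrupted or tampered download instead of installing it.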
[15:22] <jcastro> marcoceppi: m_3: last call for updating the etherpad from yesterday's meeting
[15:22] <jcastro> before I push it to the list
[15:22] <marcoceppi> jcastro: I'm good
[15:33] <m_3> jcastro: post away
[15:34] <m_3> jhf: jcastro: first review pass done: https://bugs.launchpad.net/charms/+bug/1006064
[15:34] <m_3> looks great... just a couple of little things to clean up
[15:34] <m_3> upgrades, cryptographically verifying downloads, etc
[15:35] <jcastro> he had some questions in the emails wrt. the db hooks
[15:35] <jcastro> which you can either post in the bug or respond via email
[15:35] <m_3> jhf: please ping for any questions
[15:35] <m_3> emails
[15:35] <m_3> I'll dig
[15:35] <jcastro> heh, it's that thing we're supposed to check
[15:36] <m_3> what century are we living in?
[15:36]  * m_3 loves email
[15:36] <m_3> especially mailing lists
[15:37] <jhf> ok, thanks guys for the review. I'll get on the fixes asap. and yeah, we struggle with email vs. non-email all the time in liferay.  I end up posting to forums, then emailing the same set of people I want to communicate with, because half of them refuse to subscribe to forums.
[15:38] <m_3> jhf: ack
[15:38] <m_3> jhf: so your basic idea of delaying startup until the db is related is sound... there're plenty of services that behave that way
[15:39] <jhf> ok, my only concern is if the relation is set up / torn down quickly, Liferay (and probably many other services) can't react that quickly.
[15:39] <m_3> jhf: your relation guard looks good too... bail (gracefully... exit0) if the other side isn't up yet
[15:39] <m_3> jhf: it'll get called again once the other side's up
[15:40] <marcoceppi> jhf: hooks don't execute asynchronously inside a charm. So if your hooks/start (startup.sh) blocks until complete, it won't get to any other hooks queued for that charm until it exits
[15:40] <m_3> jhf: 'set up' is not a problem.... 'torn down' is
[15:41] <m_3> jhf: but let's get the primary flow working, then work through removals
[15:41] <jhf> tomcat's startup.sh doesn't block, unfortunately.
[15:41] <m_3> jhf: that should be fine as long as it's not called until the juju part of the relation is good (and the configuration is written)
[15:41] <m_3> which is what the relation guard does
[15:42] <m_3> i.e., your [ -z "$user" ] && exit 0 bit
[15:43] <m_3> the rest of the script ( w/ the tomcat startup ) won't be run until the db is there and happy
[15:43] <m_3> jhf: this one issue is the real juju differentiator
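The relation-guard pattern m_3 describes can be sketched as a minimal db-relation-changed hook. `relation-get` is the real juju hook tool; the hook body and `configure_and_start` are illustrative placeholders, not liferay's actual charm code.

```shell
#!/bin/sh
# Sketch of the "relation guard" pattern for a db-relation-changed hook.
# If the remote side hasn't published its settings yet, exit 0 gracefully
# (not an error) -- juju will invoke the hook again once the other side
# is up and its data is available.

configure_and_start() {
    # Placeholder: write the db config, then call tomcat's startup.sh
    echo "configuring db for user=$1"
}

db_relation_changed() {
    user=$(relation-get user)
    # Relation guard: bail gracefully until the db side has set its data.
    [ -z "$user" ] && exit 0
    configure_and_start "$user"
}
```

Because hooks for a unit run serially, nothing after the guard executes until a later invocation finds the relation data present, so tomcat's non-blocking startup.sh is only ever called with a complete configuration.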
[15:44] <jhf> ok, sounds good.. so I'll fix it up then talk about removals (the concern being, if liferay+tomcat is happily running, and a db relation is broken, in order for it to "really" be broken, liferay has to immediately exit, reconfigure, and restart, otherwise it will fail all over the place trying to access the db that was just taken away).
[15:44] <m_3> ack
[15:45] <jcastro> arosales: sigh, I suck, have the pad URL handy for yesterday's meeting?
[15:47] <m_3> jhf: and we should add data um... continuity... to that workflow too :)... but first things first
[15:49] <arosales> jcastro, http://pad.ubuntu.com/7mf2jvKXNa
[16:01] <m_3> sinzui: ping... do you have any other mongodb tarballs handy for testing restore? (besides the embedded fixtures?)
[16:01] <sinzui> I do!
[16:01] <m_3> awesome... chinstrap?
[16:03] <m_3> sinzui: also, the next part I'm gonna dig through is how this handles ordering wrt replset formation
[16:04] <m_3> I'm a bit confused by that atm, but still poking around
[16:04] <sinzui> m_3 this is a real world case http://people.canonical.com/~curtis/charmworld-20130425-184952.tar.gz
[16:05] <m_3> sinzui: danke
[16:06] <m_3> downloaded if you want to remove/clean that up
[16:38] <m_3> sinzui: hit a bug with the 'nothing to restore' case
[16:38] <m_3> in the MP
[16:40] <sinzui> oh dear
[20:08] <Campbell> Anyone seen an issue whereby they bootstrap juju on MAAS and everything seems to be going well in the install process but at a point, on the console of the machine being bootstrapped you get an "Installation Step Failed" message with a UI but the console fails to respond to any keystrokes etc?
[22:33] <Campbell> I got to the bottom of my PXE install problem. The Dell M620s I am using have 2GB flash drives on them as well as the drives hanging off the RAID controller. Ubuntu was trying to install the flash drives. Resolution was to disable the flash and it goes and installs to the HDDs.
[23:42] <marcoceppi> hazmat: I noticed you keep referring to juju-deploy -W -T, but I can't find what capital W refers to, is it just the same as -w ?
[23:42] <hazmat> doh
[23:42] <hazmat> yah.. probably
[23:42] <marcoceppi> cool, just checking