[08:51] <themonk> why i am getting this, error: invalid service name "local:precise/mycharm"
[08:52] <themonk> marcoceppi, hi
[12:42] <themonk> after bootstrap, my deployed unit is in pending state for so long
[12:43] <themonk> it makes testing very slow
[12:47] <themonk> why is deploying a charm so slow?
[12:47] <themonk> is there any way to make it faster?
[12:52] <marcoceppi> hi themonk
[12:52] <themonk>  hi
[12:52] <themonk> marcoceppi, hi
[12:53] <marcoceppi> still getting an invalid service name?
[12:53] <themonk> marcoceppi, why is deploying a charm so slow? is there any way to make it faster?
[12:53] <marcoceppi> themonk: which provider are you using?
[12:53] <themonk> marcoceppi, local
[12:54] <themonk> marcoceppi, lxc container
[12:54] <marcoceppi> themonk: local shouldn't be that slow, does it ever go from pending to started or is it always stuck in pending?
[12:55] <themonk> marcoceppi, yes it goes from pending to started but takes too long
[12:58] <themonk> marcoceppi, it does not get stuck in the pending state
[12:59] <marcoceppi> themonk: well, there are a number of reasons why it may take a while. What version of juju are you using now?
[13:00] <themonk> marcoceppi, 1.16.6-precise-amd64
[13:01] <marcoceppi> themonk: ah, there have been a lot of improvements to juju since that stable release. Unless you're pushing things into production today, I recommend you use the 1.17 series of juju, where this work is taking place
[13:40] <themonk> marcoceppi, do I need to compile the source to install juju 1.17?
[13:41] <marcoceppi> themonk: no, just add ppa:juju/devel
[13:41] <themonk> marcoceppi, ok thanks :)
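For reference, the upgrade path via the devel PPA marcoceppi mentions would be roughly the following. The real commands need sudo and network access, so this sketch only assembles and prints them; the `juju-core` package name and `-y` flags reflect the packaging of that era.

```shell
# Rough upgrade path from juju 1.16.x to the 1.17 devel series via
# ppa:juju/devel. Commands are assembled and echoed, not executed.
steps=(
  "sudo add-apt-repository -y ppa:juju/devel"
  "sudo apt-get update"
  "sudo apt-get install -y juju-core"
  "juju version   # should now report a 1.17.x build"
)
printf '%s\n' "${steps[@]}"
```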
[13:48] <lazyPower> ok starting with the rev queue today
[13:48] <lazyPower> here goes something :)
[14:25] <bloodearnest> lazyPower: <cough>gunicorn</cough>
[14:25] <bloodearnest> :)
[14:25] <lazyPower> bloodearnest: going down the list, it'll be reviewed shortly :)
[14:26] <lazyPower> however, i am open to pizza as a bribe if you want me to "bump" it in priority
[14:29] <jcastro> hey marcoceppi
[14:29] <marcoceppi> hey jcastro
[14:29] <jcastro> so I am in the mysql unit via debug-hooks, because mysql bombed out again
[14:29] <jcastro> if I'm in the hook scope
[14:29] <jcastro> shouldn't "install" just work?
[14:30] <marcoceppi> jcastro: install should always work
[14:31] <jcastro> if I give the full path the install works
[14:31] <marcoceppi> jcastro: how so?
[14:31] <marcoceppi> jcastro: OH, you need to do hooks/install
[14:31] <jcastro> ta
[14:31] <marcoceppi> you're in $CHARM_DIR, not $CHARM_DIR/hooks
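A minimal reproduction of the path issue being discussed: a debug-hooks session drops you in $CHARM_DIR, so a bare `install` is not found and you invoke `hooks/install` instead. The charm directory and hook below are stand-ins, not the real mysql charm.

```shell
# Fake a charm directory layout with an executable install hook.
CHARM_DIR=$(mktemp -d)
mkdir -p "$CHARM_DIR/hooks"
cat > "$CHARM_DIR/hooks/install" <<'EOF'
#!/bin/sh
echo "install hook ran"
EOF
chmod +x "$CHARM_DIR/hooks/install"

cd "$CHARM_DIR"
# "install" alone fails (it's not on PATH); the relative path works:
hooks/install
```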
[14:34] <jcastro> https://gist.github.com/castrojo/9347549
[14:34] <jcastro> hey that looks familiar!
[14:35] <marcoceppi> jcastro: this is a different error than was fixed
[14:35] <marcoceppi> jcastro: set dataset-size to 256M
[14:35] <marcoceppi> then re-run install hook
[14:35] <marcoceppi> err config changed
[14:40] <jcastro> hmm, no luck
[14:41] <jcastro> with either 256M or 512M
[14:41] <jcastro> InnoDB: Initializing buffer pool, size = 12.3G
[14:41] <jcastro> seems like it's not taking the dataset-size
[14:46] <marcoceppi> jcastro: it's reading based on your machine
[14:46] <jcastro> yeah, I had to manually fix it in my.cnf
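The manual fix described above amounts to pinning the buffer pool in my.cnf rather than letting the charm compute it from the machine's RAM; the file path and value here are illustrative.

```ini
# /etc/mysql/my.cnf (path may vary by release)
[mysqld]
# pin the pool instead of the auto-computed 12.3G; matches dataset-size=256M
innodb_buffer_pool_size = 256M
```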
[14:49] <jcastro> 14/03/04 09:48:57 INFO mapred.JobClient:  map 8% reduce 0%
[14:49] <dpb1> Hi -- is it possible to pin AWS to stick to a particular AZ?
[14:49] <jcastro> but, it's working!
[14:50] <bloodearnest> jcastro: hey - when you do the hadoop demo, what do you show people as the end result? The gui? Or some webpage or something?
[14:51] <jcastro> https://bugs.launchpad.net/juju-core/+bug/1183831
[14:51] <_mup_> Bug #1183831: unable to specify ec2 availability zone <constraints> <ec2-provider> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1183831>
[14:51] <jcastro> dpb1, ^^^
[14:51] <jcastro> bloodearnest, Just a terasort CLI is fine
[14:51] <jcastro> bloodearnest, I am working on the README now to make it use juju run, I'll be done in an hour
[14:51] <jcastro> bloodearnest, you can add ganglia and stuff if you want pretty charts
[14:52] <bloodearnest> jcastro: lovely. talk is tonight, so just in time :)
[14:52] <bloodearnest> jcastro: ELI5: terasort?
[14:52] <jcastro> it's a hadoop benchmark thing
[14:52] <jcastro> it mapreduces a TB of stuff
[14:53] <jcastro> basically, to prove it works, etc., and that you're not just making up pretty boxes on a web page
[14:53] <jcastro> https://code.launchpad.net/~jorge/charms/bundles/hadoop-cluster/bundle
[14:53] <jcastro> is where I'll be pushing
[14:53] <jcastro> dpb1, though I am surprised we haven't fixed that
[14:54] <dpb1> jcastro: it's introducing some failures when trying to put in storage solutions.
[14:54] <dpb1> since your EBS has to be in the same AZ as your instance
[14:54] <jcastro> bloodearnest, this will be of interest also http://markmims.com/cloud/2012/06/04/juju-at-scale.html
[14:55] <jcastro> hey sinzui, how do we put that bug on core's radar?
[14:56] <sinzui> jcastro, report it, tag it with the a stakeholder tag (charmers) and then ask me to review the milestone
[14:58] <jcastro> dpb1, does that bug I linked pretty much cover your issue?
[15:00] <dpb1> jcastro: indeed.  I think there are future plans to better support AZs, but that one is kind of a no-brainer as a first-start to it.  I'll bring it up at our next meeting (Thursday)
[15:01] <jcastro> sweet, do you have a cool stakeholder tag we could put on the bug? perhaps we can both +1 it and get it more weight
[15:01] <jcastro> bloodearnest, http://imgur.com/UFhGwaB
[15:01] <jcastro> port 50070 on the master node
[15:01] <bloodearnest> jcastro: that'll do nicely, thanks
[15:02] <dpb1> jcastro: yes, I added 'landscape'  which is what we have been using, I think.
[15:02] <bloodearnest> jcastro: given I'll have no internet and be deploying to local provider, I may not use terasort, or my machine might freeze up :)
[15:03] <jcastro> I just ran it on my laptop
[15:03] <jcastro> worked fine, load went up but didn't crush the machine
[15:03] <bloodearnest> jcastro: cool! will try shortly
[15:03] <jcastro> very demoable, i7, 16GB of RAM though
[15:03] <sinzui> jcastro, "charmers" is the tag you probably want
[15:05] <jcastro> juju run --unit hadoop-master/0 "sudo -u hdfs /usr/lib/hadoop/terasort.sh"
[15:05] <jcastro> there's the magic right there ^
[15:06] <bloodearnest> awesomesauce
[15:06]  * bloodearnest tries it all out
[15:07] <jcastro> we leave the results all over the master though
[15:07] <jcastro> so you can only run it once
[15:11] <bloodearnest> jcastro: hm, just realised I'm really not gonna be able to do this without internet :/
[15:12]  * bloodearnest sets up EC2
[15:12] <jcastro> all you need is a video of the terasort
[15:12] <bloodearnest> jcastro: tru
[15:12] <lazyPower> marcoceppi: we are going to have a slew of incoming ssl-everywhere changes. Before I +1 this, I want your input on how you feel about the move from ssl_enabled to ssl => [on, off, only] - as I have a charm using the same convention. I didn't see another charm already approved with this at first glance - is this going to be a sticking point?
[15:25] <cory_fu> I have a question about the vagrant config: https://juju.ubuntu.com/docs/config-vagrant.html
[15:26] <cory_fu> I have an http service deployed and listening, and I can connect to it from the JujuQuickStart VM using wget http://10.0.3.132:8080/, and I have sshuttle running, but when I try to connect to that URL locally, I get "no data received" and a couple of STOP_SENDING and EOF lines show up from sshuttle
[15:27] <cory_fu> TIA for any suggestions
[15:29] <marcoceppi> cory_fu: what's the sshuttle line you're running?
[15:29] <cory_fu> sshuttle -e 'ssh -o UserKnownHostsFile=/dev/null vagrant@localhost:2222' 10.0.3.0/24
[15:30] <marcoceppi> cory_fu: try this instead, sshuttle -r vagrant@localhost:2222 10.0.3.0/24
[15:30] <noodles775> marcoceppi: Hi! I've just added a +1 with some comments to bloodearnest's new gunicorn charm. Not sure if that counts, or whether you're happy to then merge it, but let me know if there's anything else we can do to help get it landed: https://code.launchpad.net/~bloodearnest/charms/precise/gunicorn/cleanup/+merge/208558
[15:30] <marcoceppi> cory_fu: if that works, then we need to update our docs
[15:31] <marcoceppi> noodles775: we're dredging through the queue today, so it will likely have eyes on very soon, but your +1s do help!
[15:31] <cory_fu> marcoceppi: That does in fact work, and also explains the "vagrant" password line from the docs.  Thanks!
[15:32] <marcoceppi> cory_fu: cool, we'll need to update the docs to use that variant of the sshuttle command
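For the docs update being discussed, the difference between the two invocations can be captured as below. My reading (an assumption, not stated in the log) is that `-e` only overrides the ssh command sshuttle uses and never tells it which remote to route through, which is `-r`'s job. The sketch only assembles the command strings, so it runs without a vagrant VM present.

```shell
# The documented form that produced "no data received" / EOFs:
broken="sshuttle -e 'ssh -o UserKnownHostsFile=/dev/null vagrant@localhost:2222' 10.0.3.0/24"
# The form that worked: -r names the remote explicitly.
working="sshuttle -r vagrant@localhost:2222 10.0.3.0/24"
printf 'use: %s\n' "$working"
```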
[15:33] <noodles775> marcoceppi: great, thanks.
[15:37] <marcoceppi> lazyPower: there is a precedent already set that the values should be on, off, only
[15:39] <lazyPower> ack
[15:41] <jcastro> marcoceppi, I am getting a weird error with bundle proof
[15:42] <jcastro> https://gist.github.com/castrojo/9348790
[15:42] <jcastro> but earlier today it was working
[15:42] <marcoceppi> you don't proof a bundles.yaml
[15:42] <marcoceppi> jcastro: just run juju bundle proof from the bundle directory
[15:42] <jcastro> aha!
[15:42] <marcoceppi> jcastro: as for that bug, that's something else, the error you should be getting is FATAL: not a bundle
[15:43] <marcoceppi> jcastro: I'll make sure the next release of charm-tools has better output for that
[15:43] <jcastro> want me to file it?
[15:43] <marcoceppi> jcastro: plz
[15:46] <jcastro> marcoceppi, juju bundle not being tab completeable, is that a charm-tools/plugin bug?
[15:46] <marcoceppi> yes
[15:46] <marcoceppi> charm-tools
[15:46] <marcoceppi> the debian package should create an autocomplete bash profile for charm-tools, it's a feature req
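The requested completion could be sketched as a bash programmable-completion function along these lines. This is hypothetical: the function name, the `juju-bundle` command name, and the subcommand list ("proof" is the only one seen in this discussion) are placeholders for whatever charm-tools ends up shipping.

```shell
# Hypothetical bash completion for "juju bundle" subcommands.
_juju_bundle() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    # compgen filters the word list down to matches for the current word
    COMPREPLY=( $(compgen -W "proof" -- "$cur") )
}
complete -F _juju_bundle juju-bundle
```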
[15:47] <jcastro> "The review job grabs any branch with a linked branch that has ~charmers as a subscriber"
[15:47] <jcastro> you said you were 110% positive!
[15:48] <marcoceppi> jcastro: It was that way at one point
[15:48] <jcastro> yeah, but jcsackett hooked us up, o/
[15:48] <marcoceppi> hazmat: https://code.launchpad.net/~hazmat/charms/precise/rabbitmq-server/ssl-everywhere/+merge/207912 removes a config option which we've typically frowned upon in the past as it breaks backwards compat
[15:49] <marcoceppi> I mean, I get the change
[15:49] <marcoceppi> but it would be a break from the policy we've been telling people
[15:56] <hazmat> marcoceppi, it doesn't remove it.. it deprecates it and provides a richer option
[15:56] <hazmat> and maintains compatibility with both
[15:56] <marcoceppi> hazmat: ah, I missed the compat, thanks for the clarification
[15:57] <hazmat> marcoceppi, those merge proposals are a bit old.. we've been working against a set of openstack-charmers branches.. the gist of it for rabbit is the same
[15:57] <marcoceppi> hazmat: cool, so is rabbitmq ready to go for review or?
[15:57] <hazmat> but there's a few more commits in the other branch.. for mysql there are significant changes in the openstack-charmers branch
[15:58] <hazmat> marcoceppi, i'll yank the mps on the original ones
[15:58] <marcoceppi> hazmat: ack
[15:59]  * hazmat goes hunting 2fa device
[15:59] <jcastro> lazyPower or mbruzek, either of you have time to hit up Apache Syncope in the review queue today?
[16:00] <jcastro> all the metis stuff is fix committed as well
[16:00] <lazyPower> jcastro: slogging through the queue adding +1's
[16:02] <lazyPower> I'm at metis right now, will be on syncope shortly
[16:02] <marcoceppi> just got ot metis
[16:03] <lazyPower> marcoceppi: i just pulled the source want me to skip down?
[16:04] <marcoceppi> no, go ahead and review + deploy/test
[16:04] <marcoceppi> that's the Josh Strobl guy
[16:04] <jcastro> lazyPower, case of beer if you can get Syncope reviewed today
[16:04] <lazyPower> ack
[16:05] <lazyPower> i'll stop after syncope so I can claim that offer of suds
[16:05]  * marcoceppi was about to review it :(
[16:06] <lazyPower> marcoceppi: yeah, he's been a real champ in the community taking part in a lot of the discussions
[16:06] <lazyPower> i'm excited to see such involvement around juju from him. We should send him a shirt
[16:07] <lazyPower> jcastro: ^
[16:09] <lazyPower> i'm confused as to why metis is in the queue twice
[16:09] <lazyPower> any insight?
[16:10] <jcastro> probably a bug, you get a 2 for 1!
[16:12] <marcoceppi> lazyPower: idk why, just press on
[16:12] <jcastro> hey bcsaller_
[16:12] <marcoceppi> bloodearnest: how do you run the tests for gunicorn? I didn't have a make target for it
[16:13] <bcsaller_> hey
[16:13] <bloodearnest> marcoceppi: nosetests
[16:13] <jcastro> bcsaller_,  I am working on the charm review assignments, are you a reviewing charmer?
[16:13] <bloodearnest> marcoceppi: I can add a make target
[16:14] <bcsaller_> jcastro: Normally yes, I'll have a hard time getting to it till my current project is over
[16:14] <jcastro> I'll put you last in the first rotation then, that should give you a few weeks
[16:14] <marcoceppi> bloodearnest: that would be fantastic, just make unit_test seems to be the convention most people use
[16:14] <bcsaller_> jcastro: sounds good
[16:15] <marcoceppi> bloodearnest: otherwise, lgtm
[16:16] <bloodearnest> marcoceppi: pushed up r35 with make unit_test
[16:17] <bloodearnest> marcoceppi: with a cleaner foundation, I have bunch of features to add (syslog, etc), hopefully amulet tests too
[16:17] <marcoceppi> bloodearnest: sweet, I look forward to those
[16:23] <jcastro> evilnickveitch, pushed the reviewer page retitle
[16:23] <evilnickveitch> jcastro, cool
[16:25] <Ursinha> jcastro: hey :) is it possible to use juju and rackspace at the moment?
[16:25] <jcastro> Ursinha, sort of
[16:25] <jcastro> https://juju.ubuntu.com/docs/config-manual.html
[16:25] <lazyPower> daww :( doesn't look like the latest copy of his code is up in BZR
[16:26] <lazyPower> should I help him out and pull from github and note thats what I did?
[16:26] <bloodearnest> marcoceppi: and thanks for the merge :)
[16:28] <Ursinha> jcastro: so it works but kind of manually only?
[16:29] <lazyPower> or not... its not up in github
[16:29] <marcoceppi> lazyPower: wrt what?
[16:30] <marcoceppi> Ursinha: rackspace's openstack provider is still too far off from OpenStack upstream releases, so it doesn't work with the openstack provider plugin in juju
[16:30] <marcoceppi> Ursinha: you have to manually spin up machines then enlist them in juju using the manual provider
[16:30] <Ursinha> right, got it marcoceppi
[16:30] <lazyPower> marcoceppi: metis charm didn't get the repository update that was promised in the card - i'm reviewing old code
[16:31] <lazyPower> none of the sha1sums are in here, etc. I'll continue through the review but i'm hitting the exact same points you did.
[16:31] <dalek49> hi all, I'm not understanding relations. I've read and reread all the hook documentation, and 1) can't make most of it work (the tmux session seems to be broken) and 2) don't understand what I'm not doing. I'm trying to write a charm that interacts with the rabbitmq-server charm. I have an amqp-relation-changed hook in my charm, and I have rabbitmq & amqp listed in my metadata.yml; however, it keeps saying that there isn't any known relation betwee
[16:33] <lazyPower> I reached out to him 1:1 via email. I'll give it an hour before i submit my responses and set the bug back to incomplete.
[16:34] <marcoceppi> lazyPower: just ping him here JoshStrobl ^^
[16:35] <jcastro> Ursinha, yes, there's no provider for rackspace currently
[16:35] <lazyPower> or that
[16:36] <mgz> Ursinha: the openstack provider should work for rackspace... were they to actually provide several openstack features we expect
[16:36] <mgz> we could put some effort into working around their weirdness, but there's been no pressing need as of yet
[16:45] <jcastro> lazyPower, we sent him a shirt already, you know I got dis!
[16:46] <lazyPower> woot
[16:49] <marcoceppi> jcastro: why didn't you have mediawiki-scalable and mediawiki-simple in the same bundle branch?
[16:49] <jcastro> what do you mean
[16:49] <marcoceppi> actually, rick_h_, does quickstart/gui support inheritance for bundles?
[16:50] <rick_h_> marcoceppi: yes
[16:50] <marcoceppi> rick_h_: cool
[16:50] <marcoceppi> jcastro: you could have done this:
[16:51] <marcoceppi> jcastro: http://paste.ubuntu.com/7033898/
[16:51] <marcoceppi> in one mediawiki bundles branch
[16:51] <jcastro> wait, what.
[16:51] <jcastro> we can do that?
[16:51] <marcoceppi> bundle branches can contain multiple bundles
[16:51] <marcoceppi> they get ingested one bundle at a time, but a single branch can have multiples
[16:51] <rick_h_> marcoceppi: now those can be imported into charmworld, but not dragged/dropped in the gui
[16:52] <marcoceppi> jcastro: oh yeah, since the early days of deployer
[16:52] <marcoceppi> rick_h_: why not?
[16:52] <rick_h_> marcoceppi: because you drag the file and it doesn't know which bundle to deploy
[16:52] <marcoceppi> rick_h_: OH, you mean from the desktop
[16:52] <rick_h_> marcoceppi: there's a low priority item to build a drop UX to ask you but that's falling into the bundle pre-deployment config work
[16:52] <rick_h_> marcoceppi: right
[16:52] <rick_h_> marcoceppi: that's all I mean
[16:52] <marcoceppi> right, but from the GUI, when it gets ingested, it's deployable
[16:53] <rick_h_> +1
[16:53] <marcoceppi> sweet
[16:53] <marcoceppi> jcastro: so I may have prematurely promulgated mediawiki-scalable, but you can merge simple -> scalable
[16:54] <marcoceppi> and then both will show; the branch name at this point is arbitrary
[16:54] <marcoceppi> name of the bundle is taken from the top level key(s) in the bundles.yaml file
[16:54] <marcoceppi> which is why everything is plural
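A sketch of what the pasted bundles.yaml likely looked like (the paste itself is elided, so the service lists are illustrative, and the `inherits` key follows juju-deployer's inheritance convention): a single bundles.yaml can carry both bundles as top-level keys, with one inheriting from the other.

```yaml
# One bundles.yaml, two bundles: the top-level keys become the bundle names
# (hence the plural file name). Contents are illustrative.
mediawiki-simple:
  services:
    mediawiki:
      charm: cs:precise/mediawiki
      num_units: 1
    mysql:
      charm: cs:precise/mysql
  relations:
    - [mediawiki:db, mysql:db]
mediawiki-scalable:
  inherits: mediawiki-simple
  services:
    mediawiki:
      num_units: 3
    haproxy:
      charm: cs:precise/haproxy
  relations:
    - [mediawiki:website, haproxy:reverseproxy]
```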
[16:54] <marcoceppi> I'm not sure if we'd want to define a best practice of one or the other, each have their pros and cons
[16:55] <jcastro> so don't we want to keep them separate until we have UX?
[16:55] <marcoceppi> jcastro: we have the UX, unless you drag and drop from your computer
[16:55] <jcastro> that's like a main use case.
[16:55] <marcoceppi> but from the charm browser side, you'll see mediawiki-simple and mediawiki-scalable if they're in charmworld
[16:56] <marcoceppi> is it?
[16:58] <lazyPower> marcoceppi: is there a way to get the output of what the linter thinks is duplicate content from the README? *warning* I haven't looked at the code in the linter yet
[16:59] <marcoceppi> lazyPower: there's a -v flag that is planned, but not implemented yet
[16:59] <lazyPower> ack
[16:59] <marcoceppi> but it's probably something simple
[16:59] <marcoceppi> usually in the contact link list
[17:00] <lazyPower> yeah, it matched on line 6 of the boilerplate -- which is a blank line
[17:00] <lazyPower> @_@
[17:00] <lazyPower> clever charm proof... you win this round
[17:00] <marcoceppi> lazyPower: line numbers only count lines that are matchable
[17:01] <marcoceppi> not every line that's there, so it skips blank lines
[17:01] <marcoceppi> and headers
[17:01] <marcoceppi> it's stupid, I know
[17:01] <lazyPower> well tis far better than nothing
[17:01] <lazyPower> and i value its presence
[17:52] <themonk> marcoceppi, is it possible to set up a juju container in a rackspace vm
[17:55] <themonk> marcoceppi, if possible then how do i config my environments.yaml
[18:01] <lazyPower> themonk: have you seen this? http://askubuntu.com/questions/166102/how-do-i-configure-juju-for-deployment-on-rackspace-cloud
[18:02] <themonk> marcoceppi, no thanks :)
[18:02] <lazyPower> we don't currently support rackspace, but there has been some WIP on that.
[18:28] <marcoceppi> themonk: you can use the manual provider
[18:29] <themonk> marcoceppi, what is manual provider?
[18:29] <marcoceppi> themonk: it's where you can bootstrap and deploy to arbitrary machines
[18:30] <marcoceppi> so like, any server with ssh access
[18:30] <marcoceppi> the downside, you have to spin up the machines in rackspace manually
[18:31] <themonk> marcoceppi, hmm yes i can i guess :)
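A sketch of what the environments.yaml stanza for the manual provider might look like; the environment name, address, and user below are placeholders for a VM created by hand in rackspace and reachable over ssh.

```yaml
environments:
  rackspace-manual:
    type: manual
    # placeholder: public IP of a manually created rackspace VM with ssh access
    bootstrap-host: 203.0.113.10
    bootstrap-user: ubuntu
```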
[19:49] <Lord_Set2> Where can I find the Juju daily PPA?
[19:49] <Lord_Set2> Or what is the address of it?
[19:54] <marcoceppi> Lord_Set2: we don't have a daily ppa, if you want daily you can build the source yourself. Otherwise we release stables in ppa:juju/stable and devs in ppa:juju/devel
[20:00] <Lord_Set2> Thanks
[20:05] <Lord_Set2> Any reason why I would be getting the following error?
[20:05] <Lord_Set2> ERROR bootstrap failed: cannot start bootstrap instance: gomaasapi: got error back from server: 403 FORBIDDEN (You are not allowed to start up this node.)
[20:08] <marcoceppi> Lord_Set2: well, you're using maas, so this could be a few things. Either you don't have any nodes available to your user, you're using the wrong API key, or you don't have an SSH key in MAAS
[20:08] <marcoceppi> etc
[20:09] <Lord_Set2> I have an ssh key, I have 4 nodes available... and I generated a fresh api key for juju
[20:11] <marcoceppi> Lord_Set2: are they ready or commissioning?
[20:11] <Lord_Set2> Ready
[20:12] <marcoceppi> huh, well the 403 is maas's way of saying something's wrong and what you requested is not allowed
[20:12] <Lord_Set2> Alright. Are there any specific logs I can check for more information?
[20:14] <marcoceppi> Lord_Set2: I don't have my maas setup atm, so I can't say for certain unfortunately. you can try running juju bootstrap with the --show-log and --debug flags to see where, if anywhere, things are failing to connect
[20:14] <marcoceppi> Lord_Set2: I should have my MAAS setup working by tomorrow so if you don't have it by then we can dig deeper
[20:14] <Lord_Set2> Alright thank you
[21:05] <kirkland> Lord_Set: Lord_Set2: howdy!  are you guys joining the hangout for our Juju/MAAS discussion?
[21:16] <thumper> jcastro: around?
[21:16] <jcastro> yo!
[21:17] <thumper> jcastro: I just did "juju add-unit ubuntu -n 10" on the local provider
[21:17] <thumper> to test timings
[21:17] <jcastro> pffft. 10 is easy, do 100.
[21:17] <thumper> this is using lxc-clone, but no special backing at this stage
[21:17] <thumper> took 72s for all agents to be fully up
[21:17] <jcastro> Oh, I see
[21:17] <thumper> I'll do 100 when I have better backing storage
[21:18] <jcastro> that is much nicer, I've been testing the hadoop bundle and it's been about 10 minutes from nothing to fully up
[21:18] <thumper> each image is ~900Mb on disk
[21:18] <thumper> I think a lot of that time is file i/o copying approx 9gig on disk
[21:18] <themonk> marcoceppi, do I need to bootstrap on my local machine if I deploy to amazon ec2?
[21:18] <thumper> I want to get some timings with overlayfs and btrfs
[21:18] <jcastro> hey so also, one thing we do
[21:18] <jcastro> is each instance does like apt-get update, upgrade, then the install hooks install a bunch of stuff
[21:19] <marcoceppi> thumper: no, you'll need to bootstrap ec2 to deploy in ec2
[21:19] <Lord_Set2> So once a node comes up, is there any reason why it would prompt for a password? Just ssh should work, correct?
[21:19] <thumper> marcoceppi: not me, themonk
[21:19] <marcoceppi> thumper: sorry, themonk ^^
[21:19] <thumper> jcastro: when using clone, it disables the initial apt-get update/upgrade
[21:19] <jcastro> thumper, your thing caches everything apt pulls down though right, not just the charms?
[21:19] <marcoceppi> Lord_Set2: if you're getting a password prompt something went wrong during cloudinit, you can access the node using the ubuntu password
[21:19] <thumper> so the images come up faster
[21:19] <jcastro> ah, so you're cheating. :)
[21:20] <thumper> not cheating, making awesome
[21:20] <jcastro> they eventually update though? you're not disabling just delaying I assume?
[21:20] <Lord_Set2> There is no password though unless it is a password that I don't know
[21:20] <thumper> jcastro: no, they don't update
[21:20] <thumper> the intent is to keep the template up to date
[21:20] <marcoceppi> Lord_Set2: the default ubuntu password for the ubuntu image is ubuntu
[21:20] <Lord_Set2> Oh ok
[21:20] <marcoceppi> Lord_Set2: but juju disables passwords via cloudinit
[21:21] <marcoceppi> so if you're getting a password, something went wrong
[21:21] <thumper> jcastro: also, heads up, I am making a core plugin 'juju-local'
[21:21] <marcoceppi> thumper: what's the plugin do?
[21:21] <thumper> it will get installed with the juju-local package
[21:21] <thumper> marcoceppi: it will have suspend/resume
[21:21] <thumper> and other methods for dealing with the template
[21:21] <marcoceppi> sweet
[21:21] <thumper> that is created for fast lxc cloning
[21:21] <Lord_Set2> So yeah... ubuntu doesn't work as a password
[21:22] <marcoceppi> Lord_Set2: how did you get these to spin up? with juju?
[21:22] <Lord_Set2> Yes
[21:22] <thumper> jcastro, marcoceppi: one thing that hazmat really wanted was for the local provider to not auto-restart on reboot
[21:22] <thumper> so I'm making it a config option
[21:23] <thumper> which will happen to default to "false"
[21:23] <thumper> which is one reason why we need "resume"
[21:23] <jcastro> that is perfect
[21:23] <hazmat> ie. how jorge killed his machine
[21:23] <jcastro> because sometimes I want to suspend
[21:23] <thumper> to bring back up a suspended environment
[21:23] <marcoceppi> Lord_Set2: yeah, something has gone wrong then. how are you getting to this prompt? via console or trying to use juju ssh?
[21:23] <thumper> hazmat: acknowledged
[21:23] <hazmat> thumper, awesome btw
[21:23] <Lord_Set2> trying to ssh ubuntu@node.maas
[21:24] <thumper> hazmat: I've been thinking about overlayfs vs btrfs
[21:24] <marcoceppi> Lord_Set2: :\ something has gone wrong then
[21:24] <hazmat> thumper, thou shalt not mention overlayfs ;-)
[21:24] <hazmat> its aufs or not
[21:24] <Lord_Set2> Hmm ok
[21:24] <thumper> hazmat: starting containers on my machine is super fast, because I have a fast SSD
[21:24] <thumper> hazmat: ok, aufs
[21:24] <hazmat> thumper, it's not starting.. it's creating
[21:24] <hazmat> thumper, and cloning is superfast with aufs or btrfs
[21:24] <thumper> however my image size is around 900 meg per instance
[21:24] <hazmat> thumper, and it's efficient with aufs or btrfs
[21:25] <thumper> the interesting bit is not modifying the underlying image when clones are running
[21:25] <thumper> I think in practice it will be fine
[21:25] <thumper> but I need to encode safety checks
[21:25] <hazmat> thumper, yes.. btrfs does the right thing, at the cost of duplicate bits.. aufs .. probably not
[21:25] <thumper> right
[21:25] <marcoceppi> Lord_Set2: stop the instance in maas, try again?
[21:25] <hazmat> thumper, i'd suggest not modifying the base, but creating a new base
[21:26] <hazmat> thumper, it's possible aufs will do the right thing, but in an image or layer based world that's just not how you do it.. you always roll forward on a new commit/release
[21:26] <thumper> hazmat: my suggestion is to just say "sorry, can't update the template with a local environment running"
[21:26] <thumper> do you think that is too harsh?
[21:26] <hazmat> thumper, it's not right either though.. i think you can create a new layer or snapshot
[21:26] <hazmat> for the maintenance
[21:27] <hazmat> thumper, this is an issue i have with the current ubuntu-cloud template which i've been trying to address
[21:27] <thumper> what's that?
[21:27] <hazmat> thumper, it saves its image download/cache to the same file name
[21:27] <hazmat> regardless of the actual version / release
[21:27] <thumper> haha
[21:27] <thumper> yeah
[21:27] <thumper> my precise image is like almost a year out of date
[21:27] <hazmat> thumper, so i've been downloading them to the correct filename, and symlinking it to the target
[21:28] <hazmat> smoser says it's correct, so we don't break people's scripts..
[21:28]  * thumper shrugs
[21:28] <hazmat> it's broken for this use case, so working around it is reasonable
[21:28] <thumper> so, who knows about aufs?
[21:28]  * hazmat raises hand
[21:28] <thumper> and should we use it over btrfs?
[21:28] <hazmat> thumper, you need trusty or lxc stable ppa
[21:28] <thumper> hazmat: I'm only doing fast lxc for trusty and above
[21:28] <hazmat> thumper, you auto decide based on underlying fs on /var/lib/lxc
[21:28] <thumper> so not an issue
[21:29] <hazmat> thumper, btrfs is more reliable
[21:29] <thumper> sure, that is a given
[21:29] <smoser> hazmat, the name was only changing at best every 6 months, when the content was changing every 3 weeks.
[21:29] <thumper> but if, like me, you have ext4
[21:29] <hazmat> thumper, aufs is pretty good, but it has issues with charms that do silly things
[21:29] <hazmat> like wordpress
[21:29] <thumper> we need some snapshot tech
[21:29] <smoser> so your illusion of "up to date" was bad anyway.
[21:29] <hazmat> installing nfs by default
[21:29] <hazmat> smoser, that's my point/issue exactly
[21:30] <thumper> hazmat, smoser: so can we not fix lxc to refetch images?
[21:30] <hazmat> smoser, i'm trying to use stream data now to download with the date rel name and then symlink into place on cache
[21:30] <smoser> yeah, you're right. the lxc template is not being smart.
[21:31] <thumper> hazmat: do I need to specify a backing for lxc-clone if on btrfs, or will it just do the right thing?
[21:31] <hazmat> thumper, that would help
[21:31] <smoser> we'd like to have some more intelligent behavior elsewhere.
[21:31] <hazmat> thumper, you need to specify backing store on create if on btrfs as well
[21:31] <thumper> hazmat: so, back to aufs, is it likely to cause issues with charms?
[21:31] <hazmat> thumper, it generally works.. there are cases where it won't
[21:32] <hazmat> wordpress installing nfs without need was an example i hit of where it won't
[21:32] <thumper> that seems sub-optimal
[21:32] <hazmat> thumper, its way better than overlayfs
[21:32] <thumper> sure... not useful though
[21:32] <thumper> given that wordpress is one of the first things people install if following examples
[21:33] <thumper> we don't want to make it a shit experience
[21:33] <hazmat> thumper, fair enough.. i'd suggest we fix the charm :-)
[21:33] <thumper> what is the fundamental fault?
[21:34] <jcastro> do btrfs btw, I thought we were trying to get rid of aufs eventually
[21:34] <hazmat> thumper, page fault'd out of memory.. don't remember anymore.. i'm planning on just adding options to allow the user to select backing store with auto detect
[21:35] <jcastro> marcoceppi, when doing debug-hooks on my mysql instance it seems I am not in the charm's path
[21:35] <hazmat> thumper, the other option is lvm thinp but it's a bit more work
[21:35] <marcoceppi> jcastro: you have to be in a hook context
[21:35] <jcastro> because it doesn't recognize "hooks/blah", I have to give it the huge pwd of the hooks
[21:35] <thumper> ok, let me test this new behaviour with a btrfs loopback device
[21:35] <hazmat> jcastro, in the actual debug hooks context you will be
[21:35] <marcoceppi> if you're in the 0 window, you're not in a hook context
[21:35] <hazmat> thumper, dealing with size management on loop devs is going to be a pain, plus the lack of fsync..
[21:36] <thumper> yes...
[21:36] <thumper> so... what do we do?
[21:36] <hazmat> thumper, openvz did this ploop device for this case .. http://openvz.livejournal.com/40830.html
[21:36] <jcastro> juju debug-hooks mysql/0 start
[21:36] <jcastro> is that incorrect if I want to debug the start hook?
[21:37] <hazmat> thumper, er.. better link: http://openvz.org/Ploop .. doesn't help us though
[21:38] <thumper> hazmat: because lxc doesn't recognize it?
[21:39] <jcastro> marcoceppi, I don't think it's getting to the point where it's launching like a full debug-hooks environment
[21:39] <jcastro> this is the innodb pool size thing
[21:39] <marcoceppi> jcastro: you have to trigger a hook in order to be in a hook context
[21:39] <jcastro> yeah but the service never starts at all
[21:39] <marcoceppi> jcastro: the 0 window in tmux is not a hook environment, it's just a standby window
[21:39] <thumper> hazmat: I could put all the btrfs loopback device management code into the juju-local plugin
[21:39] <thumper> and allow people to choose to use it
[21:39] <marcoceppi> jcastro: just do a configuration change
[21:39] <jcastro> which is probably why no hooks execute to begin with?
[21:39] <marcoceppi> jcastro: to trigger config-changed
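[The workflow marcoceppi is describing can be sketched as a juju 1.x session. The unit name `mysql/0` comes from the log above; the config key is hypothetical — substitute a real option from the charm's `config.yaml`.]

```shell
# Terminal 1: attach debug-hooks to the unit (juju 1.x syntax, as used above).
# This opens a tmux session; window 0 is just a standby shell, NOT a hook context.
juju debug-hooks mysql/0

# Terminal 2: trigger a hook so a new tmux window opens in a real hook context.
# Changing any config option fires config-changed ("some-option" is a placeholder).
juju set mysql some-option=value

# Back in tmux, a window for config-changed appears; inside it, hook tools and
# relative paths like hooks/config-changed resolve correctly.
```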
[21:40] <thumper> that way it becomes explicit
[21:40] <thumper> but juju itself doesn't have to care
[21:40] <jcastro> k, I'll investigate tomorrow, in the meantime, the hadoop bundle works now and is finished, just need to run the terasort
[21:40] <thumper> it just passes the right flags based on the fs of the container directory
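[A minimal sketch of thumper's "pass the right flags based on the fs of the container directory" idea, assuming GNU `stat` and the lxc `-B` backing-store flag; the flag mapping itself is an assumption for illustration.]

```shell
#!/bin/sh
# Detect the filesystem type of the container directory and pick an
# lxc-create backing-store flag accordingly (sketch, not juju's actual code).
dir=${1:-/var/lib/lxc}
fstype=$(stat -f -c %T "$dir")    # GNU stat: prints e.g. btrfs, ext2/ext3, tmpfs
case "$fstype" in
    btrfs) echo "-B btrfs" ;;     # btrfs snapshots give fast container cloning
    *)     echo "-B dir"   ;;     # plain directory backing store otherwise
esac
```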
[21:40] <jcastro> thumper, I did my first real juju run today
[21:40] <jcastro> juju run --unit hadoop-master/0 "sudo -u hdfs /usr/lib/hadoop/terasort.sh"
[21:41] <jcastro> after the hadoop bundle = success.
[21:41] <thumper> cool
[21:42] <hazmat> lazyPower, does your nagios upgrade-charm hook get called for config-change?
[21:43] <hazmat> thumper, putting it into a plugin doesn't change the dev size mgmt issue
[21:44] <thumper> hazmat: I know
[21:44] <thumper> but if we make the user have to create it manually
[21:44] <thumper> we can inform them
[21:44] <thumper> and they are more aware of the issues
[21:44] <thumper> rather than just doing it by magic
[21:44] <thumper> and then they get confused as to what the problem is
[21:44] <hazmat> thumper, and they do what when it runs out of space?
[21:44] <thumper> hopefully we can catch that early
[21:45] <thumper> maybe...
[21:45] <thumper> and let them know
[21:45] <thumper> wouldn't be too hard
[21:45] <hazmat> let them know they should stop using it? we need research into remediation options
[21:45] <thumper> we can create another file/device and add to the btrfs volume, yes?
[21:47] <thumper> hazmat: but yes, that would be part of making it all work well
[21:47] <hazmat> thumper, not exactly.. it's multi-dev raid
[21:48] <hazmat> thumper, i.e. you're not increasing the size per se
[21:48] <thumper> if we set it up as raid 0
[21:48] <thumper> why not?
[21:48] <thumper> I'm sure I read something about this
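[What thumper is remembering — growing a btrfs volume by adding another device — looks roughly like this. Paths and sizes are hypothetical, it needs root, and it inherits the loopback fsync caveats hazmat raises above.]

```shell
# Sketch: grow a loop-file-backed btrfs volume by adding a second device.
truncate -s 5G /var/lib/juju/btrfs-1.img
loopdev=$(losetup --find --show /var/lib/juju/btrfs-1.img)
btrfs device add "$loopdev" /var/lib/lxc
# Rebalance so existing data spreads across both devices; raid0-style striping
# applies only if the fs was created with -d raid0.
btrfs balance start /var/lib/lxc
```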
[21:49] <hazmat> thumper, don't forget fsync lies with loops...
[21:49]  * thumper sighs
[21:50] <hazmat> raid 0 on multiple devs with each of them lying about fsync..
[21:50] <thumper> you are the one that told me to do this in the first place...
[21:50] <thumper> I don't understand what you are getting at
[21:50] <thumper> are you saying "don't do it"?
[21:50] <thumper> or are you saying "warn the user"?
[21:50] <hazmat> thumper, i'm just trying to get to something that's reliable and works well if people use it a lot
[21:50] <thumper> and if so, what is the real issue?
[21:50] <hazmat> thumper, it's awesome you're doing this, but i just want it to be thought through
[21:51] <thumper> ok, so fsync lies, what is the implication
[21:51] <hazmat> thumper, unclean shutdown on laptop..
[21:52] <hazmat> fs disk corruption, maybe needs btrfs repair
[21:52] <thumper> given that we'd need to bring btrfs back up prior to lxc starting
[21:53] <thumper> on the loopback devices
[21:53] <thumper> can't we just do the repair in the upstart job?
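[The "repair in the upstart job" idea could look something like this config fragment. The job name, paths, and ordering are all assumptions; `btrfs check --repair` is the modern spelling of `btrfsck`.]

```
# /etc/init/juju-btrfs-loop.conf -- hypothetical upstart job: check and mount
# the loopback-backed btrfs volume before lxc auto-starts containers.
description "mount juju btrfs loopback store before lxc"
start on starting lxc

pre-start script
    # repair after an unclean shutdown, then mount the loop file
    btrfs check --repair /var/lib/juju/btrfs-0.img || true
    mount -o loop /var/lib/juju/btrfs-0.img /var/lib/lxc
end script
```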
[21:54] <hazmat> sounds reasonable... smoser ^ any comments?
[21:55] <thumper> hazmat: I talked to one of hallyn or stgraber about this a few days ago
[21:55] <thumper> hazmat: about how to hook up the upstart jobs
[21:55] <thumper> so the fs is mounted prior to lxc trying to auto start containers
[21:56] <hazmat> cool
[21:56]  * hazmat has meeting bbiab
[21:56] <thumper> ack
[22:06] <jcastro> marcoceppi, you still working?
[22:06] <jcastro> I am thinking charm school next Fri
[22:06] <jcastro> here is our list of topics:
[22:06] <jcastro> - Writing your own Juju plugins
[22:06] <jcastro> - Using Juju Bundles
[22:06] <jcastro> - Using Juju in HA configurations *
[22:06] <jcastro> - Configuring Juju with the manual provider
[22:06] <jcastro> - Starting small and then going big, using Juju with containers.
[22:06] <jcastro> - Troubleshooting Juju Part I
[22:06] <jcastro> - Troubleshooting Juju Part 2
[22:06] <jcastro> - Using the Juju GUI in depth.
[22:20] <themonk> marcoceppi, does the local container have any log file to tail?
[22:23] <lazyPower> hazmat: symlinked files. indeed.
[22:24] <lazyPower> it was like that when I found it, I should probably remove the upgrade-charm hook file as it serves no real purpose; the same routine is run in succession when upgrading the charm as-is.
[22:44] <hazmat> jcastro, writing bundles..
[22:45] <hazmat> jcastro, shout out for digital ocean
[22:46] <hazmat> thumper, back.
[22:46] <thumper> hazmat: kk, I'm currently testing with loopback devs
[22:46] <thumper> I'll report back later
[22:47] <thumper> about to go to the gym
[22:47] <hazmat> thumper, cool, i'm still thinking we should have aufs as an option
[22:47] <thumper> maybe...
[22:47] <hazmat> cause it's the simplest thing that generally works
[22:47] <thumper> let's go for one first
[22:47] <thumper> then we can consider aufs
[22:48] <thumper> at least make juju btrfs aware
[22:48] <thumper> so containers are created and cloned fast and correctly
[22:48] <hazmat> sounds good
[22:48]  * thumper focuses on that
[23:17] <marcoceppi> themonk: yes, ~/.juju/local/log
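[With the juju 1.x local provider, the directory marcoceppi names holds per-unit and aggregated logs. The unit name in the filename is an example — match it to your deployed service.]

```shell
# Tail one unit's hook output (filename pattern for the 1.x local provider):
tail -f ~/.juju/local/log/unit-mysql-0.log
# Or the aggregated log for all machines in the environment:
tail -f ~/.juju/local/log/all-machines.log
```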
[23:17] <marcoceppi> jcastro: sounds good, was running errands