[10:18] <stub> If my charm only runs under precise with the cloud:icehouse archive, should I automatically add this repository? Or should I require the operator to explicitly specify it, in case they need to pull dependencies from elsewhere?
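Either way, a sketch of what enabling that archive amounts to on precise (the sources.list.d filename is illustrative; the pocket and URL follow the Ubuntu Cloud Archive convention):

```
# /etc/apt/sources.list.d/cloud-archive-icehouse.list (illustrative path)
# Equivalent to `sudo add-apt-repository cloud-archive:icehouse`:
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main
```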
[11:04] <blackboxsw> hey gents
[14:51] <jcastro> heya lazyPower
[14:51] <jcastro> Do you have the URL to your bundle?
[15:40] <StoneTable> wrt charm-tools on lp, what determines when a build is made available for download? i.e., 1.2.10 is the latest in that branch, but only 1.2.9 is packaged up?
[15:41] <StoneTable> I ask because homebrew (osx) pulls 1.2.9, which doesn't support templates w/charm-create
[15:52] <marcoceppi> StoneTable: 1.2.10 should be packaged up
[15:52]  * marcoceppi checks
[15:52] <marcoceppi> StoneTable: rather, 1.3.2
[15:52] <marcoceppi> I need to update homebrew
[15:53] <StoneTable> Maybe I'm looking in the wrong place? https://launchpad.net/charm-tools/+download
[15:53] <marcoceppi> StoneTable: refresh, I just updated the 1.3.2 release
[15:53] <marcoceppi> I forgot to do so for the last two releases
[15:54] <StoneTable> Excellent, thanks! I'll update the homebrew formula and test/send pull request
[15:54] <marcoceppi> StoneTable: awesome, thank you!
[15:55] <marcoceppi> StoneTable: it should be noted, I think we added some dependencies in the 1.3 series. I think the way the formula works this won't be a problem, but I don't have a Mac to test
[15:55] <StoneTable> I noticed that. I'll test and update as needed. Thanks!
[16:05] <jcastro> asanjar, got it, bundle deploying!
[16:06] <asanjar> great
[16:23] <jcastro> mysql:
[16:23] <jcastro>       charm: "cs:trusty/mysql-1"
[16:23] <jcastro>       num_units: 1
[16:23] <jcastro>       dataset-size: 512M
[16:23] <jcastro> is that the correct formatting for dataset size?
[16:24] <marcoceppi> yes
[16:24] <jcastro> not sure if I should have " or not around the 512M
[16:24] <marcoceppi> jcastro: it will work either way, there are no special characters there
[16:52] <jcastro> http://paste.ubuntu.com/7794328/
[16:52] <jcastro> marcoceppi, I'm still getting that despite setting that config value
[16:52] <jcastro> any ideas?
[16:53] <marcoceppi> "Initializing buffer pool, size = 12.3G"
[16:53] <marcoceppi> that is really wrong
[16:53] <jcastro> yeah, I set the config value though
[16:54] <marcoceppi> jcastro: idk what to say
[16:54] <marcoceppi> I don't have time to debug atm
[16:55] <jcastro> I mean, I resolved it by hand
[16:55] <jcastro> I am just wondering why it didn't take with the bundle config
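A likely reason the setting didn't take, assuming the juju-deployer v3 bundle format: charm config values belong under an `options:` key, not at the service's top level, where the deployer silently ignores them. A hedged sketch of the corrected stanza:

```yaml
mysql:
  charm: "cs:trusty/mysql-1"
  num_units: 1
  options:
    dataset-size: 512M
```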
[16:55] <jcastro> marcoceppi, I will buy you a bottle of something when you land your new rev of this charm
[16:55]  * marcoceppi now has incentive
[16:58] <jcastro> asanjar, hey so
[16:58] <jcastro> juju run --service hadoop-master “/usr/local/hadoop/terrasort.sh”
[16:58] <jcastro> that is both misspelled and doesn't exist on disk
[16:59] <jcastro> juju run --service hadoop-master "/var/lib/juju/agents/unit-hadoop-master-0/charm/scripts/terasort.sh"
[17:00] <jcastro> I found that on the hadoop-master node
[17:00] <jcastro> but running it tells me hadoop command not found
[17:00] <jcastro> asanjar, ^
[17:04] <jrwren> jcastro: its likely I broke that charm.
[17:04] <jrwren> jcastro: I'll post a fix
[17:09] <jrwren> jcastro: nevermind, I'm totally wrong. That wasn't me.
[17:14] <asanjar> jcastro: lets look at it .. https://plus.google.com/hangouts/_/gvngjcf6moezlu53i4noau5x4ia
[17:14] <jcastro> This party is over...
[17:14] <jcastro> asanjar, go to the team hangout
[17:16] <asanjar> jcastro: I am there
[17:16] <asanjar> can u hear me
[17:21] <jcastro> lazyPower, ping
[17:55] <lazyPower> pong
[17:56] <lazyPower> https://code.launchpad.net/~lazypower/charms/bundles/oscondemo/bundle
[18:52] <lazyPower> JoshStrobl: ping
[19:17] <schegi> jamespage, online??
[19:40] <jamespage> schegi, hey!
[19:44] <jcastro> lazyPower, http://pythonhosted.org/juju-deployer/config.html
[19:44] <jcastro> that has example constraints for deployer
[19:44] <lazyPower> jcastro: thanks, looking now
[19:45] <lazyPower> jcastro: circling back - i can make another bundle with constraints as like oscon-recommended - would that be acceptable?
[19:46] <jcastro> sure
[19:48] <lazyPower> ok, i'll sneak that in the next update
[20:36] <mbruzek> Hello Jose
[20:37] <mbruzek> Are you available?
[20:37] <jose> hey mbruzek
[20:37] <jose> yeah
[20:37] <mbruzek> The chamilo charm failed to install, I saw this error in the log files. Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
[20:38] <mbruzek> is that a known bug?
[20:38] <marcoceppi> mbruzek: this sounds like apt-get update not being run in the local provider template
[20:38] <mbruzek> yes
[20:38] <mbruzek> likely it is.
[20:38] <mbruzek> not getting run
[20:39] <mbruzek> You will have to bug thumper for that fix.
[20:39] <marcoceppi> mbruzek: that's isolated to local provider templates
[20:39] <marcoceppi> mbruzek: just rebuild your template or start the template, run apt-get update/ apt-get dist-upgrade, then stop it
[20:39] <mbruzek> I deleted the template today and got a new one today.
[20:40] <marcoceppi> mbruzek: did you also delete the cloud image cache?
[20:40] <mbruzek> marcoceppi, I did juju clean local
[20:40] <mbruzek> I believe that deletes the image cache now right?
[20:40] <marcoceppi> mbruzek: for s&g, can you check the timestamp on /var/cache/lxc/cloud-*/*
[20:41] <jose> mbruzek, marcoceppi: is this a known/reported bug? it's been affecting me and some other people on the local provider
[20:41] <mbruzek> Jun 30 05:29
[20:43] <marcoceppi> mbruzek: last cloud images was released the 12th and the 14th
[20:43] <marcoceppi> you have "stale" cache
[20:43] <mbruzek>  for power?
[20:43] <marcoceppi> I'll check the juju-clean plugin
[20:43] <marcoceppi> I don't know about that
[20:43] <mbruzek> sudo rm -rf /var/cache/lxc/cloud-* || true
[20:43] <mbruzek> I see that line in juju-clean
[20:43] <marcoceppi> is that ppc64el ?
[20:44] <mbruzek> -rw-r--r-- 1 root root 188205427 Jun 30 05:29 /var/cache/lxc/cloud-trusty/trusty-server-cloudimg-ppc64el-root.tar.gz
[20:44] <marcoceppi> yeah, it's out of date
[20:44] <marcoceppi> if you run that command does it actually delete the image?
[20:45] <mbruzek> no...
[20:45] <marcoceppi> well, there's the problem!
[20:45] <marcoceppi> can you remove the || true and see if it errors?
[20:45] <mbruzek> http://pastebin.ubuntu.com/7795340/
[20:46] <marcoceppi> mbruzek: what does `sudo rm -rf /var/cache/lxc/cloud-*` say?
[20:49] <mbruzek> marcoceppi, http://pastebin.ubuntu.com/7795354/
[20:49] <marcoceppi> wtf, why isn't sudo rm -rf wildcard working
[20:49] <mbruzek> I can not do sudo ls -l /var/cache/lxc/cloud-*
[20:50] <mbruzek> I have to always use cloud-trusty
[20:50] <marcoceppi> mbruzek: oh, that's because of permissions
[20:50] <marcoceppi> you don't have o+x on /var/cache/lxc
[20:50] <marcoceppi> so you can't list, so rm fails
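To spell out the failure mode: the glob in `sudo rm -rf /var/cache/lxc/cloud-*` is expanded by the *calling* user's shell before sudo ever runs. Without read/execute access on /var/cache/lxc the pattern matches nothing, so `rm -f` receives the literal string `cloud-*` and silently does nothing. Quoting the glob so the root shell expands it sidesteps this; the snippet below shows the real fix as a comment and demonstrates the quoting trick on a throwaway directory, since the real command needs root:

```shell
# The fix: let the privileged shell expand the glob.
#   sudo sh -c 'rm -rf /var/cache/lxc/cloud-*'
# Same quoting trick, demonstrated without sudo on a scratch directory:
mkdir -p /tmp/glob-demo/cloud-trusty /tmp/glob-demo/cloud-precise
sh -c 'rm -rf /tmp/glob-demo/cloud-*'
ls /tmp/glob-demo        # prints nothing: both directories are gone
```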
[20:50] <mbruzek> tricky.
[20:50] <marcoceppi> luuhhammeeee
[20:50] <marcoceppi> I'll put a proper patch up tonight
[20:50] <marcoceppi> for now, just delete the cloud-trusty and cloud-precise
[20:51] <marcoceppi> rebootstrap, etc
[20:51] <mbruzek> well power does not support precise so I will just delete cloud-trusty for now.
[20:52] <lazyPower> marcoceppi: should we be implicit with that pathing in the rm command then?
[20:53] <marcoceppi> lazyPower: no, I'm just going to loop over all the known working series for now
[20:53] <lazyPower> ok.
[20:53] <marcoceppi> I mean, it's yes to your question
[20:53] <marcoceppi> but I'm just going to loop it
[20:53] <lazyPower> i got what you were saying :)
[20:53] <marcoceppi> I think there is a way to get all the code names on Ubuntu somehow
[20:55] <marcoceppi> ah, found it
[20:55] <mbruzek> marcoceppi, I would like to know more about that
[20:57] <mbruzek> jose I did open a bug about this here: https://bugs.launchpad.net/juju-core/+bug/1336353
[20:57] <_mup_> Bug #1336353: juju should run apt-get update <apt-get> <proxy> <stale> <update> <juju-core:Triaged> <https://launchpad.net/bugs/1336353>
[20:57] <jose> thanks mbruzek
[20:57] <mbruzek> jose, the developers have not fixed it yet; it is unassigned. I would appreciate some actual user input on this bug.
[20:58] <jose> mbruzek: I'm writing a comment right now :)
[20:58] <mbruzek> jose, I knew you would!
[20:58] <jose> :P
[21:00] <marcoceppi> mbruzek: egrep -Eio "^Suite: ([a-z]+)$" /usr/share/python-apt/templates/Ubuntu.info | awk '{print $2}' | sort | uniq
[21:01] <marcoceppi> there's probably a more condensed way to do that, but that will work
[21:02] <mbruzek> Works on Power!
[21:02] <mbruzek> One of the few things that does.
[21:02] <lazyPower> gnarly
[21:04] <mbruzek> Actually marcoceppi, it has "devel" in there. That was not one of the releases, was it?
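On that point: "devel" in Ubuntu.info is a symbolic alias for the current development release rather than a codename itself, so it is worth filtering out. The pipeline above can also be condensed to a single sed invocation; sample `Suite:` lines stand in for /usr/share/python-apt/templates/Ubuntu.info so the snippet is self-contained:

```shell
# One sed does the match, capture, and print (replacing egrep + awk);
# grep -v drops the symbolic "devel" suite.
printf 'Suite: precise\nVersion: 12.04\nSuite: trusty\nSuite: devel\n' |
  sed -n 's/^Suite: \([a-z]\+\)$/\1/p' | grep -v '^devel$' | sort -u
# prints:
#   precise
#   trusty
```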
[21:18] <dpb1> Hi all -- is calling config-changed on node reboot expected?
[21:19] <ahasenack> marcoceppi: hi, do you know if juju is really supposed to call config-changed when a unit is rebooted?
[21:22] <schegi> someone experience with the hacluster charm?
[21:23] <mbruzek> dpb1, ahasenack, I suspect more information would be needed, but I am going to say, no the state of the VM should not cause config-changed to be called.
[21:25] <dpb1> mbruzek: here is the log from unit start to where our config-change logging starts: https://pastebin.canonical.com/113471/
[21:25] <mbruzek> ahasenack, Can you give us more information?
[21:26] <dpb1> mbruzek: I can give you whatever you would like.  I'm on the node here
[21:26] <ahasenack> mbruzek: it just happens. I'll reboot a "ubuntu" node and see the logs where it tries to call config-changed
[21:28] <mbruzek> ahasenack, This is news to me, I have no idea how juju would keep track of the state of the charm like that.
[21:28] <mbruzek> ahasenack, Is this behavior causing a problem?
[21:28] <ahasenack> oh yeah
[21:29] <mbruzek> ahasenack, Because if config-changed is truly idempotent it should be OK to run that script over and over again.
[21:29] <ahasenack> dpb1 is debugging it, so far seems it's a race between the "service foo start" from config-changed and another "service foo start" from the normal boot process
[21:30]  * mbruzek is testing this on his machine
[21:32] <dpb1> yes, at least with start-stop-daemon, there is no locking, so there are races around creation of the pidfile..
[21:34] <ahasenack> mbruzek: from the docs, it might even be it's on purpose:
[21:34] <ahasenack> config-changed runs in several different situations.
[21:34] <ahasenack>     immediately after "install"
[21:34] <ahasenack>     immediately after "upgrade-charm"
[21:34] <ahasenack>     at least once when the unit agent is restarted (but, if the unit is in an error state, it won't be run until after the error state is cleared).
[21:34] <ahasenack> see the "at least" bit
[21:35] <mbruzek> ahasenack, well there you have it.  I was unaware of that.
[21:36] <mbruzek> ahasenack, The unit agent must signal to juju that it was restarted
[21:51] <schegi> anyone experienced with the hacluster charm?
[23:28] <sebas5384> i'm starting the scaling part of the charm I'm doing
[23:28] <sebas5384> what should be the better path here?
[23:28] <sebas5384> ceph, nfs, glusterFS ... ?
[23:29] <sebas5384> there's a storage charm too
[23:29] <sebas5384> i'm really tempted by ceph
[23:41] <sebas5384> someone? :P
[23:42] <sebas5384> ceph seems too big for a drupal charm hehe
[23:44] <jose> well, I've seen mediawiki use nfs
[23:44] <jose> as well as owncloud
[23:46] <sebas5384> yeah jose, and the wordpress charm too
[23:46] <jose> give nfs a shot, maybe?
[23:47] <sebas5384> yeah but something about distributed file systems caught my attention
[23:47] <sebas5384> ceph and glusterfs distribution is awesome
[23:48] <sebas5384> but yeah at first nfs is a good option
[23:49] <sebas5384> i don't know man, scaling with nfs doesn't seem right to me hehe
[23:50] <sebas5384> but yeah, I should give it a try first