[01:09] <IceyEC> I'm trying to set up the ceph charm for development of a charm that will connect to it, I try: `juju deploy -n 3 ceph`
[01:09] <IceyEC> and then juju status shows that each node is at: ` 'hook failed: "mon-relation-changed"'`
[01:11] <IceyEC> looks like I need to add the monitor-secret and fsid, adding those now
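For reference, the ceph charm won't form a monitor quorum without a unique `fsid` and a shared `monitor-secret`. A sketch of generating both and passing them at deploy time (the option names match the ceph charm's config; the secret shown is a placeholder, normally produced with `ceph-authtool --gen-print-key`):

```shell
# Generate a cluster fsid; any unique UUID will do
fsid=$(uuidgen)
# Placeholder monitor key -- normally from: ceph-authtool /dev/stdout --name=mon. --gen-key
secret='AQBomftUgGbWFhAAs1rUfelZxRrsm/4EVNVf5Q=='
# Write a deploy-time config file for the charm
cat > /tmp/ceph.yaml <<EOF
ceph:
  fsid: $fsid
  monitor-secret: '$secret'
EOF
# juju deploy -n 3 --config /tmp/ceph.yaml ceph
```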
[11:56] <IceyEC> I'm looking at the Charmhelpers code, specifically, https://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/storage/linux/ceph.py#L292
[11:57] <IceyEC> am I correct that this cannot work on LXC?
[11:57] <IceyEC> I'm developing with the local provider and am getting modprobe errors, digging in it looks like modprobe is not supported in LXC containers
[11:57] <marcoceppi> IceyEC: you are correct, at the moment ceph requires block devices and there are none in the LXC/Local provider with Juju
[11:57] <marcoceppi> IceyEC: you can use KVM instead of LXC for the backend to local provider
[11:58] <marcoceppi> IceyEC: that way modprobe would work as expected
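For anyone following along: with the juju 1.x local provider, the container type is set per-environment in `~/.juju/environments.yaml`. A minimal fragment (only the `container` key is the point here; the rest is the usual local-provider boilerplate):

```yaml
local:
    type: local
    container: kvm   # default is lxc; kvm guests can modprobe
```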
[11:58] <IceyEC> ok, thanks
[11:59] <marcoceppi> IceyEC: I see zero docs on this, but lp|conference and mbruzek have done this recently
[12:03] <IceyEC> cool, I'm trying to add ceph support to the solr-jetty charm and discovered that it isn't letting me set up the block device :-P
[12:04] <marcoceppi> IceyEC: heh, yeah, ceph on local provider def does not work in LXC, might work in KVM, jamespage would probably know better
[12:05] <jamespage> IceyEC, marcoceppi: actually it does
[12:05] <marcoceppi> :O
[12:05] <IceyEC> sounds great :)
[12:05] <jamespage> you can provide a directory name as a block device - e.g. /tmp/osd
[12:05] <jamespage> for example
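A sketch of what that looks like against a deployed service (`osd-devices` is the ceph charm's config option; `/tmp/osd` is just the example directory from above):

```shell
# Point the charm at a directory instead of a real block device
juju set ceph osd-devices=/tmp/osd
```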
[12:05] <IceyEC> in lxc?
[12:05] <IceyEC> because, I did that
[12:05] <jamespage> IceyEC, yes
[12:05] <IceyEC> /var/ceph was the device name I gave it, and it still tries to do modprobe
[12:05] <jamespage> IceyEC, the ceph charm does?
[12:06] <IceyEC> yeah
[12:06] <IceyEC> well
[12:06] <IceyEC> no
[12:06]  * jamespage looks
[12:06] <IceyEC> I'm using the code that the MySQL charm uses to configure Ceph
[12:06] <IceyEC> which is almost the same as the CharmHelpers code for Ceph
[12:06] <jamespage> IceyEC, right - that does not work under LXC at all
[12:06] <IceyEC> both of which do hit modprobe in the configure function
[12:06] <IceyEC> so, I can set up ceph but I cannot use it under lxc?
[12:07] <jamespage> IceyEC, it's a limitation of running in an LXC container - you can't modprobe or suchlike
[12:07] <IceyEC> yeah, will just reconfigure to use kvm :)
[12:07] <jamespage> IceyEC, ack - that would work
[12:07] <IceyEC> sounds good :)
[12:07] <jamespage> even if it's a bit heavier on resources
[12:07] <IceyEC> hope to have something for you and arosales soon ;-)
[12:07] <IceyEC> this machine has 32G of RAM and dual dual-core Xeons
[12:08] <IceyEC> have resources to deal with :)
[12:08] <marcoceppi> IceyEC: I'll swing mbruzek your way when he comes online, he should be able to give you the long and short of KVM as the local provider
[12:10] <IceyEC> cool, thanks marcoceppi
[12:15] <suchvenu> Hi Matt
[12:16] <mbruzek> Hello
[12:16] <suchvenu> I got a review comment on performance for the DB2 charm
[12:16] <suchvenu> Can you please help me to understand that better
[12:16] <mbruzek> Sure!
[12:17] <marcoceppi> mbruzek: when you get a second, can you share how to use KVM as the provisioner for local with IceyEC? They're having some modprobe issues with LXC and IIRC you addressed that in docker with KVM on local
[12:17] <mbruzek> When I deployed db2 on amazon the disk space on the VM was heavily consumed.
[12:18] <suchvenu> ok
[12:18] <mbruzek> suchvenu:  I don't remember the exact number but it was nearly out of hard drive space.  With Juju you can request larger instances with constraints, but there was also a /mnt directory that had 190GB free
[12:19] <suchvenu> I checked DB2 and found that we can create databases and logs using "create database <db name> on <>" commands
[12:19] <suchvenu> Is that the one you are looking for ?
[12:19] <mbruzek> suchvenu: I also know that the minimum Amazon instance has low RAM (1.7GB).  I don't know how much RAM DB2 needs to run at optimum performance
[12:20] <suchvenu> ok
[12:20] <mbruzek> suchvenu: No, I was thinking the README could make some suggestions on disk size and/or RAM size for the best performance of DB2
[12:20] <suchvenu> ok
[12:21] <mbruzek> suchvenu: I don't know where the database is written on that charm, but if the database gets used and grows I fear the filesystem on those small instances will be totally consumed.
[12:21] <mbruzek> Running out of disk space is a very difficult problem to diagnose, because even log files cannot be written.
[12:21] <suchvenu> okk
[12:22] <suchvenu> So you are suggesting to add the RAM and disk size requirements in Readme ?
[12:22] <mbruzek> suchvenu: The juju constraints you can set are root-disk (file system) and mem (RAM)
[12:23] <mbruzek> Yes to what is recommended.  And perhaps a warning that an install of DB2 will consume a large percentage of free space if the constraints are not used.
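Something like the following would capture that advice in the README (the constraint keys `mem` and `root-disk` are real juju constraints; the sizes are illustrative placeholders, not tested recommendations):

```shell
# At deploy time:
juju deploy db2 --constraints "mem=8G root-disk=100G"
# Or for an existing service, applying to units added later:
juju set-constraints --service db2 mem=8G root-disk=100G
```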
[12:25] <suchvenu> okk
[12:25] <suchvenu> All these in README, right ?
[12:26] <marcoceppi> IceyEC: there are docs! Just in devel still: https://jujucharms.com/docs/devel/config-KVM
[12:26] <suchvenu> Ok Thanks mbruzek... I will add the details in README and push it back for review.
[12:26] <mbruzek> suchvenu: Yes, find out what is recommended and put that in the README.  But if we find that the small instance is not sufficient we should list that as well in the README
[12:27] <IceyEC> heh, I have that open already ;-) following along, will hopefully have ceph mounts purring along shortly :) thanks marcoceppi
[12:27] <suchvenu> Ok
[12:32] <mbruzek> suchvenu: Also consider deleting the unneeded RPMs or packages for other architectures in the charm code
[13:58] <IceyEC> so, I've got it creating the block device, but when I try to mount it with charmhelpers.contrib.charmsupport.volumes.configure_volume, I'm getting `mount: block device /dev/rbd1 is write-protected, mounting read-only\nmount: you must specify the filesystem type`
[14:08] <IceyEC> something I broke removing other code? possibly!
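That second error ("you must specify the filesystem type") is typically what mount prints for a device that has no filesystem on it yet, so one thing to rule out is whether the rbd device was ever formatted. A manual check on the unit might look like this (device name taken from the error above; mkfs/mount steps are a debugging sketch, not the charm's code path):

```shell
# If blkid prints nothing, the device has no filesystem yet
sudo blkid /dev/rbd1
sudo mkfs.ext4 -q /dev/rbd1
sudo mount /dev/rbd1 /mnt
```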
[14:40] <jcastro> hey beisner
[14:40] <beisner> hi jcastro, back in 1 min..
[14:41] <jcastro> ack, I want to talk about: https://bugs.launchpad.net/bugs/1373862
[14:41] <mup> Bug #1373862: MySQL doesn't deploy due to oversized dataset <mysql (Juju Charms Collection):Fix Committed by marcoceppi> <https://launchpad.net/bugs/1373862>
[14:47] <beisner> jcastro, howdy, back
[14:48] <jcastro> yeah so basically, people keep running into this issue when trying Juju on LXC
[14:49] <beisner> jcastro, right.  i totally understand.
[14:49] <beisner> jcastro, but imho, we can't just change a long-standing default value.
[14:50] <jcastro> iirc, all we did was just pick a percentage
[14:50] <marcoceppi> clint chose a default over 3 years ago
[14:50] <marcoceppi> Oracle says that default is not a sane value
[14:50] <marcoceppi> and provided us with a better value
[14:50] <marcoceppi> I understand the upgrade dilemma, I want to see if Juju does the right thing or not
[14:51] <beisner> jcastro, right, but that makes it even more important to discuss.  we have an established expectation with a fairly large user base, who are accustomed to that default value.
[14:51] <jcastro> beisner: sure, happy to do that, maybe we should do it on the list?
[14:51] <jcastro> and I'd like to counter that we keep losing users when they try juju and it doesn't work
[14:52] <arosales> IceyEC, good to hear :-)
[14:53] <jcastro> beisner: should I bring it up on the list? I am curious as to what would happen during upgrades
[14:53] <beisner> jcastro, marcoceppi - here's a use case to consider:
[14:54] <beisner> i'm joe admin at company X.  i have a couple of juju-deployed database servers with 256GB of RAM (the charm's established behavior gives mysql 204GB).  i like it.  i need to add more servers at another site in a new juju environment.
[14:54] <jamespage> rick_h_, urulama: hey - fyi I've finally got the openstack-base bundle updated and in-store
[14:54] <beisner> joe will now have 128 MB whereas he had 204GB for his service, unless he changes his ways.
[14:55] <marcoceppi> beisner: so because we made a mistake 3 years ago we can't fix it? Sounds like Joe should be using bundles and explicitly setting values. The GUI won't export default config options, but that doesn't mean they shouldn't change
[14:55] <jcastro> so I suspect someone using it in production is setting explicit values for everything with like a yaml file, but I could be wrong
[14:55] <beisner> marcoceppi, i don't necessarily disagree.  arbitrary value setting stinks.
[14:56] <beisner> jcastro, could be.
[14:56] <marcoceppi> beisner: the way you've set up this defense means that no default values should really ever be changed ever
[14:56] <beisner> i do think it merits this type of convo, and advance notice before committing, if that is what is decided (and i can be completely fine with that).
[14:57] <jcastro> ok cool, I'll summarize your examples and post nowish
[14:57] <jcastro> beisner: marco and I are on a mission today to crush papercut charm bugs
[14:57] <jamespage> jcastro, hey - we've discussed having the percentage still - but capping at a sane max based on 'defaults'
[14:57] <beisner> marcoceppi, kind of yeah.  we try very hard to never change default behavior in the openstack charms for this reason.   reason:  any time you alter default behavior, you risk borking someone.
[14:58] <beisner> marcoceppi, not to say it doesn't happen, or can't happen.  just raising the "we should talk about this" flag.
[14:58] <marcoceppi> jamespage: there may well be databases big enough to need 28GB of the 32G it's installed on
[14:58] <marcoceppi> for innodb-buffer
[14:58] <jamespage> marcoceppi, sure
[14:59] <jamespage> marcoceppi, I was thinking like 80% or 32G, whichever is smaller
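A sketch of that heuristic, reading total RAM from /proc/meminfo (the 80% and 32G figures are the numbers floated in this discussion, not settled charm defaults):

```shell
# 80% of physical RAM, capped at 32G (all values in kB)
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
pool_kb=$(( total_kb * 80 / 100 ))
cap_kb=$(( 32 * 1024 * 1024 ))
if [ "$pool_kb" -gt "$cap_kb" ]; then
    pool_kb=$cap_kb
fi
echo "innodb_buffer_pool_size = ${pool_kb}K"
```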
[14:59] <marcoceppi> jamespage: still won't fix the container story
[14:59] <marcoceppi> or local provider issue
[14:59] <beisner> marcoceppi, setting the value will fix both, no?
[14:59] <jamespage> marcoceppi, the only thing that will fix the container story is lxcfd
[14:59] <jamespage> lxcfs
[14:59] <jamespage> rather
[15:00] <marcoceppi> jamespage beisner I'd even be fine with having this set to the default innodb-buffer-pool-size from distro, which might be the best way to solve this
[15:00] <marcoceppi> jamespage: true, can't wait for that
[15:00] <jamespage> marcoceppi, agreed
[15:01] <beisner> marcoceppi, yeah keeping somewhat in line with the underlying product's default sounds like a good thing, generally speaking.
[15:02] <jcastro> ok, mail sent to list.
[15:03] <jcastro> that way we have more people chime in
[15:04] <marcoceppi> jamespage beisner FWIW, default innodb_buffer_pool_size is ~134MB
[15:04] <marcoceppi> I'll update my merge proposal to reflect this, the readme to outline it better, and the tests to work with the update (while the discussion on the list happens)
[15:05] <jcastro> marcoceppi: man, I was really hoping we could have just lit the thing on fire and walked away in slow motion with dramatic music playing.
[15:09] <marcoceppi> jcastro beisner: if a value is marked as "default" in juju, on upgrade it gets the new default value
[15:09] <marcoceppi> i hate everything
[15:09] <beisner> marcoceppi, bahh.  i was hoping the opposite.
[15:10] <marcoceppi> I guess mysql will just be forever broken until lxcfs
[15:11] <jcastro> hah man, awesome.
[15:11] <marcoceppi> I just wish there was a way in upgrade-charm to mark configuration as no longer default
[15:13] <jcastro> marcoceppi: ok, so we need to ask core for it to behave the other way
[15:13] <marcoceppi> jcastro: not exactly
[15:14] <marcoceppi> there is no right answer to this, except to give the charm more control over aspects like configuration
[15:14] <jcastro> I am fine with anything that isn't "leave it broken until lxcfs"
[15:14] <marcoceppi> jcastro: well, asking for a feature like this in core is basically tantamount to that
[15:20] <hazmat> you can try to detect running in a container via the self cgroup, but juju doesn't touch cgroup constraints unfortunately so there's no valid value to assume (even with lxcfs).
[15:20] <hazmat> there's a long standing bug to have juju set constraints on the container per the service constraints
[15:21] <marcoceppi> hazmat: I wonder if moving from direct lxc to lxd for local provider would help this
[15:21] <hazmat> marcoceppi: in and of itself, probably not.. juju needs to set constraints on the container else any given container will still see host resources as its own.
[15:21] <marcoceppi> just gaining an actual bootstrap node on local would be a win, and I think thumper is working on this, but I'm not sure if it would address the constraints/cgroup issue
[15:21] <hazmat> marcoceppi: moving to lxd has other benefits though, like making machine 0 not special
[15:22] <hazmat> marcoceppi: i still use my jury rigged lxc scripts for local dev, just so i can avoid the machine 0 as host setup that local provider currently tries.
[15:22] <marcoceppi> hazmat: ack, I'll go find the bug and see if I can't get it a little higher in priority
[15:22] <hazmat> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1323446
[15:22] <mup> Bug #1323446: constraints on local provider <constraints> <local-provider> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1323446>
[15:23] <marcoceppi> jcastro: ^
[15:24] <jcastro> looking, sorry was on the phone
[15:44] <rick_h_> jamespage: cool, I know the team was getting a fix in for the nested lxc issue and a new deployer/gui/quickstart to make sure that works as well.
[15:44] <rick_h_> jamespage: I'm on holiday today but don't hesitate to grab frankban or Makyo if anything isn't right there.
[15:44] <Makyo> o/
[15:56] <jamespage> it's all cool, ta
[15:56] <jamespage> rick_h_, enjoy your holiday
[20:03] <pmatulis> with HA, how does the juju master get associated with the mongodb master? is it random (among the ones which are about to be created) or is it always the initial single state server?
[21:42] <hatch> is there a way for me to get a single configuration value from the cli?
[21:42] <rick_h_> hatch: can't you juju get the single option?
[21:42] <thumper> yes
[21:43] <thumper> hatch: environment or service?
[21:43] <hatch> service
[21:43] <hatch> you can't according to the 'get' docs
[21:44] <rick_h_> hatch: yea, I see, that's in the docs. I might be thinking of the charmhelpers in python :/
[21:44] <rick_h_> thumper: service config for a single config item
[21:44] <rick_h_> thumper: help is showing nothing to help at that level
[21:44] <hatch> I suppose it's partially irrelevant, my issue is that I have to put keys into a config option (because juju doesn't allow files as configuration options)
[21:44] <thumper> rick_h_, hatch: wow... that sucks
[21:44] <hatch> but when I write them out to the file they are malformed
[21:44] <thumper> you should be able to
[21:45] <rick_h_> thumper: yea, seems like it'd be an obvious one but just adding that as an narg fails
[21:45] <thumper> also, the format difference between 'juju get' and 'juju get-env' is jarring
[21:45] <rick_h_> thumper: gotta love a consistent api :/
[21:45]  * thumper sighs
[21:45] <thumper> we suck
[21:46]  * thumper adds a tech-debt bug
[21:46] <hatch> basically what I'm trying to determine is whether `config-get mykey` returns the value formatted; the key file ends up with a bunch of \'s instead of real newlines
[21:47] <rick_h_> hatch: hmm, I know we use the @filename.txt thing in a charm script to get a file contents read in properly
[21:47] <rick_h_> hatch: maybe need to look at that?
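For reference, the trick rick_h_ means is juju's `@file` syntax for `juju set`, which reads the option's value from a file verbatim (service and option names here are placeholders):

```shell
# Read the value of ssl-key from a file instead of passing it inline
juju set mycharm ssl-key=@/path/to/key.pem
```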
[21:47] <hatch> rick_h_: I have to set about 15 config values so I have a yaml file which I provide when deploying
[21:48] <rick_h_> hatch: ah ok
[21:48] <hatch> rick_h_: so you're saying to do it manually for each key?
[21:48] <rick_h_> hatch: no
[21:49] <hatch> I think that the `config-get mykey` is returning the value with \'s for newlines
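Assuming those really are literal two-character `\n` sequences in the string, `printf '%b'` will expand them back into real newlines when writing the key file. A workaround sketch (the value below is a stand-in, not actual `config-get` output):

```shell
# Simulated config value containing literal backslash-n sequences
val='-----BEGIN KEY-----\nMIIB...\n-----END KEY-----'
# %b interprets backslash escapes in the argument
printf '%b\n' "$val" > /tmp/mykey.pem
```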
[21:59] <hatch> from within the service unit is there any way for me to open up a window that's in a 'hook environment' ?
[22:00] <rick_h_> open up a window?
[22:00] <rick_h_> e.g. debug-hooks or dhx?
[22:00] <rick_h_> let you get a terminal in a hook context
[22:00] <hatch> right but I have to drop out to my normal machine, set up debug hooks, then go make a config change or something
[22:01] <hatch> then it'll pop up
[22:01] <hatch> I feel like I should be able to do something like... juju ssh service/0 --hook-context
[22:02] <rick_h_> check out charmhelpers contrib? does dhx do it?
[22:02] <hatch> nope :)
[22:02] <hatch> it does some cool things like allowing you to customize a window
[22:03] <hatch> but not let you actually open one in the hook context
[22:04] <rick_h_> well, debug-hooks + config change it is :P
[22:04] <hatch> yeah....I've just done a few of these 'simple' charms and have noticed a lot of small pain points that start to add up
[22:05] <rick_h_> yea, have to see/add them to the eco list they've had on charming pain points.
[22:05] <rick_h_> hatch: maybe some use of juju-run can help? isn't that in a hook context or is it not? I can't recall
[22:05] <hatch> it is when you're executing a file (running the hook file works) but not when you're passing a command, oddly enough
[23:36] <marcoceppi> hatch: there's no way
[23:36] <marcoceppi> welcome to the wonderful world of the developer experience
[23:37] <marcoceppi> you have to trigger a hook to get the hook context in juju, and the only way to do that is with debug-hooks or using juju-run (but that's not an interactive session)
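In other words, the two options look roughly like this (unit and option names are placeholders):

```shell
# Interactive: attach first, then trigger the hook from a second terminal
juju debug-hooks myservice/0 config-changed      # terminal 1
juju set myservice some-option=value             # terminal 2: drops terminal 1 into the hook
# Non-interactive: one-off command executed in a hook context
juju run --unit myservice/0 'config-get some-option'
```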