/srv/irclogs.ubuntu.com/2015/08/13/#juju.txt

=== CyberJacob is now known as zz_CyberJacob
01:09 <IceyEC> I'm trying to set up the ceph charm for development of a charm that will connect to it, I try: `juju deploy -n 3 ceph`
01:09 <IceyEC> and then juju status shows that each node is at: `'hook failed: "mon-relation-changed"'`
01:11 <IceyEC> looks like I need to add the monitor-secret and fsid, adding those now
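[Editor's note: the ceph charm won't form a monitor quorum until `fsid` and `monitor-secret` are set at deploy time. A minimal sketch of a deploy-time config file — the option names come from the ceph charm's config.yaml, but the values and the /srv/osd path here are placeholders, not working credentials:]

```yaml
# ceph.yaml — values are placeholders; generate a real fsid with uuidgen
# and a real monitor secret with ceph-authtool
ceph:
  fsid: "00000000-0000-0000-0000-000000000000"        # placeholder UUID
  monitor-secret: "AQD...placeholder...=="            # placeholder key
  osd-devices: "/srv/osd"   # a directory path works where block devices are unavailable
```

Deployed with something like `juju deploy -n 3 --config ceph.yaml ceph`.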
=== med_ is now known as Guest471
=== zz_CyberJacob is now known as CyberJacob
=== med_ is now known as Guest61394
=== CyberJacob is now known as zz_CyberJacob
=== anthonyf is now known as Guest27914
=== urulama_ is now known as urulama
11:56 <IceyEC> I'm looking at the charmhelpers code, specifically, https://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/storage/linux/ceph.py#L292
11:57 <IceyEC> am I correct that this cannot work on LXC?
11:57 <IceyEC> I'm developing with the local provider and am getting modprobe errors, digging in it looks like modprobe is not supported in LXC containers
11:57 <marcoceppi> IceyEC: you are correct, at the moment ceph requires block devices and there are none in the LXC/local provider with Juju
11:57 <marcoceppi> IceyEC: you can use KVM instead of LXC for the backend to the local provider
11:58 <marcoceppi> IceyEC: that way modprobe would work as expected
11:58 <IceyEC> ok, thanks
11:59 <marcoceppi> IceyEC: I see zero docs on this, but lp|conference and mbruzek have done this recently
12:03 <IceyEC> cool, I'm trying to add ceph support to the solr-jetty charm and discovered that it isn't letting me set up the block device :-P
12:04 <marcoceppi> IceyEC: heh, yeah, ceph on local provider def does not work in LXC, might work in KVM, jamespage would probably know better
12:05 <jamespage> IceyEC, marcoceppi: actually it does
12:05 <marcoceppi> :O
12:05 <IceyEC> sounds great :)
12:05 <jamespage> you can provide a directory name as a block device - e.g. /tmp/osd
12:05 <jamespage> for example
12:05 <IceyEC> in lxc?
12:05 <IceyEC> because, I did that
12:05 <jamespage> IceyEC, yes
12:05 <IceyEC> /var/ceph was the device name I gave it, and it still tries to do modprobe
12:05 <jamespage> IceyEC, the ceph charm does?
12:06 <IceyEC> yeah
12:06 <IceyEC> well
12:06 <IceyEC> no
12:06 * jamespage looks
12:06 <IceyEC> I'm using the code that the MySQL charm uses to configure ceph
12:06 <IceyEC> which is almost the same as the charmhelpers code for ceph
12:06 <jamespage> IceyEC, right - that does not work under LXC at all
12:06 <IceyEC> both of which do hit modprobe in the configure function
12:06 <IceyEC> so, I can set up ceph but I cannot use it under lxc?
12:07 <jamespage> IceyEC, it's a limitation of running in an LXC container - you can't modprobe or suchlike
12:07 <IceyEC> yeah, will just reconfigure to use kvm :)
12:07 <jamespage> IceyEC, ack - that would work
12:07 <IceyEC> sounds good :)
12:07 <jamespage> even if it's a bit heavier on resources
12:07 <IceyEC> hope to have something for you and arosales soon ;-)
12:07 <IceyEC> this machine has 32G of RAM, dual dual-core Xeons
12:08 <IceyEC> have resources to deal with :)
12:08 <g3naro> jooooojooooo
12:08 <marcoceppi> IceyEC: I'll swing mbruzek your way when he comes online, should be able to get you the long and short of KVM as local provider
12:10 <IceyEC> cool, thanks marcoceppi
12:15 <suchvenu> Hi Matt
12:16 <mbruzek> Hello
12:16 <suchvenu> I got a review comment on performance for the DB2 charm
12:16 <suchvenu> Can you please help me to understand that better
12:16 <mbruzek> Sure!
12:17 <marcoceppi> mbruzek: when you get a second, can you share how to use KVM as the provisioner for local with IceyEC? They're having some modprobe issues with LXC and IIRC you addressed that in docker with KVM on local
12:17 <mbruzek> When I deployed db2 on amazon the disk space on the VM was highly consumed.
12:18 <suchvenu> ok
12:18 <mbruzek> suchvenu: I don't remember the exact number but it was nearly out of hard drive space. With Juju you can request larger instances with constraints, but there was also a /mnt directory that had 190GB free
12:19 <suchvenu> I checked DB2 and found that we can create databases and logs using "create database db name on <>" commands
12:19 <suchvenu> Is that the one you are looking for?
12:19 <mbruzek> suchvenu: I also know that the minimum Amazon instance has low RAM, 1.7GB. I don't know how much RAM DB2 needs to run at optimum performance
12:20 <suchvenu> ok
12:20 <mbruzek> suchvenu: No, I was thinking the README could make some suggestions on disk size and/or RAM size for the best performance of DB2
12:20 <suchvenu> ok
12:21 <mbruzek> suchvenu: I don't know where the database is written on that charm, but if the database gets used and grows I fear the filesystem on those small instances will be totally consumed.
12:21 <mbruzek> Running out of disk space is a very difficult problem to diagnose, because even log files cannot be written.
12:21 <suchvenu> okk
12:22 <suchvenu> So you are suggesting to add the RAM and disk size requirements in the README?
12:22 <mbruzek> suchvenu: The juju constraints you can set are root-disk (file system) and mem (RAM)
12:23 <mbruzek> Yes to what is recommended. And perhaps a warning that an install of DB2 will consume a large percentage of free space if the constraints are not used.
12:25 <suchvenu> okk
12:25 <suchvenu> All these in the README, right?
12:26 <marcoceppi> IceyEC: there are docs! Just in devel still: https://jujucharms.com/docs/devel/config-KVM
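[Editor's note: switching the local provider's container backend from LXC to KVM is an environments.yaml setting. A sketch based on the juju 1.x local provider — verify the exact keys against the config-KVM docs linked above:]

```yaml
# ~/.juju/environments.yaml (fragment)
environments:
  local:
    type: local
    container: kvm   # default is lxc; kvm boots full VMs, so modprobe works
```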
12:26 <suchvenu> Ok Thanks mbruzek... I will add the details in the README and push it back for review.
12:26 <mbruzek> suchvenu: Yes, find out what is recommended and put that in the README. But if we find that the small instance is not sufficient we should list that as well in the README
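[Editor's note: the root-disk and mem constraints mbruzek mentions can be carried in a deployer bundle; a sketch — the values here are illustrative, not DB2's actual requirements:]

```yaml
# bundle fragment — constraint values are examples only
services:
  db2:
    charm: db2
    num_units: 1
    constraints: "mem=4G root-disk=50G"
```

The same constraints can be given directly at deploy time, e.g. `juju deploy db2 --constraints "mem=4G root-disk=50G"`.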
12:27 <IceyEC> heh, I have that open already ;-) following along, will hopefully have ceph mounts purring along shortly :) thanks marcoceppi
12:27 <suchvenu> Ok
12:32 <mbruzek> suchvenu: Also consider deleting the unneeded RPMs or packages for other architectures in the charm code
=== anthonyf is now known as Guest46427
=== Guest46427 is now known as anthonyjf
13:58 <IceyEC> so, I've got it creating the block device, but when I try to mount it with charmhelpers.contrib.charmsupport.volumes.configure_volume, I'm getting `mount: block device /dev/rbd1 is write-protected, mounting read-only\nmount: you must specify the filesystem type`
14:08 <IceyEC> something I broke removing other code? possibly!
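[Editor's note: `you must specify the filesystem type` usually means the device has no filesystem on it yet; a freshly mapped rbd device needs a mkfs before it can be mounted. A manual sketch, not the charm's actual code path — device and mount point are assumptions:]

```shell
# hypothetical manual check on the unit, assuming /dev/rbd1 is the mapped device
mkfs.ext4 /dev/rbd1         # put a filesystem on the fresh rbd device
mkdir -p /srv/data
mount /dev/rbd1 /srv/data   # mount no longer needs a filesystem-type guess
```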
14:40 <jcastro> hey beisner
14:40 <beisner> hi jcastro, back in 1 min..
14:41 <jcastro> ack, I want to talk about: https://bugs.launchpad.net/bugs/1373862
14:41 <mup> Bug #1373862: MySQL doesn't deploy due to oversized dataset <mysql (Juju Charms Collection):Fix Committed by marcoceppi> <https://launchpad.net/bugs/1373862>
14:47 <beisner> jcastro, howdy, back
14:48 <jcastro> yeah so basically, people keep running into this issue when trying Juju on LXC
14:49 <beisner> jcastro, right. i totally understand.
14:49 <beisner> jcastro, but imho, we can't just change a long-standing default value.
14:50 <jcastro> iirc, all we did was just pick a percentage
14:50 <marcoceppi> clint chose a default over 3 years ago
14:50 <marcoceppi> Oracle says that default is not a sane value
14:50 <marcoceppi> and provided us with a better value
14:50 <marcoceppi> I understand the upgrade dilemma, I want to see if Juju does the right thing or not
14:51 <beisner> jcastro, right, but that makes it even more important to discuss. we have an established expectation with a fairly large user base, who are accustomed to that default value.
14:51 <jcastro> beisner: sure, happy to do that, maybe we should do it on the list?
14:51 <jcastro> and I'd like to counter that we keep losing users when they try juju and it doesn't work
14:52 <arosales> IceyEC, good to hear :-)
14:53 <jcastro> beisner: should I bring it up on the list? I am curious as to what would happen during upgrades
14:53 <beisner> jcastro, marcoceppi - here's a use case to consider:
14:54 <beisner> i'm joe admin at company X. i have a couple of juju-deployed database servers with 256GB of ram (the charm's established behavior gives mysql 204GB). i like it. i need to add more servers at another site location in a new juju environment.
14:54 <jamespage> rick_h_, urulama: hey - fyi I've finally got the openstack-base bundle updated and in-store
14:54 <beisner> joe will now have 128 MB whereas he had 204GB for his service, unless he changes his ways.
14:55 <marcoceppi> beisner: so because we made a mistake 3 years ago we can't fix it? Sounds like Joe should be using bundles and explicitly setting values. While the GUI won't export default config options, it doesn't mean that shouldn't change
14:55 <jcastro> so I suspect someone using it in production is setting explicit values for everything with like a yaml file, but I could be wrong
14:55 <beisner> marcoceppi, i don't necessarily disagree. arbitrary value setting stinks.
14:56 <beisner> jcastro, could be.
14:56 <marcoceppi> beisner: the way you've set up this defense means that no default values should really ever be changed ever
14:56 <beisner> i do think it merits this type of convo, and advance notice before committing, if that is what is decided (and i can be completely fine with that).
14:57 <jcastro> ok cool, I'll summarize your examples and post nowish
14:57 <jcastro> beisner: marco and I are on a mission today to crush papercut charm bugs
14:57 <jamespage> jcastro, hey - we've discussed having the percentage still - but capping at a sane max based on 'defaults'
14:57 <beisner> marcoceppi, kind of yeah. we try very hard to never change default behavior in the openstack charms for this reason. reason: any time you alter default behavior, you risk borking someone.
14:58 <beisner> marcoceppi, not to say it doesn't happen, or can't happen. just raising the "we should talk about this" flag.
14:58 <marcoceppi> jamespage: there may well be databases big enough to need 28GB of the 32G it's installed on
14:58 <marcoceppi> for innodb-buffer
14:58 <jamespage> marcoceppi, sure
14:59 <jamespage> marcoceppi, I was thinking like 80% or 32G, whichever is smaller
14:59 <marcoceppi> jamespage: still won't fix the container story
14:59 <marcoceppi> or local provider issue
14:59 <beisner> marcoceppi, setting the value will fix both, no?
14:59 <jamespage> marcoceppi, the only thing that will fix the container story is lxcfs
15:00 <marcoceppi> jamespage, beisner: I'd even be fine with having this set to the default innodb-buffer-pool-size from distro, which might be the best way to solve this
15:00 <marcoceppi> jamespage: true, can't wait for that
15:00 <jamespage> marcoceppi, agreed
15:01 <beisner> marcoceppi, yeah keeping somewhat in line with the underlying product's default sounds like a good thing, generally speaking.
15:02 <jcastro> ok, mail sent to list.
15:03 <jcastro> that way we have more people chime in
15:04 <marcoceppi> jamespage, beisner: FWIW, default innodb_buffer_pool_size is ~134MB
15:04 <marcoceppi> I'll update my merge proposal to reflect this, the readme to outline it better, and the tests to work with the update (while the discussion on the list happens)
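[Editor's note: the workaround available today is to set the value explicitly rather than rely on the charm's percentage-of-RAM default. With the mysql charm that looks something like the following — `dataset-size` is the charm option under discussion, and the value mirrors MySQL's own ~134MB innodb_buffer_pool_size default rather than a measured recommendation:]

```yaml
# config fragment for the mysql charm
mysql:
  dataset-size: "128M"   # explicit size instead of a percentage of (apparent) host RAM,
                         # which inside an LXC container is really the host's RAM
```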
15:05 <jcastro> marcoceppi: man, I was really hoping we could have just lit the thing on fire and walked away in slow motion with dramatic music playing.
15:09 <marcoceppi> jcastro, beisner: if a value is marked as "default" in juju, on upgrade it gets the new default value
15:09 <marcoceppi> i hate everything
15:09 <beisner> marcoceppi, bahh. i was hoping the opposite.
15:10 <marcoceppi> I guess mysql will just be forever broken until lxcfs
15:11 <jcastro> hah man, awesome.
15:11 <marcoceppi> I just wish there was a way in upgrade-charm to mark configuration as no longer default
15:13 <jcastro> marcoceppi: ok, so we need to ask core for it to behave the other way
15:13 <marcoceppi> jcastro: not exactly
15:14 <marcoceppi> there is no right answer to this, except to give the charm more control over aspects like configuration
15:14 <jcastro> I am fine with anything that isn't "leave it broken until lxcfs"
15:14 <marcoceppi> jcastro: well, asking for a feature like this in core is basically tantamount to that
15:20 <hazmat> you can try and detect running in the container and the self cgroup, but juju doesn't touch cgroup constraints unfortunately so there's no valid value to assume (even with lxcfs).
15:20 <hazmat> there's a long-standing bug to have juju set constraints on the container per the service constraints
15:21 <marcoceppi> hazmat: I wonder if moving from direct lxc to lxd for local provider would help this
15:21 <hazmat> marcoceppi: in and of itself, probably not.. juju needs to set constraints on the container else any given container will still see host resources as its own.
15:21 <marcoceppi> just gaining an actual bootstrap node on local would be a win, and I think thumper is working on this, but I'm not sure if it would address the constraints/cgroup issue
15:21 <hazmat> marcoceppi: moving to lxd has other benefits though, like making machine 0 not special
15:22 <hazmat> marcoceppi: i still use my jury-rigged lxc scripts for local dev, just so i can avoid the machine-0-as-host setup that the local provider currently tries.
15:22 <marcoceppi> hazmat: ack, I'll go find the bug and see if I can't get it a little higher in priority
15:22 <hazmat> marcoceppi: https://bugs.launchpad.net/juju-core/+bug/1323446
15:22 <mup> Bug #1323446: constraints on local provider <constraints> <local-provider> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1323446>
15:23 <marcoceppi> jcastro: ^
15:24 <jcastro> looking, sorry was on the phone
15:44 <rick_h_> jamespage: cool, I know the team was getting a fix in for the nested lxc issue and a new deployer/gui/quickstart to make sure that works as well.
15:44 <rick_h_> jamespage: I'm on holiday today but don't hesitate to grab frankban or Makyo if anything isn't right there.
15:44 <Makyo> o/
15:56 <jamespage> it's all cool, ta
15:56 <jamespage> rick_h_, enjoy your holiday
=== scuttle|afk is now known as scuttlemonkey
=== natefinch is now known as natefinch-afk
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
=== zerick_ is now known as zerick
20:03 <pmatulis> with HA, how does the juju master get associated with the mongodb master? is it random (among the ones which are about to be created) or is it always the initial single state server?
=== scuttlemonkey is now known as scuttle|afk
=== Guest61394 is now known as med_
21:42 <hatch> is there a way for me to get a single configuration value from the cli?
21:42 <rick_h_> hatch: can't you juju get the single option?
21:42 <thumper> yes
21:43 <thumper> hatch: environment or service?
21:43 <hatch> service
21:43 <hatch> you can't according to the 'get' docs
21:44 <rick_h_> hatch: yea, see that's in the docs. I might be thinking of the charmhelpers in python :/
21:44 <rick_h_> thumper: service config for a single config item
21:44 <rick_h_> thumper: help is showing nothing to help at that level
21:44 <hatch> I suppose it's partially irrelevant, my issue is that I have to put keys into a config option (because juju doesn't allow files as configuration options)
21:44 <thumper> rick_h_, hatch: wow... that sucks
21:44 <hatch> but when I write them out to the file they are malformed
21:44 <thumper> you should be able to
21:45 <rick_h_> thumper: yea, seems like it'd be an obvious one but just adding that as a narg fails
21:45 <thumper> also, the format difference between 'juju get' and 'juju get-env' is jarring
21:45 <rick_h_> thumper: gotta love a consistent api :/
21:45 * thumper sighs
21:45 <thumper> we suck
21:46 * thumper adds a tech-debt bug
21:46 <hatch> basically what I'm trying to determine is if `config-get mykey` returns the value formatted; the key file ends up with a bunch of \'s instead of real newlines
21:47 <rick_h_> hatch: hmm, I know we use the @filename.txt thing in a charm script to get a file contents read in properly
21:47 <rick_h_> hatch: maybe need to look at that?
21:47 <hatch> rick_h_: I have to set about 15 config values so I have a yaml file which I provide when deploying
21:48 <rick_h_> hatch: ah ok
21:48 <hatch> rick_h_: so you're saying to do it manually for each key?
21:48 <rick_h_> hatch: no
21:49 <hatch> I think that `config-get mykey` is returning the value with \'s for newlines
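[Editor's note: if the symptom really is literal `\n` escape sequences in the value, one in-hook workaround is to let `printf '%b'` expand them back into real newlines. A sketch — the key content and filename are placeholders, and in a real hook the value would come from `$(config-get mykey)`:]

```shell
# simulate what config-get might hand back: a key with literal "\n" escapes
escaped='-----BEGIN KEY-----\nMIIBexample\n-----END KEY-----'

# %b interprets backslash escapes in the argument, restoring real newlines
printf '%b\n' "$escaped" > demo.key

grep -c '' demo.key   # line count of the written file: prints 3
```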
21:59 <hatch> from within the service unit is there any way for me to open up a window that's in a 'hook environment'?
22:00 <rick_h_> open up a window?
22:00 <rick_h_> e.g. debug-hooks or dhx?
22:00 <rick_h_> let you get a terminal in a hook context
22:00 <hatch> right but I have to drop out to my normal machine, set up debug-hooks, then go make a config change or something
22:01 <hatch> then it'll pop up
22:01 <hatch> I feel like I should be able to do something like... juju ssh service/0 --hook-context
22:02 <rick_h_> check out charmhelpers contrib? does dhx do it?
22:02 <hatch> nope :)
22:02 <hatch> it does some cool things like allowing you to customize a window
22:03 <hatch> but not let you actually open one in the hook context
22:04 <rick_h_> well, debug-hooks + config change it is :P
22:04 <hatch> yeah....I've just done a few of these 'simple' charms and have noticed a lot of small pain points that start to add up
22:05 <rick_h_> yea, have to see/add them to the eco list they've had on charming pain points.
22:05 <rick_h_> hatch: maybe some use of juju-run can help? isn't that in a hook context or is it not? I can't recall
22:05 <hatch> it is when you're executing a file (running the hook file works) but not when you're passing a command oddly enough
23:36 <marcoceppi> hatch: there's no way
23:36 <marcoceppi> welcome to the wonderful world of the developer experience
23:37 <marcoceppi> you have to trigger a hook to get the hook context in juju, and the only way to do that is with debug-hooks or using juju-run (but that's not an interactive session)
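[Editor's note: the debug-hooks workflow marcoceppi describes, spelled out as a sketch. Commands as of juju 1.x; the service and option names are placeholders:]

```shell
# terminal 1: attach a tmux session that intercepts the next hook on the unit
juju debug-hooks solr-jetty/0

# terminal 2: trigger a hook so the session above drops into its context
juju set solr-jetty some-option=new-value

# non-interactive alternative: run one command inside a hook context
juju run --unit solr-jetty/0 'config-get some-option'
```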

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!