/srv/irclogs.ubuntu.com/2016/05/11/#juju.txt

=== natefinch-afk is now known as natefinch
[03:14] <jose> marcoceppi: hey, do you know if anyone submitted a talk for txlf? CFP closes in less than 2h and I wanna make sure we have a juju talk
=== rodlogic is now known as Guest70790
[06:22] <stub> lutostag: https://bugs.launchpad.net/postgresql-charm is probably best, but I should see a report just about anywhere you can make one :)
[06:26] <stub> lutostag: Huh. I assumed there might be some minor glitches, but I certainly didn't assume that one :-/
=== rodlogic is now known as Guest52700
=== frankban|afk is now known as frankban
[07:58] <simonklb> after upgrading to 2.0-beta6 my local charm has started complaining about missing hooks - I let them be auto-generated by the reactive template and that worked fine before
[07:58] <simonklb> anyone know what changed and what needs to be done for it to find the hooks again?
[07:59] <simonklb> I should add that it seems that it finds the hooks if I create them manually, but then I get "ImportError: No module named 'charms'"
[07:59] <simonklb> So it's like it looks in the root folder of the charm now instead of in ./trusty or whatever
[08:33] <hoenir> simonklb, have you tried to debug the hook?
[08:36] <simonklb> hoenir: is it possible to manually execute hooks?
[08:47] <hoenir> I don't think so
[08:48] <hoenir> have you tried to read the docs about hooks?
[08:49] <simonklb> hoenir: yea, I mean, it hasn't been an issue for the last couple of weeks
[08:49] <simonklb> only now, after updating to a newer version, it's starting to act up
=== rodlogic is now known as Guest73940
[09:17] <wesleymason> simonklb: you can use juju run to manually execute a hook, e.g. juju run --unit {unit_name} 'hooks/hook-name'
[09:19] <wesleymason> or as hoenir said use debug-hooks and run 'hooks/hook-name' inside the tmux session
[09:50] <simonklb> wesleymason: /tmp/juju-exec390172100/script.sh: line 1: hooks/install: No such file or directory
[09:51] <wesleymason> simonklb: that definitely looks like the PWD for hooks has changed :-/
[09:51] <simonklb> yup
[09:51] <simonklb> is there a changelog somewhere?
[09:51] * wesleymason is still on 1.25.x for current infra so haven't tested 2.0 hooks yet
[09:51] <simonklb> right right
[09:53] <wesleymason> simonklb: https://jujucharms.com/docs/devel/temp-release-notes
[09:53] <wesleymason> not really a proper changelog though
[09:56] <simonklb> thanks, I'll read it and see if I can find something about the hooks working directory
[09:59] <RAJITH> Hello
[10:02] <RAJITH> I am working on a layered charm; a simple charm is stuck with WORKLOAD-STATE: maintenance, AGENT-STATE: idle and the message "Updating apt cache". Please let me know what the issue could be
=== rodlogic is now known as Guest74633
[11:29] <stub> RAJITH: If it hangs there, your units likely have a networking or apt issue, like an inaccessible or unavailable apt proxy. It's trying to run 'sudo apt update', and it is failing.
[11:29] <stub> RAJITH: I'd connect to the unit, kill the hook, and try running 'sudo apt update' yourself and debugging from there.
[11:34] <RAJITH> sure, will try that
=== rodlogic is now known as Guest32358
[11:57] <RAJITH> if I try to add a new machine, agent-state goes to pending
[11:59] <RAJITH> have tried these steps: 1. Destroy the juju environment, if running (use the --force option if destroy fails); uninstall juju; reboot the system with the reboot command. 2. Ensure the bind9 dns process is stopped, if running: ps -ef | grep named; if named is running: service bind9 stop (to do: ensure that named does not start on reboot, maybe uninstall it). 3. Reinstall juju. 4. Run the following command: ifconfig | grep lxcbr0. If lxcbr0
[12:00] <RAJITH> lxc-net; service start lxc-net. Run ifconfig | grep lxcbr0. If lxcbr0 not present, do: brctl addbr lxcbr0; ifconfig lxcbr0 10.0.3.1 netmask 255.255.255.0 up. 5. Run the following command: sudo iptables-save | grep lxcbr0. The output should be as follows: -A POSTROUTING -o lxcbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill  -A INPUT -i lxcbr0 -p tcp -m tcp --dport
=== rodlogic is now known as Guest63806
[13:12] <simonklb> hoenir: wesleymason: I wasn't able to find the cause, but it works if you deploy from the build folder instead of the root
[13:13] <simonklb> my guess is that they changed something in juju-deployer
[13:15] <wesleymason> simonklb: well juju-deployer definitely hasn't changed, it's been in maintenance mode for a while... but I think local repo got removed, so it would need to be a full path to the charm itself, not the "root" dir, so perhaps before it was deploying $ROOT and not $ROOT/$SERIES/$CHARM
[13:16] <simonklb> something like that
=== rodlogic is now known as Guest68017
[14:17] <gnuoy> wolsen, I assume you have no major objection to https://github.com/openstack-charmers/charm-interface-hacluster/pull/1 ?
[14:18] <wolsen> gnuoy: that's correct
[14:18] <tvansteenburgh> ahasenack: re juju-deployer packaging, python-bzrlib should be removed, python-six added
[14:18] <tvansteenburgh> ahasenack: (deps)
[14:20] <ahasenack> tvansteenburgh: ok
[14:20] <gnuoy> wolsen, do you have the power to hit merge?
[14:20] <ahasenack> tvansteenburgh: and tox added
[14:20] <ahasenack> and other stuff
[14:20] <tvansteenburgh> well not really, make test will do that
[14:21] <wolsen> gnuoy: it doesn't appear that I do
[14:21] <gnuoy> wolsen, well that's not right. thanks for looking
[14:22] <wolsen> gnuoy: sure, thanks for looking at that finally
[14:22] <gnuoy> wolsen, np, thanks for doing all that work
[14:23] <gnuoy> jamespage, can you add thedac, dosaboy and wolsen to the openstack-charmers on github please? (Or enable me to)
[14:24] <jamespage> gnuoy, done (enabled you)
[14:24] <gnuoy> thanks
[14:24] <SaltySolomon> hi
[14:26] <gnuoy> wolsen, with any luck you've got an invite to join
[14:26] <wolsen> gnuoy: awesome, thanks
[14:27] <gnuoy> wolsen, np ... any chance of hitting the button on my pull request if you have a sec?
[14:27] <wolsen> gnuoy: I'm logging back in now
[14:27] <gnuoy> \o/
[14:27] <gnuoy> thanks
[14:28] <wolsen> gnuoy: "only those with write permissions"
[14:28] <gnuoy> argh, let me look
[14:30] <wolsen> gnuoy: could be on my end too - let me look
[14:30] <wolsen> gnuoy: it's my mistake
[14:30] <gnuoy> ah, tip top
[14:30] <wolsen> gnuoy: done
[14:30] <gnuoy> thanks
[14:37] <gnuoy> wolsen, tinwood I've created a starter for 10 for enabling management of HA resources in the Openstack layer https://github.com/openstack-charmers/charm-layer-openstack/pull/4/files . Please feel free to comment on it if you have any thoughts/objections
[14:37] <gnuoy> tinwood (I know the importing is not standards compliant and I'll fix that)
[14:38] <tinwood> gnuoy, thanks for the heads-up; I'm wrangling barbican as we speak.
[14:56] <wolsen> gnuoy: will do - it'll likely be later this week - I'm in Austin this week
[15:54] <beisner> thedac, here's the sentry unit foo as an ex: https://review.openstack.org/#/c/314773/3..4/tests/basic_deployment.py
[15:56] <icey> any estimate of when charmhelpers tip will get onto pip?
[16:05] <thedac> beisner: thanks, I'll start consuming that now.
[16:19] <thedac> beisner: gnuoy I added ODL to the mojo specs https://code.launchpad.net/~thedac/openstack-mojo-specs/odl/+merge/294311
[16:19] <gnuoy> thedac, excellent, thanks
[16:20] <cory_fu> kjackal_: Hey, are we dropping the separate execution_mode config option for the Bigtop Spark charm and moving to auto-switching to yarn mode based on hadoop.yarn.ready?
[16:21] <cory_fu> (Also, if we keep the config, can we shorten the name a bit?  spark_execution_mode seems unnecessarily long, and somewhat redundant.)
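A hypothetical sketch of the auto-switching idea raised above, assuming a reactive Spark layer: key the behaviour off the hadoop.yarn.ready flag instead of a config option. The spark.installed flag, handler names, and log messages are illustrative placeholders, not the charm's actual code.

    from charmhelpers.core import hookenv
    from charms.reactive import when, when_not

    @when('spark.installed', 'hadoop.yarn.ready')
    def configure_yarn_mode():
        # YARN is available: reconfigure Spark to submit jobs to it.
        hookenv.log('Hadoop YARN is ready; switching Spark to yarn mode')
        # ... rewrite spark-defaults.conf for yarn here ...

    @when('spark.installed')
    @when_not('hadoop.yarn.ready')
    def configure_standalone_mode():
        # No YARN (yet, or anymore): fall back to standalone execution.
        hookenv.log('Hadoop YARN not ready; running Spark in standalone mode')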
=== frankban is now known as frankban|afk
[16:47] <cory_fu> kjackal_: When you have a chance, can you take a look at https://github.com/juju-solutions/layer-apache-bigtop-base/pull/2  Those are some of the refactors I had played around with when looking at your Spark charm.  I'm thinking it should let you clean up some of the code in that charm layer to something along these lines: http://pastebin.ubuntu.com/16365652/
[16:47] <cory_fu> Anyway, that's all completely untested of course, and was just what I had from the time I spent looking at it yesterday.  Let me know what you think
[17:19] <tvansteenburgh> icey: that has historically been by request. i just took a quick look and there are a number of failing tests.
[17:20] <icey> tvansteenburgh: ok, that's unfortunate; there's stuff that we've been using in the openstack charms for a while that isn't in the pip version, making a migration to layered charms more difficult -_-
[17:22] <tvansteenburgh> icey: i would fix them myself but i just don't have time right now
[17:22] <icey> tvansteenburgh: same boat -_-
[17:34] <aisrael> Any ideas on a way to force destroy a controller in juju 2 beta 6? There's no more --force flag
[17:35] <rye_> How can I remove a service that has a unit in an error state in juju2?
[17:35] <tvansteenburgh> aisrael: do you really want to destroy the controller, or just the model?
[17:36] <aisrael> tvansteenburgh: Either at this point. I think I've found a bug that's preventing destroy-controller, destroy-model, and kill-controller from working
[17:36] <tvansteenburgh> rye_: you need to `juju resolve` the unit first
[17:37] <tvansteenburgh> aisrael: juju destroy-model -y default # or whatever your model name is <- that doesn't work for you?
[17:39] <aisrael> tvansteenburgh: Nope, the model is stuck in destroying
[17:39] <rye_> tvansteenburgh: thanks!
[17:40] <tvansteenburgh> aisrael: well if you just want to keep working, bootstrap a new controller :D
[17:40] <aisrael> tvansteenburgh: Good point. That'll do while I file a bug
[17:40] <rye_> Hmm, upon resolution, it continues on to fail another hook (not unexpected). Is there a way to halt that process so that I can remove it?
[17:41] <tvansteenburgh> rye_: you just have to keep resolving until it finishes what's queued
[17:42] <rye_> tvansteenburgh: fair enough, thanks again
[17:45] <icey> is it possible to have a function execute _after_ every hook run with reactive?
[17:46] <icey> specifically, with reactive+layers
[17:48] <tvansteenburgh> icey: hookenv.atexit
[17:49] <icey> tvansteenburgh: would I add an empty @when to get that to register on each invocation?
[17:49] <icey> also, thanks tvansteenburgh!
[17:50] <tvansteenburgh> icey: i think you could just do it in module scope
[17:51] <icey> got it, the leadership layer has a nice example of atstart :)
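A minimal sketch of the module-scope registration suggested above, assuming a reactive layer file such as reactive/mycharm.py; the callback name and status message are placeholders.

    from charmhelpers.core import hookenv

    def _finish_hook():
        # Runs at the end of every hook invocation, whichever handlers fired.
        hookenv.status_set('active', 'ready')

    # Registering at module scope is enough: charms.reactive imports this file
    # on every hook and runs the atexit callbacks when the hook completes, so
    # no @when decorator is required.
    hookenv.atexit(_finish_hook)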
[17:57] <zeus`> is there a way to modify the default image user "ubuntu" to another custom user when doing a juju bootstrap?
[18:05] <gnuoy> thedac, if you get a sec would you mind taking a look at https://github.com/openstack-charmers/charm-tempest/pull/5
[18:05] <thedac> gnuoy: sure
[18:05] <gnuoy> thanks
[19:20] <LiftedKilt> is maas 2 support merged yet?
[19:21] <LiftedKilt> I thought I saw something about that a bit ago
[19:26] <rick_h_> LiftedKilt: yes, in trunk and will be in the next beta
[19:26] <rick_h_> LiftedKilt: it comes out of feature flag in this next release
[19:26] <LiftedKilt> rick_h_: What's the release timeline on beta7?
[19:27] <rick_h_> LiftedKilt: soooooon :) hopefully next two days
[19:28] <LiftedKilt> haha ok
[19:28] <LiftedKilt> my maas 1.9 has gotten buggy and I wanted to blow it away and move to 2.0
[19:28] <LiftedKilt> gotta wait for integration though
[19:29] <rick_h_> gotcha, yea there's rough work in the beta6 behind a feature flag, but I'd wait the day/two for the next one before jumping
[19:30] <DavidRama> hello folks, trying to deploy the openstack charm but got ceph-mon lxc's in maintenance/executing status since the start (about 3 hours now), any idea why? Running on Xenial/juju2.0
[19:31] <LiftedKilt> rick_h_: for sure - I'll just hang tight
[19:31] <bdx> DavidRama: I've got the same thing going on here
[19:31] <bdx> icey: ^
[19:32] <bdx> icey: is that a thing right now?
[19:32] <rick_h_> DavidRama: bdx so there's a corner there for lxc because it's going away and should be lxd?
[19:33] <bdx> rick_h_, DavidRama: my bad ... I've been experiencing that issue on lxd
[19:34] <bdx> as well as lxc
[19:34] <DavidRama> same
[19:34] <DavidRama> ceph-mon/0              maintenance     executing   2.0-beta4 3/lxc/0                10.0.3.181     Bootstrapping MON cluster
[19:35] <bdx> icey, rick_h_, can we get some <3 in that^ area please
[19:36] <icey> bdx: DavidRama is that stable or next?
[19:36] <bdx> stable
[19:36] <rick_h_> bdx: yes, there's work ongoing (one of the reasons we're still beta) to finish pulling lxc and making lxd 100% equiv.
[19:36] <bdx> cs:xenial/ceph-0
[19:37] <rick_h_> bdx: DavidRama what provider is this on?
[19:37] <bdx> rick_h_: lxd
[19:37] <rick_h_> bdx: ah, so this is the issue, you can't do nested containers in lxd by default, you have to use the 'docker' profile lxd offers
[19:37] <rick_h_> bdx: that's got a set of discussions at our sprint next week on how to handle this, as lxd doesn't enable it ootb as a security issue
[19:38] <bdx> oooh, rick_h_: I'm not nesting...
[19:38] <rick_h_> bdx: ? so on lxd you're deploying openstack? or am I misreading?
[19:38] <rick_h_> bdx: is this on maas then? with the lxd container on there?
[19:39] <bdx> rick_h_: `juju bootstrap lxd lxd-test; juju deploy cs:xenial/ceph-0`
[19:39] <rick_h_> bdx: oic, ok. So this is different from what DavidRama pasted, my bad
[19:39] * rick_h_ is confusing the different issues
[19:39] <bdx> oh my bad
[19:40] <rick_h_> so one at a time, bdx there is love going into it
[19:40] <bdx> nice, thx
[19:40] <rick_h_> DavidRama: can you try beta6? lots of lxd work between beta4 and 6
[19:40] <rick_h_> DavidRama: and nested lxd (on the lxd provider) isn't going to work atm, still more to be done
[19:40] * rick_h_ thinks that's the summary for the moment
[19:41] <DavidRama> i'm on local provider
[19:42] <bdx> icey, rick_h_: while I've got you both here, what's the status of cs:xenial/{ceph,ceph-osd}-0 for the MAAS provider?
[19:42] <rick_h_> bdx: so the maas provider will be out of feature flag and available for testing in the next beta release, hopefully out by EOW
[19:42] <rick_h_> bdx: so we'll know more then once the charm maintainers can validate/etc
[19:44] <bdx> rick_h_: entirely .... what are we looking at here 2-3 weeks, or 16.07 charm release?
[19:45] <rick_h_> bdx: in between? for final GA of 2.0 I think
[19:45] <rick_h_> bdx: the charm will be there/working with 2.0 beta in a week I'd say, but GA of 2.0 is on the long end of that 2-3 weeks but well before July
[19:48] <bdx> rick_h_: ok, great. So what you are saying is that ceph deploys will hopefully be squared away within around a week's time ... e.g. ceph, ceph-osd will deploy successfully using 16.04 in a week-ish?
[19:49] <rick_h_> bdx: I'd hope so, cholcombe is there something else to watch out for besides juju2/maas2? ^
=== rodlogic is now known as Guest93911
[21:30] <DavidRama> rick_h_ got the same symptom with beta6:
[21:30] <DavidRama> ID                      WORKLOAD-STATUS JUJU-STATUS VERSION   MACHINE PORTS          PUBLIC-ADDRESS MESSAGE
[21:30] <DavidRama> ceph-mon/0              maintenance     executing   2.0-beta6 2/lxc/0                10.0.3.247     Bootstrapping MON cluster
[21:37] <magicaltrout> kwmonroe:
[21:53] <magicaltrout> marcoceppi: you around?
[21:59] <cory_fu> magicaltrout: I don't think kwmonroe is on IRC, but you can ping him on Telegram (or I can ping him if you need me to)
[21:59] <magicaltrout> it's alright thanks cory_fu, i figured it out
[21:59] <magicaltrout> is marcoceppi on holiday or something?
[22:00] <cory_fu> I don't think so, but he might be travelling
[22:00] <magicaltrout> bleh
[22:00] <magicaltrout> fair enough
=== rodlogic is now known as Guest14174
[22:03] <magicaltrout> cory_fu: actually just quickly, can i juju expose hdfs from a remote location?
[22:03] <cory_fu> What do you mean?
[22:03] <cory_fu> Do you mean, use a Juju-deployed HDFS with something outside of Juju?
[22:03] <cory_fu> Or vice-versa?
[22:03] <magicaltrout> yeah so if I have one of your bundles running
[22:04] <magicaltrout> expose HDFS so I can write from outside the juju network
[22:04] <magicaltrout> hdfs://.... uri
[22:07] <cory_fu> magicaltrout: You can, but there are two issues right now.  One is that the port is not opened by default.  That's easy enough to fix with `juju run --service namenode "open-port 8020"; juju expose namenode`
[22:09] <magicaltrout> is that one issue? or both the issues?
[22:10] <cory_fu> The other is that I'm not sure if it listens on the public interface by default.  We had a change put in to make absolutely sure it does, but that's not been released yet.
[22:10] <magicaltrout> ah right, i'm sure I can hack that around
[22:10] <magicaltrout> it's only for this afternoon's demo
[22:11] <cory_fu> magicaltrout: Here's the change to force it, if it doesn't by default: https://github.com/juju-solutions/jujubigdata/commit/0f0ff1bd98eb06661375c699a4172b3e2b94396b
[22:11] <magicaltrout> ta
[22:11] <cory_fu> You can just add those to the /etc/hadoop/conf/hdfs-site.xml file manually
[22:11] <cory_fu> You'll still need to do the open-port & expose thing, tho
[22:12] <magicaltrout> yup
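Once the port is opened and the service exposed, one quick way to verify reachability from outside the Juju network is a plain TCP check; a minimal sketch using only the Python standard library, with the namenode address as a placeholder for the unit's public address.

    import socket

    NAMENODE = 'namenode.example.com'  # placeholder: the namenode's public address
    PORT = 8020                        # the NameNode RPC port opened above

    # A successful TCP connect means open-port/expose worked and the NameNode is
    # listening on the public interface, so an hdfs://<address>:8020/ URI should
    # be usable from an external client.
    with socket.create_connection((NAMENODE, PORT), timeout=5) as sock:
        print('NameNode RPC port reachable:', sock.getpeername())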
[22:12] <cory_fu> magicaltrout: Let me know if you need me to ping kwmonroe for you on telegram to arrange a rendezvous
[22:13] <magicaltrout> it's alright i can normally hear his texan drawl from a distance away ;)
[22:13] <cory_fu> lol, true
[22:31] <rye_> I'm trying to run bundletester, but jujuclient.py is throwing an EnvironmentNotBootstrapped exception. After digging into it a bit, it looks like I'm missing the account['password'] field here: http://bazaar.launchpad.net/~juju-deployers/python-jujuclient/trunk/view/head:/jujuclient.py#L291
[22:31] <rye_> I do have a password set up, however (as admin@local)
[22:31] <rye_> Any suggestions for what else I can check?
=== rodlogic is now known as Guest67947
[23:14] <tvansteenburgh> rye_: juju1 or 2?
[23:17] <tvansteenburgh> rye_: make sure you have latest jujuclient from pypi. if you're testing on juju2, you must be bootstrapped before running bundletester, and you need to pass -e $(juju switch)
[23:18] <tvansteenburgh> rye_: also make sure you have latest bundletester from pypi
[23:18] <tvansteenburgh> rye_: if none of those things fixes your problem, pastebin the output
[23:37] <rye_> tvansteenburgh: juju2, and thanks!
