thumper | anyone want a quick easy review? | 01:11 |
---|---|---|
thumper | https://github.com/juju/juju/pull/8754 | 01:11 |
acwork | Hello, can anybody explain to me what files control the system settings on an LXC container provisioned with Juju? | 01:30 |
thumper | acwork: what problems are you having? | 01:48 |
thumper | acwork: also, which version of juju | 01:48 |
acwork | Juju version 2.3. I have installed an OpenStack cluster and I am trying to set the bridge interface on the LXC containers to a higher MTU. | 01:51 |
acwork | I was hoping to be able to persistently set the MTU for the LXC containers. Not sure how to do that from Juju. | 01:52 |
thumper | I think that juju sets the MTU on the containers based on the host device MTU | 01:59 |
thumper | the settings for each particular container aren't touched after creation | 01:59 |
thumper | so the standard lxc commands on the host could be used to tweak the containers | 01:59 |
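(For reference, a sketch of the standard lxc commands thumper is pointing at — tweaking a container's MTU after creation. The container name, device name, and MTU value are assumptions, not from the discussion:)

```bash
# Override the NIC MTU on an existing container (container and device
# names here are hypothetical).
lxc config device override juju-0-lxd-0 eth0 mtu=9000
lxc restart juju-0-lxd-0

# To have newly created containers pick up a larger MTU, raise it on the bridge:
lxc network set lxdbr0 bridge.mtu 9000
```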
acwork | So if I set the MTU via DHCP then the container should inherit that on creation? | 02:08 |
kelvinliu_ | would anyone have a quick look at this tiny fix? https://github.com/juju/juju/pull/8755 thanks. | 02:08 |
acwork | not having luck with the lxdbr0 interface mtu | 02:16 |
thumper | acwork: I'm sorry, you are outside my networking knowledge | 02:20 |
thumper | I just know that juju doesn't do anything special with the MTU of the containers | 02:20 |
acwork | okay thanks for the response | 02:20 |
thumper | kelvinliu_: lgtm | 02:22 |
kelvinliu_ | thumper, thx, | 02:23 |
jam | acwork: bionic or xenial? and what version of Juju? Generally when we create the bridges for containers we should set the MTU on the bridge to the same MTU as the original device, and when we create the containers we should set their MTU to the MTU of the bridge they are being added to. But as Tim said, I don't think we update that post-create | 02:39 |
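(A quick way to check the inheritance jam describes — compare the MTU of the host device, the juju-created bridge, and the container NIC. The interface and container names below are typical examples, not guaranteed:)

```bash
# MTU of the physical device and of the bridge created on top of it
cat /sys/class/net/eth0/mtu
cat /sys/class/net/br-eth0/mtu
# MTU as seen from inside a container
lxc exec juju-0-lxd-0 -- cat /sys/class/net/eth0/mtu
```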
acwork | Xenial , juju 2.3 | 02:43 |
anastasiamac | can i have a really tiny review https://github.com/juju/juju/pull/8756 (without this change, tabular status with only machines is not separated from controller info)... | 02:50 |
anastasiamac | thumper: this is the bit that i have removed from my last pr but really needs to be in :) m checking develop now... | 02:51 |
* thumper looks | 02:52 | |
thumper | anastasiamac: hmm... | 02:53 |
thumper | how will this be in 2.3? | 02:53 |
thumper | the one we are releasing now? | 02:53 |
anastasiamac | thumper: yuck :( but so far i have only seen the machines section... i doubt many systems will have just one section in the display.... | 02:54 |
* thumper nods | 02:54 | |
thumper | fair enough | 02:54 |
thumper | anastasiamac: let's get it merged | 02:54 |
anastasiamac | thumper: the scary bit is the one where there could be 2 lines... | 02:54 |
thumper | anastasiamac: I have one for you | 02:54 |
* anastasiamac checking which displayed sections may end up with a 2-line separation | 02:55 | |
thumper | why two lines? | 02:55 |
thumper | it seems like there needs to be a bit of higher-level thought about who is putting in newlines and why | 02:56 |
thumper | https://github.com/juju/juju/pull/8754 | 02:56 |
anastasiamac | because 'applications' ends with a newline and so does, say, 'offers'... so if u have an output with applications and offers (no machines) u'll have 2 lines... so rare but possible... | 02:57 |
anastasiamac | thumper: yes, i could re-work it to ensure that some central place is responsible for newlines... | 02:58 |
anastasiamac | thumper: 156 files changed!? | 02:58 |
thumper | I'm ok with rare | 02:58 |
thumper | yeah, but look at the changes | 02:58 |
thumper | there are two global replaces :) | 02:58 |
anastasiamac | thumper: i thought u'd b... but in the long term, i'd like newlines mngmt :) | 02:58 |
thumper | anastasiamac: I agree, we should rework with better logic | 02:59 |
thumper | but it isn't urgent | 02:59 |
anastasiamac | thumper: k. then don't worry about my pr. I'll ping when m ready again | 02:59 |
anastasiamac | but i'll look at urs :) | 02:59 |
thumper | anastasiamac: I approved your PR | 02:59 |
anastasiamac | \o/ | 03:00 |
thumper | anastasiamac: I'm saying we should add better logic later | 03:00 |
anastasiamac | well, 'later' will b in the next hour at most ;) | 03:00 |
anastasiamac | thumper: lgtm | 03:03 |
babbageclunk | does anyone have a nice trick for a scrollable but still refreshing `juju status` command for when you can't just watch status because there are too many applications and units in your model to fit on one screen? | 03:09 |
anastasiamac | babbageclunk: use a smaller font on ur console to fit everything? :) | 03:11 |
anastasiamac | babbageclunk: but srsly, i do not :( | 03:11 |
babbageclunk | anastasiamac: done that already but there are limits! I'm getting old. :( | 03:12 |
anastasiamac | babbageclunk: yes :( the only thing that i could do was to not display empty sections... is there something in particular that u r watching? maybe u can filter status to only show what u want? | 03:13 |
babbageclunk | yeah, that's probably the solution, but it's a hassle in this case since I have lots of different applications | 03:14 |
babbageclunk | ooh, I didn't realise the filtering took wildcards too! | 03:15 |
babbageclunk | yay thanks anastasiamac | 03:16 |
anastasiamac | babbageclunk: \o/ | 03:16 |
anastasiamac | babbageclunk: yes, we'd need to be very explicit in new docs about how powerful status actually is :D | 03:16 |
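(The filtering babbageclunk landed on: juju status accepts application/unit patterns, including shell-style wildcards, and can be wrapped in watch for a refreshing view. The pattern and interval below are illustrative:)

```bash
# Refresh every 5 seconds, showing only applications matching the pattern.
# Quote the glob so the shell doesn't expand it before juju sees it.
watch -n 5 --color juju status --color 'mysql*'
```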
anastasiamac | thumper: updated 8756 to have explicit start/end section methods... hopefully it's clearer as to what needs to happen... | 04:14 |
thumper | anastasiamac: here's one for you https://github.com/juju/juju/pull/8757 | 04:22 |
* thumper takes a deep breath before addressing merge conflicts | 04:24 | |
thumper | babbageclunk: you around? | 05:06 |
babbageclunk | thumper: yup | 05:06 |
* thumper waves at alexisb's autojoiner | 05:06 | |
thumper | babbageclunk: https://github.com/juju/juju/pull/8757 | 05:06 |
thumper | babbageclunk: very simple branch | 05:06 |
babbageclunk | curses! | 05:06 |
babbageclunk | ha, looking | 05:06 |
babbageclunk | ooh, that is simple | 05:07 |
thumper | yeah | 05:07 |
thumper | very | 05:07 |
babbageclunk | thumper: approved | 05:08 |
thumper | thanks | 05:09 |
* thumper EODs | 05:18 | |
manadart | Anyone able to take a look at https://github.com/juju/juju/pull/8752 ? | 07:34 |
wallyworld | manadart: lgtm modulo a lack of expert LXD knowledge :-) | 07:44 |
manadart | wallyworld: Thanks. | 07:45 |
manadart | For review; softens the default networking verification/creation when local LXD server is clustered: https://github.com/juju/juju/pull/8760 | 09:47 |
stickupkid | manadart: i'm looking now... | 10:45 |
manadart | stickupkid: OK, thanks. | 10:46 |
stickupkid | manadart: lgtm | 10:49 |
manadart | stickupkid: Ta. | 10:51 |
rick_h_ | stickupkid: watch out, saw some other status stuff landing from anastasiamac, just a heads up. | 11:45 |
rick_h_ | and morning party people | 11:45 |
stickupkid | rick_h_: thanks, will do | 12:18 |
stub | cory_fu: Do you know if there is a way to get a wheelhouse.txt line such as 'cassandra-driver --global-option="--no-cython"' to do what I mean? I think I need to add options for when charms.reactive bootstrap is installing the contents of the wheelhouse into the venv | 15:23 |
stub | hmm, looks like I'm out of luck. The base layer is just doing pip install wheelhouse/*, rather than something like pip install -r wheelhouse.txt | 15:31 |
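(The distinction stub is pointing at, sketched out — per-requirement options such as --global-option only take effect when pip reads them from a requirements file; installing built wheels by path bypasses them:)

```bash
# What stub reports the base layer doing today: options recorded in
# wheelhouse.txt are ignored because the built wheels are installed directly.
pip install wheelhouse/*

# What would honour a line like: cassandra-driver --global-option="--no-cython"
pip install -r wheelhouse.txt
```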
cory_fu | stub: Hrm. I don't think so. | 15:31 |
* stub wonders if there is some sort of setup.cfg to override pip | 15:31 | |
cory_fu | stub: We should probably add something like that. It does seem like it would be needed for several packages. I'm actually surprised it hasn't come up yet | 15:32 |
cory_fu | stub: A non-ideal solution would be to create your own WheelhouseTactic in your layer | 15:32 |
stub | I've tripped over it before, but just used deb dependencies to avoid chasing it at the time | 15:32 |
cory_fu | There are some bugs around custom tactics that are only fixed in edge, though | 15:33 |
stub | I could stick a file in lib/charms/layer/__init__.py or similar that monkey patches the base layer ;) | 15:36 |
need-help | I'll repost this from #conjure-up | 15:39 |
need-help | Hi, I'm trying to use conjure-up to deploy CDK, but I'm getting failures both on my Mac and on an Ubuntu 16.04 jump box. The Mac attempt fails on the kubectl step, while the Ubuntu jump box fails on the cni integrations | 15:39 |
need-help | The CNI integration fails due to not being able to associate an instance profile with a running ec2 instance, but it looks like this is because it takes a few seconds for instance profiles in IAM to finish provisioning | 15:39 |
need-help | Can't add a sleep 60s to the bash script because it lives on the snap's read-only squashfs mount | 15:40 |
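(What a fix inside the step script might look like — retrying the association instead of a fixed sleep. Everything here, including the variable names and timings, is a hypothetical sketch, not the actual conjure-up code:)

```bash
# Retry for up to ~60s while the freshly created IAM instance profile
# propagates; INSTANCE_ID and PROFILE_NAME are placeholders.
for attempt in $(seq 1 12); do
    if aws ec2 associate-iam-instance-profile \
            --instance-id "$INSTANCE_ID" \
            --iam-instance-profile Name="$PROFILE_NAME"; then
        break
    fi
    sleep 5
done
```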
stub | cory_fu: It also broke with the 'natsort' package, which declares a dependency in its setup.py of 'argparse ; python_version < 2.7' | 15:41 |
cory_fu | I answered need-help in #conjure-up but for posterity, the cloud integration stuff is being overhauled in https://github.com/conjure-up/spells/pull/191 which includes a fix for the profile delay | 17:04 |
spiffytech | cory_fu: I'm trying the updated AWS/k8s steps you suggested to needs-help earlier, and conjure-up is failing, with the enable-cni step complaining that 'juju trust' isn't a command. Do I need a newer version of juju to use conjure edge, or that spells branch? | 17:48 |
cory_fu | Hrm, it should fall back to setting the config | 17:49 |
spiffytech | I cloned the spells branch, and used the Snap conjure-up/juju to do `conjure-up --spells-dir /path/to/spells/repo --channel=edge`. That's all that's necessary, right? | 17:50 |
spiffytech | I didn't see any new configuration options for AWS integration either, like I see for e.g., the Helm or Prometheus spells. | 17:52 |
kwmonroe | spiffytech: cory_fu: i wonder if it's actually failing, or if the stderr from this is just being displayed: https://github.com/conjure-up/spells/blob/k8s-integration/canonical-kubernetes/steps/04_enable-cni/ec2/enable-cni#L10 | 17:53 |
kwmonroe | iow, may need a 2>/dev/null if it's just leaky output | 17:53 |
spiffytech | Possible | 17:53 |
spiffytech | I do get a big red message telling me conjure failed, though. | 17:53 |
spiffytech | It tells me to check the generic conjure log, which has this. I can't understand the problem from this alone. http://haste.spiffy.tech/hizexudiru.coffee | 17:54 |
kwmonroe | spiffytech: if your deployment is still around, does "juju config aws credentials" say anything? (don't tell us what it says, btw) | 17:55 |
spiffytech | Sorry, I tore it down already. | 17:55 |
spiffytech | I can run it again, if there are no other suggested changes to make first. | 17:56 |
kwmonroe | spiffytech: you have a local spells directory cloned, right? | 17:56 |
spiffytech | Yep, `git clone -b k8s-integration https://github.com/conjure-up/spells.git juju-spells-` | 17:56 |
kwmonroe | spiffytech: may as well try to gobble up the stderr if you're gonna run it again.. in your spells directory, adjust ./canonical-kubernetes/steps/04_enable-cni/ec2/enable-cni to add a 2>/dev/null on line 10: if ! juju trust -m "$JUJU_CONTROLLER:$JUJU_MODEL" aws 2>/dev/null; then | 17:59 |
spiffytech | Okay | 17:59 |
kwmonroe | spiffytech: it's just a guess on my part that the stderr is causing conjure-up to bark unnecessarily | 17:59 |
spiffytech | Change made, running conjure-up now. | 17:59 |
cory_fu | Just having output on stderr wouldn't cause it to explode, but it's possible the check is not falling through as intended or that something else is blowing up and the stderr message is masking or confusing it | 18:00 |
cory_fu | It definitely could use the 2>/dev/null bit, though | 18:03 |
cory_fu | spiffytech: What channel of the conjure-up snap are you using? | 18:05 |
spiffytech | GA | 18:06 |
spiffytech | GA download, plus the --channel=edge flag when running it on the command line | 18:06 |
cory_fu | Ok, mine failed on the AWS charm, which died with "AWS was not able to validate the provided access credentials" | 18:11 |
cory_fu | I've not seen that before | 18:12 |
cory_fu | Wait, that's odd. I have another model from another run, and that's the one that failed | 18:13 |
cory_fu | Oh, ha. Now they've both failed. Lovely | 18:13 |
cory_fu | It looks like it's not setting the credentials correctly via the config options | 18:15 |
spiffytech | kwmonroe: I reran conjure-up and got the same failure. `juju config aws credentials` prints out a single line, looks like a token. | 18:15 |
cory_fu | spiffytech: Yes, that's expected. It's actually base64 encoded. If you run it through base64 -d, it will have "null" as the key and secret | 18:17 |
cory_fu | Which is not expected | 18:17 |
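(cory_fu's check, spelled out as commands — the charm config holds a base64-encoded blob, so decoding it shows whether real credentials made it through. The "null" key and secret are the symptom of this bug, not normal output:)

```bash
juju config aws credentials | base64 -d
```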
kwmonroe | yup spiffytech - me too. cory_fu, from ~/.cache/conjure-up/canonical-kubernetes/deploy-wait.err: | 18:17 |
kwmonroe | DEBUG:root:aws/0 workload status is error since 2018-05-24 18:13:38Z | 18:17 |
kwmonroe | ERROR:root:aws/0 failed: workload status is error | 18:17 |
kwmonroe | cory_fu: and my aws status log: https://paste.ubuntu.com/p/wpHGm5nJqv/ | 18:17 |
kwmonroe | and the hook error: https://paste.ubuntu.com/p/BVvSTw5JV2/ | 18:19 |
cory_fu | spiffytech, kwmonroe: Damn. It looks like the format of `juju credentials --format=json` changed between stable and beta | 18:20 |
cory_fu | Anyone know how to tell jq to pick one key or another, whichever has a value? | 18:21 |
kwmonroe | sorry, i always have to search stackoverflow anytime i get near jq | 18:24 |
jhobbs | two jq's and a grep :) | 18:24 |
kwmonroe | yas! | 18:25 |
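(A jq-only alternative to the two-jqs-and-a-grep approach: the // "alternative" operator yields the first non-null value, which copes with a key renamed between releases. The key names below are placeholders, not the ones from cory_fu's actual fix:)

```bash
juju credentials --format=json \
    | jq -r '."old-style-key" // ."new-style-key"'
```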
cory_fu | kwmonroe, spiffytech: PR updated, I'm testing the fix now | 18:26 |
spiffytech | Great | 18:29 |
cory_fu | Seems to be working. Feel free to pull the branch and test yourself as well | 18:34 |
kwmonroe | jq if then else?!?! i've seen it all. | 18:35 |
cory_fu | kwmonroe: :) | 18:35 |
cory_fu | I'm sure there's a nicer way to do that, but I like that it's all in one command | 18:35 |
kwmonroe | cory_fu: updated branch and juju config aws credentials is 2legit2quit | 18:37 |
cory_fu | kwmonroe: I'm glad that spiffytech helped catch this before stokachu's merge finger got too twitchy. ;) | 18:39 |
kwmonroe | lool, true dat | 18:40 |
kwmonroe | cory_fu: remind me the order of step scripts.. it's before-config, before-wait, and after-deploy, right? and the before-wait happens when you click the final Deploy button? | 18:42 |
cory_fu | kwmonroe: Yep. It's a little confusing, because before-wait is after the deploy is started but before juju-wait is called. after-deploy is after juju-wait finishes, and is a legacy name | 18:43 |
cory_fu | kwmonroe: This list is in order of the phases, though it's not explicit about exactly when each is run: https://github.com/conjure-up/conjure-up/blob/master/conjureup/consts.py#L41-L46 | 18:44 |
cory_fu | I should at least add comments there | 18:44 |
cory_fu | Have to run, changing locations again | 18:51 |
cory_fu | bbiab | 18:51 |
kwmonroe | cool cory_fu! TIL more phases | 18:56 |
kwmonroe | cory_fu: my deployment completed with the updated spells branch. nice job! | 18:57 |
kwmonroe | spiffytech: ^^ fyi | 18:57 |
spiffytech | Excellent! Thanks a bunch! | 19:01 |
bdx | kwmonroe: supsup | 21:05 |
bdx | kwmonroe: cs:~omnivector/slurm-node and cs:~omnivector/slurm-controller | 21:05 |
kwmonroe | roger that bdx | 21:10 |
bdx | kwmonroe: thanks | 21:16 |
bdx | Oooo a shiny https://paste.ubuntu.com/p/nCcvwKjfMZ/ | 21:16 |
kwmonroe | bdx: took 'em both to the prom | 21:21 |
bdx | kwmonroe: I see that, thank you! | 21:22 |
zeestrat | Yay | 21:24 |
cory_fu | kwmonroe: I made a small update to the logging in the spell PR; mind taking a look? | 21:43 |
thumper | rick_h_: not sure if you are around, but do you know the status of https://bugs.launchpad.net/juju/+bug/1770051 ? | 22:38 |
mup | Bug #1770051: ERROR detecting credentials for "localhost" cloud provider: adding certificate "juju": Unknown request type <lxd> <juju:Fix Committed by manadart> <https://launchpad.net/bugs/1770051> | 22:38 |
rick_h_ | thumper: per the bug, that was fixed last week? | 22:40 |
thumper | rick_h_: yeah... got an email on the bug from ryan saying he is still seeing it | 22:40 |
thumper | just looking up the hash | 22:40 |
thumper | rick_h_: hmm... the comment mentions c173 which is tip of develop | 22:41 |
thumper | bollocks | 22:41 |
veebers | kelvinliu: ah, that seems like the issue you were hitting the other day ^^ | 22:44 |
rick_h_ | thumper: where's the email? | 22:47 |
rick_h_ | thumper: I don't see anything in the bug that ryan commented | 22:47 |
rick_h_ | thumper: do you mean this one? https://bugs.launchpad.net/juju/+bug/1771885 | 22:53 |
mup | Bug #1771885: bionic: lxd containers missing search domain in systemd-resolve configuration <bionic> <network> <juju:Fix Committed by ecjones> <juju 2.3:Fix Released by ecjones> <https://launchpad.net/bugs/1771885> | 22:53 |
rick_h_ | that one has some back/forth right now as we collect more details on the failure from the OS folks | 22:53 |
* rick_h_ steps back away and will keep an eye out for thumper replies | 22:58 | |
thumper | rick_h_: no, I was getting emails on that bug | 23:13 |
thumper | rick_h_: although ryan is now saying it is resolved... | 23:14 |
thumper | I'm confused | 23:14 |
thumper | but looks now like we are good | 23:15 |
kelvinliu | veebers, yes, i got this error before on lxc 3. | 23:20 |
wallyworld | vino: just a few small tweaks. the main one is to re-use the existing addService struct in the status test, instead of making a new one. plus the handling of endpoint bindings in the status formatter - the value can be assigned directly rather than making a map and copying values | 23:45 |
vino | wallyworld: ok let me take a look. | 23:47 |