tychicus | if you made changes to a juju config, is there a way to have juju "redeploy" the original config? | 00:55 |
---|---|---|
lazyPower | tychicus you should be able to extrapolate what that original config was by looking at the defaults in config.yaml | 01:17 |
lazyPower | tychicus otherwise, you just juju config application foo=bar baz=bam | 01:17 |
tychicus | yeah, I did that | 01:17 |
tychicus | I think I horked something with openvswitch | 01:17 |
tychicus | isn't there a juju config --reset | 01:19 |
lazyPower | ah yeah | 01:22 |
lazyPower | juju config --reset foo,baz | 01:23 |
lazyPower | args being the keys you wish to reset to default | 01:23 |
lazyPower | #TIL | 01:23 |
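A sketch of the workflow discussed above (the application and key names `foo`/`baz` are placeholders, as in the chat):

```shell
# Show the application's current config, including which values differ from defaults
juju config neutron-gateway

# Set values explicitly...
juju config neutron-gateway foo=bar baz=bam

# ...or reset selected keys back to the charm's defaults
juju config neutron-gateway --reset foo,baz
```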
tychicus | guessing that is only for charms and not for bundles | 01:26 |
lazyPower | Correct | 01:26 |
umbSublime | Is there any documentation to use juju to deploy applications to a public-cloud openstack ? | 03:02 |
Budgie^Smore | umbSublime https://jujucharms.com/docs/1.25/config-openstack | 03:14 |
umbSublime | thx Budgie^Smore | 03:15 |
Budgie^Smore | umbSublime that is for 1.25, for 2.1 https://jujucharms.com/docs/stable/help-openstack | 03:15 |
umbSublime | another quick question is, as an openstack public-cloud provider, is there a guide to make the OS compatible with juju ? | 03:15 |
umbSublime | I mean to be fully compatible and also shown in "juju clouds" | 03:16 |
Budgie^Smore | I haven't seen any yet, CentOS was added pretty recently | 03:17 |
umbSublime | hmm, I don't see CentOS with "juju clouds" on version 2.0.2-xenial-amd64 | 03:18 |
Budgie^Smore | juju clouds just shows the different clouds and isn't related to the OS installed by Juju | 03:18 |
umbSublime | yes of course | 03:18 |
Budgie^Smore | umbSublime actually I am getting maas and juju mixed up, been a long day... juju probably will never support non-apt based OSes (and probably only Ubuntu OSes) | 03:22 |
umbSublime | that's fine. What I mean is: if I were an openstack public-cloud provider, how would I make sure my openstack infra is compatible with juju (so my potential users can deploy to ubuntu instances with juju)? | 03:23 |
umbSublime | I found this https://jujucharms.com/docs/stable/howto-privatecloud which can probably be adapted for my use-case I think | 03:26 |
Budgie^Smore | cool | 03:35 |
kjackal | Good morning Juju world! | 07:51 |
=== frankban|afk is now known as frankban | ||
Mmike | Hi, lads. What are 'hosted models'? I'm getting an error when running the 'juju create-backup' command saying that 'backups are not supported for hosted models' | 09:23 |
Mmike | It's a deployment against openstack provider. So, if I specify a controller model then backups run fine, but is the default model also backed up then? | 09:24 |
Zic | lazyPower: hi, just to report a strange thing that just happened in our production cluster: the iptables FORWARD chain default policy switched from ACCEPT to DROP without *any* action from our team | 11:14 |
Zic | lazyPower: all pods have lost their network | 11:14 |
Zic | I just switched the default policy back with "iptables -P FORWARD ACCEPT" and it was fixed | 11:14 |
Zic | did you ever see that? | 11:14 |
Zic | it affected only one node / 5 | 11:15 |
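For reference, checking and restoring the FORWARD policy as described above looks like this (run as root):

```shell
# Inspect the FORWARD chain's default policy
# (the first line of output is "-P FORWARD ACCEPT" or "-P FORWARD DROP")
iptables -S FORWARD | head -1

# Restore the permissive default so pod-to-pod traffic is forwarded again
iptables -P FORWARD ACCEPT
```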
Zic | the only other service which touches iptables on this node is lxd, but it's not used on this server (it's only here because it's now part of the ubuntu-server metapackage) | 11:20 |
* Zic need moar coffee | 11:20 | |
Zic | needs* </flood> | 11:20 |
Zic | ok, found it, it's not LXD, it's UFW: /etc/default/ufw | 11:23 |
Zic | DEFAULT_FORWARD_POLICY="DROP" | 11:23 |
Zic | confirmed: if I reboot another node, the default policy for the FORWARD chain switches to DROP instead of ACCEPT | 11:36 |
Zic | trying to systemctl disable ufw | 11:37 |
Zic | & reboot | 11:37 |
Zic | hmm, same, maybe it's not ufw either :( | 11:41 |
lazyPower | Zic : thats a new one to me, i haven't seen that | 12:22 |
=== petevg is now known as petevg_afk | ||
magicaltrout | ERROR cannot add application "gitlab": state changing too quickly; try again soon | 14:01 |
magicaltrout | what does that mean in bundletester? | 14:01 |
admcleod | magicaltrout: it's waiting for the state of 'gitlab' to settle, but apparently it's not? | 14:03 |
admcleod | magicaltrout: maybe it's a hook erroring/retrying very quickly? | 14:03 |
magicaltrout | it's not even in the environment | 14:06 |
Zic | lazyPower: I did a new debugging session: it's the cluster that is still on 1.5.3 with docker_from_upstream=true (upgrading is already planned, but the customer has very few windows for a maintenance outage...). In this Docker version, Docker switches the FORWARD chain default policy to DROP when doing "systemctl restart docker" | 14:06 |
Zic | lazyPower: the Docker version from the Ubuntu archive does not | 14:06 |
lazyPower | Zic: ah good insight | 14:07 |
Zic | lazyPower: it makes me think that the last time we spoke about docker_from_upstream, when all my containers had no network, it was maybe the exact same problem | 14:07 |
lazyPower | weird, i would think kube-proxy would re-read the tables and fix that. | 14:07 |
admcleod | magicaltrout: huh? | 14:07 |
magicaltrout | well juju status is empty | 14:07 |
Zic | anyway, I plan to switch back to docker_from_upstream=false before upgrading to 1.6, when I get the maintenance window from my customer :) | 14:07 |
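For context on the behaviour Zic hit: upstream Docker 1.13+ deliberately sets the FORWARD policy to DROP on daemon start. If switching back to the archive package isn't immediately possible, one common mitigation (a sketch; the drop-in filename is illustrative) is a systemd drop-in that re-opens forwarding after each daemon start:

```shell
# Re-accept forwarded traffic every time dockerd (re)starts (run as root)
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
EOF
systemctl daemon-reload
systemctl restart docker
```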
lazyPower | i think my bluetooth keyboard is dying on me | 14:07 |
admcleod | magicaltrout: is there some weird multi-model oops thing going on | 14:08 |
lazyPower | Zic just FYI, with this week's release that will happen for you automatically | 14:08 |
lazyPower | we're migrating users who have enabled docker_from_upstream back to the archive version, and disabling the config flag | 14:09 |
magicaltrout | nope as vanilla as it comes | 14:09 |
magicaltrout | although now i paste the command i see the same as well | 14:09 |
ryebot | I'm trying to bootstrap lxd on an aws box: | 14:10 |
ryebot | $ juju bootstrap lxd lxd-test | 14:10 |
ryebot | ERROR no addresses match | 14:10 |
ryebot | lxdbr0 exists | 14:10 |
ryebot | not sure what to check here | 14:10 |
admcleod | ryebot: try with --debug | 14:11 |
admcleod | magicaltrout: try with --debug | 14:11 |
admcleod | :} | 14:11 |
ryebot | will do, thanks | 14:11 |
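In my experience the "no addresses match" error at bootstrap usually means lxdbr0 exists but has no usable IPv4 subnet; a sketch of things to check (for the LXD 2.x deb on xenial):

```shell
# Does the bridge actually have an IPv4 address?
ip addr show lxdbr0

# If not, configure one (older LXD deb packages use dpkg-reconfigure for this)
sudo dpkg-reconfigure -p medium lxd

# Then retry with verbose output
juju bootstrap lxd lxd-test --debug
```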
=== bdx_ is now known as bdx | ||
Zic | lazyPower: nice, I expect that we will have no GC error on docker-ubuntu / K8s 1.6... or that pod limits will mitigate this bad effect | 14:12 |
tychicus | has anyone run into an issue where the neutron-l3-agent, neutron-metadata-agent and neutron-dhcp-agent sporadically go missing from the neutron gateway? | 15:03 |
tychicus | issuing apt install neutron-l3-agent neutron-metadata-agent neutron-dhcp-agent | 15:03 |
tychicus | then sudo systemctl restart jujud-unit-neutron-gateway-1.service | 15:03 |
tychicus | will re-create the corresponding config files and bring things back online | 15:04 |
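tychicus's recovery steps above, as one sequence (the unit name matches the example in the discussion and will differ per deployment):

```shell
# Reinstall the agents that went missing
sudo apt install neutron-l3-agent neutron-metadata-agent neutron-dhcp-agent

# Restart the unit agent so the charm re-renders the agents' config files
sudo systemctl restart jujud-unit-neutron-gateway-1.service
```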
admcleod | tychicus: not i - you may want to also ask this in #openstack-charms | 15:06 |
tychicus | admcleod: didn't realize that was there, thanks! | 15:08 |
tychicus | separate question, I did something really silly I accidentally deployed units to my juju controller and juju remove-unit won't remove the units | 15:22 |
tychicus | is there any way to force the removal, or would I be better off backing up and re-provisioning my juju controller? | 15:22 |
magicaltrout | weirdly admcleod ... or maybe not maybe its a feature | 15:31 |
magicaltrout | if i destroy my controller | 15:31 |
magicaltrout | it runs bundletester again | 15:31 |
magicaltrout | but then i have to kill it again and rebootstrap | 15:31 |
magicaltrout | https://gist.github.com/buggtb/09ccdea9771f950ce324d2e46c24126d | 15:32 |
magicaltrout | some random stuff about it being a bundle | 15:33 |
admcleod | magicaltrout: what happens if you manually deploy th gitlab-server charm? | 15:34 |
magicaltrout | if i deploy it without the additional flags | 15:35 |
magicaltrout | it deploys | 15:35 |
rick_h | tychicus: you can force it a bit with remove-machine on the machine the units are on | 15:42 |
rick_h | tychicus: I'm curious why they won't remove. Maybe there's an error/etc and it needs some juju resolved xxxx ? | 15:43 |
admcleod | magicaltrout: maybe something for tvansteenburgh / coryfu | 15:44 |
tychicus | rick_h: here is what I see | 15:45 |
tychicus | nova-compute/3 blocked executing 0 10.110.0.111 (leader-settings-changed) Missing relations: messaging, image | 15:45 |
tychicus | neutron-openvswitch/11* error idle 10.110.0.111 hook failed: "install" | 15:45 |
rick_h | tychicus: yea, it's in an error state. So to remove it you'll need to first mark it resolved to tell juju "yea yea, it's ok carry on" | 15:46 |
tychicus | are there any specific logs that you would like me to examine | 15:46 |
tychicus | rick_h, I tried that once before, but I'll try it again | 15:46 |
rick_h | tychicus: so as you do that it'll ignore one hook issue and it might run into an issue in the next hook and such | 15:46 |
rick_h | tychicus: so honestly, sometimes you have to do it a few times while removing things | 15:47 |
rick_h | tychicus: but as I said, one thing you can do is to take out the machine under the unit as well | 15:47 |
tychicus | my only hesitation is that the machine is my juju controller | 15:47 |
rick_h | tychicus: ? well the unit isn't the controller. What machine is neutron-openvswitch/11 on ? | 15:48 |
tychicus | machine 0 | 15:49 |
rick_h | tychicus: so this is an openstack on the controller dense mode? | 15:49 |
rick_h | tychicus: yea, that won't work then. best thing to do is run debug-log in one terminal and keep marking resolved and watch it keep trying to tear down the unit | 15:49 |
tychicus | what I did was: juju add-unit nova-compute --to 0 | 15:50 |
tychicus | meant to do juju add-unit nova-compute --to 10 | 15:51 |
tychicus | but missed a keystroke | 15:51 |
rick_h | tychicus: huh, didn't it fail with there being no machine 10? | 15:52 |
admcleod | tychicus: sometimes, i put "juju resolved thing/0 ; juju remove-unit thing/0" in a for loop, and go do something else for a minute | 15:53 |
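admcleod's retry loop, spelled out (the unit name and iteration count are illustrative):

```shell
# Keep marking the failed unit resolved and asking for removal until it goes away;
# each pass clears one hook error, so several iterations may be needed
for i in $(seq 1 10); do
  juju resolved neutron-openvswitch/11
  juju remove-unit neutron-openvswitch/11
  sleep 30
done
```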
tychicus | I think it may be a dependency issue because nova-compute depends on neutron-openvswitch | 15:54 |
tychicus | rick_h: no I never got the machine 10 command typed in, I had intended to place on 10 but accidentally placed on 0 | 15:54 |
tychicus | admcleod: I'll give that a try | 15:55 |
rick_h | tychicus: oic, I got it the other way around. | 15:55 |
lazyPower | magicaltrout thats a fun bug for sure. | 16:25 |
lazyPower | magicaltrout i'm not sure why it would be complaining about quick state changes. I suspect that's something we need to get filed as a bug though | 16:25 |
bdx | has anyone else noticed dangling security groups left behind by juju? | 17:17 |
bdx | http://imgur.com/GF8uPTg | 17:17 |
bdx | :-( | 17:17 |
bdx | only 57 instances deployed .... http://imgur.com/a/Z2e1s | 17:18 |
bdx | yet I have 640 security groups | 17:18 |
* bdx weeps | 17:19 | |
rick_h | bdx: huh, there's been bugs there but they were fixed I thought. https://bugs.launchpad.net/juju/+bug/1385276 and https://bugs.launchpad.net/juju/+bug/1625624 | 17:21 |
mup | Bug #1385276: juju leaves security groups behind in aws <bug-squad> <destroy-environment> <ec2-provider> <jujuqa> <repeatability> <security> <juju:Fix Released> <juju-core:Won't Fix> <juju-core 1.25:Won't Fix> <https://launchpad.net/bugs/1385276> | 17:21 |
mup | Bug #1625624: juju 2 doesn't remove openstack security groups <ci> <landscape> <openstack-provider> <sts> <juju:Fix Released by gz> <juju 2.0:Fix Released by gnuoy> <juju 2.1:Fix Released> <juju 2.2:Fix Committed> <juju-core:Fix Released by gnuoy> <https://launchpad.net/bugs/1625624> | 17:21 |
bdx | rick_h: I just tagged you on that bug | 17:36 |
Budgie^Smore | anyone got any suggestions for intel nuc like systems? | 17:36 |
bdx | possibly you can help get some heat on it | 17:36 |
rick_h | bdx: rgr | 17:37 |
rick_h | Budgie^Smore: sorry, just use nucs here. I've not tinkered with the others like it | 17:37 |
rick_h | Budgie^Smore: biggest thing is the AMT support for MAAS which most of them I don't think do. Even in the NUC line you have to get a couple specific models | 17:37 |
Budgie^Smore | rick_h yeah I figured as much :-/ | 17:38 |
Budgie^Smore | rick_h I just have the one large system, running windows (cause I like to play a game that doesn't run well under linux even with wine), and it's struggling | 17:39 |
Budgie^Smore | rick_h trying to decide it is better to get nuc or nuc-like systems or spend the money on cloud credit | 17:41 |
rick_h | Budgie^Smore: so the cloud stuff is handy as you can move it around and make it bigger/smaller/etc. However, real hardware is always faster. | 17:42 |
rick_h | Budgie^Smore: when I push my nucs I'm always floored at how much faster they are than a VM | 17:42 |
Budgie^Smore | rick_h yeah I get that, having been a bare metal guy most of my career | 17:43 |
tychicus | Budgie^Smore: what's your budget, the intel xeon D stuff is really good, but a bit more expensive than a nuc | 17:55 |
Budgie^Smore | tychicus really only want to spend a couple hundred on each and have 4 or 5 | 17:57 |
lazyPower | Budgie^Smore i have a handful of gigabyte brix systems myself | 17:58 |
Budgie^Smore | lazyPower running MaaS? | 17:59 |
lazyPower | negative | 17:59 |
lazyPower | they don't support AMT so i image them every so often | 17:59 |
lazyPower | that or i could enlist them in maas and pull sneaker-net power | 17:59 |
tychicus | May be more than you want to spend, but you can get much better density than the nuc, since the xeon D can handle 128GB ram vs 16GB with the AMT capable NUC https://www.amazon.com/dp/B01M0VTV3E/ref=psdc_11036071_t1_B01M0X365V | 18:02 |
Budgie^Smore | yeah I also don't need 6Gb NICs and SFP+ ports... not really looking at doing SDN at home yet | 18:09 |
Budgie^Smore | admittedly 2 would be nice but I can "fake" that easily | 18:10 |
Budgie^Smore | https://www.amazon.com/ASRock-DESKMINI-110W-BB-US/dp/B01L3J1JFQ/ might be promising | 18:13 |
bodie_ | Can't figure out how to get rid of my bad configs. I have a couple of GCE controllers that I destroyed the real VMs for and now 'status' and so on hang | 18:18 |
bodie_ | OS X | 18:18 |
bodie_ | 2.1.2-elcapitan-amd64 | 18:18 |
bodie_ | Any pointers? There doesn't appear to be a .jenv file in $HOME | 18:20 |
rick_h | bodie_: sorry, I don't follow. Status hangs, but the controllers are gone? What status are you checking? | 18:27 |
rick_h | bodie_: you can use juju unregister to remove controllers that you manually pulled from your controllers list | 18:28 |
bodie_ | Hi Rick. The controllers are still in my list, since destroy-controller also failed to finish... | 18:28 |
bodie_ | I just want to erase everything and start over from scratch. | 18:28 |
rick_h | bodie_: k, you can use unregister to clean up the controllers from your listing. If you want to remove config/etc they're int he data directory (~/.local/share/juju/) | 18:31 |
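A sketch of the cleanup rick_h describes (the controller name is illustrative):

```shell
# Drop the dead controller from the local client's list
# (this does not touch any remote cloud resources)
juju unregister my-gce-controller

# Client-side state (controllers, accounts, models) lives here
ls ~/.local/share/juju/
```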
bodie_ | Ok :+1: thanks @rick_h | 18:34 |
rick_h | bodie_: k, let me know how you get along and if there's anything else you hit. | 18:35 |
bodie_ | rick_h -- the real issue was that my deploy was stuck waiting for some machines to come up | 18:36 |
bodie_ | GCE provider got set up OK, and I was able to deploy and use juju-gui | 18:36 |
bodie_ | I'd previously tried to deploy canonical kube from my CLI, but the machines were stuck waiting | 18:37 |
bodie_ | (Maybe due to a bad connection on the train leaving some API calls unfinished?) | 18:38 |
bodie_ | So, I blew that setup away and tried to use juju-gui to deploy it | 18:38 |
bodie_ | But I suspect somewhere in there something got mixed up | 18:38 |
lazyPower | tychicus the perk to a nuc vs that xeon: it's a 1u vs 3 slices of toast. | 18:40 |
lazyPower | i myself have some space constraints. I actually have a 2u hp proliant sitting in my closet collecting dust because fan noise and space :| | 18:40 |
lazyPower | i had it running in my office -once- since i moved in. It sounded like a b2 bomber landed in my office and hangouts were out the window :{ | 18:40 |
tychicus | don't get me wrong, I really like the form and noise factor of the nuc | 18:42 |
Budgie^Smore | not to mention those of us that want that environment aren't running huge workloads just need multiple small machines | 18:42 |
Budgie^Smore | How many times have you deployed CDK to it lazyPower? | 18:43 |
lazyPower | Budgie^Smore every release | 18:44 |
lazyPower | I also have an older i7 workstation hooked in as a beefier worker. I wanted something to run the game servers on | 18:45 |
lazyPower | so to date: its 2 Brix, and 1 i7 | 18:45 |
tychicus | my home lab still consists of an old dell core2duo machine and a mac mini, both were free so they were the right price | 18:46 |
Budgie^Smore | I think someone is missing out on a significant niche market | 18:48 |
jrwren | those old core2duo's work wonders IME ;) | 18:56 |
tychicus | can the import-ssh-key or add-ssh-key command be associated with a controller rather than a model? | 18:56 |
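The question doesn't get answered in channel, but in Juju 2.x both commands are model-scoped rather than controller-scoped; you can target a specific model (including the controller model) with -m. A sketch, where the GitHub username is illustrative:

```shell
# Import public keys from GitHub (gh:) or Launchpad (lp:) into the controller model
juju import-ssh-key -m controller gh:someuser

# Or add a raw public key to a specific model
juju add-ssh-key -m controller "$(cat ~/.ssh/id_rsa.pub)"
```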
tychicus | rick_h: admcleod: got the stuck charms removed | 19:17 |
tychicus | juju resolved --no-retry neutron-openvswitch/11; juju remove-unit neutron-openvswitch/11; juju remove-unit nova-compute/3 | 19:18 |
tychicus | didn't know about the --no-retry option for resolved, but that did the trick | 19:19 |
tychicus | thanks again for your help, and pointing me in the right direction | 19:19 |
admcleod | tychicus: ah cool | 19:54 |
lazyPower | well to be fair i didn't include any of the appliances and arm based boards i have running | 19:55 |
lazyPower | i seem to run machines for the sake of blinken lights | 19:56 |
bodie_ | rick_h: I managed to clean up and re-bootstrap my google provider, but when I `deploy canonical-kubernetes`, some machines remain pending. | 19:56 |
lazyPower | bodie_ do you have a fresh google cloud account? | 19:56 |
Budgie^Smore | lazyPower hehehe... I am just missing my lab systems | 19:57 |
bodie_ | Yeah. I created the project / juju credentials per the user guide | 19:57 |
rick_h | tychicus: good to hear | 19:57 |
lazyPower | bodie_ there are CPU restrictions on a new account, and i know that canonical-kubernetes is a fairly large requirement... | 19:57 |
bodie_ | Mmm... that could be it | 19:57 |
lazyPower | bodie_ you're running into that limit :( i'm 80% certain of this | 19:57 |
rick_h | lazyPower: bodie_ yea, I think you need to request > 10 cpus or something | 19:57 |
lazyPower | bodie_ in the interim to get you un-blocked, you can deploy kubernetes-core, which is a much lighter bundle for experimentation | 19:57 |
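Under the quota discussed above (8 cores on a fresh account), the lighter bundle fits where the full one doesn't; a sketch (machine counts are approximate for bundles of that era):

```shell
# kubernetes-core deploys to roughly 2 machines, vs ~9-10 for canonical-kubernetes
juju deploy kubernetes-core

# Watch the machines come up
watch -n 5 juju status
```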
bodie_ | Righto | 19:58 |
bodie_ | Thanks | 19:58 |
lazyPower | No problem, if that continues to be problematic, just let me know and i'll do what i can | 19:59 |
lazyPower | even if its perform a rain dance to appease our google overlords :) | 19:59 |
admcleod | i think they're pretty quick at increasing limits? | 19:59 |
lazyPower | admcleod in my experience it can take a few hours from request to fulfilment | 19:59 |
bodie_ | Hail to the Basilisk | 20:00 |
admcleod | a few HOURS? | 20:00 |
lazyPower | but theyre pretty good about just handing it over | 20:00 |
lazyPower | yeah, submit support ticket | 20:00 |
lazyPower | wait | 20:00 |
lazyPower | profit | 20:00 |
lazyPower | unless you have an account rep, then you can call | 20:00 |
* admcleod hops on spaghetti monster and flies off towards the horizon | 20:00 | |
bodie_ | Apparently "free tier" (new $300 of credit mode) doesn't let you have more than 8 cores up at a time | 20:04 |
bodie_ | c'est la vie | 20:04 |
Budgie^Smore | what would be a good price point for say a 5 (maas + 4 node) system cluster for a "home lab"? | 22:12 |
magicaltrout | one million dollars........ | 22:13 |
Budgie^Smore | you got a million magicaltrout, I will build you the best "home lab" cluster ;-) | 22:14 |
magicaltrout | i'm building one actually similar to the ubuntu orange box that these chaps take on tour | 22:14 |
magicaltrout | except i'm cheap and cheating | 22:15 |
magicaltrout | but i am using these: http://www.udoo.org/udoo-x86/ | 22:15 |
Budgie^Smore | yeah I liked the look of the orange box | 22:15 |
Budgie^Smore | what do you use for "power management"? | 22:16 |
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!