[05:44] please make remove-unit really work
[05:49] also does set-model-constraints tags=test work for all machines and unit deployments?
[05:50] the problem im having is that constraints do not work when add-unit is used
[05:51] so to scale out there is no way to use a specific machine set or is there?
[05:55] this is really a showstopper if i cant scale out to machine sets that share the same tags
[06:51] Hi Folks. I am facing an issue publishing our charm for review
[06:52] charm list resources shows
[06:52] [Service] RESOURCE REVISION install -1
[06:52] but running "charm release cs:~vtas-hyperscale-ci/hyperscale-0 --resource install-1 --channel edge" gives error
[06:53] ERROR cannot release charm or bundle: bad request: charm published with incorrect resources: cs:~vtas-hyperscale-ci/hyperscale-0 resource "install/1" not found
[08:25] Good morning Juju world!
=== saber is now known as Guest50267
[08:53] Hey all. Anyone have an idea on charm push-term?
[08:53] getting error ERROR unrecognized command: charm push-term
[08:54] charm version is 2.2.0-0ubuntu1~ubuntu16.
[09:00] hi team, I am trying to install all OpenStack services using Juju. However, due to this bug - https://bugs.launchpad.net/charms/+bug/1417407, the haproxy cfg files are getting overridden by new services.
[09:00] Bug #1417407: haproxy enabled by default for keystone, cinder, openstack-dashboard, nova-cloud-controller and glance breaks deploying multiple light services to the same node.
[09:01] How do I fix this issue?
[09:01] Is there a way to install OpenStack without haproxy?
[09:10] hmm
[09:10] '3':
[09:10] series: xenial
[09:10] contraints: tags=devcloud,node
[09:10] ah forget it
[09:11] i should learn how to write constraints
[09:25] ybaumy: what is wrong with it?
[09:26] i wrote contraints instead of constraints
[09:26] -s
[09:26] in the first
[09:26] ah :P
[09:26] dem morning typos
[09:27] im up since 5 so its forgiven
[09:28] cnf have you ever had a problem adding a unit with constraints?
[09:28] uhm, i learned of constraints yesterday
[09:28] i have yet to successfully use them
[09:29] i wonder if i have to add-machine first then add-unit --to
[09:29] ask me again in 20 minutes, i'll tell you if my stuff came up :P
[10:14] tejaswi: You could ask at #openstack-charms
[10:15] ybaumy: you do not need to first add a machine and then deploy the app. You just do a juju deploy .... --constraints "mem=4G ...."
[10:16] kjackal: no its about scaling out with add-unit
[10:17] how do i make constraints inherited
[10:17] if that is the correct term
[10:21] kjackal: ok
[14:23] need a hack, how do i manually unset a state?
[14:32] lazyPower: ping
[14:35] magicaltrout: charms.reactive remove_state
[14:35] I actually had another question, but that will hopefully help resolve it
[14:35] ever seen this
[14:36] 2017-03-17 14:30:26 INFO install Generating RSA private key, 2048 bit long modulus
[14:36] 2017-03-17 14:30:27 INFO install unable to write 'random state'
[14:36] 2017-03-17 14:30:27 INFO install e is 65537 (0x10001)
[14:36] O_o
[14:36] unable to write random state?
[14:36] the weird thing is if i actually run the command it runs fine
[14:36] thats a new one on me
[14:36] marcoceppi: ^ have you seen this?
[14:36] you've seen like every nit noid thing we've come up with
[14:37] its in your code, thats why i'm asking :P
[14:37] * lazyPower crosses fingers
[14:37] magicaltrout: well lets not point fingers
[14:37] what code?
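On the scale-out question at [05:50]-[05:51]: a minimal sketch of how tag constraints and add-unit usually fit together, assuming a MAAS cloud, a hypothetical application name "myapp" and a hypothetical machine tag "devcloud".

    # constraints recorded against the application are reused for later units
    juju deploy myapp --constraints "tags=devcloud"
    juju set-constraints myapp tags=devcloud      # or set/update them after deploy
    juju add-unit myapp                           # new units should inherit the application constraints
    # model-wide defaults apply to any machine the model provisions
    juju set-model-constraints tags=devcloud
    # explicit placement is the other route: add a matching machine, then target it
    juju add-machine --constraints "tags=devcloud"   # prints a machine id, e.g. 3
    juju add-unit myapp --to 3

This is a sketch of the usual behaviour, not a confirmed answer to whether add-unit honoured tag constraints in the reporter's environment.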
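On the charm release error at [06:52]-[06:53]: the REVISION of -1 in the listing suggests no revision of the "install" resource has actually been uploaded to the store yet, which would explain "resource install/1 not found". A hedged sketch of the usual workflow (the local file name ./install.bin and the resulting revision number are assumptions):

    charm attach cs:~vtas-hyperscale-ci/hyperscale-0 install=./install.bin
    charm list-resources cs:~vtas-hyperscale-ci/hyperscale-0      # should now show a revision >= 0
    charm release cs:~vtas-hyperscale-ci/hyperscale-0 --resource install-0 --channel edge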
[14:37] hehe, k8s master
[14:37] but this is on my weird openstack deployment platform, so i'm not blaming you, its probably entropy rubbish
[14:38] yeah thats what i was about to say, it sounds like the unit hasn't generated enough entropy
[14:38] but my openstack cluster and canonical k8s really hate each other :)
[14:38] and we host entropy.ubuntu.com
[14:38] which should have given you enough of a random seed for entropy
[14:38] I'm gonna remove authentication.setup
[14:38] in the hope a rerun will kick it enough
[14:39] cause it just stops, but weirdly, i wouldn't have thought it got set
[14:39] so I don't know why it isn't run a second time
[14:41] bombed again
[14:41] what the
[14:42] weird and interesting
[14:43] magicaltrout: not enough /dev/random bits?
[14:44] http://pastebin.com/5WUhAr7Y
[14:44] well
[14:44] if I run the hook
[14:44] it fails
[14:44] if I run the command via juju run
[14:44] it fails
[14:44] if I ssh into the box and run it manually
[14:44] it runs
[14:45] * magicaltrout gets out the big # based crowbar
[14:47] magicaltrout: is there a .rnd file in /root ?
[14:47] yup
[14:56] sorry lazyPower not trying to grumble, just trying to figure out juju/k8s weirdness on a random openstack cluster I have zero control over but have been asked to deploy kubernetes to
[14:56] :)
[14:56] anyway, i've commented it out, we'll see how we go
[15:10] magicaltrout: oh not even upset man, just trying to keep my head above water atm. I'm sick, heavily medicated, and in/out of meetings for the last hour.
[15:11] join the club, my nose was streaming on the way back from NYC
[15:11] i'm not sure my fellow passengers were overly impressed
[15:12] yeah, must be going around.
[15:12] * lazyPower reads more of the backscroll
[15:12] magicaltrout: this does seem like an issue with entropy
[15:13] i'm not sure why its working when you manually run it vs script execution though.
[15:13] yeah its a bit odd
[15:13] perhaps the fact you've logged in and issued the command gave it the bump it needed in terms of entropy
[15:13] this is outside my scope of knowledge in crypto
[15:13] but it also fails in juju run as well
[15:13] so its happy when you have a shell open
[15:13] oh well
[15:13] i would probably be poking dustin about this and asking him to teach me to fish, or for links so i can read about fishing.
[15:14] this is possibly the slowest openstack cluster the US Govt could possibly own
[15:14] pfft
[15:14] you had to challenge them didnt you?
[15:14] your next stack will be a bunch of minnowboards
[15:14] well i got more ram
[15:14] but apt update takes ~25 minutes per server :)
[15:14] I think i might be a bit IO bound
[15:15] magicaltrout: welcome to my slow-server case. There are some apt tunings you can do.
[15:15] o/ juju world!
[15:15] hope you're feeling better today lazyPower
[15:15] magicaltrout: I've always felt that charms should be optimized to run apt-update minimally, but instead they run it rather often. It really makes for slow charm deploys when things are on slow IO :(
[15:15] Budgie^Smore: i'm mostly dead inside because of all the medication, but i'm present and accounted for.
[15:16] tell me about it jrwren, slow or completely time out
[15:16] * rick_h ships lazyPower some OJ
[15:16] Budgie^Smore: how are things going for you and your maas k8s deploy?
[15:16] i spin up units manually and run apt update before dumping juju workloads on them
[15:16] rick_h: <3
[15:16] magicaltrout: are you using the daily image stream?
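On the "unable to write 'random state'" discussion above: two things worth checking on the affected unit. This is a hedged sketch, not a confirmed fix; the openssl message usually refers to its ~/.rnd seed file rather than the entropy pool itself, but both are cheap to rule out.

    cat /proc/sys/kernel/random/entropy_avail   # low values (a few hundred or less) suggest starved entropy
    sudo apt-get install -y haveged             # userspace entropy daemon, often enough on cloud images
    ls -l /root/.rnd                            # openssl prints "unable to write 'random state'" when it
                                                # cannot write this seed file (wrong owner, or HOME not set
                                                # in the hook environment)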
[15:17] lazyPower oh I keep hitting hurdle after hurdle there and all of them of my own making probably
[15:17] magicaltrout: if you're using daily images your deltas should be pretty minimal
[15:17] there aren't that many lazyPower, but it takes forever installing a new kernel
[15:17] magicaltrout: i know if you're using the "stable" image stream you could be somewhere near 3 months of package updates, depending on patch release and which image.
[15:18] lazyPower the latest one is having me rethink using VBox in favor of kvm due to not being able to change the power types in MaaS 2.1
[15:18] magicaltrout: do you find that dumping juju workloads on them is also slow because the juju workloads run apt update / upgrade as well?
[15:19] na thats not as bad jrwren cause the delta is 0 by this time
[15:19] lazyPower still need to figure out how to solve the cert issue on my cloud k8s cluster too
[15:19] magicaltrout: ah, well that is good.
[15:19] Budgie^Smore: - cert issue? i think i've missed context here... sounds like you've told me about this one and i forgot about it....
[15:19] Budgie^Smore: do you mind re-iterating and i can try to help?
[15:20] lazyPower but that one is getting some larger instances as dev created a tool that will swallow memory like it is going out of fashion
[15:20] hah
[15:20] I spent about 6 hours on tuesday trying to get the k8s routing working and it kept failing, i suspect because the update times were so rubbish juju failed or hadn't updated some services
[15:20] is it java based? :D
[15:20] I have more ram now, just slow disks, so hopefully I'll get further :)
[15:21] magicaltrout: k8s routing... meaning ingress?
[15:21] yeah, and dashboard
[15:21] they all failed to route out
[15:21] lazyPower sure, give me a few mins to pull the actual error but basically it looks like one of the tools is complaining about an unknown CA which is causing it not to display logs either in the dashboard or from kubectl logs
[15:21] ok, you do know that we dont enable external dashboard access outside of proxy by default right?
[15:21] you can easily default that, but we dont make it easy because its a huge security hole
[15:21] yeah i run the kubectl proxy
[15:21] just get 504's
[15:22] Budgie^Smore: ok, that sounds like 1 of 2 things. I'll wait for the error before i diagnose
[15:22] anyway, i'm not there yet today, when I am, we'll see :)
[15:22] magicaltrout: ok, when you do get there, make sure you have my attention and i'll help
[15:22] we have a bug open from another user that ran into unexpected behavior
[15:22] stuff that i haven't been able to reproduce
[15:22] i mean i've only been spinning up openstack nodes for it for like 3 hours! :P
[15:22] so i'm highly interested in finding out wtf is going on
[15:22] yeah i saw that ticket
[15:22] it looked similar
[15:22] yeah man, if its the same, lets find it and nail it to the wall
[15:22] i dont like bugs i cant reproduce
[15:26] I have one of those with the MaaS team right now lazyPower ;-)
[15:32] oh I just noticed that I am using IPv6 at home!
[15:38] Budgie^Smore: you mean a non-reproducible bug by the maintainers?
[15:38] Budgie^Smore: kill it with fire
[15:39] lazyPower yeah, install related issue where the rack controller doesn't register to the region controller
[15:39] :S
[15:39] fun times, that one
[15:40] Budgie^Smore: i'm guessing nothing in the logs of the region controller? Nothing indicative that the network is not the culprit?
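On the dashboard 504s at [15:21]: a rough shape of proxied dashboard access with CDK of that era. The kubeconfig path is an assumption (the charms of that period typically dropped one at /home/ubuntu/config on the master unit).

    juju scp kubernetes-master/0:config ~/.kube/config
    kubectl proxy --port=8001 &
    curl -s http://127.0.0.1:8001/ui        # 504s here usually point at pod-network / service routing
                                            # rather than at the proxy itself

As it turns out later in the log ([16:22]), missing UDP routes between the nodes (flannel traffic) was the culprit in this case.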
[15:41] lazyPower I saved a copy of the virtual disk one of the times it happened but haven't heard from the maintainers if they would like it... there is an error in there but it didn't point to the culprit, I suspect network latency issues though
[15:47] ok so lazyPower, here is the output from kubectl --v=8 logs - http://paste.ubuntu.com/24195846/
[15:49] ok, i see the x509 errors in here. This is consistent with a couple problems that might have arisen...
[15:50] lazyPower, it is worth noting that I only started getting this error once I upgraded my kubectl client version to 1.5.3, with 1.5.1 it was an annoying generic catch-all failure
[15:50] 1 sec, let me finish getting this multi-cloud test run of the etcd3 stuff going. Can you get me the output of juju run-action kubernetes-master/0 debug
[15:50] post that somewhere secure, it'll have logs + config files that you dont want to leak
[15:51] stormmore: that sounds like some of the extra debugging that landed in 1.5.2, so it makes sense that was expanded when you upgraded.
[15:51] but i'll want the output of the debug action on your master so i can verify the tls components were installed correctly
[15:52] I am wondering if this is an effect of upgrading the cluster
[15:53] (and messing it up when I just did a juju deploy over the top of the current one)
[15:54] lazyPower, OK to DCC it to you?
[15:56] hatch: I added a comment to https://github.com/juju/docs/issues/1651
[15:56] hatch: I also opened up https://github.com/juju/docs/issues/1723
[15:56] kewl thanks
[15:58] stormmore: hang on and i'll see about deploying a secure upload
[15:58] i'm on znc + weechat and haven't used DCC since the 90's
[15:59] that combination means i'm probably going to have tears at the end of it :P
[15:59] right! yeah it has been awhile for me too... not even sure how well Hexchat handles it
[16:02] DCC! jesus, i'd forgotten that was a thing
[16:02] those were the days
[16:02] it didn't work then
[16:02] it won't work now :)
[16:05] Hey folks. I am unable to get juju terms to work with my charm.
[16:06] Even though the charm is not agreed upon, juju still doesn't prompt me
[16:06] the terms is not agreed upon*
[16:13] OK I have the debug log :) reminds me of sosreport back in the day
[16:22] I figured out my dashboard faux pas lazyPower, no UDP routes between the nodes
[16:22] oops :)
[16:30] hatch: one more https://github.com/juju/docs/issues/1724
[16:30] magicaltrout: ah interesting
[16:31] hmm seems I missed psarwate
[16:31] I wonder if Juju was actually deploying his charm though...
[16:33] thanks arosales
[16:34] also if psarwate was deploying locally then terms will be bypassed
[17:39] cs:~containers/etcd-25 (edge channel) was just released, introducing a deb=>snap migration path and a roll forward to etcd 3.1.3 (juju config etcd channel=3.1/stable) -- you'll need to follow the migration path in the readme however. All early feedback appreciated
[17:42] plus sending a notice to the list with a bit more details than this blurb ^
[17:43] lazyPower: congrats!
[17:44] rick_h: its pretty cool to see the capability of rolling forward and rolling backwards using a single config option. (potential data mismatch if you're actively receiving data during the forward jump, rollback will restore to the state just prior to the upgrade). but its nice to know we support this now in the event of troubled clusters.
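On fetching the debug output requested at [15:50]: a sketch of how action results are normally retrieved; the exact output keys and any archive path are assumptions, substitute whatever the action actually prints.

    juju run-action kubernetes-master/0 debug      # prints an action id
    juju show-action-output <action-id>            # substitute the id from the previous command
    # if the output references an archive on the unit, pull it down with juju scp, e.g.:
    juju scp kubernetes-master/0:/path/from/output ./master-debug.tar.gz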
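On the etcd roll forward/back described at [17:39]-[17:44]: the whole operation is a single config change. The 3.1/stable value comes from the announcement above; any other channel name shown here (e.g. a 2.3 track for rolling back) is an assumption for illustration only.

    juju config etcd channel=3.1/stable    # roll forward to the 3.1.x snap, per the readme's migration path
    juju config etcd channel=2.3/stable    # hypothetical rollback example, subject to the data caveat above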
[17:44] lazyPower: yea, cool how things are moving
=== hml_ is now known as hml
=== hml_ is now known as hml
[20:19] cory_fu: you did some work to make rebuilding the python-libjuju _client easier, right?
[20:26] btw lazyPower I am looking at using SDN in this cluster
[20:47] petevg: Yes. Are you planning to rebuild it?
[20:48] cory_fu: yes. Though we may have to tackle the multiple versions thing sooner rather than later.
[20:48] cory_fu: filing a bug and pushing a repro right now ...
[20:50] cory_fu: issue at https://github.com/juju/python-libjuju/issues/90, and test reproducing it at https://github.com/juju/python-libjuju/pull/91
[20:50] cory_fu: do the docs on building live anywhere?
[20:51] petevg: Yes, but my Makefile changes seem to have disappeared
[20:51] cory_fu: that's why I couldn't find them :-/
[20:53] wth
[20:56] cory_fu: git ate your homework!
[20:59] petevg: Well, I can't for the life of me figure out what happened to the Makefile changes I made, but they're not strictly necessary and were relatively minor. The important bit is at https://github.com/juju/schemagen
[21:00] petevg: Build and run that, directing the output to juju/client/schemas.json, and then run the "client" target: https://github.com/juju/python-libjuju/blob/master/Makefile#L13
[21:00] cory_fu: cool. thx.
[21:01] cory_fu: since the alpha has breaking changes, this is also the time to worry about different versions of the _client.py in python-libjuju, right? As in: I shouldn't just slam these changes in ...
[21:03] petevg: Well, it's just one API, and we could work around it in the wrapper around FullStatus that we definitely should have. But we do need to support facade versions eventually so if you can come up with a good way to handle that, I'm down
[21:04] so lazyPower if I run the microbot service and then want to check it
[21:04] cory_fu: cool. That sounds more fun than downgrading juju and having to deal with halfway destroyed models cluttering my env :-)
[21:04] do I have to munge my hosts file to make it resolve?
[21:04] petevg: It would be good to have _client.py split up anyway, and splitting it by facade version shouldn't be much harder.
[21:05] petevg: The hard bit would be figuring out which facade versions to use based on the controller. I don't know if there's a way to query that
[21:05] petevg: Also, I would imagine that the object layer code would need to know how to work with the different facade versions as well.
[21:05] Ayup.
[21:07] petevg: Aren't the facades supposed to keep us from having version-specific logic in the code? I'm a little unclear on how they're supposed to help
[21:08] * magicaltrout is getting 502's and nothing else \o/
[21:08] cory_fu: My expectation would be that if you do nothing, you are operating against stable, and if you pass in a flag somewhere, you operate against some other version.
[21:08] cory_fu: it would be cool to do some magic with imports, but I don't think that import time is the right time.
[21:08] connection time probably isn't right, either.
[21:10] petevg: Maybe tvansteenburgh has some insight for us here?
[21:12] * tvansteenburgh wakes up
[21:13] * tvansteenburgh reads scrollback
[21:15] petevg: cory_fu: https://github.com/juju/python-libjuju/issues/49. we can dynamically use the right facade version pretty easily. the trick will be handling changing api signatures in the object layer
[21:16] happy to discuss that at some point, but that point is not today :)
[21:16] okay... lazyPower my microbot containers are all paused... no idea why or what that is, answers on a postcard when you get bored
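On the client rebuild cory_fu describes at [20:59]-[21:00]: roughly the same steps as a shell sequence. This assumes a working Go toolchain with $GOPATH/bin on PATH, and that the commands are run from a python-libjuju checkout.

    go get github.com/juju/schemagen            # fetch and build the schema generator
    schemagen > juju/client/schemas.json        # regenerate the schema against the installed juju
    make client                                 # rebuild the generated client from the schema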
[21:16] tvansteenburgh: Cool. I'm going to head out to dinner soon, anyway. Thank you for the linky :-)
[21:16] sure thing
[21:29] magicaltrout Budgie^Smore - i think i found the issue with your x509 validations - https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238
[21:31] magicaltrout Budgie^Smore - if you could check your /etc/default/kubelet files for the presence of the tls flags + disabling anonymous auth on kubelet, that would be choice to validate its the root cause of the validation issues post 1.5.3 update
[21:33] i don't have cert issues :)
[21:33] not yet anyway
[21:34] http://pastebin.com/raw/gSWGEk1E
[21:34] any idea lazyPower \o/ :)
[21:34] even docker run -d dontrebootme/microbot:v1
[21:34] is failing on me
[21:34] which is weird cause I have other containers running fine
[21:35] whats this about ipv6
[21:36] and it cant seem to find any of the image layers
[21:36] i'm not sure how you got into this state but this is certainly fun magicaltrout
[21:36] lol
[21:36] this is just post install
[21:36] not without some effort it isnt
[21:37] how do you get docker to know about the meta of the image but have none of the data to back it up?
[21:37] i swear gov'ner other than installing the charms i've not touched it from base xenial
[21:40] yeah i'm not sure what to recommend here magicaltrout
[21:40] i've not seen this before. I suspect i'm missing pieces to the puzzle
[21:40] magicaltrout best i can suggest at this juncture would be to capture the model with a juju-crashdump report and post that for post analysis
[21:44] reboot managed to make things worse \o/
[21:44] ho ho, nothing better on a friday night
[21:45] oh nice lazyPower! let me look, sorry I have taken today as a chance to write some documentation
[21:48] you still there lazyPower? http://paste.ubuntu.com/24197670/ is the /etc/default/kubelet from one of the workers
[21:50] stormmore yeah looks like the tls flags + auth flags didn't make it in the upgrade
[21:50] Cynerva not sure if you're still here, its pretty late in the day. ^
[21:50] i filed https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/238 in reference to this.
[21:51] stormmore thanks for validating, we should have a patch or working fix for this soon
[21:51] the manual fix is pretty simple too, but i'd rather the charms take care of themselves and update that defaults file.
[21:51] so the "workaround" would be to add those flags?
[21:52] yeah
[21:52] yeah no problem, I think I have bought a little bit of time on that anyway. need to add 3 larger instances and remove 3
[21:57] lazyPower: the ipv6 stuff is cause flannel has an ipv6 address and dishes them out to the container eth adapters
[21:58] i removed /var/lib/docker which seems to have resolved the layer issues
[21:58] what is amazing though is
[21:59] nginx-ingress-controller runs fine on the same machine?!
[21:59] * magicaltrout needs hard liquor
[22:02] cory_fu: my first (probably naive) attempt at "make client" failed with this error: https://pastebin.canonical.com/182956/ (This is using a schema.json that I generated with schemagen).
[22:02] cory_fu: it's dinner time, though, so I'm going to do that, and then worry about things more on Monday.
[22:02] Have a happy weekend!
[22:02] oooh fsck
[22:02] you can't docker exec into microbot
[22:03] that explains that then
[22:03] petevg: Damnit! That was the error I fixed in the same code where I modified the Makefile target
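On the manual kubelet workaround discussed at [21:50]-[21:52]: a sketch of what it would look like. The flag names are real kubelet options; the certificate paths are placeholders, use whatever the charm actually laid down on the worker.

    # in /etc/default/kubelet, append to the existing KUBELET_OPTS line:
    #   --anonymous-auth=false \
    #   --client-ca-file=/srv/kubernetes/ca.crt \
    #   --tls-cert-file=/srv/kubernetes/server.crt \
    #   --tls-private-key-file=/srv/kubernetes/server.key
    sudo systemctl restart kubelet

The proper fix is the charm update tracked in issue #238 above; this is only the stopgap lazyPower mentions.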
[22:03] petevg: I really don't want to have to re-do that. It was a bit of a PITA
[22:04] cory_fu: darn! I will try doing some git magic on Monday. Maybe I can fin your change ...
[22:04] *find
[22:06] oooh blimey microbot is running
[22:06] i take it all back lazyPower
[22:06] its an amazing platform
[22:06] never questioned it!
[22:10] lazyPower: other question, trying to understand ingress stuff. Is the microbot supposed to route through the loadbalancer?
[22:11] or mbruzek you're on the hook as well :P
[22:24] ooh the loadbalancer is only for the masters?
[22:42] magicaltrout: no microbot is not going through the kubeapi-load-balancer, it goes through a kubernetes load balancer
[22:42] when you create ingress rules
[22:49] petevg: ah ha! It's in the branch bug/77-deploy-resources! You can easily see the changes I made in this commit: https://github.com/juju/python-libjuju/commit/ef59035b2596a1998615cb0ad3f73fe539531898
[22:49] Bug #77: Maybe drop you in the 'edit bug' page after adding a bug?
[22:49] cory_fu: yay! Thx.
[22:50] petevg: If you want, I could submit that commit by itself as a PR
[22:50] Probably worth doing
[22:51] cory_fu: sure. Thx.
[22:53] petevg: https://github.com/juju/python-libjuju/pull/92
[23:02] lazyPower, you don't have any toys to help devs assess the resource usage - cpu, mem - of their containers, do you?
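On the closing question about per-container cpu/mem usage: two era-appropriate options, sketched under the assumption that heapster is deployed for the kubectl route and that the client version supports kubectl top.

    docker stats --no-stream                # per-container cpu/mem, run directly on the worker
    kubectl top pods --namespace default    # cluster-wide view, needs heapster running
    kubectl top nodes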