/srv/irclogs.ubuntu.com/2017/11/22/#juju.txt

=== nevermind is now known as Guest66778
=== frankban|afk is now known as frankban
17:14 <R_P_S> hey, so I've got a cluster that I created a few days ago with juju, and now all juju commands are simply hanging.  I thought maybe I got auto logged out?  (using admin user).  But every juju command I've tried just hangs
17:15 <R_P_S> juju had created a k8s cluster, and kubectl appears to be working, but juju is completely non-responsive
=== rogpeppe1 is now known as rogpeppe
18:07 <jamesbenson> how do you issue the post-processing steps?
=== frankban is now known as frankban|afk
18:25 <jamesbenson> for k8s conjure-up
18:35 <R_P_S> that is so weird... so juju was hanging for any commands including status for hours there... but it's suddenly come back.
18:36 <R_P_S> I'm not sure I like the juju command being randomly unavailable for hours on end.  Any idea how I would troubleshoot this?  The juju client literally just hung for hours with any command until I noticed it started working again
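
A minimal sketch of where one could start troubleshooting a hanging juju client, assuming an AWS-backed controller like the one in this log; the commands and flags are standard juju 2.x, though the exact log path may differ on other setups:

    # Re-run a command with debug output to see where it stalls
    # (DNS, TLS, or the controller API connection itself).
    juju status --debug

    # Look at the controller model directly; a wedged controller machine
    # usually shows up here or in its logs.
    juju status -m controller
    juju debug-log -m controller --replay --no-tail | tail -n 50

    # If the client can't reach the API at all, ssh to the controller
    # instance (via juju, or directly via its public IP from the cloud
    # console if juju ssh also hangs) and inspect jujud's own log.
    juju ssh -m controller 0
    sudo tail -n 100 /var/log/juju/machine-0.log
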
18:43 <rick_h> juju show in just over 15 minutes!!!!!!
18:54 <rick_h> https://www.youtube.com/watch?v=Kq1sgJs9miU for those that want to watch the stream
18:55 <rick_h> https://hangouts.google.com/hangouts/_/xgsuvuvuyrcpnk3hkqgz4u7opee for those that want to hop in and chat
20:07 <R_P_S> I'm trying to set up one of my coworkers on my conjure-up juju cluster.  ME: juju add-user <username>, juju grant superuser  HIM: juju register <stuff>, juju switch <controller>, juju add-ssh-key "$(cat ~/.local/share/juju/ssh/juju_id_rsa.pub)", juju ssh kubernetes-master/0   ERROR permission denied (unauthorized access)
20:08 <bdx> R_P_S: `juju grant <username> admin|read|write <model-name>`
20:09 <bdx> `juju grant superuser` grants controller ACLs
20:09 <R_P_S> superuser doesn't have model access?
20:09 <bdx> R_P_S: superuser is a role
20:09 <R_P_S> if he's superuser, could he technically grant himself model access?
20:09 <bdx> nah
20:09 <bdx> you would think
20:10 <bdx> superuser is just controller level
20:10 <bdx> admin|read|write are model level
20:10 <R_P_S> so if you create a new model, the admin user has to go in and grant admin access to the new model to each user already in the system?
20:10 <bdx> yeah
20:11 <bdx> because the users aren't added to the model
20:11 <bdx> they just exist at the controller level
20:11 <bdx> so you must explicitly grant them the access level you want them to have, as the model owner or as a model admin
20:12 <R_P_S> ah, I just listed model users
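
Putting the exchange together, the two-sided workflow looks roughly like the sketch below; <coworker>, <model-name>, and <controller> are placeholders, and because the superuser grant only covers controller-level operations, the separate model-level grant is still required:

    # On the admin side:
    juju add-user <coworker>                    # prints a one-time 'juju register' token
    juju grant <coworker> superuser             # controller-level role only
    juju grant <coworker> admin <model-name>    # model access is granted separately

    # On the coworker's side:
    juju register <token-from-add-user>
    juju switch <controller>:<model-name>
    juju add-ssh-key "$(cat ~/.local/share/juju/ssh/juju_id_rsa.pub)"
    juju ssh kubernetes-master/0
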
20:14 <R_P_S> I was watching the auth.log, and my coworker wasn't even hitting it... so something definitely felt off.  In other news, I got to watch all the attack attempts because ssh was given 0.0.0.0/0 ingress ACLs >.>
20:36 <R_P_S> So I'm running through a quick demo with my coworker on what should be a vanilla conjure-up cluster
20:36 <R_P_S> https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/
20:36 <R_P_S> and we get to step 4 - exposing the service... and the external-ip gets stuck at <pending>
20:37 <R_P_S> the tutorial mentions "minikube" getting stuck at pending, but doesn't provide any reasoning as to why this happens
20:39 <R_P_S> so despite being "exposed" there's no actual external access, since it's only available via IPs that are tied to the flannel networking layer
20:46 <bdx> R_P_S: is the subnet it's deployed to configured to auto-assign public IPs?
20:46 <bdx> and also using the IGW and correct routing table(s)?
20:46 <R_P_S> well, it's in ec2-classic... the vanilla install doesn't even stick it into a VPC
20:47 <bdx> so it's deployed in a VPC, or in default ec2-classic?
20:48 <R_P_S> ec2-classic
20:48 <R_P_S> which always assigns public IPs by default
20:49 <R_P_S> each of these instances does have a public IP, and juju status shows each instance with its actual public IP
20:58 <R_P_S> and as for ec2-classic... the concepts of IGWs and routing tables are completely hidden.
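
For reference, a rough reproduction of the stuck step and one possible workaround; the deployment and service names are illustrative, not necessarily the tutorial's, and the key point is that a LoadBalancer service never gets an external address unless the cloud-provider integration can actually provision an AWS ELB:

    # Step 4 of the tutorial: expose the deployment as a LoadBalancer service.
    kubectl expose deployment hello-world --type=LoadBalancer --name=example-service

    # With no cloud load balancer being provisioned, EXTERNAL-IP stays <pending>.
    kubectl get service example-service

    # A NodePort service sidesteps the cloud load balancer entirely; the app is
    # then reachable on <instance-public-ip>:<node-port>, provided the security
    # group allows that port.
    kubectl expose deployment hello-world --type=NodePort --name=example-service-np
    kubectl get service example-service-np
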
21:09 <elmaciej> Hi! Just want to ask - do you guys have any bundle for OpenStack without Ceph, or do I have to build it manually from charms?
22:15 <R_P_S> What are the S3 buckets used for when you select helm in a conjure-up?  I just realized that the IAM credentials I provided were never given S3 access... but it never at any point reported a failure, since the conjure-up succeeded.
22:16 <R_P_S> so of course the buckets don't exist... do I need to rebuild the entire cluster?  or can I just modify the IAM policy now, and the system will pick it up and get into the expected state?
22:37 <stokachu> R_P_S: https://docs.ubuntu.com/conjure-up/2.4.0/en/cni/k8s-and-aws
22:41 <R_P_S> stokachu: that page pretty much mirrors the link I was following: https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/ except that it actually talks about what happens in AWS
22:42 <stokachu> Heh ok
22:43 <R_P_S> but no ELB was ever created.  My policies are "effect allow, action elasticloadbalancing:*, resource *" and "effect allow, action ec2:*, resource *"
22:43 <R_P_S> that's cool that it's supposed to make an ELB... but it's strange that no ELB was ever created :(
22:44 <stokachu> Well the docs state where to look for the error logs
22:45 <stokachu> So that's probably where I would start
22:46 <R_P_S> hrmm... I don't have any permissions for EBS... but I see the hint for checking logs on kubernetes-master at the bottom of your page
22:50 <R_P_S> EBS should be contained within ec2:* though... I'll have to check that... but that shouldn't be blocking anything now anyway, since I'm not doing anything with volumes yet
22:51 <R_P_S> yup, CreateVolume and AttachVolume are covered by ec2:*
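
Following that hint, the errors from a failed ELB creation generally show up in the controller manager on the master, which is the component that calls the AWS load-balancer API; the journalctl unit name below assumes the snap-packaged services deployed by the canonical-kubernetes bundle:

    # Hop onto the master and grep the controller manager's journal for
    # load-balancer related errors.
    juju ssh kubernetes-master/0
    sudo journalctl -u snap.kube-controller-manager.daemon | grep -iE 'elb|loadbalancer|aws'

    # Cloud-provider failures also surface as events on the stuck service.
    kubectl describe service <service-name>
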
23:08 <R_P_S> weird, I haven't found anything in the logs... but in ~/.cache/conjure-up/canonical-kubernetes/steps/04_enable-cni/ec2, there's a pair of files called policy-master and policy-worker that specify a pile of access, including ec2, elasticloadbalancing, route53, s3, etc.
23:08 <R_P_S> hum... it looks like conjure-up needs to make AWS roles?  It sure doesn't have any permissions to do that...
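
If that step really does expect those policy documents to be applied in IAM, one hypothetical way to do it by hand is with the AWS CLI; the role names below are placeholders, and whether conjure-up attaches the documents to roles, instance profiles, or the credential's own IAM user isn't clear from this log:

    cd ~/.cache/conjure-up/canonical-kubernetes/steps/04_enable-cni/ec2

    # Attach the generated documents as inline policies on existing roles
    # (placeholder role names; requires iam:PutRolePolicy on the caller).
    aws iam put-role-policy --role-name <k8s-master-role> \
        --policy-name k8s-master --policy-document file://policy-master
    aws iam put-role-policy --role-name <k8s-worker-role> \
        --policy-name k8s-worker --policy-document file://policy-worker
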
