[17:14] <R_P_S> hey, so I've got a cluster that I created a few days ago with juju, and now all juju commands are simply hanging.  I thought maybe I got auto logged out?  (using admin user).  But every juju command I've tried just hangs
[17:15] <R_P_S> juju had created a k8s cluster, and kubectl appears to be working, but juju is completely non-responsive
[18:07] <jamesbenson> how do you issue the post-processing steps?
[18:25] <jamesbenson> for k8s conjure-up
[18:35] <R_P_S> that is so weird... juju was hanging on every command, including status, for hours there... but it's suddenly come back.
[18:36] <R_P_S> I'm not sure I like the juju command being randomly unavailable for hours on end.  Any idea how I would troubleshoot this?  the juju client literally hung for hours on every command until I noticed it was working again
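(For reference, a first pass at troubleshooting a client that hangs like this, assuming a juju 2.x CLI — these are standard flags, nothing cluster-specific:)
```
# Re-run the hung command with --debug to watch the API dial attempts;
# a hang usually shows the client retrying the controller's address.
juju status --debug

# List known controllers and their cached API endpoints.
juju controllers

# Check the controller model itself, in case its machine is wedged.
juju status -m controller
```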
[18:43] <rick_h> juju show in just over 15!!!!!!
[18:54] <rick_h> https://www.youtube.com/watch?v=Kq1sgJs9miU for those that want to watch the stream
[18:55] <rick_h> https://hangouts.google.com/hangouts/_/xgsuvuvuyrcpnk3hkqgz4u7opee for those that want to hop in and chat
[20:07] <R_P_S> I'm trying to setup one of my coworkers on my conjure-up juju cluster.  ME: juju add-user <username>, juju grant superuser  HIM: juju register <stuff>, juju switch <controller>, juju add-ssh-key "$(cat ~/.local/share/juju/ssh/juju_id_rsa.pub)", juju ssh kubernetes-master/0   ERROR permissions denied (unauthorized access)
[20:08] <bdx> R_P_S: `juju grant <username> admin|read|write <model-name>`
[20:09] <bdx> `juju grant superuser` grants controller acls
[20:09] <R_P_S> superuser doesn't have model access?
[20:09] <bdx> R_P_S: superuser is a role
[20:09] <R_P_S> if he's superuser, could he technically grant himself model access?
[20:09] <bdx> nah
[20:09] <bdx> you would think
[20:10] <bdx> superuser is just controller level
[20:10] <bdx> admin|read|write are model level
[20:10] <R_P_S> so if you create a new model, the admin user has to go in and grant admin access to the new model to each user already in the system?
[20:10] <bdx> yeah
[20:11] <bdx> because the users aren't added to the model
[20:11] <bdx> they just exist at the controller level
[20:11] <bdx> so you must explicitly grant them the access level you want them to have as the model owner or as a model admin
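(To summarize the flow bdx is describing, as a sketch with placeholder names — `bob` and `mymodel` are stand-ins, not anything from this cluster:)
```
# Controller-level: create the user and give the controller role.
juju add-user bob              # prints a 'juju register' token to hand to bob
juju grant bob superuser       # controller ACL only; no model access implied

# Model-level: grant access per model, explicitly.
juju grant bob admin mymodel

# Verify: show-model lists users and their access levels.
juju show-model mymodel
```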
[20:12] <R_P_S> ah, I just listed model users
[20:14] <R_P_S> I was watching the auth.log, and my coworker wasn't even hitting it... so something definitely felt off.  In other news, I got to watch all the attack attempts because ssh was given 0.0.0.0/0 ingress ACLs >.>
[20:36] <R_P_S> So I'm running through a quick demo with my coworker on what should be a vanilla conjure-up cluster
[20:36] <R_P_S> https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/
[20:36] <R_P_S> and we get to step 4 - exposing the service... and the external-ip gets stuck at <pending>
[20:37] <R_P_S> the tutorial mentions that minikube gets stuck at <pending>, but doesn't explain why that happens
[20:39] <R_P_S> so despite being "exposed" there's no actual external access since it's only available via IPs that are tied to the flannel networking layer
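(Step 4 of that tutorial is roughly the following; `hello-world` is the deployment name the k8s docs use. Without a working cloud-provider integration, EXTERNAL-IP never leaves <pending>:)
```
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service

# EXTERNAL-IP shows <pending> until the cloud provider provisions an LB.
kubectl get services my-service

# The Events section here often says why no load balancer was created.
kubectl describe services my-service
```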
[20:46] <bdx> R_P_S: is that subnet its deployed to configured to auto assign public ips?
[20:46] <bdx> and also using the igw and correct routing table(s)?
[20:46] <R_P_S> well, it's in ec2-classic... the vanilla install doesn't even stick it into a VPC
[20:47] <bdx> so its deployed in a vpc, or in default ec2-classic?
[20:48] <R_P_S> ec2-classic
[20:48] <R_P_S> which always assigns public IPs by default
[20:49] <R_P_S> each of these instances does have a public IP, and juju status shows each instance with its actual public IP
[20:58] <R_P_S> and as for ec2-classic... the concepts of igw and routing tables are completely hidden.
[21:09] <elmaciej> Hi! Just want to ask - do you guys have any bundle for openstack without ceph, or do I have to build it manually from charms?
[22:15] <R_P_S> What are the S3 buckets used for when you select helm in a conjure-up?  I just realized that the IAM credentials I provided were never given S3 access... but it never reported a failure at any point, since the conjure-up succeeded.
[22:16] <R_P_S> so of course the buckets don't exist... do I need to rebuild the entire cluster?  or can I just modify the IAM policy now, and the system will pick it up and get into the expected state?
[22:37] <stokachu> R_P_S: https://docs.ubuntu.com/conjure-up/2.4.0/en/cni/k8s-and-aws
[22:41] <R_P_S> stokachu: that page pretty much mirrors the link I was following: https://kubernetes.io/docs/tasks/access-application-cluster/load-balance-access-application-cluster/ except that it actually talks about what happens in AWS
[22:42] <stokachu> Heh ok
[22:43] <R_P_S> but no ELB was ever created.  My policies are "effect allow, action elasticloadbalancing:*, resource *" and "effect allow, action ec2:*, resource *"
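(In IAM JSON terms, the two statements as described would look like this sketch:)
```
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "elasticloadbalancing:*", "Resource": "*" },
    { "Effect": "Allow", "Action": "ec2:*", "Resource": "*" }
  ]
}
```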
[22:43] <R_P_S> that's cool that it's supposed to make an ELB... but it's strange that no ELB was ever created :(
[22:44] <stokachu> Well the docs state where to look for the error logs
[22:45] <stokachu> So that's probably where I would start
[22:46] <R_P_S> hrmm... I don't have any permissions for EBS... but I see the hint for checking logs on kubernetes master at the bottom of your page
[22:50] <R_P_S> EBS should be contained within EC2:* though... I'll have to check that... but that shouldn't be blocking anything now anyways since I'm not doing anything with volumes yet
[22:51] <R_P_S> yup create volume and attach volume are covered by ec2:*
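(A sketch of that log check, assuming the CDK layout where the control-plane components run as snaps — the exact service name is an assumption, not something from the docs page:)
```
juju ssh kubernetes-master/0

# kube-controller-manager is the component that calls out to AWS to
# create ELBs, so a permissions failure should surface in its log.
# (snap service name assumed from a typical CDK install)
sudo journalctl -u snap.kube-controller-manager.daemon | grep -i loadbalancer
```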
[23:08] <R_P_S> weird, haven't found anything in the logs... but in ~/.cache/conjure-up/canonical-kubernetes/steps/04_enable-cni/ec2, there's a pair of files called policy-master and policy-worker that specify a pile of access including ec2, elasticloadbalancing, route53, s3, etc.
[23:08] <R_P_S> hmm... it looks like conjure-up needs to make AWS IAM roles?  It sure doesn't have any permissions to do that...
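(If those policy documents just need to be attached to instance roles, applying them after the fact might look like this aws-cli sketch — the role and policy names here are hypothetical, not what conjure-up actually uses:)
```
# Attach the generated policy documents to the instance roles.
# (role/policy names are placeholders; check what conjure-up expects)
aws iam put-role-policy --role-name k8s-master \
    --policy-name conjure-up-master \
    --policy-document file://policy-master
aws iam put-role-policy --role-name k8s-worker \
    --policy-name conjure-up-worker \
    --policy-document file://policy-worker
```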