Overview


This note describes how to set up BastionZero to control and log access to a Kubernetes cluster via kubectl. As shown in the figure, the launch server is established in front of the Kubernetes cluster and used as a proxy for kubectl commands.



The launch server has the BastionZero agent installed on it, and thus autodiscovers itself to BastionZero. (Learn more about autodiscovery.)


You may decide to run the launch server as a long-lived instance, or as part of your Terraform or CloudFormation deployment, where it spins up and autodiscovers itself to BastionZero whenever you spin up the cluster. Alternatively, you could use dynamic access targets to spin up an ephemeral launch server whenever a user wishes to access the cluster. (Learn more about dynamic access targets.)


Regardless of how you've decided to configure the launch server, the following configuration steps are required:

  1. Install the BastionZero agent on the launch server
  2. Install the kubectl CLI on the launch server
  3. Configure the launch server with kubectl access to the cluster
  4. Set up BastionZero policies for your users to access the launch server


Once configured and launched, a user Alice will connect to the cluster via kubectl as follows:

  1. Alice authenticates to BastionZero using your Identity Provider (IdP) and BastionZero's MFA
  2. Alice will attempt to connect to the launch server
    • BastionZero checks policy to ensure Alice has permission to connect to the launch server
    • If so, Alice connects to the launch server and is placed in a shell as the Linux user identified by the policy
  3. Alice runs kubectl commands from the launch server
  4. All of Alice's commands and actions are logged by BastionZero

Examples

Configure access permissions on the launch server for AWS EKS or unmanaged Kubernetes


For this example, we assume a Kubernetes deployment that uses RBAC to distinguish the privileges of different users.  We demonstrate how to set up Roles for an admin user and a monitor user. The admin will have access to all Kubernetes APIs while the monitor will only be able to get and list pods and pod logs.


To achieve isolation between these two Kubernetes Roles, we set up corresponding admin and monitor Linux accounts on the launch server as well.

Our launch server is named k8s-BastionLaunchServer and it is autodiscovered into the environment k8s-Bastion.


To set up the two Linux users on the k8s-BastionLaunchServer:

 - Login to the launch server as a sudo-er

 - Create two new users

        sudo adduser admin
        echo 'admin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/admin-user
        sudo adduser monitor

 - Install the kubectl CLI for each user, per the kubectl installation guide.

 - For EKS:

    - You also have to update the kubeconfig file for each user. You can do so with the following AWS CLI command, run as each user:

          aws eks update-kubeconfig --name {cluster_name} --region {cluster_region}

    - Note: You will also have to ensure that the IAM role attached to the instance has the necessary permissions to interact with the EKS cluster (e.g. "eks:DescribeCluster"). A combined sketch of these per-user steps follows this list.
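
As a minimal sketch of the per-user steps above, assuming an x86-64 Linux launch server and the upstream kubectl release download (the cluster name and region remain placeholders):

        # Install kubectl once, system-wide (per the kubectl installation guide)
        curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
        sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

        # For EKS: run as each Linux user (admin, then monitor) so the kubeconfig
        # is written to that user's ~/.kube/config
        aws eks update-kubeconfig --name {cluster_name} --region {cluster_region}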


Next, we set up Roles for the Kubernetes cluster. This is a sample monitor Role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: monitor
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]


Note this is not a ClusterRole, meaning that this role only applies to the namespace "default." If you want the monitor user to see more than a particular namespace, please consider setting up a ClusterRole.
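
As an illustrative sketch only (the name monitor-cluster does not appear elsewhere in this guide), a cluster-wide equivalent of the monitor Role would be declared as a ClusterRole, which has no namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitor-cluster
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]

A ClusterRole must then be referenced from a RoleBinding (to scope it to a single namespace) or a ClusterRoleBinding (to grant it cluster-wide).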


This RoleBinding will bind a Kubernetes user to a Role:

apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding grants the user "monitor-launch-server" the permissions of
# the "monitor" Role in the "default" namespace. The Role must already exist.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: monitor-launch-server # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: monitor # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io


The RoleBinding "read-pods" references the "monitor" Kubernetes Role we created in the block above (see the "roleRef" section), and makes the user "monitor-launch-server" a subject of that binding (see "subjects").
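
After applying the Role and RoleBinding, you can sanity-check the mapping with kubectl's built-in authorization check, run from a kubeconfig that is allowed to impersonate users (the expected answers are shown as comments, not captured from a live cluster):

        kubectl auth can-i list pods -n default --as=monitor-launch-server     # expect: yes
        kubectl auth can-i delete pods -n default --as=monitor-launch-server   # expect: no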


For an unmanaged Kubernetes cluster:

Now we create the bearer tokens that map the API request to the Role:

cat token.csv
<TOKENSTRINGADMIN>,admin-launch-server,100,"admin,monitor"
<TOKENSTRINGMONITOR01>,monitor-launch-server,500,monitor

When we start our cluster, we make sure to point the API server at the token file; see the static token file documentation:

 https://kubernetes.io/docs/reference/access-authn-authz/authentication/#static-token-file
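
For example, on a cluster where you control the API server flags (the file path below is a placeholder), the token file is supplied with --token-auth-file:

        # other kube-apiserver flags omitted; the token file cannot be changed without restarting the API server
        kube-apiserver --token-auth-file=/etc/kubernetes/token.csv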


Finally, we set the token for each launch server user by running the following as that user (the credential name is the Kubernetes username from token.csv):

        kubectl config set-credentials <kube_username> --token=<TOKENSTRING>
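
For instance, for the monitor Linux user, using the username and token from the token.csv above (the set-context step is only needed if the current context does not already point at this credential):

        kubectl config set-credentials monitor-launch-server --token=<TOKENSTRINGMONITOR01>
        kubectl config set-context --current --user=monitor-launch-server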


For an EKS Managed Kubernetes cluster:

First, ensure that your AWS users have access to the Kubernetes cluster. To do this, follow the instructions at https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html under the section "To add an IAM user or role to an Amazon EKS cluster". Ensure that you map your AWS IAM users or IAM roles to the Kubernetes username used in the RoleBinding. Below is an example of mapping an AWS IAM "role_arn" to our RoleBinding's Kubernetes username "monitor-launch-server". You can edit the aws-auth ConfigMap with the following command:

        kubectl edit configmap aws-auth -n kube-system


mapRoles: |
  ...
  - groups:
      - system:authenticated
    rolearn: {role_arn}
    username: monitor-launch-server
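
If you use eksctl, the same mapping can also be added without hand-editing the ConfigMap. A sketch, assuming eksctl is installed and reusing the placeholders from above:

        eksctl create iamidentitymapping \
          --cluster {cluster_name} --region {cluster_region} \
          --arn {role_arn} \
          --username monitor-launch-server \
          --group system:authenticated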


More details on RBAC for Kubernetes:

Kubernetes RBAC: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ 

Using kubeconfig files: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/


Setting up access permissions to a launch server using GCP GKE


The first step is to create your cluster using the following command:

gcloud container clusters create demo-cluster


The next step is to set up and map your GKE users to Kubernetes users. Say you want three different types of users: owner, editor, and viewer. Then you simply create those three Linux accounts on your launch server. Use the following:

 - Login to the launch server as a sudo-er

 - Create three Linux users

        sudo adduser owner
        sudo adduser editor
        sudo adduser viewer

 - Install the kubectl CLI for each user, per the kubectl installation guide.

 - For each user, update your kubeconfig using:

        gcloud container clusters get-credentials demo-cluster


These users will be the Linux users that are assigned to your IdP users when authoring BastionZero policies.


GKE has predefined IAM roles that map to Kubernetes roles (see the GKE documentation). If you are using those, no further action is needed. The Linux users that were created above will map directly to one of the predefined IAM roles.
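
For reference, a predefined role is granted at the project level rather than inside the cluster. A sketch, using the demo-user@acme.co account from the RBAC example below, the Kubernetes Engine Viewer role for the viewer user, and a placeholder project ID:

        gcloud projects add-iam-policy-binding {project_id} \
          --member=user:demo-user@acme.co \
          --role=roles/container.viewer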

Otherwise, if you are using Kubernetes RBAC to assign users to roles, you must create Roles (or ClusterRoles) and RoleBindings to map each user into the appropriate Kubernetes role. In this example we will use the name 'monitor' as the Kubernetes role. Please note that in the previous step you must create a Linux user called 'monitor', and that Linux username is the name that must be used in the BastionZero policy. You can find more information in the GCP understanding roles guide. The sample below maps demo-user@acme.co to a Kubernetes monitor role:


apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: monitor
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "edit", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This RoleBinding grants the user "demo-user@acme.co" the permissions of the
# "monitor" Role in the "default" namespace. The Role must already exist there.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: demo-user@acme.co # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role #this must be Role or ClusterRole
  name: monitor # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
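
A short usage sketch (the file name monitor-rbac.yaml is just an example): save the Role and RoleBinding above to a file, apply it, and confirm the objects were created:

        kubectl apply -f monitor-rbac.yaml
        kubectl get role,rolebinding -n default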


At the completion of this step we have either:

  • Set up the launch server with the default GCP IAM mapping to owner, editor, & viewer.

  • Set up the launch server to use Roles (or ClusterRoles) and RoleBindings. In our example we have the k8s role monitor, which must also be a Linux user on our launch server.


Setting up BastionZero Policies

Next, depending on your use case, we set up BastionZero policies with either the admin & monitor roles per the unmanaged Kubernetes and EKS example, or the owner, editor, viewer, or monitor role as in the GKE example.


The policies let you authorize, via your IdP, which users and groups can connect to the launch server, and which Linux user they will connect as.


In this first example we generate a policy to place IdP users on the launch server as the monitor user. We may want the entire SRE team to be able to log in to the cluster as monitor; the list of users in our SRE team is maintained in our IdP in a directory group called SRE_ReadOnly.

Recall that the launch server was autodiscovered into the environment k8s-Bastion. 


Note that we could have written this policy in terms of a specific target (i.e. our launch server k8s-BastionLaunchServer), rather than a specific environment (i.e. the environment k8s-Bastion). In this case, we would have set the resource type to "targets" and written the policy in terms of k8s-BastionLaunchServer.


We now create our admin policy, which is restricted to two specific users (rather than a group of users like SRE_ReadOnly) and is written against the same environment, but this time using the admin user:




If using GCP GKE, this policy could also be written against any of the pre-defined users: owner, editor, or viewer.


These policies can also be created programmatically through the BastionZero APIs.


Summary

We have set up a Kubernetes launch server, configured the Linux users with kubectl permissions to the cluster, and set up BastionZero policies that allow your users to connect to the launch server.

Users access the cluster by logging into BastionZero, connecting to the launch server, and then running kubectl commands from the launch server. All commands are logged and associated with the IdP user, the launch-server user, and the date.