Getting Started with Kubernetes in Thalassa Cloud
Welcome to Kubernetes on Thalassa Cloud. This guide will help you get started with your first Kubernetes cluster, whether you’re new to Kubernetes or new to Thalassa Cloud. By the end of this guide, you’ll have a running cluster and understand the basics of how to interact with it.
Understanding Kubernetes
Kubernetes is an open-source platform for managing containerized applications. Think of it as an operating system for your cloud infrastructure that handles deployment, scaling, and management of applications automatically. When you deploy an application to Kubernetes, you tell it what you want (for example, “run three copies of my web application”), and Kubernetes ensures that state is maintained, even if individual components fail.
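For example, once you have a cluster and kubectl access (both covered later in this guide), a single command is enough to declare that desired state; the deployment name and the nginx image below are placeholders for your own application:
# Ask Kubernetes to run three replicas of a container image
kubectl create deployment web --image=nginx --replicas=3
# Kubernetes keeps recreating pods until three replicas are running, even after failures
kubectl get deployment web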
In Thalassa Cloud, we provide fully managed Kubernetes clusters. This means we handle the control plane—the brain of your Kubernetes cluster—including updates, high availability, and maintenance. You focus on deploying and running your applications while we ensure the infrastructure remains stable and secure.
Prerequisites
Before creating your first cluster, you’ll need a few things in place. First, ensure you have access to a Thalassa Cloud organisation. If you don’t have one yet, contact your administrator or sign up for an account through the Thalassa Cloud Console.
You’ll also need a Virtual Private Cloud (VPC) in the region where you want to deploy your cluster. A VPC provides network isolation for your resources. If you haven’t created one yet, you can do so through the Console by navigating to Networking and creating a new VPC. Within that VPC, create at least one subnet. The subnet defines the IP address range for your cluster nodes.
Finally, make sure you have the necessary permissions to create Kubernetes clusters. Your organisation administrator can grant these permissions through the IAM system.
Creating Your First Cluster
Creating a Kubernetes cluster in Thalassa Cloud is straightforward. You can do this through the web console or using the command-line interface. The console provides a guided experience, while the CLI offers automation-friendly workflows.
Using the Console
Step 1: Navigate to Kubernetes in the Console
Open the Thalassa Cloud Console and go to the “Kubernetes” section in the top navigation menu.
Step 2: Start Creating Your Cluster
Click the Create Cluster button to begin the cluster setup process.
Step 3: Enter Basic Cluster Information
- Give your cluster a descriptive name (e.g., production-cluster or development-k8s).
- Select the region where the cluster should run.
- Choose the Kubernetes version. The latest stable version is recommended.
Step 4: Configure Networking
- Select the VPC and subnet you prepared earlier.
- Decide whether the cluster API should be public (accessible from the internet) or private (accessible only from your VPC):
- Public: Convenient for development, accessible from anywhere.
- Private: More secure, restricts access to your network only.
Step 5: Review and Create the Cluster
Review your input and click Create Cluster. Provisioning typically takes 5–15 minutes. You can monitor progress in the console while Thalassa Cloud sets up the control plane and networking.
Step 6: Add a Node Pool
- After the cluster is created, navigate to the Node Pools page.
- Click to create a new node pool.
- Enter a name, select the availability zone, pick an instance type (node size), and set the desired number of nodes.
- Click Deploy. The nodes will be provisioned and automatically joined to your cluster.
Configuring Cluster Access
Once your cluster is provisioned, you need to configure access from your local machine. Kubernetes uses a configuration file called kubeconfig that contains connection details and authentication credentials. Thalassa Cloud provides several ways to obtain and use this configuration.
Using tcloud Connect
The easiest way to connect to your cluster is using the tcloud kubernetes connect command. This command fetches your cluster’s kubeconfig, configures your shell environment, and sets up authentication automatically. After running this command, you can use kubectl immediately without additional configuration.
# Connect to a specific cluster
tcloud kubernetes connect <cluster-id>
# Or use interactive selection
tcloud kubernetes connect
The connect command handles all the complexity of authentication and context switching. When you’re done working with the cluster, you can disconnect or switch to another cluster context.
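If you work with several clusters, kubectl itself can list and switch between the contexts that tcloud sets up; the context name below is a placeholder for one from your own configuration:
# List all contexts in your kubeconfig and show which one is active
kubectl config get-contexts
kubectl config current-context
# Switch to a different cluster context
kubectl config use-context <context-name>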
Manual kubeconfig Setup
If you prefer managing kubeconfig files manually, you can retrieve the configuration using tcloud kubernetes kubeconfig. This command outputs the kubeconfig content, which you can save to a file and reference using the KUBECONFIG environment variable.
# Get kubeconfig and save to file
tcloud kubernetes kubeconfig <cluster-id> > ~/.kube/config-cluster
# Set KUBECONFIG environment variable
export KUBECONFIG=~/.kube/config-cluster
# Verify access
kubectl get nodes
This approach gives you more control over multiple cluster configurations. You can maintain separate kubeconfig files for different clusters and switch between them by changing the KUBECONFIG environment variable.
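As a sketch of that workflow, kubectl merges every file listed in KUBECONFIG (colon-separated on Linux and macOS); the file names below are placeholders for kubeconfig files you have saved:
# Combine two kubeconfig files into one merged view
export KUBECONFIG=~/.kube/config-cluster-a:~/.kube/config-cluster-b
# Contexts from both files are now available for switching
kubectl config get-contexts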
Installing kubectl
If you don’t have kubectl installed yet, you’ll need it to interact with your cluster. The Kubernetes command-line tool is available for all major operating systems. On macOS, you can install it using Homebrew:
brew install kubectl
On Linux, you can download the binary directly from the Kubernetes release page or use your distribution’s package manager. Windows users can download the executable or use package managers like Chocolatey.
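As a concrete example of the Linux route, the upstream Kubernetes documentation suggests downloading the binary directly; the linux/amd64 path below is an assumption, so adjust it for arm64 if needed:
# Download the latest stable kubectl release for linux/amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Install the binary into your PATH
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl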
Verify your installation by running kubectl version --client. For complete installation instructions, see the official Kubernetes documentation.
Verifying Your Cluster
After configuring access, verify that everything works correctly. Start by checking that you can see your cluster nodes:
kubectl get nodes
This command lists all nodes in your cluster. You should see the nodes from your node pool, each showing a status of “Ready”. The output includes information about the node’s Kubernetes version, internal IP address, and resource capacity.
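If a node is not Ready, or you simply want more detail, the wide output and a node description are useful; replace the node name placeholder with one from your own listing:
# Show extra columns such as internal IP, OS image, and container runtime
kubectl get nodes -o wide
# Inspect a single node in depth (conditions, capacity, allocated resources)
kubectl describe node <node-name>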
Next, check the cluster information:
kubectl cluster-info
This displays the addresses of your cluster’s control plane components, confirming that you can reach the API server.
You can also inspect the system components that Kubernetes runs automatically. These include networking plugins, DNS servers, and monitoring agents:
kubectl get pods --all-namespaces
This shows all pods running in your cluster, including system pods in namespaces like kube-system. Seeing these pods running indicates that your cluster is healthy and operational.
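If any system pod is not Running or Completed, the following commands help narrow down the cause; the pod name is a placeholder:
# Recent cluster events across all namespaces, newest last
kubectl get events --all-namespaces --sort-by=.lastTimestamp
# Detailed status and events for a specific system pod
kubectl describe pod <pod-name> -n kube-system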
Understanding Cluster Components
Now that your cluster is running, it’s helpful to understand what’s happening behind the scenes. Kubernetes consists of several components that work together to manage your applications.
The control plane is the brain of your cluster. It includes the API server (which you interact with through kubectl), the scheduler (which decides where to run pods), and controllers that maintain desired state. In Thalassa Cloud, the control plane is fully managed, so you don’t need to worry about these components.
Your applications run on worker nodes. These are the virtual machines you configured in your node pool. Each node runs a container runtime (like containerd), the kubelet (which communicates with the control plane), and networking components. When you deploy an application, Kubernetes schedules pods onto these nodes.
Networking connects everything together. Thalassa Cloud uses Cilium as the default Container Network Interface (CNI), which provides secure, high-performance networking between pods. Each pod gets its own IP address, and services provide stable endpoints for accessing your applications.
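As a small illustration of that model (assuming the example web deployment from earlier, or any deployment of your own), a Service gives a set of pods one stable address:
# Expose a deployment inside the cluster on port 80 (ClusterIP is the default Service type)
kubectl expose deployment web --port=80
# The Service keeps the same cluster IP and DNS name even as pods come and go
kubectl get service web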
Storage is handled through the Container Storage Interface (CSI). When your applications need persistent storage, Thalassa Cloud provisions block storage volumes that persist even if pods are recreated.
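You can check which storage classes your cluster offers before requesting a volume; the classes you see are specific to Thalassa Cloud and may differ from other platforms:
# List the storage classes provided by the CSI driver
kubectl get storageclass
# Any persistent volume claims bound against those classes
kubectl get persistentvolumeclaims --all-namespaces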
Next Steps
With your cluster running and access configured, you’re ready to start deploying applications. The next guide, Deploying Applications, walks you through deploying your first application to Kubernetes.
As you continue learning, explore the other sections of the Kubernetes documentation. The Workloads section explains different types of applications you can run, while Networking covers how applications communicate. The Security section helps you secure your workloads, and Storage explains persistent data management.
Kubernetes Guides
Once you’re comfortable with the basics, explore our step-by-step guides for integrating popular Kubernetes tools and services with Thalassa Cloud.
For deeper understanding of Kubernetes concepts, the official Kubernetes documentation provides explanations of all Kubernetes components and features. Many concepts apply directly to Thalassa Cloud, though we handle the infrastructure management for you.
Common Questions
- How do I upgrade my cluster? Thalassa Cloud manages control plane upgrades automatically. For node upgrades, you can trigger them through the console or CLI. See the release notes for information about available versions.
- Can I resize my cluster? Yes, you can add or remove nodes from your node pools at any time. You can also create additional node pools with different instance types for workloads with specific requirements.
- What happens if a node fails? Thalassa Cloud automatically detects node failures and replaces them. Your applications continue running on other nodes, and Kubernetes reschedules any pods that were on the failed node.
- How do I monitor my cluster? Thalassa Cloud provides built-in monitoring through the metrics server; a quick check is shown below this list. You can also integrate external monitoring solutions. Check the monitoring documentation for details. We are currently launching our Managed Prometheus Service as part of our Observability product suite, which will integrate with our Kubernetes Service.
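As a quick check that the built-in metrics pipeline is working, the kubectl top commands read directly from the metrics server:
# Current CPU and memory usage per node and per pod
kubectl top nodes
kubectl top pods --all-namespaces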
If you encounter issues or have questions, consult the troubleshooting guides or contact support through the Thalassa Cloud Console.