
Installation Overview and Preparation Guide⚓︎

Expeto offers scalable and flexible solutions for deploying private and public 4G/5G networks within Kubernetes environments.

This guide provides step-by-step instructions to help you install and configure Expeto products effectively, whether you're managing a greenfield deployment or integrating it into an existing network. While the guide assumes advanced technical expertise, it is structured to accommodate the specific needs of Mobile Network Operators (MNOs), enterprise customers, and telco integrators.

Who Should Use This Guide?⚓︎

  • Mobile Network Operators (MNOs): MNOs deploy Expeto products as part of a broader ecosystem including xControl and xRouter to manage large-scale, multi-site networks for internal use or customer traffic. These users typically require advanced configurations for multi-tenancy, redundancy, and scalability, and are assumed to have extensive expertise in telecommunications and networking, such as BGP and IPX.

  • Enterprises Deploying public xCores: These users deploy xCore as a "router" for mobile traffic to egress onto private networks without the need for full telco infrastructure. They usually have enterprise networking experience and require a streamlined installation process leveraging Expeto's managed services.

  • Integrators and Enterprise networking experts using Private Radios: Specialists and advanced enterprise staff integrating with private radios (gNodeB/eNodeB) for customized network solutions. They possess advanced telco experience and expertise in configuring radio hardware, PLMN IDs, and complex networking interfaces.

This guide assumes that foundational decisions about deployment, such as PLMN IDs, radio hardware, and Kubernetes platform configurations, have been made. While Expeto uses a Product Intake Form internally to gather this information, this guide provides a comprehensive installation path even if that data hasn’t been formalized.

Overview of the Installation Process⚓︎

The installation process for Expeto products is structured into five main stages:

  1. Prerequisites:

    • Ensure Kubernetes readiness, install necessary plugins, and set up required tools like kubectl and helm.
    • On OpenShift/MicroShift, every command shown with kubectl can be performed with the oc command instead; in that case we recommend defining an alias, e.g. alias kubectl=oc.
  2. Add Expeto Helm Repositories:

    • Authenticate and retrieve the necessary Helm charts.
  3. Customize Configuration:

    • Create and configure the values.yaml file to define behavior and connectivity, including PLMN IDs, networking interfaces, and scaling requirements.
  4. Deploy:

    • Use Helm to install and monitor the deployment process.
  5. Validate and Optimize:

    • Verify the deployment, ensure proper functionality, and apply performance optimizations.

Prerequisites⚓︎

Before you begin the installation, ensure that your environment meets the following requirements. This section outlines system requirements, tools, and preparatory steps necessary for a smooth deployment process.

System Requirements⚓︎

Kubernetes Cluster⚓︎

  • Version: Kubernetes 1.23 or later.
  • Cluster Resources:

    • At least 3 worker nodes for redundancy.
    • Minimum of 4 CPU Cores and 8 GB memory per node for a total of at least 16 cores and 32 GB per cluster.
    • Persistent storage system available from all nodes configured for ReadWriteOnce volumes.

    For more information on resource sizing, see System Resource / Sizing Guide

  • Plugins/Operators/Features required:

    • DNS Resolver.
    • Cert Manager.
    • Storage plugin compatible with ReadWriteOnce volumes.
  • Node Kernel:

    • Linux kernel 5.13 or later, compatible with Kubernetes.
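As a quick sanity check, the kernel requirement can be verified on each node with a short shell snippet. This is a sketch only; the version parsing assumes a conventional x.y.z-style `uname -r` string:

```shell
# Verify the node kernel is 5.13 or later (run on each worker node).
required_major=5
required_minor=13
kernel="$(uname -r)"
major="${kernel%%.*}"            # e.g. "5" from "5.15.0-generic"
minor_rest="${kernel#*.}"
minor="${minor_rest%%.*}"        # e.g. "15"
if [ "$major" -gt "$required_major" ] || \
   { [ "$major" -eq "$required_major" ] && [ "$minor" -ge "$required_minor" ]; }; then
  echo "kernel $kernel: OK"
else
  echo "kernel $kernel: too old, 5.13+ required"
fi
```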

Tools and Utilities⚓︎

Local Tools⚓︎

  • kubectl: Command-line tool for interacting with Kubernetes clusters.
  • helm: Kubernetes package manager for managing Helm charts.

Kubernetes Cluster Tools⚓︎

Depending on your Kubernetes environment, the following tools must be configured to ensure a functional deployment. Adjust the steps based on whether you are using MicroK8s, OpenShift/MicroShift, or VMware Tanzu.

Important

Review the documentation below for your chosen Kubernetes platform to understand the common installation and configuration challenges:

DNS Resolver⚓︎

A DNS resolver provides name resolution for Kubernetes services, so that services are able to communicate with each other.

For MicroK8s, enable the DNS add-on:

sudo microk8s enable dns

OpenShift/MicroShift have DNS capabilities enabled by default.

For further configuration options, please refer to DNS Operator in OpenShift.

VMware Tanzu comes with integrated DNS resolution capabilities via CoreDNS.

If you require public service resolution, Tanzu supports synchronizing with external DNS providers via ExternalDNS.
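Whichever platform you use, in-cluster name resolution can be confirmed by resolving a built-in service from a throwaway pod. A sketch; the pod name and busybox image tag are illustrative:

```shell
# Resolve the kubernetes.default service from inside the cluster;
# the pod is removed automatically after the lookup completes.
kubectl run dns-check --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
```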

Cert Manager⚓︎

Cert Manager is required to manage TLS certificates for secure communication.

For MicroK8s, enable the cert-manager add-on:

sudo microk8s enable cert-manager

Please refer to Cert-manager documentation for configuring cert-manager with OpenShift.

Please refer to Tanzu Cert Manager Installation for installation steps with VMware Tanzu.
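Before proceeding, it is worth confirming that cert-manager is healthy. A sketch, assuming the default cert-manager namespace (the namespace may differ on OpenShift or Tanzu installs):

```shell
# All three cert-manager pods (controller, cainjector, webhook)
# should report a Running status.
kubectl get pods -n cert-manager
```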

Persistent Storage⚓︎

A storage class must be configured to support persistent volumes.

MicroK8s does not ship with a storage class plugin that spans multiple worker nodes. For demo purposes, the following can be used on single-node installs only. It will not work for multi-node installs because the storage is not replicated across nodes, causing Pods to lose data if they migrate between nodes. Other options are available for larger MicroK8s installs; see the Canonical documentation for examples and guidelines.

sudo microk8s enable hostpath-storage

OpenShift supports a wide variety of persistent storage options.

Read more at Understanding Persistent Storage in OpenShift.

See the following document for MicroShift: MicroShift Storage Configuration.

Tanzu also supports a variety of persistent storage options.

Read more at Using Persistent Storage in vSphere with Tanzu.
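On any of these platforms, you can confirm a usable default storage class by provisioning a throwaway PersistentVolumeClaim. This is a sketch; the claim name and size are illustrative:

```shell
# List available storage classes; one should be marked (default).
kubectl get storageclass

# Create a throwaway ReadWriteOnce claim against the default class.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-check
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# Expect STATUS Bound (a WaitForFirstConsumer provisioner will stay
# Pending until a pod consumes the claim).
kubectl get pvc storage-check
kubectl delete pvc storage-check
```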

Cluster Preparation⚓︎

  1. Get a copy of the kube-config file for your cluster, name it config, and place it in the .kube directory in your home directory. Be sure to set its permissions to 0600 (or the equivalent on non-Linux platforms).
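The step above can be sketched as follows; the KUBECONFIG_SRC path is a placeholder for wherever your cluster's kube-config file was delivered:

```shell
# Install the provided kube-config as ~/.kube/config with 0600 permissions.
# KUBECONFIG_SRC is a hypothetical placeholder path; adjust it to your file.
KUBECONFIG_SRC="${KUBECONFIG_SRC:-./cluster-kubeconfig.yaml}"

mkdir -p "${HOME}/.kube"
if [ -f "${KUBECONFIG_SRC}" ]; then
  cp "${KUBECONFIG_SRC}" "${HOME}/.kube/config"
  chmod 0600 "${HOME}/.kube/config"
  echo "installed ${KUBECONFIG_SRC} as ${HOME}/.kube/config"
else
  echo "kube-config not found at ${KUBECONFIG_SRC}" >&2
fi
```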

  2. Verify Kubernetes Access:

    • Ensure you can connect to your Kubernetes cluster using kubectl:

      kubectl get nodes
      

      Confirm all nodes are in a Ready state.

  3. Helm Installation:

    • Verify that Helm is installed and functioning:
      helm version
      
  4. Ulimits:

    The Expeto solution ships with DaemonSets/MachineConfigs that automatically configure kernel modules, buffer sizes, sysctl parameters, and ulimits on most platforms, but in some cases slight additional modifications may be recommended.

    For MicroK8s: edit the containerd environment configuration and increase the open-file limit (ulimit -n) to 1048576:

    echo 'ulimit -n 1048576 || true' >> '/var/snap/microk8s/current/args/containerd-env'
    
  5. Configure allowed NodePort range:

    The default installation depends on NodePorts and requires a wider range of ports than most Kubernetes deployments enable by default.

    Unless your install uses only Multus-based interfaces, make sure to allow node ports in the range 30000-38413 before installing the chart.

    For MicroK8s:

    echo '--service-node-port-range=1024-38413' >> /var/snap/microk8s/current/args/kube-apiserver
    

    Then, restart microk8s (sudo microk8s.stop && sudo microk8s.start)

    For OpenShift/MicroShift:

    oc patch network.config.openshift.io cluster --type=merge -p \
    '{
        "spec":
        { "serviceNodePortRange": "1025-38413" }
    }'
    

    Note that it might take several minutes for the new config to be applied.

    For VMware Tanzu: for a multi-master cluster, see the following doc for how to identify the leader: Leader identification

    For a single master cluster or once the cluster leader has been identified make the following change:

    echo '- "--service-node-port-range=1024-38413"' >> /var/vcap/jobs/kube-apiserver/config
    

    Then, restart the kube-apiserver
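On any platform, once the API server has restarted you can confirm the widened range took effect by creating a throwaway NodePort service below the old 30000 floor. A sketch; the service name and ports are illustrative:

```shell
# Port 2152 falls outside the stock 30000-32767 range, so this only
# succeeds if the new service-node-port-range has been applied.
kubectl create service nodeport range-check --tcp=8080:8080 --node-port=2152
kubectl get service range-check
kubectl delete service range-check
```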

  6. Advanced Networking:

    • Using LoadBalancers:

      Please refer to Services, Load Balancing, and Networking in Kubernetes documentation for detailed information regarding setting up load balancers and other advanced networking topics.

    • Using Multus for Advanced Networking:

      For platforms that support Multus it can either be installed separately or installed by setting multus.enabled to true during install.

      It's recommended to also install Whereabouts for IPAM at the same time.

      For installations on a platform that ships a Multus plugin, it is recommended to use the built-in version. For MicroK8s:

      sudo microk8s enable community
      sudo microk8s enable multus
      

      Please refer to Multus CNI documentation for configuring Multus and Whereabouts with OpenShift.

      Please refer to the VMware Tanzu documentation for configuring Multus with VMware Tanzu.

      For further information regarding installation and configuration of Multus on other platforms, please refer to Multus Quick-Start Guide or Multus Documentation.
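A quick way to confirm Multus is in place, regardless of how it was installed, is to check for its CRD and running pods:

```shell
# Multus registers a CRD for network attachment definitions; its
# presence plus a running multus pod indicates a working install.
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
kubectl get pods --all-namespaces | grep -i multus
```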

  7. Role-Based Access Control (RBAC):

    Note: If you plan to use Role-Based Access Control (RBAC) in your Kubernetes cluster, it must be enabled before deploying Expeto products.

    If RBAC is enabled after deployment, the cluster must be redeployed before it will function correctly.

    When the cluster is deployed, the helm chart will automatically configure the required roles, bindings, and permissions for RBAC.

    Refer to Using RBAC Authorization in Kubernetes documentation for further information.
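Before deploying, you can sketch a quick check that the RBAC API group is served and that your user can create the cluster-scoped objects the chart manages (the specific auth can-i query is illustrative):

```shell
# Confirm the RBAC API group is enabled on the cluster.
kubectl api-versions | grep rbac.authorization.k8s.io

# Optionally confirm your credentials can create the bindings
# the helm chart will configure.
kubectl auth can-i create clusterrolebindings
```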

Adding the Expeto Helm Repositories⚓︎

All Expeto deployments rely on Helm charts hosted in Expeto's private repositories. This step outlines how to configure access to these repositories, ensuring that the necessary charts and container images can be downloaded during installation.

Gather Deployment Credentials⚓︎

Before proceeding, ensure you have:

  1. Helm Repository Credentials: Provided by Expeto Support for accessing container images and Helm charts.

  2. Deployment-Specific Information:

    • PLMN IDs, RAN details, and other configuration specifics (if applicable).
    • Network interface and CIDR details for northbound (N2/N3/N9) and southbound (N6) interfaces.

Add Repositories⚓︎

  1. Use the provided credentials to add the Expeto Helm repositories to your local Helm configuration.

    helm repo add expeto-ngc https://repo.expeto.io/repository/ngc --username '<your_username>' --password '<your_password>'
    

    Replace <your_username> and <your_password> with the credentials provided by Expeto Support.

  2. Update your local Helm repository cache to ensure you have the latest chart versions:

    helm repo update
    

Verify Repository Access⚓︎

After adding the repositories, confirm that they are accessible and contain the required charts:

  1. List the added repositories:

    helm repo list
    
  2. Search for available charts to confirm connectivity:

helm search repo xcore
helm search repo xrouter
helm search repo xcontrol
helm search repo expeto-docs

These commands should return a list of available charts for each product in the Expeto repository.

Troubleshooting⚓︎

If you encounter issues adding the repositories or accessing charts:

  • Ensure your username and password are correct and match the credentials provided by Expeto Support.
  • Confirm that your machine has access to the internet and can reach https://repo.expeto.io.
  • Ensure that Helm is installed, functioning properly, and more recent than version 3.0.9 by running:

    helm version
    
  • Contact Expeto Support for assistance if the problem persists.

Customize Configuration⚓︎

The values.yaml file is a critical part of the Expeto xCore deployment process.

It defines parameters for the entire deployment, including networking, resource scaling, radio configuration and advanced features. This guide will help you customize the file to align with your deployment requirements.

Important

  • All changes to deployments must be made in the values.yaml file. This file serves as the source of truth for your Helm deployments. Any manual changes made directly to resources (e.g., via kubectl) will be overwritten when scaling occurs or when the values.yaml file is applied again.
  • For more details on applicable parameters in the values.yaml, see values.yaml Reference

To ensure consistency and prevent loss of changes, always update the values.yaml file and reapply it with Helm commands.

Make sure you store a copy of the values.yaml file in a secure place, this file acts as your disaster recovery and will allow you to rebuild the entire solution with minimal effort very rapidly.

Locate the values.yaml File⚓︎

To access the values.yaml file:

  1. View the default chart values from the expeto-ngc repository:

    helm show values expeto-ngc/xcore
    
    helm show values expeto-ngc/xrouter
    
    helm show values expeto-ngc/xcontrol
    

    These commands print each chart's default values; redirect the output to a file (for example, helm show values expeto-ngc/xcore > values.yaml) to use it as your starting point. Multiple examples are also available to demonstrate different setup options and possibilities for a variety of Kubernetes distributions.

  2. Pull examples from the expeto-ngc repository:

    export tmp_dir="$(mktemp -d)" && helm pull expeto-ngc/xcore --untar --untardir ${tmp_dir} && mv "${tmp_dir}/xcore/examples" ./ && rm -Rf "${tmp_dir}"
    
    export tmp_dir="$(mktemp -d)" && helm pull expeto-ngc/xrouter --untar --untardir ${tmp_dir} && mv "${tmp_dir}/xrouter/examples" ./ && rm -Rf "${tmp_dir}"
    

    This will download the chart, extract it to a temporary directory, and then move the examples directory to the current directory. Alternatively, the whole chart can simply be downloaded with "helm pull" and extracted.

Editing and Validating values.yaml⚓︎

  • Edit the File: Open values.yaml and update the sections relevant to your deployment. Reference the comments in the file for additional guidance.
  • Validate Changes: Use Helm to validate the configuration before applying it:
helm template expeto-ngc/xcore -f values.yaml > validated_output.yaml
helm template expeto-ngc/xrouter -f values.yaml > validated_output.yaml
helm template expeto-ngc/xcontrol -f values.yaml > validated_output.yaml

Review the validated_output.yaml for any errors or issues before proceeding.
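Beyond rendering locally, a server-side dry run asks the API server to validate the rendered manifests against the live cluster without creating anything, which catches schema and admission errors earlier:

```shell
# Validate the rendered manifests against the cluster; nothing is created.
kubectl apply --dry-run=server -f validated_output.yaml
```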

Best Practices⚓︎

  • Start with Defaults: Use the default values.yaml file as a baseline and modify only the required fields.
  • Document Changes: Add comments to track customizations for easier troubleshooting or future updates.
  • Test Incrementally: Validate and test changes incrementally to catch errors early.

Deploy and Verify⚓︎

With your environment prepared, Helm repositories configured, and the values.yaml file customized, you are ready to deploy into your Kubernetes cluster. Follow the links below for details of the deployment process, verification steps, and troubleshooting tips for the Expeto platform components: