Windows Management Server¶
The Windows Management server is part of the Remote deployment model (see Deployment and platforms).
This server is used for installing and maintaining the IFS Cloud middle tier and its infrastructure.
The Windows Management server is a prerequisite for carrying out the installation with the IFS Cloud installer.
***Throughout this guide, IFS recommends using PowerShell over CMD when possible.***
How To Setup An Environment¶
Download the artifacts as described in the IFS Lifecycle Experience Guide.
Once downloaded, right-click each downloaded zip file and select Properties. In Properties, tick the Unblock checkbox at the bottom and click OK.
Unzip both zips to the same Windows folder path (eg: using "Extract Here"). This creates the folder structure shown below.
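Equivalently, the unblock and extraction steps can be done directly from a PowerShell prompt. This is a sketch that assumes both downloaded zips sit in the current folder; `Expand-Archive` requires PowerShell 5 or later.

```powershell
# Remove the "downloaded from the internet" mark from both zips
Get-ChildItem .\*.zip | Unblock-File

# Extract both zips into the current folder (same effect as "Extract Here")
Get-ChildItem .\*.zip | ForEach-Object {
    Expand-Archive -Path $_.FullName -DestinationPath .
}
```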
Folder Structure¶
ifsroot
├── artifact-download # Artifact Upload and Downloading Scripts (**Note**: For Airgap users)
├── backups # reserved for backup of the kubernetes namespace (**Note**: backup will leak secrets)
├── bin # Binaries required to install/upgrade IFS Apps
├── config # All the configuration files required to install/upgrade IFS Apps
│ ├── certs # holds the certificates created during setup for ifs-ingress,ifs-monitoring, etc.
│ ├── ifs-storage-values.yaml # holds config for the storage engines
│ ├── ifs-ingress-values.yaml # holds config for the ingress engines
│ ├── ifs-monitoring-values.yaml # holds config for the monitoring engines
│ ├── kube # Kubernetes config file taken from Linux Box
│ ├── secrets # holds all the user secrets
│ ├── supported_platforms #
│ ├── ifscloud-values.yaml # The ifscloud configuration
│ ├── main_config.json.template # Default parameters required to install/upgrade IFS Apps
| └── ...
├── deliveries # All deliveries and build_home are kept here - old deliveries can be zipped or removed
├── logs # Main log folder which contains all the relevant logs
│ ├── ifscloudinstaller # ifscloud installer log folder
│ ├── main-script # Infrastructure script log folder
│ ├── remote-log-client # log folders of all containers in all namespaces in kubernetes
| └── ...
├── remote-scripts # Bash scripts which execute against Linux Box
├── utils # Required utility scripts to install/upgrade IFS Apps
│ ├── utils.psm1 # Utility powershell module for scripts
│ ├── common.psm1 # Common powershell module for scripts
│ └── local.psm1 # Local powershell module for scripts
│
└── main.ps1 # Main powershell script that handles the rest of the script execution
NOTE: the filename of config\ifscloud-values.yaml is important.
Main Configuration Parameters File¶
The main_config.json.template and main_config.json files are located at ./ifsroot/config.
Parameter | Description | Required |
---|---|---|
Ifs.Base | Base Script Location. | Mandatory. Must keep the Default value. |
Ifs.Logs | All Log Location. | Mandatory. Must keep the Default value. |
Ifs.LinuxUserName | Management Server UserName. | Mandatory. Must keep the Default value. |
Ifs.Linuxhost | Management Server Host Name. | Mandatory. |
Ifs.Nodes | High Availability Node Names. | Mandatory if HA configuration. |
Ifs.ScriptsFName | Management Server Script Execution Folder Name. | Mandatory. Must keep the Default value. |
Ifs.ScriptsLocal | Local Utility Script Location. | Mandatory. Must keep the Default value. |
Ifs.ScriptsLinux | Management Server Script Copy Folder Name. | Mandatory. Must keep the Default value. |
Ifs.KubeConfigPath | Folder That Holds or Will Store the Kubeconfig. | Mandatory. Must keep the Default value. |
Ifs.PowershellPath | External Powershell Module Location. | Mandatory. Must keep the Default value. |
Ifs.Microk8sBin | External Microk8s Bin Module Location. | Mandatory. Must keep the Default value. |
Ifs.NugetVersion | Nuget Version used by Powershells. | Mandatory. Must keep the Default value. |
Ifs.localPowershellAssembliesPath | Default Windows Location for Storing Provider Assemblies. | Mandatory. Set a new value only if needed. |
Ifs.localPSRepositoryName | Local Powershell Repository Name. | Mandatory. Set a new value only if needed. |
Ifs.PoshVersion | Compatible Posh Module Version. | Mandatory. Must keep the Default value. |
Ifs.PoshYamlVersion | Compatible PoshYaml Module Version. | Mandatory. Must keep the Default value. |
Ifs.PrvKeyFile | Management Server Private Key Location. | Optional. Set a new value only if needed. |
Ifs.RemoteArtifactUri | Remote Artifactory Uri. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryUri | DEPRECATED: Please use RemoteArtifactUri. IFS JFrog Artifactory Url. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactDockerRepo | Remote Artifactory Docker Repo Name. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryDockerRepo | DEPRECATED: Please use RemoteArtifactDockerRepo. IFS JFrog Artifactory Docker Repo Name. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactDockerRepoPath | Remote Artifactory Docker Repo Path. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryDockerRepoPath | DEPRECATED: Please use RemoteArtifactDockerRepoPath. IFS JFrog Artifactory Docker Repo Path. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactRemoteRepo | Remote Artifactory Remote Repo Name. | Optional. |
Ifs.JFrogArtifactoryRemoteRepo | DEPRECATED: Please use RemoteArtifactRemoteRepo. IFS JFrog Artifactory Remote Repo Name. | Optional. |
Ifs.RemoteArtifactRemoteRepoVersion | Remote Artifactory Remote Repo Artifacts Version. | Optional. |
Ifs.JFrogArtifactoryRemoteRepoVersion | DEPRECATED: Please use RemoteArtifactRemoteRepoVersion. IFS JFrog Artifactory Remote Repo Artifacts Version. | Optional. |
Ifs.RemoteArtifactHelmRepoName | Remote Artifactory Helm Hosting Artifactory Name. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryHelmRepoName | DEPRECATED: Please use RemoteArtifactHelmRepoName. IFS JFrog Artifactory Helm Hosting Artifactory Name. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactThirdPartyRepo | Remote Artifactory Third Party Repo Name. | Optional. |
Ifs.JFrogArtifactoryThirdPartyRepo | DEPRECATED: Please use RemoteArtifactThirdPartyRepo. IFS JFrog Artifactory Third Party Repo Name. | Optional. |
Ifs.RemoteArtifactHelmRepo | Remote Artifactory Helm Repo Name. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryHelmRepo | DEPRECATED: Please use RemoteArtifactHelmRepo. IFS JFrog Artifactory Helm Repo Name. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactHelmStorageVersion | Remote Artifactory Helm Storage Artifacts Version. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactHelmIngressVersion | Remote Artifactory Helm Ingress Artifacts Version. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryHelmIngressVersion | DEPRECATED: Please use RemoteArtifactHelmIngressVersion. IFS JFrog Artifactory Helm Ingress Artifacts Version. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactHelmPriorityClassVersion | Remote Artifactory Helm Priority Class Artifact Version. | Mandatory. Set a new value only if needed. |
Ifs.RemoteArtifactHelmMonitoringVersion | Remote Artifactory Helm Monitoring Artifacts Version. | Mandatory. Set a new value only if needed. |
Ifs.JFrogArtifactoryHelmMonitoringVersion | DEPRECATED: Please use RemoteArtifactHelmMonitoringVersion. IFS JFrog Artifactory Helm Monitoring Artifacts Version. | Mandatory. Set a new value only if needed. |
Ifs.KubectlVersion | Compatible Kubectl Client Version. | Mandatory. Must keep the Default value. |
Ifs.HelmVersion | Compatible Helm Version. | Mandatory. Must keep the Default value. |
Ifs.StepVersion | Compatible Step Version. | Mandatory. Must keep the Default value. |
Ifs.HtpasswdVersion | Compatible Htpasswd Version. | Mandatory. Must keep the Default value. |
Ifs.OpenJDKVersion | Compatible JDK Version. | Mandatory. Must keep the Default value. |
Ifs.Dns | DNS used by Kubernetes. | Mandatory. Set a new value only if needed. |
Ifs.MaxVMRebootWaitSecs | Maximum Wait Time for Management Server Restart. | Mandatory. Set a new value only if needed. |
Ifs.IFSCloudNamespace | IFS Cloud Namespace Name. | Mandatory. |
Ifs.FirewallPorts | Additional Firewall Ports to open in Firewall. | Optional. |
Ifs.ManagementServerIP | Windows Management Server IP. | Mandatory. |
Ifs.PodCidrRange | Pod IP Range to use for the Kubernetes Cluster. | Mandatory. Set a new value only if needed. |
Ifs.LocalNetworkIpRange | Local Network IP Range. | Mandatory. |
Ifs.LoadBalancerPrivateIP | Load Balancer Private IP. | Mandatory for HA setup. |
IfsMonitoring.ReleaseName | IFS Monitoring Release Name. | Mandatory. Set a new value only if needed. |
IfsMonitoring.ElasticsearchHost | IFS Monitoring ElasticSearch Host Name. | Mandatory. Set a new value only if needed. |
IfsMonitoring.ElasticsearchPort | IFS Monitoring ElasticSearch Port. | Mandatory. Set a new value only if needed. |
IfsMonitoring.ElasticsearchPath | IFS Monitoring ElasticSearch Path. | Mandatory. Set a new value only if needed. |
IfsRemoteLogClient.ElasticsearchLogPath | IFS Remote Log Client ElasticSearch Path. | Mandatory. Set a new value only if needed. |
IfsRemoteLogClient.InitialLogFetchInterval | IFS Remote Log Client Initial Log Fetch Interval. | Optional. The Default value is two days. |
IfsRemoteLogClient.LogRetentionSize | IFS Remote Log Client Log Retention Size. | Optional. The Default value is 50 files. |
IfsRemoteLogClient.SingleResponseSize | IFS Remote Log Client Single Response Size. | Optional. The Default value is 5000 hits per response. |
IfsRemoteLogClient.LogFileSize | IFS Remote Log Client Log File Size. | Optional. The Default value is 10MB. |
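As an illustration, a fragment of main_config.json covering a few of the parameters above might look like the following. This is a sketch with placeholder values only: the nesting under an "Ifs" section is inferred from the parameter names in the table, and the authoritative layout and defaults are those in the shipped main_config.json.template.

```json
{
  "Ifs": {
    "Linuxhost": "yourvmname.yourdomain.com",
    "ManagementServerIP": "192.168.1.50",
    "LocalNetworkIpRange": "192.168.1.0/24",
    "IFSCloudNamespace": "your-namespace",
    "Dns": "8.8.8.8 8.8.4.4"
  }
}
```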
Parameters required for the Main Powershell Script action¶
Command | Description |
---|---|
-action | action to execute resource |
-resource | resource to execute |
-verbosePref | verbose requirement ('enable' or 'disable'), disabled by default |
Open a PowerShell window in the folder where the IFS remote folder structure was extracted.
Name the top folder with a unique name, e.g. the same as the namespace of the middle tier.
Execute the following commands in that PowerShell window.
1. Initialize & Install¶
Go through the steps for each of the following capabilities in the Advanced section and fill in the necessary values in the main_config.json.
This step completes the installation of the below capabilities.¶
- Initialize Powershell modules.
- Create SSH key for remote access to Middle Tier Server.
- Install or Reinstall Kubernetes cluster.
- Get the kubeconfig file from the Kubernetes cluster in Middle Tier Server.
- Disable AppArmor Profile.
- Set CoreDNS DNS server.
- Enable Middle-Tier Server Firewall.
- Check Middle-Tier Server Firewall Status.
- Change Pod IP Range.
- If High Availability, join the nodes.
- Install ifs-storage helm chart.
- Install ifs-ingress helm chart - When Installing ifs-ingress for the first time, you will be prompted for Remote Artifact credentials.
Pre-Requisites :
- Copy the main_config.json.template file located at ./ifsroot/config and rename the copy to main_config.json.
- Open the main_config.json file located at ifsroot/config.
- Check whether the localPowershellAssembliesPath value exists. If not, create the empty folders manually.
- Change the "Linuxhost" variable to your linux box host name.
- Have a network for the DB and other IP end-points that is separated from the internal k8s virtual network. Read "Change Pod IP Range" below.
- By default, the DNS used by Kubernetes points to 8.8.8.8 8.8.4.4.
- The Docker Registry should be secured with a valid SSL certificate. Edit the #Dns# tag in config\main_config.json and set it to the corporate DNS. If using a list of DNS servers, use spaces as separators.
- To enable the firewall, fill the "ManagementServerIP" variable with your Windows workstation IP.
- Fill the "LocalNetworkIpRange" variable with your local network IP range.
  If < 21R1 SU 11 or < 21R2 SU 4, the Kubernetes default pod IP range is "10.1.0.0/16".
  If >= 21R1 SU 11 or >= 21R2 SU 4, the Kubernetes default pod IP range is "10.64.0.0/16". If the 10.64.0.0/16 pod IP range conflicts with your local network IP range and you still need to continue using 10.1.0.0/16 (the pod IP range in 21R1 SU 10/21R2 SU 3 or below) or some other IP range, change the value of "PodCidrRange" in the ifsroot/config/main_config.json file to the IP range you need to use. Also, fill in "LocalNetworkIpRange" in the ifsroot/config/main_config.json file.
- If setting up a High Availability environment, refer to High Availability Prerequisite Configuration and configure the prerequisites for the High Availability setup.
Use the below command to run the installation from 'Initialize Powershell modules' to 'Install ifs-ingress helm chart' in one go. Alternatively, you can run the commands one by one for the above-mentioned capabilities, referring to the Advanced section.
Command :
ps> .\main.ps1
Accept all the prompts (eg: yes/y/Y) and give the middle-tier server user (eg: ifs) password when requested.
2. Configure Java, Helm and Kubectl¶
Java, Helm and Kubectl are required to run the ifscloud installer and need to be accessible from a PowerShell prompt. Add the full paths to ifsroot\bin\jdk\bin and ifsroot\bin to your Windows PATH environment variable (e.g. via the "Edit the system environment variables" app).
Open a new PowerShell window and try to start java, helm and kubectl from there.
ps> java -version
ps> helm version
ps> kubectl version
All above commands should successfully show the version of the respective tool.
3. Install ifs-monitoring helm chart command.¶
IMPORTANT: EFK - Elasticsearch, Fluentd, and Kibana will be installed when the below command is executed. The primary purpose of Elasticsearch is to store and retrieve logs from fluentd. Fluentd forwards logs to elasticsearch. Kibana is a UI tool for querying, visualization of logs, and dashboards. EFK stack replaces ifs remote log client after 22R2 GA, 22R1 SU7, and 21R2 SU13.
The existing logging client uses a powershell command to generate a file on which the logs can be viewed. However for monitoring, instead of running a PowerShell command, you can access Kibana and Grafana using a URL and get real time logs and metrics.
Kube-Prometheus stack provides an end-to-end Kubernetes cluster monitoring with Prometheus. Grafana allows users to visualize metrics, explore, and share dashboards.
Before installing ifs-monitoring, you need to have IFS Cloud installed. Open the main_config.json file located at ifsroot/config and fill in the below variables. If the IFS Cloud namespace is ever deleted, ifs-monitoring needs to be deleted and re-applied; otherwise the ingress certificates will be set to a "Fake self-signed certificate".
- "Linuxhost" variable to "yourvmname.yourdomain.com" (to create ingress endpoints for Elasticsearch, Kibana and Grafana).
- "IFSCloudNamespace" variable to the namespace given at the time of IFS Cloud installation.
This command will install the ifs-monitoring helm chart to the middle tier server.
The first time, you will be prompted for Remote Artifactory credentials if you have not used them before.
Contact LE if you don't have these credentials yet.
ps> .\main.ps1 -resource 'MONITORING'
Follow the below documentation to access Grafana and Kibana Dashboards:
NOTE: For users in air-gapped environments who need to install additional Grafana plugins, please follow these steps:
Navigate to the following directory on your Management Server: ifsroot > infrastructure > grafana-plugins
Run the plugin_installer:
ps> .\plugin_installer.ps1
TROUBLESHOOTING: If there is a slowdown in your VM or network during or after the ifs-monitoring installation, please follow the guidelines below before reinstalling the monitoring solution.
Step 1: Remove ifs-monitoring installation.
Please ensure not to remove the ifs-monitoring namespace completely, since that will cause unnecessary malfunctions in the system.
Run the following commands in order:
helm delete kibana -n ifs-monitoring
helm delete fluentd -n ifs-monitoring
helm delete ifs-monitoring-curator -n ifs-monitoring
helm delete elasticsearch -n ifs-monitoring
helm delete kube-prometheus-stack -n ifs-monitoring
helm delete eshook -n ifs-monitoring
Then view the PVCs created within the ifs-monitoring namespace:
kubectl -n ifs-monitoring get pvc
And remove all of the PVCs returned by the above command:
kubectl -n ifs-monitoring delete pvc <pvcname>
Step 2: Re-run the main.ps1 command:
ps> .\main.ps1 -resource 'MONITORING'
4. Install ifs remote log client command. (this feature is deprecated in 22R2 GA, 22R1 SU7 and 21R2 SU13)¶
This command will create the remote log client and a windows schedule task named IfsRemoteLogClientSchedule.
If there is a windows schedule task named IfsRemoteLogClientSchedule already in Windows Task Scheduler, that needs to be deleted from Windows Task Scheduler before running this command.
ps> .\main.ps1 -resource 'LOGGING'
AirGap Installation Documentation¶
PREREQUISITES¶
- Docker
- OS: Ubuntu Server 20.04 LTS
- Wget Package (version 1.20.3 or above)
- A Private Registry that should be secure with a username and a password
- Private Registry that supports Docker and Helm
STEPS TO RUN FOR AIRGAPPED INSTALLATION¶
1. Run download script to save docker images and helm charts in local disk¶
In this step, you will download and save all the necessary docker images and helm charts that are mentioned in the release.yml.
Run download.sh script with the below parameters in the machine that has access to the Internet.
-j or --jfrog-artifactory : jfrog artifactory url. eg:ifscloud.jfrog.io
-u or --username : username of artifact repository
-p or --password : password of artifact repository
-r or --release : release version of release.yml
You also need to specify the category of docker images or helm charts to be downloaded at the end of the same command.
--ifs-helm : Download and save helm charts in ifs-helm section
--ifs-docker : Pull and save docker images in ifs-docker section
--ifs-docker-infra : Pull and save docker images in ifs-docker-infra section
--ifs-all : Download all helm charts and docker images mentioned in the release.yml. This is the default download method if you have not specified an option
example -
./download.sh -u your_username -p your_password -j jfrog_artifactory_url -r x.y.z --ifs-helm
For "-j jfrog_artifactory_url", supply the private registry you are using.
If you have a private registry where RemoteArtifactUri (eg: https://registry.yourdomain.com:8443) and RemoteArtifactDockerRepo (eg: registry.yourdomain.com:8444) use a docker repository port (eg: 8444) that is different from the helm repository port (eg: 8443), then the jfrog_artifactory_url needs to be in the format private_registry_url:port_number (eg: registry.yourdomain.com:8444).
For "-r x.y.z", combine the release number "x.y" and the service update "z" into "x.y.z".
For the category option, use one of --ifs-helm, --ifs-docker, --ifs-docker-infra or --ifs-all.
Docker images will be saved inside a directory called docker, and helm charts will be saved inside helm directory respectively.
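Before invoking download.sh, the argument formats described above can be sanity-checked with small helpers like these. They are hypothetical and not part of the delivered scripts; `is_release` and `registry_host` are illustration-only names.

```shell
#!/usr/bin/env bash
# Hypothetical helpers (not part of download.sh) to validate arguments up front.

# -r must combine release "x.y" and service update "z" as x.y.z
is_release() { [[ $1 =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; }

# -j takes host:port with no scheme; strip a leading http(s):// if present
registry_host() { echo "${1#*://}"; }

is_release "23.1.5" && echo "release ok"
registry_host "https://registry.yourdomain.com:8444"   # prints registry.yourdomain.com:8444
```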
2. Run upload script to upload docker images and helm charts from local disk to artifact repository¶
In this step, you will upload docker images and helm charts that reside on docker and helm directories in the local disk to a specified repository.
Run upload.sh script with the below parameters
-a or --artifactory : artifactory url that you need to upload artifacts (eg: ifscloud.jfrog.io)
-u or --username : username of the artifact repository
-p or --password : password of the artifact repository
-r or --helm-repository : helm repository name (eg: helm)
-d or --docker-registry : docker registry name (eg: docker)
You can specify the artifacts that you need to upload from the options indicated below,
--helm : will upload all helm charts in the helm directory
--docker : will upload all docker images in the docker directory
--all : will upload all helm charts and docker images
If not specified, all helm charts and docker images will be uploaded to the given repository.
examples -
Upload helm charts and docker images:
./upload.sh -a artifactory -u your_username -p your_password -r your_helm_repository -d docker-registry --all
Upload helm charts only:
./upload.sh -a artifactory -u your_username -p your_password -r your_helm_repository --helm
Upload helm charts only where the docker repository port is different from the helm repository port:
./upload.sh -a artifactory:port_number -u your_username -p your_password -r your_helm_repository --helm
Upload docker images only:
./upload.sh -a artifactory -u your_username -p your_password -d docker-registry --docker
Upload docker images only where the docker repository port is different from the helm repository port:
./upload.sh -a artifactory:port_number -u your_username -p your_password -d docker-registry --docker
3. Go to the main_config.json and change the variables for the private registry.¶
In the main_config.json file, which is located inside the config folder, you will be required to change the following variables to your values.
"RemoteArtifactUri": Remote Artifactory Uri
"RemoteArtifactDockerRepo": Remote Artifactory Docker Repo Name
"RemoteArtifactDockerRepoPath": Remote Artifactory Docker Repo Path
"RemoteArtifactRemoteRepo": Remote Artifactory Remote Repo Name
"RemoteArtifactHelmRepoName": Remote Artifactory Helm Hosting Artifactory Name
"RemoteArtifactHelmRepo": Remote Artifactory Helm Repo Name
example -
"RemoteArtifactUri": "https://registry.yourdomain.com:8443",
"RemoteArtifactDockerRepo": "registry.yourdomain.com:8444",
"RemoteArtifactDockerRepoPath": "docker_registry",
"RemoteArtifactRemoteRepo": "remote",
"RemoteArtifactHelmRepoName": "helm.ifs.com",
"RemoteArtifactHelmRepo": "repository/helm.ifs.com",
4. Install Remote¶
Refer to the installation steps in 1. Initialize & Install.
High Availability¶
High Availability Overview¶
- A minimum of 3 Middle-Tier Server VMs are required to host the k8s nodes.
- Hardware/Software Load Balancer to load balance traffic coming into the Cluster.
- Static IPs are a must for each node in an on-prem high availability environment to ensure stable communication, consistent node discovery, and reliable cluster operation.
- If one node becomes unavailable, the environment will still be up and running since the other nodes are available. However, the unavailable node must be re-joined to the cluster to ensure that quorum is maintained.
- All 3 nodes act as control plane servers for high redundancy.
- All 3 nodes run the API server in each node in a load balanced fashion.
- The high availability installation process takes approximately 45 minutes.
Note: High Availability is also supported in Air-gapped environments.
High Availability Objectives¶
- Exclude Single Point of Failure.
- Keep application unavailability caused by Node failures to a minimum.
- Multi-Master Control-Plane K8s Cluster to withstand Node Failures.
High Availability Load Balancer¶
Customers can select a Hardware/Software load balancer of their choice and provision it in a suitable HA manner to avoid a single point of failure at the load balancer. IFS does not package a load balancer with the Remote Deployment Model.
High Availability Load Balancer - Routing Traffic and Ports¶
- Traffic coming to port 443 of the Load Balancer should be routed to port 443 of all Middle-Tier Server Linux VM Nodes.
- The "Linuxhost" variable in main_config.json file should be set to the DNS domain name of the systemUrl. And the DNS domain name should point to the IP(s) of the Load Balancer.
- The "Nodes" array variable in main_config.json file should contain the full list of Middle-Tier Server VMs/Nodes.
- Load balancing should be such that session stickiness is maintained.
- Refer to the "How to access IFS Cloud from the Internet" page, of the "Remote Deployment Guide" for the paths of the application that need to be Load Balanced across nodes.
- HTTP connection, send and read timeouts should match the timeout values in the Application and ReverseProxy. Refer to "Remote Deployment Guide > New Installation of IFS Cloud > Deploying IFS Cloud > Installation Parameters > IFS Cloud Installer > Installation parameters > General Parameters" for the timeouts to be configured and their default values.
STEPS TO RUN FOR HIGH AVAILABILITY INSTALLATION¶
1. High Availability Prerequisite Configuration¶
Before installing, open the main_config.json file located at ifsroot/config and additionally fill in the below variables for High Availability.
- "Linuxhost" variable to the Load Balancer hostname.
- "Nodes" array variable with the list of VM hostnames to be used as Nodes.
  example - "Nodes": [ "node-1.yourdomain.com", "node-2.yourdomain.com", "node-3.yourdomain.com" ],
- "LoadBalancerPrivateIP" variable to the Private IP of the Load Balancer.
2. Install Remote¶
Refer to the installation steps in 1. Initialize & Install.
High Availability Advanced Section¶
Use below commands for manual high availability configuration.
High Availability Join Nodes¶
Run the below command to join a node. Add the VM name to the bottom of the "Nodes" array in main_config.json file located at ifsroot/config.
ps> .\main.ps1 -resource 'JOINNODE'
Enter the hostname of the VM to add as a node to the cluster when prompted after the command executes successfully.
High Availability Remove Nodes¶
ps> .\main.ps1 -resource 'REMOVENODE'
Enter the hostname of the VM to be removed from the cluster when prompted after the command executes successfully. Then remove the VM name from the "Nodes" array in the main_config.json file located at ifsroot/config.
Advanced¶
Initialize Powershell modules.¶
This command will install the necessary Powershell modules that are needed to communicate with the middle tier server.
IMPORTANT: Before running the below command, check whether localPowershellAssembliesPath value mentioned in main_config.json file (located at ifsroot/config) exists. If not, create the empty folders manually.
ps> .\main.ps1 -resource 'INIT'
Create SSH key for remote access to Middle Tier Server.¶
This command will create the authentication keys that are needed to communicate with the middle tier server.
IMPORTANT: Before continuing, open the main_config.json file located at ./config. You will then need to change the "Linuxhost" variable to your linux Middle-Tier Server host name.
ps> .\main.ps1 -resource 'KEY'
Accept all the prompts (eg: yes/y) and give the middle-tier server user (eg: ifs) password when required.
Install or Reinstall Kubernetes cluster.¶
This command copies all the snaps to the Linux machine and Installs Kubernetes into the Middle-Tier Server.
If a folder called microk8s is already present in the Linux machine, it will be replaced with a new microk8s folder which contains all the newest snaps available.
If a Kubernetes Cluster already exists in the Middle-Tier Server, the entire existing Kubernetes Cluster will be removed and a fresh Kubernetes Cluster is re-installed.
ps> .\main.ps1 -resource 'KUBERNETES'
Accept all the prompts (eg: yes/y) and give the middle-tier server user (eg: ifs) password when required.
Get the kubeconfig file from the Kubernetes cluster in Middle Tier Server.¶
This command grabs the kube config file from the Kubernetes cluster and copies it over to the Windows VM. This file is used to access Kubernetes when used with command line tools such as kubectl and helm.
ps> .\main.ps1 -resource 'GETKUBECONFIG'
Copy the file ifsroot\config\kube\config to c:\users\
ps> mkdir $HOME\.kube
ps> copy .\config\kube\config $HOME\.kube\
Disable AppArmor Profile¶
Disable AppArmor Profile for the Kubernetes Cluster. If an error "container process caused apparmor failed to apply profile: write /proc/self/attr/exec: operation not permitted" (or similar) is displayed in pods, this command needs to be re-applied as apparmor profiles might have been reloaded.
ps> .\main.ps1 -resource "DISABLEAPPARMORPROFILE"
Set CoreDNS DNS server¶
By default, the DNS used by Kubernetes points to 8.8.8.8 8.8.4.4. This script is only needed if an internal DNS is needed (e.g. if public DNS servers are blocked or if internal hosts need to be resolved by the pods)
Edit the #Dns# tag in config\main_config.json and set it to the corporate DNS. If using a list of DNS servers, use spaces as separators.
ps> .\main.ps1 -resource "SETK8SDNS"
Install ifs-ingress helm chart.¶
This command will install the ifs-ingress helm chart to the middle tier server.
The first time you will be prompted for Remote Artifactory credentials if you have not used them before.
Contact LE if you don't have these credentials yet.
IMPORTANT: After installing the ifs-ingress helm chart using the below command, you may need to wait a few minutes before installing IFS Cloud, until all pods in the ifs-ingress namespace have started up.
ps> .\main.ps1 -resource 'INGRESS'
Install ifs-storage helm chart.¶
This command will install the ifs-storage helm chart to the middle tier server.
IMPORTANT: After installing the ifs-storage helm chart using the below command, you may need to wait a few minutes before installing IFS Cloud, until all pods in the ifs-storage namespace have started up.
ps> .\main.ps1 -resource 'STORAGE'
Check Middle-Tier Server Firewall Status (Optional).¶
Check the status of the firewall
ps> .\main.ps1 -resource 'FIREWALL' -status 'STATUS'
Enable Middle-Tier Server Firewall¶
Enable the firewall
IMPORTANT: Before enabling the firewall, add the IP of the Management Server. For that, open the main_config.json file located at ifsroot/config and fill the "ManagementServerIP" variable to the IP of the Management Server.
ps> .\main.ps1 -resource 'FIREWALL' -status 'ENABLE'
Disable Middle-Tier Server Firewall (Optional).¶
Disable the firewall
ps> .\main.ps1 -resource 'FIREWALL' -status 'DISABLE'
Allow access to Additional Ports of the Middle-Tier Server in Firewall (Optional).¶
Allows access to additional Ports of the Middle-Tier Server in Firewall
IMPORTANT: Before allowing access to additional ports of the Middle-Tier Server in the firewall, open the main_config.json file located at ifsroot/config and fill the "FirewallPorts" variable with the port(s). You can allow a single port (TCP or UDP), e.g. "443/tcp"; multiple ports, e.g. "8080,9000/tcp"; or a range of ports, e.g. "11200:11299/tcp".
ps> .\main.ps1 -resource 'FIREWALL' -status 'ENABLE-PORTS'
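The accepted "FirewallPorts" shapes (single port, comma-separated list, colon range, each with a /tcp or /udp suffix) can be checked with a small bash sketch before editing the config. This validator is hypothetical and not part of the ifsroot scripts:

```shell
#!/usr/bin/env bash
# Hypothetical validator: does a FirewallPorts entry match one of the
# documented shapes? "443/tcp", "8080,9000/tcp", "11200:11299/tcp"
is_port_spec() { [[ $1 =~ ^[0-9]+((,[0-9]+)*|:[0-9]+)/(tcp|udp)$ ]]; }

for spec in "443/tcp" "8080,9000/tcp" "11200:11299/tcp" "443"; do
  if is_port_spec "$spec"; then echo "$spec: ok"; else echo "$spec: invalid"; fi
done
```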
Get Middle-Tier Server Process Information. (Optional)¶
This command will display the Middle-Tier Linux Server Process Information.
ps> .\main.ps1 -resource 'REMOTE-TOP'
Download and Install the latest Security Updates for the Middle-Tier Linux VM. (Optional)¶
Download and Install the latest Security updates / patches for the Middle-Tier Linux VM.
ps> .\main.ps1 -resource 'SECURITYUPDATES'
Reboot Middle-Tier Server. (Optional)¶
This command will reboot the Linux Middle-Tier VM.
ps> .\main.ps1 -resource 'REBOOT-LINUXBOX'
Change Pod IP Range.¶
Change the Kubernetes pod IP address range if it conflicts with the local network.
Explanation: The pods that run inside the Kubernetes cluster are connected to an internal virtual network, which if unchanged is a 10.1.0.0/16 network. To see the IPs of the pods, run "kubectl get pods -A -o wide". If e.g. the DB has IP 10.1.2.3, pods will not be able to connect to the DB: Kubernetes will consider all IP addresses in the 10.1.0.0/16 range to be pods and will not forward calls outside the internal network, i.e. to the "physical" 10.1.0.0/16 network where the DB resides.
If the DB and the pods reside in the same network IP range, it will cause a "The Network Adapter could not establish the connection" error in ifs-db-init when running the IFS Cloud installer. So make sure the PodCidrRange is separated from the LocalNetworkIpRange.
IMPORTANT: Before running the script, change the LocalNetworkIpRange value in main_config.json file (located at ifsroot/config) to your local network IP range. It will check your local network IP range conflict with the Kubernetes default pod IP range. If it conflicts, change the PodCidrRange to a new IP range and run the script again.
If < 21R1 SU 11 or < 21R2 SU 4; Kubernetes default pod IP range is "10.1.0.0/16".
If >= 21R1 SU 11 or >= 21R2 SU 4; the Kubernetes default pod IP range is "10.64.0.0/16". If the 10.64.0.0/16 pod IP range conflicts with your local network IP range and you still need to continue using 10.1.0.0/16 (the pod IP range in 21R1 SU 10/21R2 SU 3 or below) or some other IP range, you can change the value of "PodCidrRange" in the ifsroot/config/main_config.json file to the IP range you need to use. Also, fill in "LocalNetworkIpRange" in the ifsroot/config/main_config.json file.
Warning: Once you change the pod IP range, the Linux VM will restart. After the restart, you may need to close the existing PowerShell session and re-open it to run the rest of the commands.
ps> .\main.ps1 -resource 'CHANGE-POD-IP-RANGE'
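Whether a chosen PodCidrRange conflicts with the LocalNetworkIpRange can be checked in advance with a small bash helper like the one below. It is hypothetical (not part of the ifsroot scripts) and handles IPv4 CIDR ranges only:

```shell
#!/usr/bin/env bash
# Hypothetical helper: check whether two IPv4 CIDR ranges overlap,
# e.g. PodCidrRange vs LocalNetworkIpRange.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

cidr_overlaps() {
  local net1=${1%/*} len1=${1#*/} net2=${2%/*} len2=${2#*/}
  local start1 end1 start2 end2
  start1=$(( $(ip_to_int "$net1") & (0xFFFFFFFF << (32 - len1)) & 0xFFFFFFFF ))
  end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(( $(ip_to_int "$net2") & (0xFFFFFFFF << (32 - len2)) & 0xFFFFFFFF ))
  end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  (( start1 <= end2 && start2 <= end1 ))   # true when the ranges intersect
}

# The default pod range conflicts with a local 10.64.12.0/24 network:
cidr_overlaps "10.64.0.0/16" "10.64.12.0/24" && echo "conflict: change PodCidrRange"
# A separate local range is fine:
cidr_overlaps "10.64.0.0/16" "192.168.1.0/24" || echo "no conflict"
```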
Get Powershell Help¶
Get-Help ".\main.ps1"
Accept all the prompts
Next step is to install IFS Cloud¶
Read about it here: Deploy Fresh install