Migrating applications from Kubernetes to Onteon¶
Abstract¶
About the tutorial¶
This document helps DevOps specialists deploy applications previously prepared for Kubernetes to the Onteon platform.
Kubernetes is a system for automating deployment, scaling, and management of containerized applications. Originally developed by Google, it is now maintained by the Cloud Native Computing Foundation.
Onteon is an orchestration software for both containerized and non-containerized (native) applications. It is developed and maintained by Onteon Tech.
Since Kubernetes is complex and the costs associated with its use may be too high for some organizations, Onteon is an alternative for those who want to achieve similar goals in a simpler, quicker way. Moreover, not everybody wants or needs to use containerized applications. Onteon orchestrates both containers and native applications transparently, with the same user experience.
Why move applications from Kubernetes to Onteon?¶
Reasons for migrating from Kubernetes to Onteon:
- Simplifying the deployment workflow --- as later sections of this document show, Onteon configuration is much simpler to maintain and understand.
- Need to run native applications --- Onteon not only manages containerized apps but can also run native applications, with special support for JVM-based software.
- No ecosystem fragmentation --- only one implementation of Onteon exists, with a defined set of tools, so all company teams will use the same tooling.
Audience¶
We assume minimal operating knowledge of Kubernetes, minimal programming skills, and no knowledge of Onteon. This document should be easy to get into and should provide some insight even to users who know nothing about either Kubernetes or Onteon.
This tutorial has been prepared for those who want to understand how to deploy applications on Onteon that were previously prepared for the Kubernetes platform.
Prerequisites¶
We assume anyone who wants to follow this topic has basic knowledge of Kubernetes, Helm, and Docker, and minimal programming skills. We assume no previous knowledge of Onteon. This document is not a general Onteon tutorial, although links to the Onteon documentation are included where necessary or helpful.
Copyright & Disclaimer¶
All the content and graphics published in this tutorial are the property of Onteon Tech. We strive to update the contents of our website and tutorials as timely and as precisely as possible, however, the contents may contain inaccuracies or errors. If you discover any errors on our website or in this tutorial, please notify us at contact@onteon.tech.
Scenario and migration path¶
Applications¶
Application roles¶
In the following case the distribution of two simple applications is discussed:
- hello-microservice-one --- when requested with GET, outputs Hello from microservice instance #1!
- hello-microservice-two --- when requested with GET, queries hello-microservice-one, then returns both hello-microservice-one's response and Hello from microservice instance #2!
Both applications share the same source code, but their behavior changes based on the runtime options passed in at startup (see the code explanations below).
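For illustration, a minimal sketch of starting both instances side by side on one machine; the java invocation mirrors the Containerfile shown later, while port 8081 for the second instance is an assumption for local testing:

# instance one: no peer to query
java -Dserver.port=8080 -DhelloInstanceNumber=one -DhelloPeerUrl=none -jar target/hello-microservice-1.0.0-SNAPSHOT.jar
# instance two: queries instance one
java -Dserver.port=8081 -DhelloInstanceNumber=two -DhelloPeerUrl=http://localhost:8080 -jar target/hello-microservice-1.0.0-SNAPSHOT.jar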
Application diagram¶
Application diagram without orchestrators¶
Application diagram with Kubernetes¶
Application diagram with Onteon¶
Inside application code¶
Both hello-microservice-one and hello-microservice-two apps share the same code, but different parameters are passed to each on startup.
File HelloController.java:
The app's controller; it responds with a simple text message when queried with GET on the api/hello endpoint, and queries the other app when needed. For an explanation of the special values, see SystemPropertyFactory below.
package org.example.microservice.hello.controller;

import java.io.IOException;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.HttpEntity;
import org.apache.hc.core5.http.ParseException;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.example.microservice.hello.factory.SystemPropertyFactory;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@CrossOrigin(origins = "*")
@RestController
@RequestMapping("api")
public class HelloController {

    private final String helloMessage;
    private final String peerGetUrl;
    private final Boolean doQueryPeer;
    private final CloseableHttpClient httpClient;

    public HelloController() {
        String instanceNumber = SystemPropertyFactory.getHelloInstanceNumber();
        helloMessage = "Hello from microservice instance #" + instanceNumber + "!";
        String peerUrl = SystemPropertyFactory.getHelloPeerUrl();
        peerGetUrl = peerUrl + "/api/hello";
        doQueryPeer = !peerUrl.equals("none");
        httpClient = HttpClients.createDefault();
    }

    private String getResponseFromPeer() {
        try {
            HttpGet httpGet = new HttpGet(peerGetUrl);
            CloseableHttpResponse peerResponse = httpClient.execute(httpGet);
            HttpEntity entity = peerResponse.getEntity();
            String responseString = EntityUtils.toString(entity);
            return responseString;
        } catch (IOException ioException) {
            throw new RuntimeException("error occurred while making a request", ioException);
        } catch (ParseException parseException) {
            throw new RuntimeException("error occurred while parsing response", parseException);
        }
    }

    @GetMapping(value = "/hello", produces = MediaType.TEXT_PLAIN_VALUE)
    public String get() {
        if (doQueryPeer) {
            String peerResponse;
            try {
                peerResponse = getResponseFromPeer();
            } catch (Throwable throwable) {
                peerResponse = "error occurred, no response from peer";
            }
            return peerResponse + "\n" + helloMessage;
        }
        return helloMessage;
    }
}
File SystemPropertyFactory.java:
Resolves the parameters passed to a given app into values:
- -DhelloInstanceNumber --- specifies a number by which the app is identified,
- -DhelloPeerUrl --- specifies a URL to query in addition to returning the app's own response; if it has the value "none", then no query is performed.
package org.example.microservice.hello.factory;

import java.util.Objects;

public class SystemPropertyFactory {

    public static String getHelloInstanceNumber() {
        String property = System.getProperty("helloInstanceNumber");
        if (Objects.isNull(property)) {
            return "0";
        }
        return property;
    }

    public static String getHelloPeerUrl() {
        String property = System.getProperty("helloPeerUrl");
        if (Objects.isNull(property)) {
            return "none";
        }
        return property;
    }
}
File Main.java:
The Spring Boot entry point.
package org.example.microservice.hello;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }
}
Kubernetes environment¶
Containerfile¶
The image built with this Containerfile accepts the following configuration variables:
- HELLO_INSTANCE_NUMBER --- the given microservice's own identifier,
- HELLO_PEER_URL --- the URL the microservice will be querying.
FROM openjdk:17
ADD target/hello-microservice-1.0.0-SNAPSHOT.jar app.jar
ENV HELLO_INSTANCE_NUMBER="one"
ENV HELLO_PEER_URL="http://hello-microservice-two:8080"
ENTRYPOINT \
java \
-jar \
-Dserver.port=8080 \
-DhelloInstanceNumber=${HELLO_INSTANCE_NUMBER} \
-DhelloPeerUrl=${HELLO_PEER_URL} \
app.jar
EXPOSE 8080
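A minimal sketch of building the image, assuming a standard Maven build; with Docker the -f flag is needed because the file is named Containerfile rather than Dockerfile (Podman picks up Containerfile by default):

mvn package
docker build -f Containerfile -t hello-microservice:latest .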
Chart¶
The Chart is the entry point of a Helm configuration; it defines the core metadata of an application.
Top-level Chart.yaml:
---
apiVersion: v2
name: hello-kube
description: A Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.0.0"
dependencies:
  - name: hello-microservice-one
    version: 1.0.0
  - name: hello-microservice-two
    version: 1.0.0
File Chart.yaml of hello-microservice-one:
---
apiVersion: v2
name: hello-microservice-one
description: hello-microservice-one Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.0.0"
File Chart.yaml of hello-microservice-two:
---
apiVersion: v2
name: hello-microservice-two
description: hello-microservice-two Helm chart for Kubernetes
type: application
version: 1.0.0
appVersion: "1.0.0"
Values¶
The difference between the values.yaml files of the microservices is that they configure which microservice will query which. In the setup below, -two will be querying -one.
File values.yaml of hello-microservice-one:
---
replicaCount: 1
image:
  repository: "hello-microservice"
  tag: "latest"
  pullPolicy: IfNotPresent
microserviceConfiguration:
  helloInstanceNumber: "one"
  helloPeerUrl: "none"
service:
  type: ClusterIP
  port: 8080
File values.yaml of hello-microservice-two:
---
replicaCount: 1
image:
  repository: "hello-microservice"
  tag: "latest"
  pullPolicy: IfNotPresent
microserviceConfiguration:
  helloInstanceNumber: "two"
  helloPeerUrl: "http://hello-microservice-hello-microservice-one:8080"
service:
  type: ClusterIP
  port: 8080
Deployment template¶
File (template) deployment.yml of hello-microservice-one:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-microservice-one.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "hello-microservice-one.name" . }}
    helm.sh/chart: {{ include "hello-microservice-one.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hello-microservice-one.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hello-microservice-one.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: HELLO_INSTANCE_NUMBER
              value: {{ .Values.microserviceConfiguration.helloInstanceNumber }}
            - name: HELLO_PEER_URL
              value: {{ .Values.microserviceConfiguration.helloPeerUrl }}
File (template) deployment.yml of hello-microservice-two:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-microservice-two.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "hello-microservice-two.name" . }}
    helm.sh/chart: {{ include "hello-microservice-two.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hello-microservice-two.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hello-microservice-two.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: HELLO_INSTANCE_NUMBER
              value: {{ .Values.microserviceConfiguration.helloInstanceNumber }}
            - name: HELLO_PEER_URL
              value: {{ .Values.microserviceConfiguration.helloPeerUrl }}
Service template¶
File (template) service.yml of hello-microservice-one:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-microservice-one.fullname" . }}
  labels:
    {{- include "hello-microservice-one.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "hello-microservice-one.selectorLabels" . | nindent 4 }}
File (template) service.yml of hello-microservice-two:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "hello-microservice-two.fullname" . }}
  labels:
    {{- include "hello-microservice-two.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "hello-microservice-two.selectorLabels" . | nindent 4 }}
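To complete the Kubernetes side of the picture, the chart would typically be installed with Helm. A sketch, assuming the subcharts live in the top-level chart's charts/ directory and the release is named hello-microservice (this release name is implied by the peer URL hello-microservice-hello-microservice-one used in values.yaml above):

helm install hello-microservice ./hello-kube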
Migration path to Onteon¶
Platform requirements¶
The Onteon platform is comprised of:
- Onteon Control Center (OCC) --- the cluster manager; serves as the decision maker,
- Onteon Node Manager (ONM) --- the service managing applications; connects to a given OCC,
- Onteon CLI --- the command-line tool for Onteon administration.
To run a minimal Onteon platform on a single machine, at least one OCC and one ONM must run on that machine. The Onteon CLI can then be installed either on that machine or on any other one that has network connectivity to the OCC.
The installation instructions are described in detail on the Onteon website.
Migration path¶
With every new technology we face the problem of discovering what is required to fit a given application to a platform. We can see this with Docker requiring Dockerfiles and containers, with Kubernetes requiring Pods, Services, and YAML configuration, and even at the most basic level with the operating system requiring a specific set of functions to handle the underlying hardware resources.
On Onteon the system-level configuration is not locked away from developers: they can configure a service in any way they want, even drilling into tiny tweaks with the "Generic OS Process Provider". In this document, however, we will be using the DockerOsProcessProviderImpl, which provides the abstraction required to integrate with Docker.
The most important insight when migrating from Kubernetes is knowing the application architecture and how the microservices are supposed to interact with each other. There are cases where just looking into the Helm configuration is not enough.
Next we will show how to translate the Helm configuration of our apps into Onteon-specific configuration.
Microservices configuration¶
Onteon configuration files, like Kubernetes ones, are written in YAML, but unlike applying YAML with kubectl, each application must ship its own configuration file.
Configuration files are distributed in a tar archive named after the application (for example hello-microservice-one-1.0.0.tar.gz); the file must be called conf.yml and placed in the conf directory.
Metadata¶
Let's write some metadata about the app:
app:
  name: 'hello-microservice-one'
  version: '1.0.0'
  appType: 'standard'
  placeHolder:
    name: 'PlaceHolderManagerImpl'
    version: '1.0.0'
    filesToReplace: []
    variables: {}
Docker support¶
We have to let Onteon know that we want to use the DockerOsProcessProviderImpl to run our app via Docker. Let's add the processProvider settings:
procType: 'docker'
processProvider:
  name: 'DockerOsProcessProviderImpl'
  version: '1.0.0'
  executable:
    start:
      imageName: 'hello-microservice:latest'
      exposedPort: '${ont_port_1}'
      innerPort: '8080'
      pullNewerImage: false
- imageName depends on how we have named our container image,
- exposedPort is the port that the container wants to expose; in a Containerfile this is done with the EXPOSE <PORT> instruction. Our container exposes port 8080, which we also know from the containerPort Helm template variable,
- pullNewerImage is set to false to avoid the startup overhead of checking whether a newer image is available, since we specified the latest tag.
Behavior settings¶
The most important options are commonly located in the Helm values.yaml file, so let's take a look at how we used to configure hello-microservice-one. We had these values configuring the environment variables that control the Pod's behavior, as recalled below.
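The relevant fragment of hello-microservice-one's values.yaml shown earlier:

microserviceConfiguration:
  helloInstanceNumber: "one"
  helloPeerUrl: "none"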
Now we have to pass them to the container under Onteon.
This is done simply by specifying appropriate runtimeOptions:
innerPort: '8080'
pullNewerImage: false
runtimeOptions: '--env=HELLO_INSTANCE_NUMBER=one --env=HELLO_PEER_URL=none'
Another important option specifies what the application considers a successful start. Most applications print a distinctive message after starting: Spring Boot applications log Started X, where X is a class name. The application in question logs Started Main.
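In conf.yml this is expressed with the successLine option, as used in the full configuration shown below:

successLine: 'Started Main'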
Network availability¶
With the Docker Process Provider, the exposed port is connected to a random available port picked by Onteon's algorithms; this port is referenced as ${ont_port_1}. The node URL the application runs on is known as ${address}. So the URL at which the app will be available to Onteon is: http://${address}:${ont_port_1}.
serviceRepository:
  healthCheckUrl: 'http://${address}:${ont_port_1}'
  entities:
    - entity:
        priority: 1
        port: ${ont_port_1}
        protocol:
          type: 'HTTP'
          version: '1.1'
        isExternal: false
        isInternal: true
- isExternal will make the app available on the edge load balancer, which generally means it will be exposed to the Internet or to some other (non-Onteon) service. Assuming the IP address of the ONM is 93.184.216.34 and our app name is hello-microservice-one, that communication address will be http://93.184.216.34:8020/_by_name/hello-microservice-one/,
- isInternal will make the app available on the internal load balancer, which means other applications running in the cluster will be able to interact with it via a special URL. With the same assumptions, that communication address will be http://93.184.216.34:8021/_by_name/hello-microservice-one/.
Interacting with applications¶
Based on the previous explanations, the full configuration of hello-microservice-one is as follows:
app:
  name: 'hello-microservice-one'
  version: '1.0.0'
  appType: 'standard'
  procType: 'docker'
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    version: '1.0.0'
    executable:
      start:
        imageName: 'hello-microservice:latest'
        exposedPort: '${ont_port_1}'
        innerPort: '8080'
        pullNewerImage: false
        runtimeOptions: '--env=HELLO_INSTANCE_NUMBER=one --env=HELLO_PEER_URL=none'
        successLine: 'Started Main'
  placeHolder:
    name: 'PlaceHolderManagerImpl'
    version: '1.0.0'
    filesToReplace: []
    variables: {}
serviceRepository:
  healthCheckUrl: 'http://${address}:${ont_port_1}'
  entities:
    - entity:
        priority: 1
        port: ${ont_port_1}
        protocol:
          type: 'HTTP'
          version: '1.1'
        isExternal: false
        isInternal: true
Assuming the ONM IP is 93.184.216.34, that app should be available for the cluster's internal communication at: http://93.184.216.34:8021/_by_name/hello-microservice-one/.
Now we will introduce the second microservice, hello-microservice-two, which will talk to the unexposed app:
app:
  name: 'hello-microservice-two'
  version: '1.0.0'
  appType: 'standard'
  procType: 'docker'
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    version: '1.0.0'
    executable:
      start:
        imageName: 'hello-microservice:latest'
        exposedPort: '${ont_port_1}'
        innerPort: '8080'
        pullNewerImage: false
        runtimeOptions: '--env=HELLO_INSTANCE_NUMBER=two --env=HELLO_PEER_URL=http://${address}:8021/_by_name/hello-microservice-one'
        successLine: 'Started Main'
  placeHolder:
    name: 'PlaceHolderManagerImpl'
    version: '1.0.0'
    filesToReplace: []
    variables: {}
serviceRepository:
  healthCheckUrl: 'http://${address}:${ont_port_1}'
  entities:
    - entity:
        priority: 1
        port: ${ont_port_1}
        protocol:
          type: 'HTTP'
          version: '1.1'
        isExternal: true
        isInternal: true
- HELLO_PEER_URL is the URL that will route through the internal load balancer to hello-microservice-one,
- isExternal is enabled to make the app available outside the cluster,
- isInternal is enabled to allow querying hello-microservice-one.
Creating application archives¶
The hello-microservice-one configuration should be written to the file hello-microservice-one/conf/conf.yml and the hello-microservice-two configuration to the file hello-microservice-two/conf/conf.yml.
Old versions of Onteon always require a bin directory inside the application archive and will fail to start if that directory is missing. To prevent compatibility issues it is good to always create that directory with an empty .keep file.
mkdir -p hello-microservice-one/bin
touch hello-microservice-one/bin/.keep
mkdir -p hello-microservice-two/bin
touch hello-microservice-two/bin/.keep
The above actions should result in the following directory structure:
hello-microservice-distribution/
    hello-microservice-one-distribution.yml
    hello-microservice-two-distribution.yml
    hello-microservice-one/
        bin/
            .keep
        conf/
            conf.yml
    hello-microservice-two/
        bin/
            .keep
        conf/
            conf.yml
To prepare archives for upload:
cd hello-microservice-one
mkdir -p bin
touch bin/.keep
tar cfz hello-microservice-one-1.0.0.tar.gz bin conf
cd ..
cd hello-microservice-two
mkdir -p bin
touch bin/.keep
tar cfz hello-microservice-two-1.0.0.tar.gz bin conf
Distribution configuration¶
A distribution is, roughly, the Helm chart of a single application. Distributions control how an application is scaled across the available nodes.
File hello-microservice-one-distribution.yml:
application: hello-microservice-one:1.0.0
numberOfInstances: 2
type: total
scripts:
  checkIfNodeCanAcceptNewApplicationInstance: defaultAvailableNodeOnlyCINCANAIV1
  selectNodeForNewApplicationInstance: defaultApplicationInstancesCountOnlySNFNAIV1
  selectApplicationInstanceToRemove: defaultApplicationInstancesCountOnlySAITRV1
File hello-microservice-two-distribution.yml:
application: hello-microservice-two:1.0.0
numberOfInstances: 2
type: total
scripts:
  checkIfNodeCanAcceptNewApplicationInstance: defaultAvailableNodeOnlyCINCANAIV1
  selectNodeForNewApplicationInstance: defaultApplicationInstancesCountOnlySNFNAIV1
  selectApplicationInstanceToRemove: defaultApplicationInstancesCountOnlySAITRV1
Uploading to the cluster¶
See available cluster nodes:
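The original command is not preserved in this document; following the naming pattern of the onteoncli subcommands below, it is presumably something like the following (verify against the Onteon CLI documentation):

onteoncli node list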
Upload application archives:
onteoncli application-registry upload hello-microservice-one/hello-microservice-one-1.0.0.tar.gz
onteoncli application-registry upload hello-microservice-two/hello-microservice-two-1.0.0.tar.gz
onteoncli application list
Load applications' distribution settings:
onteoncli distribution create-from-file hello-microservice-one-distribution.yml
onteoncli distribution create-from-file hello-microservice-two-distribution.yml
onteoncli distribution list
If an application fails to start, its distribution will have the status waiting-to-create; this is because Onteon will continuously attempt to bring up the application instances and keep failing (when the errors are unrecoverable). To catch such problems early, it is good to check the details of instantiating a given application by manually creating its application-instance first.
Create a hello-microservice-one instance on a selected cluster node:
Monitor all application instances:
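The exact command is not preserved here either; by analogy with application list and distribution list, monitoring presumably looks something like the following (verify against the Onteon CLI documentation):

onteoncli application-instance list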
Tests¶
Test if hello-microservice-two is reachable on the edge:
curl http://localhost:8020/_by_name/hello-microservice-two/api/hello
Test if the internal hello-microservice-one is reachable:
curl http://localhost:8021/_by_name/hello-microservice-one/api/hello
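Assuming both applications started correctly, the outputs follow from the controller code shown earlier: the first command returns the peer response followed by the local message,

Hello from microservice instance #1!
Hello from microservice instance #2!

while the second returns only Hello from microservice instance #1!.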
Takeaways¶
Key differences¶
The key differences between Onteon and Kubernetes in the context of a cluster are:
- no DNS attached to applications (containers); communication between applications is achieved via the internal or external load balancer,
- greater reliance on native services --- it is not common to create specialized shared-service apps for a cluster, such as Redis, PostgreSQL, etc.,
- configuration for an application is delivered alongside the application in a .tar.gz file (a tar archive compressed with the standard gzip algorithm); in the case of containers, only the configuration is included in the tar archive.
Onteon configuration for Docker containers¶
Image requirements¶
Registry¶
Currently Onteon does not host its own registry, so registry access has to be configured on the individual cluster nodes (in order to pull images).
Docker images can also be pre-loaded onto the machine running the Onteon Node Manager. In that case it is advised to set pullNewerImage to false, since pulling is probably impossible anyway.
Health-check¶
The image must expose a health-check URL; a GET request to this URL is performed to check whether the application started correctly (in addition to successLine).
Existing Docker applications rarely have a dedicated "alive" endpoint, so any other safe endpoint can be used, for example:
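A sketch, assuming the application exposes a cheap, side-effect-free GET endpoint at /alive (the same path reappears in the port-rewriting example below):

serviceRepository:
  healthCheckUrl: 'http://${address}:${ont_port_1}/alive'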
Image specification¶
Onteon uses the processProvider
plugin called DockerOsProcessProviderImpl
to handle Docker images.
Assuming the image called myapp
with the tag 1.0.0
is used,
then the following is required:
app:
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    version: '1.0.0'
    executable:
      start:
        imageName: 'myapp:1.0.0'
        exposedPort: '${ont_port_1}'
Port rewriting¶
While working with onteon each app is assigned a random port from a special
range.
To configure those ports variables beginning with ont_port_
prefix are used.
For example if a container exposes the port 8000
, then to rewrite it onto
the Onteon cluster following configuration option can be used:
app:
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    executable:
      start:
        exposedPort: '${ont_port_1}'
        innerPort: '8000'
serviceRepository:
  healthCheckUrl: 'http://${address}:${ont_port_1}/alive'
  entities:
    - entity:
        port: ${ont_port_1}
App tar creation¶
Since with Docker we operate on containers and not native applications, the only required file in the app tar is conf.yml.
Assuming the target app name is myapp-1.0.0 and conf.yml is in the conf subdirectory of the current path, execute the following to create the tar archive:
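A sketch following the archive layout used earlier in this tutorial; the bin directory with an empty .keep file is included for compatibility with old Onteon versions:

mkdir -p bin
touch bin/.keep
tar cfz myapp-1.0.0.tar.gz bin conf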
Exposing to the outside world¶
With Kubernetes, there are a few tactics for exposing applications to the outside world, for example Ingress or NodePort. In the case of Onteon, all access is handled by the internal and edge load balancers.
To access application instances through the load balancers, use these URLs:
http://<host>:<load-balancer-port>/_by_name/<app-name>/
http://<host>:<load-balancer-port>/_by_name_and_version/<app-name>/<app-version>/
Where:
- host --- either a name or an IP of the machine running the cluster,
- load-balancer-port --- either:
  - 8021 for communication via the internal load balancer, used only for communication between application instances without exposing them to the outside world,
  - 8020 for communication via the external load balancer, also called the edge balancer,
- app-name --- the application name defined inside the conf.yml file,
- app-version --- the application version defined inside the conf.yml file.
If many application instances are running, /_by_name/<app-name>/ will route to a random application instance.
Assuming a server runs on the public IP 93.184.216.34 and the application example-application runs on it, clients can query the edge load balancer at the endpoint http://93.184.216.34:8020/_by_name/example-application/.
Passing in options to containers¶
If a container needs to have special configuration environment variables
passed to it, then this can be achieved with adding arguments to
runtimeOptions
variable inside processProvide
.
app:
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    executable:
      start:
        runtimeOptions: -e "HOST_SERVICE='http://localhost:8021/_by_name/host-service/'"
Limiting CPU and RAM usage¶
CPU and memory limists can be set by passing appropriate runtimeOptions
.
For example to limit CPU usage to two cores and maximum used memory to 1GB:
app:
  processProvider:
    name: 'DockerOsProcessProviderImpl'
    executable:
      start:
        runtimeOptions: '--cpus=2 --memory=1g'
Onteon distribution instead of Helm¶
Helm provides a way to configure not only individual applications but also whole stacks.
Onteon has it's own way of dealing with the problem of defining whole stack via the "distribution" configuration file.
Firewall¶
Any firewall solution can be used on machines running the Onteon software. In this tutorial Firewalld is assumed, so the firewall-cmd calls below will have to be adapted to other firewall solutions.
Port 8050 is used for health checks and special Onteon communication between the OCC and ONMs. It should always be reachable from the cluster's IPs and the OCC.
Assuming the OCC's IP is 93.184.216.34, we can issue this firewall-cmd call on one of the Onteon nodes:
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="93.184.216.34/20" port protocol="tcp" port="8050" accept'
It is recommended to close off port 8021 and open it only for the cluster's own IPs, because it should be used only for in-cluster and inter-Onteon-cluster application communication.
Again assuming the OCC's IP is 93.184.216.34, we can issue this firewall-cmd call on one of the Onteon nodes:
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="93.184.216.34/20" port protocol="tcp" port="8021" accept'
It is also recommended to always keep port 8020 open, unless there is a non-Onteon proxy in front of the Onteon node running the external load balancer.
System Docker permissions¶
If Onteon uses the system Docker installation, there may be permission issues if Docker support was not explicitly specified during the Onteon Node Manager installation.
Add the onteon user to the docker group:
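A sketch using standard Linux tooling (run as root; docker is the group created by a default Docker installation):

usermod -aG docker onteon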
Feedback¶
For feedback, questions about the tutorial, or commercial support, please contact Onteon Tech via e-mail: contact@onteon.tech.