Catalyst Blockchain Manager
Hyperledger Fabric Service 2.0.0

Installation Instructions

Prerequisites

1. Set up a Kubernetes or OpenShift cluster

Supported version of OpenShift: 4.7. Supported versions of Kubernetes: 1.17 and later. We recommend AWS (EKS) or Google Cloud (GKE), but you can install the service on a standalone cluster as well.
Define your cluster size based on the following minimum requirements and your business needs.
Minimum requirements for the Catalyst Blockchain Platform Hyperledger Fabric service for one organization — 1 instance with:
  • 2 CPU cores
  • 4GB RAM
  • 10GB disk space
Minimum requirements for one node:
Node      CPU    Memory, Mi    Storage, Gi
CA        0.1    128           1
Peer      0.1    128           1
Orderer   0.1    128           1
When deciding on the size of the cluster, consider the expected load on the nodes.
Note: Each chaincode installed on a peer runs as a separate pod and consumes additional resources (CPU and RAM).
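For example, on AWS you could provision a suitable EKS cluster with eksctl (a minimal sketch; the cluster name, region, Kubernetes version, and node sizing are placeholders to adjust to your expected load):
# Sketch: a two-node cluster of 2-vCPU/4GB instances meets the minimum above
eksctl create cluster \
  --name catbp-fabric \
  --region eu-west-1 \
  --version 1.21 \
  --nodes 2 \
  --node-type t3.medium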

2. Install Helm on your workstation

Installation manual: https://helm.sh/docs/intro/install/. No customization is needed.
Supported version of Helm: 3.*.
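For example, Helm 3 can be installed with the official installer script and verified afterwards (a sketch based on the Helm documentation):
# Sketch: download and run the official Helm 3 installer, then verify
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version   # should report v3.x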

3. Install Traefik ingress

An ingress controller is needed to route traffic and expose nodes (peer, CA, orderer). The Catalyst Blockchain Platform Hyperledger Fabric service creates a CRD resource (IngressRouteTCP when using Traefik, or Route when using OpenShift), which is automatically created and deleted along with each node.
Installation manual: https://github.com/traefik/traefik-helm-chart. No customization is needed; the default port (:443) for HTTPS traffic will be used.
Note: We recommend installing Traefik in a namespace separate from the application (creation of a namespace for the Catalyst Blockchain Platform Hyperledger Fabric service is described in step 6).
Supported version of Traefik: 2.3.
If you are using OpenShift, skip this step and specify it in the Helm chart values later (Helm chart values are described in the Setup section), because OpenShift has a built-in ingress controller.
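On Kubernetes, the installation might look like this (a sketch assuming the official Traefik chart repository; the traefik namespace name is an example):
# Sketch: install Traefik from the official chart into its own namespace
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm install traefik traefik/traefik -n traefik --create-namespace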

4. Install cert-manager to create a TLS certificate

A TLS certificate is needed for secure communication between a user and the Catalyst Blockchain Platform Hyperledger Fabric service components.
Installation manual: https://cert-manager.io/docs/installation/helm/. We recommend using the latest release of the official Helm chart.
Note: You can skip this step and instead specify your TLS certificate and key as a Kubernetes secret in the Helm chart values later (Helm chart values are described in the Setup section). A manual on how to create a Kubernetes secret is available here: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets
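For example, cert-manager can be installed like this (a sketch following the cert-manager Helm documentation; cert-manager and catbp-tls are example names):
# Sketch: install cert-manager with its CRDs from the official Jetstack chart
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager -n cert-manager --create-namespace --set installCRDs=true
# Alternative (if skipping this step): provide your own certificate as a TLS secret
kubectl create secret tls catbp-tls --cert=path/to/tls.crt --key=path/to/tls.key -n ${ns_name}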

5. Create an A record in a zone in your domain's DNS management panel and assign it to the load balancer created upon Traefik or OpenShift installation

The Catalyst Blockchain Platform Hyperledger Fabric service needs a wildcard record *.<domainName> to expose nodes. All created nodes (peers, orderers, CAs) will have a <NodeName>.<domainName> address.
For example, if you are using AWS, follow these steps:
  1. Go to the Route53 service.
  2. Create a new domain or choose an existing one.
  3. Create an A record.
  4. Switch “alias” to ON.
  5. In the “Route traffic to” field, select “Alias to application and classic load balancer.”
  6. Select your region (where the cluster is installed).
  7. Select an ELB balancer from the drop-down list.*
*Choose the ELB balancer that was automatically configured upon the Traefik chart installation described in step 3 (or upon OpenShift installation, if you are using OpenShift). You can check the ELB with the following command:
kubectl get svc -n ${ingress-namespace}
where ${ingress-namespace} — the name of the namespace where the ingress was installed. The ELB is displayed in the EXTERNAL-IP field.
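Once the record has propagated, you can verify that hostnames under the wildcard resolve to the balancer (a sketch; fabric.example.com is a placeholder domain):
# Sketch: any hostname under *.<domainName> should resolve to the ELB address
dig +short ca1.fabric.example.com
# Expected output: the DNS name (or IP addresses) of the ELB shown in EXTERNAL-IP above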

6. Create a namespace for the Catalyst Blockchain Platform Hyperledger Fabric service application

kubectl create ns ${ns_name}
where ${ns_name} — the name of the namespace (can be any name).
6.1. Get the credentials for the Helm repository in the JFrog Artifactory from the IntellectEU admin team.
6.2. Add the repo to Helm with the username and password provided:
helm repo add catbp https://intellecteu.jfrog.io/artifactory/catbp-helm --username ${ARTIFACTORY_USERNAME} --password ${ARTIFACTORY_PASSWORD}
As a result: "catbp" has been added to your repositories

7. Create a "secret" file in Kubernetes (or OpenShift) with the provided by the IntellectEU admin team username and password in the namespace you created earlier

For example, create this Secret, naming it intellecteu-jfrog-access:
kubectl create secret docker-registry intellecteu-jfrog-access --docker-server=${your-registry-server} --docker-username=${your-name} --docker-password=${your-password} --docker-email=${your-email} -n ${ns_name}
where:
  • ${your-registry-server} — your Private Docker Registry FQDN.
  • ${your-name} — your Docker username.
  • ${your-password} — your Docker password.
  • ${your-email} — your Docker email.
  • ${ns_name} — the namespace created for the Catalyst Blockchain Platform Hyperledger Fabric service in the previous step.
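To confirm the secret was created correctly, you can inspect it (a sketch following the Kubernetes documentation on registry secrets):
# Sketch: check that the secret exists and decode its registry credentials
kubectl get secret intellecteu-jfrog-access -n ${ns_name}
kubectl get secret intellecteu-jfrog-access -n ${ns_name} --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode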

As a result, you will have:

  1. Kubernetes (or OpenShift) cluster deployed.
  2. Helm installed on your workstation.
  3. Traefik ingress installed on your Kubernetes cluster (skipped if you are using OpenShift; you will specify OpenShift in the Helm chart values instead, as described in the Setup section).
  4. Cert-manager installed on your cluster, or a TLS certificate prepared.
  5. A record created, for example, in your account on AWS or Google Cloud.
  6. Namespace created in your cluster and the Helm repository added to your workstation.
  7. Kubernetes (OpenShift) secret created in the namespace on your Kubernetes (OpenShift) cluster.

Setup

Configure Helm chart values

The following values need to be configured (a combined example follows the list).
  • domainName
# -- address where the application will be hosted. All created nodes (peers, orderers, CAs) will have a <NodeName>.<domainName> address
domainName: ""
  • env
# -- available envs: prod, staging, testing, dev. For customer usage, only 'prod' is suggested
env: prod # use 'testing' for a test env
  • auth
You can choose one of two possible methods:
  • basicAuth
  • openID
# -- auth config
auth:
  # -- enabled auth for api/v1 endpoints
  enabled: true
  # -- available methods are: 'basic', 'openid'
  method: basic
  # -- BasicAuth
  basic:
    ## -- BasicAuth username
    username: ""
    ## -- BasicAuth password
    password: ""
  # -- OpenID authorization scheme. Only public access type is supported.
  openid:
    ## -- OpenID provider endpoint for obtaining access token
    url: ""
    ## -- OpenID configuration is a Well-known URI Discovery Mechanism
    wellKnownURL: ""
    ## -- OpenID client ID
    clientID: ""
  • openshiftRoute
Specify enabled: true if you are using OpenShift.
# -- Route for Openshift Controller
openshiftRoute:
  enabled: false
  # -- it requires raw certificate here
  certificate: ""
  # -- it requires raw private key here
  key: ""
  • ingressConfig
# -- IngressRoute for Traefik Ingress Controller
ingressConfig:
  # -- specify whether to create IngressRoute resource
  enabled: false
  tls:
    enabled: false
    # -- Certificate and Issuer will be created with Cert-Manager. Names will be autogenerated.
    # if `certManager.enabled` is true, `ingressConfig.tls.secretName` will be ignored
    certManager:
      enabled: false
      server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    # -- secret name with own tls certificate to use with ingress
    secretName: ""
  tlsStore:
    enabled: false
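Putting the key values together, a minimal values.yaml for a Traefik-based setup with cert-manager might look like this (a sketch; the domain and credentials are placeholders):
# Sketch: write a minimal values.yaml (placeholders: domain, username, password)
cat > values.yaml <<'EOF'
domainName: "fabric.example.com"
env: prod
auth:
  enabled: true
  method: basic
  basic:
    username: "admin"
    password: "change-me"
ingressConfig:
  enabled: true
  tls:
    enabled: true
    certManager:
      enabled: true
EOF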
You can configure other Helm chart values if needed. The full list of values is shown below:
## -- Declare variables to be passed into your templates.

# -- address where the application will be hosted. All created nodes (peers, orderers, CAs) will have a <NodeName>.<domainName> address
domainName: ""

logs:
  level: info

# -- auth config
auth:
  # -- enabled auth for api/v1 endpoints
  enabled: true
  # -- available methods are: 'basic', 'openid'
  method: basic
  # -- BasicAuth
  basic:
    ## -- BasicAuth username
    username: ""
    ## -- BasicAuth password
    password: ""
  # -- OpenID authorization scheme. Only public access type is supported.
  openid:
    ## -- OpenID provider endpoint for obtaining access token
    url: ""
    ## -- OpenID configuration is a Well-known URI Discovery Mechanism
    wellKnownURL: ""
    ## -- OpenID client ID
    clientID: ""

# -- this module enables integration with prometheus-operator. Fetches metrics from all the peers, orderers and CAs in the system
monitoring:
  # -- specify whether to create monitoring resources
  # prometheus operator and grafana need to be installed beforehand
  enabled: false
  # -- configuration for ServiceMonitor resource
  serviceMonitor:
    # -- how often to pull metrics from resources
    interval: 15s
  grafana:
    # -- grafana default admin username and email. Grafana is authenticated through default API authentication automatically.
    user: admin
    # -- grafana service and port for ingress
    service:
      name: grafana
      namespace: monitoring
      port: 80

# -- Route for Openshift Controller
openshiftRoute:
  enabled: false
  # -- it requires raw certificate here
  certificate: ""
  # -- it requires raw private key here
  key: ""

# -- IngressRoute for Traefik Ingress Controller
ingressConfig:
  # -- specify whether to create IngressRoute resource
  enabled: false
  tls:
    enabled: false
    # -- Certificate and Issuer will be created with Cert-Manager. Names will be autogenerated.
    # if `certManager.enabled` is true, `ingressConfig.tls.secretName` will be ignored
    certManager:
      enabled: false
      server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    # -- secret name with own tls certificate to use with ingress
    secretName: ""
  tlsStore:
    enabled: false

rbac:
  # -- Whether to create RBAC Resources (Role, SA, RoleBinding)
  enabled: true
  # -- Service Account Name to use for api, ui, operator, consumer
  serviceAccountName: fabric-console

# operator component values
operator:
  # -- number of operator pods to run
  replicaCount: 1
  # -- operator image settings
  image:
    repository: intellecteu-catbp-docker.jfrog.io/catbp/fabric-platform/fabric-console
    pullPolicy: Always
    tag: "v0.2.13"
  # -- operator image pull secrets
  imagePullSecrets:
    - name: intellecteu-jfrog-access
  # -- annotations for operator pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext: {}
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "300Mi"
    requests:
      cpu: "100m"
      memory: "100Mi"
  # -- Specify Node Labels to place operator pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}

## api component values
api:
  # -- api autoscaling settings
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  # -- number of api pods to run
  replicaCount: 1
  # -- api image settings
  image:
    repository: intellecteu-catbp-docker.jfrog.io/catbp/fabric-platform/fabric-console
    pullPolicy: Always
    tag: "v0.2.13"
  # -- api image pull secrets
  imagePullSecrets:
    - name: intellecteu-jfrog-access
  # -- api service port and name
  service:
    port: 8000
    portName: http
  # -- annotations for api pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext: {}
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "500Mi"
    requests:
      cpu: "100m"
      memory: "200Mi"
  # -- Specify Node Labels to place api pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}

consumer:
  # -- number of consumer pods to run
  replicaCount: 1
  # -- consumer image settings
  image:
    repository: intellecteu-catbp-docker.jfrog.io/catbp/fabric-platform/fabric-console
    pullPolicy: Always
    tag: "v0.2.13"
  # -- consumer image pull secrets
  imagePullSecrets:
    - name: intellecteu-jfrog-access
  # -- consumer service port and name
  service:
    port: 8050
    portName: http
  # -- annotations for consumer pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext: {}
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "500Mi"
    requests:
      cpu: "100m"
      memory: "200Mi"
  # -- Specify Node Labels to place consumer pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}

ui:
  # -- number of ui pods to run
  replicaCount: 1
  # -- ui image settings
  image:
    repository: intellecteu-catbp-docker.jfrog.io/catbp/fabric-platform/fabric-console-ui
    pullPolicy: Always
    tag: "v0.2.13"
  # -- ui image pull secrets
  imagePullSecrets:
    - name: intellecteu-jfrog-access
  # -- ui service port and name
  service:
    port: 3001
    portName: http
  # -- annotations for ui pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext: {}
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "100m"
      memory: "100Mi"
    requests:
      cpu: "30m"
      memory: "50Mi"
  # -- Specify Node Labels to place ui pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}

# -- Rabbitmq bitnami chart settings
# https://github.com/bitnami/charts/tree/master/bitnami/rabbitmq
rabbitmq:
  fullnameOverride: rabbitmq
  resources:
    limits:
      cpu: "500m"
      memory: "512Mi"
  image:
    tag: 3.8.8
  auth:
    username: intellecteu
    password: catbp
    erlangCookie: j3k12j#$#k1k322kkvvvv
  persistence:
    enabled: true
    size: 1Gi
    accessModes:
      - ReadWriteOnce

# -- Postgresql bitnami chart settings
# https://github.com/bitnami/charts/tree/master/bitnami/postgresql
postgresql:
  fullnameOverride: postgresql
  postgresqlPostgresPassword: LEExdBeFyH5quyaIkDt!qK8h?4LRKOjnctuJKu6AwN
  postgresqlUsername: fabric_console
  postgresqlPassword: lL6GsAnLP1XgqRGnjyI7JmBPm5IMD0vQaNwH89Sk
  postgresqlDatabase: fabric_db
  resources:
    limits:
      cpu: "300m"
      memory: "384Mi"
    requests:
      cpu: "200m"
      memory: "256Mi"
  persistence:
    enabled: true
    size: 1Gi
    accessModes:
      - ReadWriteOnce

# -- Whether to install dependent charts
dependencies:
  rabbitmq: true
  mysql: false
  postgresql: true
  rabbitmq_ext: false
  mysql_ext: false
  postgresql_ext: false

# -- Configuration for external services
#externalServices:
#  rabbitmq:
#    ipAddress: 192.168.0.1
#    port: 5672
#    username: username
#    password: password
#  mysql:
#    ipAddress: 192.168.0.2
#    port: 3306
#    username: username
#    password: password
#    database: database

# Init container
initContainer:
  checkimage: busybox
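With auth.enabled: true and method: basic, requests to the api/v1 endpoints must carry the configured credentials; for example (a sketch; the exact endpoint path is illustrative, and <domainName> must be replaced with your configured domain):
# Sketch: call an api/v1 endpoint with the configured BasicAuth credentials
curl -u "${username}:${password}" https://<domainName>/api/v1/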

Install the Catalyst Blockchain Platform Hyperledger Fabric service

Use the following command:
helm upgrade --install ${fabric_release_name} catbp/fabric-console --values values.yaml -n ${ns_name}
where:
  • ${fabric_release_name} — the name of the Catalyst Blockchain Platform Hyperledger Fabric service release. You can choose any name/alias. It is used to address the release when upgrading or deleting the Helm chart.
  • catbp/fabric-console — the chart name, where “catbp” is the repository name and “fabric-console” is the chart name.
  • values.yaml — the values file.
  • ${ns_name} — the name of the namespace you created before.
You can check the status of the installation by using these commands:
  • helm ls — check the “status” field of the installed chart. The status “deployed” should be shown.
  • kubectl get pods — get the status of each application separately. All pod statuses must be “Running.”
  • kubectl describe pod $pod_name — get detailed information about a pod.
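The release name is also what you use later to update or remove the installation (a sketch):
# Upgrade the release after changing values.yaml:
helm upgrade ${fabric_release_name} catbp/fabric-console --values values.yaml -n ${ns_name}
# Remove the release:
helm uninstall ${fabric_release_name} -n ${ns_name}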