Installation Instructions

Prerequisites

1. Set up a Kubernetes or OpenShift cluster

Supported version of OpenShift: 4.7.

Supported version of Kubernetes: 1.17 and later.

We recommend AWS (EKS) or Google Cloud (GKE), but you can also install it on a standalone cluster.

Define your cluster size considering the following minimum requirements and your business needs:

  1. Minimum requirements for the Catalyst Blockchain Manager Hyperledger Fabric service for one organization — 1 instance with:

     • 2 CPU cores

     • 4 GB RAM

     • 10 GB disk space

  2. Each node (CA, orderer, or peer) that is deployed consumes additional resources. Minimum requirements for one node:

     Node    | CPU | Memory, Mi | Storage, Gi
     --------|-----|------------|------------
     CA      | 0.1 | 128        | 1
     Peer    | 0.1 | 128        | 1
     Orderer | 0.1 | 128        | 1

     When deciding on the size of the cluster, consider the expected load on the nodes and increase these values accordingly.

  3. Each chaincode installed on a peer runs as a separate pod and consumes additional resources (CPU and RAM).
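As a quick illustration, the minimums above can be rolled into a back-of-the-envelope sizing calculation. The topology below (1 CA, 2 peers, 3 orderers) is a made-up example; substitute your own counts.

```shell
# Rough sizing from the stated minimums: a 2-CPU/4GB/10GB base instance
# plus 0.1 CPU / 128 Mi / 1 Gi per node. Example topology only.
cas=1; peers=2; orderers=3
nodes=$((cas + peers + orderers))
cpu_m=$((2000 + nodes * 100))     # CPU in millicores
mem_mi=$((4096 + nodes * 128))    # memory in Mi
disk_gi=$((10 + nodes * 1))       # storage in Gi
echo "CPU: ${cpu_m}m  RAM: ${mem_mi}Mi  Disk: ${disk_gi}Gi"
```

Remember that chaincode pods add further CPU and RAM on top of this estimate.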

2. Install Helm to your workstation

Installation manuals: helm.sh/docs/intro/install/

No customization is needed.

Supported version of Helm: 3.*.
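A quick way to confirm a supported 3.* release is installed. The fallback string is only there so the snippet runs even where helm is absent; normally the real output of `helm version --short` is checked.

```shell
# Check that the installed Helm is a 3.x release (any 3.* is supported).
# Falls back to an example version string when helm is not on PATH.
ver="$(helm version --short 2>/dev/null || true)"
ver="${ver:-v3.14.0+gexample}"   # illustrative fallback
case "$ver" in
  v3.*) echo "supported Helm version: $ver" ;;
  *)    echo "unsupported Helm version: $ver" >&2 ;;
esac
```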

3. Install Traefik ingress

The ingress controller is needed for traffic routing to expose nodes (peer, CA, orderer). The Catalyst Blockchain Manager Hyperledger Fabric service creates a CRD resource (IngressRouteTCP when using Traefik, or Route when using OpenShift) that is automatically created and deleted along with each node.

No customization is needed; the default port (:443) for HTTPS traffic will be used.

We recommend installing Traefik in a namespace separate from the application (creating a namespace for the Catalyst Blockchain Manager Hyperledger Fabric service is described in step 6).

Supported version of Traefik: 2.3.

In case of using OpenShift, skip this step and specify it in the Helm chart values later (Helm chart values are described in the Setup section), because OpenShift has a built-in ingress controller.

4. Install cert-manager to create TLS certificate

A TLS certificate is needed for secured communication between a user and the Catalyst Blockchain Manager Hyperledger Fabric service components.

We recommend using the latest release of the official Helm chart.

You can skip this step and instead specify your TLS certificate and key as a Kubernetes secret in the Helm chart values later (Helm chart values are described in the Setup section). You can find the manual on how to create a Kubernetes secret here: kubernetes.io/docs/concepts/configuration/secret/#tls-secrets
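If you bring your own certificate, the TLS Secret can be prepared as a plain manifest. A hedged sketch follows: the placeholder files stand in for your real certificate and key, and the secret name is an example.

```shell
# Build a kubernetes.io/tls Secret manifest from a cert/key pair.
# tls.crt / tls.key are placeholders created here only so the snippet
# is self-contained; use your real certificate and key files.
printf 'placeholder-cert' > tls.crt
printf 'placeholder-key'  > tls.key
cat > tls-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: fabric-console-tls   # example name; reference it in the Helm values
type: kubernetes.io/tls
data:
  tls.crt: $(base64 < tls.crt | tr -d '\n')
  tls.key: $(base64 < tls.key | tr -d '\n')
EOF
echo "wrote tls-secret.yaml"
```

Apply it with `kubectl apply -f tls-secret.yaml -n ${ns_name}` and point the TLS secret name in the Helm chart values at it.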

5. Create an A-record in a zone in your domain’s DNS management panel and assign it to the load balancer created upon Traefik or OpenShift installation

The Catalyst Blockchain Manager Hyperledger Fabric service needs a wildcard record *.<domain> to expose nodes. All created nodes (peers, orderers, CAs) will have a <NodeName>.<domainName> address.

For example, if you are using AWS, follow these steps:

  1. Go to the Route53 service.

  2. Create a new domain or choose the existing domain.

  3. Create an A record.

  4. Switch “alias” to ON.

  5. In the “Route traffic to” field select “Alias to application and classic load balancer.”

  6. Select your region (where the cluster is installed).

  7. Select an ELB balancer from the drop-down list.*

*Choose the ELB balancer that was automatically configured upon the Traefik chart installation as described in step 3 (or upon OpenShift installation in case of using OpenShift). You can check the ELB with the following command:

kubectl get svc -n ${ingress-namespace}

where ${ingress-namespace} is the name of the namespace where the ingress was installed. The ELB is displayed in the EXTERNAL-IP field.

6. Create a namespace for the Catalyst Blockchain Manager Hyperledger Fabric service application

kubectl create ns ${ns_name}

where ${ns_name} is the name of the namespace (any name can be used).

6.1 Get the credentials to the Helm repository in the JFrog Artifactory from the IntellectEU admin team, or generate them under your JFrog account

6.2 Add the repo to Helm with the username and password provided:

helm repo add catbp https://intellecteu.jfrog.io/artifactory/catbp-helm --username ${ARTIFACTORY_USERNAME} --password ${ARTIFACTORY_PASSWORD}

As a result: "catbp" has been added to your repositories

7. Create an ImagePullSecret to access the Catalyst Blockchain Manager Hyperledger Fabric service deployable images

For example, create this Secret, naming it intellecteu-jfrog-access:

kubectl create secret docker-registry intellecteu-jfrog-access --docker-server=intellecteu-catbp-docker.jfrog.io --docker-username=${your-name} --docker-password=${your-password} --docker-email=${your-email} -n ${ns_name}

where:

  • ${your-name} — your Docker username.

  • ${your-password} — your Docker password.

  • ${your-email} — your Docker email.

  • ${ns_name} — the namespace created for the Catalyst Blockchain Manager Hyperledger Fabric service in the previous step.

8. Deploy a message broker

A message broker is needed by the Catalyst Blockchain Manager Hyperledger Fabric service to schedule commands, emit events, and control workflows.

Currently, only RabbitMQ is supported. Version: 3.7 and later.

No specific configurations are needed. You can check the official production checklist: www.rabbitmq.com/production-checklist.html

We recommend 1GB RAM as a minimum setup.

In case you want to use a readiness check and use a private repository for the image, create a “secret” with your credentials in Kubernetes/OpenShift, to be specified later in the Helm chart upon Catalyst Blockchain Manager installation. Please refer to the official Kubernetes documentation: kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Helm chart configuration instructions can be found here.
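The chart's readiness check is essentially a TCP probe against the broker; you can run the equivalent by hand. The host and port below are the example values used later in the chart, and bash's `/dev/tcp` replaces `nc` so no extra tooling is needed.

```shell
# Manual equivalent of the chart's nc-style readiness probe.
# Example endpoint; replace with your RabbitMQ service address.
host="rabbitmq.rabbitmq"; port=5672
if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
  status="reachable"
else
  status="not reachable"
fi
echo "${host}:${port} is ${status}"
```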

9. Deploy a database

A database is required by the Catalyst Blockchain Manager Hyperledger Fabric service to support internal architecture for workflows as well as store users' action logs.

No sensitive data is stored in the database.

Catalyst Blockchain Manager supports PostgreSQL and MySQL. You can use either.

Supported version of PostgreSQL: 12.8 and later.

Supported version of MySQL: 8.0.21 and later.

No specific configurations are needed. You can follow the official manuals.

We recommend 1GB RAM as a minimum setup.

In case you want to use a readiness check and use a private repository for the image, create a “secret” with your credentials in Kubernetes/OpenShift, to be specified later in the Helm chart upon Catalyst Blockchain Manager installation. Please refer to the official Kubernetes documentation: kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/

Helm chart configuration instructions can be found here.

10. Setup secret storage (optional)

A digital identity is represented as a private key and an x509 certificate. All identities enrolled through Catalyst Blockchain Manager are stored in secret storage. The underlying providers of the storage are Kubernetes Secrets and Hashicorp Vault. Kubernetes is enabled by default, meaning that for each enrolled identity there is a corresponding Kubernetes secret. If you want to proceed with Kubernetes secrets, you can skip this section and use the default secret storage configuration in the Helm chart values. The other option allows users to store generated identities in Hashicorp Vault, thus having more control over them, better encryption, backups, and other benefits. If you want to use Hashicorp Vault instead of Kubernetes secrets, please check the prerequisites here.

Operator:

While the mounting process of a Kubernetes secret is straightforward, it is not the same for a Hashicorp Vault secret. The Catalyst Blockchain Manager operator is responsible for that. All pods that require secrets from Hashicorp Vault have an additional init container, vault-env, which is based on hashicorp/consul-template. The operator creates a temporary short-lived token and a config for consul-template. The latter authenticates using the token, fetches the secret, and places it into a volume that is shared with the main container.

API:

Each action with the Hyperledger Fabric SDK requires an identity, and all identities are stored in Hashicorp Vault. This can affect the performance of the API, which is why a caching mechanism was introduced. Whenever an identity is loaded from Hashicorp Vault, it is placed into the cache for a TTL. The default TTL is 10 seconds.

As a result, you will get:

  1. Kubernetes (or Openshift) cluster deployed.

  2. Helm installed to your workstation.

  3. Traefik ingress installed to your Kubernetes cluster (skipped in case of using OpenShift, which has a built-in ingress controller).

  4. Cert-manager installed to your cluster or a TLS certificate prepared.

  5. A-record created, for example, in your account on AWS or Google Cloud.

  6. Namespace created in your cluster and Helm repository added to your workstation.

  7. Kubernetes (OpenShift) secret created in the namespace on your Kubernetes (OpenShift) cluster.

  8. A message broker (RabbitMQ) deployed.

  9. A database deployed.

  10. (optional) Hashicorp Vault installed.

Setup

Configure helm chart values

1. domainName

# -- address where the application will be hosted. All created nodes (peers, orderers, cas) will have <NodeName>.proxy.<domainName> address
domainName: ""

2. auth

You can choose one of two possible methods:

  • basicAuth

  • openID

# -- auth config
auth:
  # -- enabled auth for api/v1 endpoints
  enabled: true
  # -- available methods are: 'basic', 'openid'
  method: basic
  # -- BasicAuth
  basic:
    ## -- BasicAuth username
    username: ""
    ## -- BasicAuth password
    password: ""
    ## -- Or specify secure credentials using Kubernetes secret
    # -- Create a new Secret using these keys:
    # username
    # password
    authSecret: ""

  # -- OpenID authorization scheme. Only public access type is supported.
  openid:
    ## --OpenID provider endpoint for obtaining access token
    url: ""
    ## -- OpenID configuration is a Well-known URI Discovery Mechanism
    wellKnownURL: ""
    ## - OpenID client ID
    clientID: ""
    ## -- Enable role based auth for openId client
    roleBasedAuthEnabled: false
    # # - OpenID client secret
    # clientSecret: ""
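For basicAuth, instead of putting plain-text credentials in the values file you can point `auth.basic.authSecret` at a Secret carrying the `username` and `password` keys the chart expects. A minimal sketch of such a manifest; the secret name and credential values are examples.

```shell
# Generate a Secret manifest with the `username` / `password` keys
# expected by auth.basic.authSecret. Example credentials only.
user="admin"; pass="change-me"
cat > basic-auth-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: fabric-console-basic-auth   # set auth.basic.authSecret to this name
type: Opaque
data:
  username: $(printf '%s' "$user" | base64)
  password: $(printf '%s' "$pass" | base64)
EOF
echo "wrote basic-auth-secret.yaml"
```

The `amqp.credentialsSecret` and `database.credentialsSecret` values described below expect a Secret with the same `username`/`password` key shape.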

3. ingressConfig

# -- Ingress for any ingress controller.
ingressConfig:
  provider:
    # -- Currently supported ingress controllers: traefik, openshift, istio
    name: traefik #openshift, istio
    traefik:
      ingressClass: ""
    traefikCRD:
      tlsStore:
        enabled: false
        name: default
    openshift:
    istio:
      # -- Istio gateway name
      gateway: ""
      # -- match port for tls config
      port: 443
  # -- specify whether to create Ingress resources for API and UI
  enabled: false
  tls:
    enabled: false
    # -- Certificate and Issuer will be created with Cert-Manager. Names will be autogenerated.
    # if `certManager.enabled` `ingressConfig.tls.secretName` will be ignored
    certManager:
      enabled: false
      email: "your-email@example.com"
      server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    # -- secret name with own tls certificate to use with ingress
    secretName: ""

4. amqp

Configure connection settings to your message broker.

# -- external RabbitMQ Message broker parameters
amqp:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- example values for rabbitmq queue. change them for your env
  host: "rabbitmq.rabbitmq"
  port: "5672"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  vhost: "test1"

Note: In case of using a private repository specify the secret you created before in the api.imagePullSecrets section:

api:
  imagePullSecrets:
    - name: mysecret1 # for registry with api images
    - name: mysecret2 # for registry with busybox images

5. database

Configure connection settings to your database.

# -- external database parameters
database:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- database type. `postgres` or `mysql` can be specified here
  type: postgres
  # -- example values for postgres database. change them for your env
  host: "postgresql.postgresql"
  port: "5432"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  dbname: "test1"
  # -- ignore password and use AWS IAM authentication into the database
  authAws: false

Note: In case of using a private repository, specify the secret you created before in the api.imagePullSecrets section:

api:
  imagePullSecrets:
    - name: mysecret1 # for registry with api images
    - name: mysecret2 # for registry with busybox images

6. identityStore

(in case of using Hashicorp Vault).

# by default it's k8s
identityStore: vault
vault:
  # enable usage of vault client
  enabled: true
  # AppRole id
  roleId: <role_id>
  # AppRole secret or wrapped token with secret
  secretId: <secret_id>
  # Vault address
  address: https://vault.address.io
  # Prefix for all secrets
  pathPrefix: secret/apps/fabric/org1
  # Signals to client that secretId is a wrapped token
  withWrappingToken: false
  # vault-env init container config
  vaultenv:
    # docker image for vault-env init container, default value is hashicorp/consul-template
    image:
  tls:
    # force to skip TLS verification at handshake
    skipVerify: true

You can configure other helm chart values if needed.

The full list of the Helm chart values:

## -- Declare variables to be passed into your templates.

# -- address where application will be hosted. All created nodes (peers, orderers, cas) will have <NodeName>.<domainName> address
domainName: ""
# -- available envs: prod, staging, testing, dev. For customer usage suggested only 'prod'
env: prod # use `testing` for test env
logs:
  level: info
# -- auth config
auth:
  # -- enabled auth for api/v1 endpoints
  enabled: true
  # -- available methods are: `basic`, `openid`
  method: basic
  # -- BasicAuth
  basic:
    ## -- BasicAuth username
    username: ""
    ## -- BasicAuth password
    password: ""
    ## -- Or specify secure credentials using Kubernetes secret
    # -- Create a new Secret using these keys:
    # username
    # password
    authSecret: ""
  # -- OpenID authorization mechanism
  openid:
    ## -- OpenID provider confidential client type
    confidential: false
    ## --OpenID provider endpoint for obtaining access token
    url: ""
    ## -- OpenID configuration is a Well-known URI Discovery Mechanism
    wellKnownURL: ""
    ## - OpenID client ID
    clientID: ""
    ## - OpenID client Secret
    clientSecret: ""
    ## -- Enable role based auth for openId client
    roleBasedAuthEnabled: false
    scope: openid
    # # - OpenID client secret
    # clientSecret: ""
## -- Identity Store provider defines where to store digital identities of an organization. Supported providers are: k8s(default), vault.
identityStore: k8s
## -- Hashicorp Vault client configuration
vault:
  ## -- Enable usage of Vault client
  enabled: false
  ## -- AppRole ID
  roleId: <approle roleID>
  ## -- AppRole Secret
  secretId: <approle secretID>
  ## - Flag that tells secretId is a wrapped token
  withWrappingToken: false
  ## -- address https://vault.example.com
  address: https://vault.example.com
  ## -- path prefix for secrets
  pathPrefix: secrets/fabric-console
  vaultenv:
    # docker image for vault-env init container, default value is hashicorp/consul-template
    image:
  ## -- TLS configuration
  tls:
    ## -- Do not verify certificate presented by Vault server
    skipVerify: false
    ## - Secret name with CA trust chain
    certsSecretName: vault-tls

## -- Trusted cert pool enforces the application to trust specified certificates
trustedCertPool:
  ## -- secret will be mounted as folder. All certs from inside will be added to /etc/ssl/certs
  secret:
    ## -- secret name
    name:
    keys: []
# -- Whether to parse and send logs to centralised storage
# FluentD Output Configuration. Fluentd aggregates and parses logs
# FluentD is a part of Logging Operator. CRs `Output` and `Flow`s will be created
logOutput:
  # -- This section defines Loki specific configuration
  loki:
    enabled: false
    # -- url of loki instance
    url: http://loki.logging.svc.cluster.local:3100
    # -- labels to set on log streams
    # format `label_name`: `log_field_name`
    labels:
      namespace: namespace
      app_name: app_name
  # -- This section defines logz.io specific configuration
  logzIo:
    enabled: false
  # -- This section defines elasticSearch specific configuration
  elasticSearch:
    enabled: false
    # -- The hostname of your Elasticsearch node
    host: ""
    # -- The port number of your Elasticsearch node
    port: 443
    # -- The index name to write events
    index_name: ""
    # -- Data stream configuration
    data_stream:
      enabled: false
      name: ""
      data_stream_template_name: ""
    # -- The login username to connect to the Elasticsearch node
    user: ""
    # -- Specify secure password with Kubernetes secret
    secret:
      create: true
      password: ""
      annotations: {}
# -- message bus configuration
messageBus:
  queue:
    name: message_bus
    durable: false
    type: classic
  topic:
    name: message_bus_exchange
    durable: false
# -- this module enabled integration with prometheus-operator. Fetches metrics from all the peers, orderers and CAs in the system
monitoring:
  # -- specify whether to create monitoring resources
  # prometheus operator and grafana need to be installed beforehand
  enabled: false
  # -- configuration for ServiceMonitor resource
  serviceMonitor:
    enabled: false
    # -- how often to pull metrics from resources
    interval: 15s
    # -- HTTP path to scrape for metrics
    path: /metrics
    # -- RelabelConfigs to apply to samples before scraping
    relabelings: []
    # -- MetricRelabelConfigs to apply to samples before ingestion
    metricRelabelings: []
  grafana:
    # -- grafana default admin username and email. Grafana is authenticated through default API authentication automatically.
    user: admin
    email: admin@domain.com
    # -- grafana default path to dashboard
    dashboardPath: "/grafana/d/pUnN6JgWz/hyperledger-fabric-monitoring?orgId=1&refresh=30s&kiosk&var-namespace="
    # -- grafana service and port for ingress
    service:
      name: grafana
      namespace: monitoring
      port: 80
# -- Ingress for any ingress controller.
ingressConfig:
  provider:
    # -- Currently supported ingress controllers: traefik, openshift, istio
    name: traefik #openshift, istio
    traefik:
      ingressClass: ""
    traefikCRD:
      tlsStore:
        enabled: false
        name: default
    openshift:
    istio:
      # -- Istio gateway name
      gateway: ""
      # -- match port for tls config
      port: 443
  # -- specify whether to create Ingress resources for API and UI
  enabled: false
  tls:
    enabled: false
    # -- Certificate and Issuer will be created with Cert-Manager. Names will be autogenerated.
    # if `certManager.enabled` `ingressConfig.tls.secretName` will be ignored
    certManager:
      enabled: false
      email: "services.cat-bp@intellecteu.com"
      server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    # -- secret name with own tls certificate to use with ingress
    secretName: ""

# -- Configuration options to control how public and private docker repositories handled by api and operator
imageVerification:
  # -- Do not verify image existence in docker registry for Public type
  disabled: false

rbac:
  # -- Whether to create RBAC Resourses (Role, SA, RoleBinding)
  enabled: true
  # -- Service Account Name to use for api, ui, operator, consumer
  serviceAccountName: fabric-console
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: false
# operator component values
operator:
  # -- number of operator pods to run
  replicaCount: 1
  # -- operator image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: "2.7"
  # -- operator image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  labels: {}
  # -- image for init MSP container used in peers and orderer. Default value in application is intellecteu/msp-init:1.0
  mspInitContainerImage:
  # -- Configs for controlled CRDS
  crd:
    serviceAccount:
    # -- PodSecurityContext that will be applied into all pods created from CRDs
    podSecurityContext:
      # runAsNonRoot: true
      # runAsUser: 4444
      # runAsGroup: 5555
      # fsGroup: 4444
    containerSecurityContext:
      # readOnlyRootFilesystem: true
    # -- FabricMetricsProvider is the metrics provider for orderers and peers
    # available options for metrics providers are: prometheus, statsd
    fabricMetricsProvider:
  # -- annotations for operator pods
  podAnnotations: {}
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 4444
    # runAsGroup: 5555
    # fsGroup: 4444
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "300Mi"
    requests:
      cpu: "100m"
      memory: "100Mi"
  # -- Specify Node Labels to place operator pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
  # -- metrics server and Prometheus Operator configuration
  metrics:
    # -- should metrics server be enabled
    enabled: false
    # -- service port for metrics server
    servicePort: 8082
    # -- container port for metrics server
    containerPort: 8082
    # -- HTTP path to scrape for metrics.
    path: /metrics
    serviceMonitor:
      # -- should ServiceMonitor be created
      enabled: false
      # -- how often to pull metrics from resources
      interval: 30s
      # -- RelabelConfigs to apply to samples before scraping
      relabelings: []
      # -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
  # -- Specify Peer Affinity Spec
  peerAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - peer
    #         topologyKey: "kubernetes.io/zone"
  # -- Specify Orderer Affinity Spec
  ordererAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - orderer
    #         topologyKey: "kubernetes.io/zone"
  # -- Specify CA Affinity Spec
  caAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - fabric-ca
    #         topologyKey: "kubernetes.io/zone"
## api component values
api:
  # -- gateway configuration for channel subscription gateway
  gateway:
    events:
      heartbeat:
        enabled: false
        interval: 60 # interval in seconds
  # -- api autoscaling settings
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  # -- number of api pods to run
  replicaCount: 1
  # -- api image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  # -- api image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  labels: {}
  # -- api service port and name
  service:
    port: 8000
    portName: http
  # -- annotations for api pods
  podAnnotations: {}
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 4444
    # runAsGroup: 5555
    # fsGroup: 4444
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "500Mi"
    requests:
      cpu: "100m"
      memory: "200Mi"
  # -- Specify Node Labels to place api pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
  # -- metrics server and Prometheus Operator configuration
  metrics:
    # -- should metrics server be enabled
    enabled: false
    # -- service port for metrics server
    servicePort: 8082
    # -- container port for metrics server
    containerPort: 8082
    # -- HTTP path to scrape for metrics.
    path: /metrics
    serviceMonitor:
      # -- should ServiceMonitor be created
      enabled: false
      # -- how often to pull metrics from resources
      interval: 30s
      # -- RelabelConfigs to apply to samples before scraping
      relabelings: []
      # -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
  # -- Specify Peer Affinity Spec
  peerAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - peer
    #         topologyKey: "kubernetes.io/zone"
  # -- Specify Orderer Affinity Spec
  ordererAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - orderer
    #         topologyKey: "kubernetes.io/zone"
  # -- Specify CA Affinity Spec
  caAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - fabric-ca
    #         topologyKey: "kubernetes.io/zone"
ui:
  # -- number of ui pods to run
  replicaCount: 1
  # -- ui image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console-ui
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  # -- api image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  # -- ui service port and name
  service:
    port: 3001
    portName: http
  # -- annotations for consumer pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 101
    # runAsGroup: 101
    # fsGroup: 101
  # -- security context on a container level
  securityContext: {}
  labels: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "100m"
      memory: "100Mi"
    requests:
      cpu: "30m"
      memory: "50Mi"
  # -- Specify Node Labels to place ui pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
# -- external RabbitMQ Message broker parameters
amqp:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- example values for rabbitmq queue. change them for your env
  host: "rabbitmq.rabbitmq"
  tls: false
  port: "5672"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  vhost: "test1"
# -- external database parameters
database:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- database type. `postgres` or `mysql` can be specified here
  type: postgres
  # -- example values for postgres database. change them for your env
  host: "postgresql.postgresql"
  tls: false
  port: "5432"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  dbname: "test1"
  # -- Specify postgresSchema only if using postgres and not using the default schema
  postgresSchema:
  # -- ignore password and use AWS IAM authentication into the database
  authAws: false

# -- enables the use of aws sdk
awsIAM:
  enabled: false

# -- license config
license: ""

Install the Catalyst Blockchain Manager Hyperledger Fabric service

Use the following command:

helm upgrade --install ${fabric_release_name} catbp/fabric-console --values values.yaml -n ${ns_name}

where:

  • ${fabric_release_name} — name of the Catalyst Blockchain Manager Hyperledger Fabric service release. You can choose any name/alias. It is used to address the chart when updating or deleting it.

  • catbp/fabric-console — the chart name, where “catbp” is the repository name and “fabric-console” is the chart name.

  • values.yaml — a values file.

  • ${ns_name} — name of the namespace you’ve created before.
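With hypothetical placeholder values substituted, the invocation looks like this (the release and namespace names are examples, not required values):

```shell
# Assemble the install command with example release/namespace names.
fabric_release_name="fabric-console"
ns_name="catbp"
cmd="helm upgrade --install ${fabric_release_name} catbp/fabric-console --values values.yaml -n ${ns_name}"
echo "$cmd"
```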

You can check the status of the installation by using these commands:

  • helm ls — check the “status” field of the installed chart. The status “deployed” should be shown.

  • kubectl get pods — get the status of applications separately. All pod statuses must be “running.”

  • kubectl describe pod $pod_name — get detailed information about pods.