Google Kubernetes Engine

Prerequisites

1. Set up a Kubernetes cluster

Supported version of Kubernetes: 1.17 and later.

You can use an existing cluster or create a new one using the managed Google Kubernetes Engine service.

  • Make sure the provisioner behind your default storage class supports volume expansion, so that all PVCs for Hyperledger Fabric nodes created by Catalyst Blockchain Manager can be resized when needed (see the StorageClass sketch after this list).

  • Add zone labels to Kubernetes nodes when a cluster is stretched across multiple availability zones so that a Hyperledger Fabric node can be scheduled in a specific zone.
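
A minimal sketch of a resizable storage class on GKE (the class name and disk type are illustrative; any class with allowVolumeExpansion: true works):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-ssd
provisioner: pd.csi.storage.gke.io     # GKE Persistent Disk CSI driver
parameters:
  type: pd-ssd
allowVolumeExpansion: true             # allows PVCs created from this class to be resized later
volumeBindingMode: WaitForFirstConsumer

GKE nodes already carry the topology.kubernetes.io/zone label; you can verify this with kubectl get nodes -L topology.kubernetes.io/zone.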

Define your cluster size considering the following minimum requirements and your business needs:

 1. Minimal requirements for the Catalyst Blockchain Manager Hyperledger Fabric service for one organization — 1 instance with:
    • 2 CPU cores
    • 4 GB RAM
    • 10 GB disk space
 2. Each node (CA, orderer, or peer) that will be deployed consumes additional resources. Minimal requirements for one node:

Node | CPU | Memory (Mi) | Storage (Gi)
---- | --- | ----------- | ------------
CA | 0.1 | 128 | 1
Peer | 0.1 | 128 | 1
Orderer | 0.1 | 128 | 1

When deciding on the size of the cluster, consider the expected load of the nodes and increase these values accordingly.
 3. Each chaincode runs as a separate pod and consumes additional resources (CPU and RAM).
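
If you create a new cluster, a minimal gcloud sketch (cluster name, region, node count, and machine type are assumptions — size them for your expected load):

gcloud container clusters create fabric-cluster \
  --region europe-west1 \
  --num-nodes 1 \
  --machine-type e2-standard-2

e2-standard-2 provides 2 vCPUs and 8 GB of RAM per node, which covers the minimum above with headroom for Fabric nodes and chaincode pods.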

2. Install Helm on your workstation

Installation manual: helm.sh/docs/intro/install/. No customization is needed.

Supported version of Helm: 3.*
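
For example, using the official installer script from the Helm docs:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version   # should report a v3.x client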

3. Configure ingress and DNS

Ingress controller

We recommend using the Traefik ingress controller.

Minimal supported version of Traefik: 2.3.
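
A minimal installation sketch using the official Traefik Helm chart (the namespace is an assumption; TLS passthrough itself is enabled per route on the IngressRouteTCP resources the operator creates):

helm repo add traefik https://traefik.github.io/charts
helm repo update
helm install traefik traefik/traefik -n traefik --create-namespace

On GKE this creates a Service of type LoadBalancer, which by default provisions a TCP (Level 4) load balancer.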

The ingress controller is needed for traffic routing to expose Hyperledger Fabric nodes (peer, orderer, CA) as well as the API and UI of the Catalyst Blockchain Manager Hyperledger Fabric service. All components are exposed through port 443.

  • Ingress resources for Hyperledger Fabric nodes will be provisioned automatically by the Catalyst Blockchain Manager Hyperledger Fabric service operator upon creation/deletion of a node. These nodes require TLS passthrough to be enabled because they use mutual TLS.

  • Ingress resources for the API and UI will be created as part of the Helm package during the installation process. They can be configured in the Helm values.

While Hyperledger Fabric nodes have self-signed TLS certificates managed by the Catalyst Blockchain Manager Hyperledger Fabric service operator, the API and UI require a trusted TLS certificate.

You can choose any of the following options:

  1. A single load balancer of the TCP Load Balancer (Level 4) type (cloud.google.com/load-balancing/docs/network), which does TLS passthrough. The TLS certificate is provisioned by cert-manager or provided in a Secret; Traefik is responsible for TLS termination for the API and UI.

  2. Two load balancers. The first one is an HTTPS Load Balancer (Level 7) with a Managed SSL Certificate that is responsible for TLS termination for the API and UI; the second one is a TCP Load Balancer (Level 4) with TLS passthrough. In this scenario, additional DNS configuration is needed to route API and UI traffic through the first LB and Hyperledger Fabric node traffic through the second one.

Single load balancer: NLB with TLS passthrough

No additional customization is needed.

Two load balancers: HTTPS LB Level 7 + Managed SSL Certificate and TCP LB Level 4

By default, when installing Traefik, a TCP Load Balancer (Level 4) is created through the creation of a Kubernetes Service of type: LoadBalancer. In order to create an external HTTPS LB, you would need to (steps 1 and 2 are sketched after this list):

  1. Create a Managed SSL Certificate through the UI or using a networking.gke.io/v1 ManagedCertificate

  2. Create a Service of type ClusterIP with the cloud.google.com/neg annotation, which results in the creation of a Network Endpoint Group (NEG)

  3. Create a Backend Service that points to the NEG above

  4. Create an HTTPS Load Balancer with the Managed SSL Certificate, with a Frontend on port 443 and the Backend Service above

  5. Create a Firewall rule to allow incoming traffic for the load balancer being created

  6. Configure DNS routing for the domain so traffic goes to the created load balancer
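
A minimal sketch of steps 1 and 2 (names, domain, and the Traefik selector and port are assumptions — align them with your Traefik installation):

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: fabric-console-cert        # hypothetical name
spec:
  domains:
    - example.com
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-neg                # hypothetical name
  annotations:
    # creates a Network Endpoint Group for port 443
    cloud.google.com/neg: '{"exposed_ports": {"443": {}}}'
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: traefik
  ports:
    - port: 443
      targetPort: websecure        # Traefik's HTTPS entrypoint port name (assumption)

The remaining steps (Backend Service, HTTPS Load Balancer, firewall rule) are created through the Google Cloud console or gcloud.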

Create a DNS record

DNS configuration differs depending on your choice:

Option | DNS records to be put | Comments
------ | --------------------- | --------
#1. Single load balancer | A *.example.com → TCP LB address | API, UI, and nodes all go through this single record.
#2. Two load balancers | A example.com → HTTPS LB L7 address; A *.example.com → TCP LB L4 address | The HTTPS LB L7 handles API and UI and does TLS termination using the Managed Certificate. The TCP LB L4 handles node traffic with TLS passthrough.
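
For example, with Cloud DNS (the zone name and IP addresses are placeholders):

gcloud dns record-sets create '*.example.com.' \
  --zone=example-zone --type=A --ttl=300 --rrdatas=203.0.113.20

For option #2, additionally create an A record for example.com pointing to the HTTPS LB address in the same way.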

With this configuration, each component is exposed at:

  • UI : example.com:443

  • API: example.com:443/api

  • Any HL Fabric node: <nodeName>.example.com:443

4. Create a namespace for the Catalyst Blockchain Manager Hyperledger Fabric service application

kubectl create ns ${ns_name}

where ${ns_name} is the name of the namespace (can be any).

4.1. Get the credentials for the Helm repository in the JFrog Artifactory, provided by the IntellectEU admin team.

4.2. Add the repo to Helm with the username and password provided:

helm repo add catbp https://intellecteu.jfrog.io/artifactory/catbp-helm --username ${ARTIFACTORY_USERNAME} --password ${ARTIFACTORY_PASSWORD}

As a result, you should see: "catbp" has been added to your repositories

5. Create an ImagePullSecret to access the Catalyst Blockchain Manager Hyperledger Fabric service deployable images

For example, create this Secret, naming it intellecteu-jfrog-access:

kubectl create secret docker-registry intellecteu-jfrog-access --docker-server=intellecteu-catbp-docker.jfrog.io --docker-username=${your-name} --docker-password=${your-password} --docker-email=${your-email} -n ${ns_name}

where:

  • ${your-name} — Docker username provided by IntellectEU.

  • ${your-password} — Docker password provided by IntellectEU.

  • ${your-email} — your email.

  • ${ns_name} — the namespace created for the Catalyst Blockchain Manager Hyperledger Fabric service in the previous step.

6. Deploy a message broker

A message broker is needed by the Catalyst Blockchain Manager Hyperledger Fabric service to schedule commands, emit events, and control workflows.

Currently, only RabbitMQ is supported.

Version: 3.7 and later.

Since Google Cloud does not have a native service for AMQP, we recommend using a self-deployed solution or any existing one from the Google Cloud Marketplace.

  • No specific configurations are needed.

  • 1GB RAM is recommended as a minimum setup.

  • Usually, the load on the message broker is low, so it does not require many resources. A free tier is applicable.

The Catalyst Blockchain Manager Hyperledger Fabric service requires a vhost and a user with full access to the vhost. A single queue will be created upon the Catalyst Blockchain Manager Hyperledger Fabric service startup.
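
For a self-deployed RabbitMQ, the vhost and user can be provisioned like this (names and password are placeholders):

rabbitmqctl add_vhost catbp
rabbitmqctl add_user catbp 's3cret'
rabbitmqctl set_permissions -p catbp catbp '.*' '.*' '.*'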

The default configuration comes with TLS enabled; even for a private VPC, make sure you enable the amqp.tls option in the Helm values.

Backups are not required.

7. Deploy a database

We recommend using Google Cloud SQL managed service.

  • 1 CPU and 3 GB of RAM are recommended as a minimum setup.

  • No specific configurations are needed.

A database is required by the Catalyst Blockchain Manager Hyperledger Fabric service to support the internal architecture for workflows, as well as to store user action logs.

No sensitive data is stored in the database.

The Catalyst Blockchain Manager Hyperledger Fabric service requires a database and a user with full read/write access to the database. Database tables will be provisioned in the default schema on application startup.

In this example we will use PostgreSQL. The schema is ‘public’ by default.

Run these commands to provision a database on the recently deployed server:

CREATE DATABASE "catbp-org1";
CREATE USER "catbp" WITH ENCRYPTED PASSWORD 'catbp';
GRANT ALL PRIVILEGES ON DATABASE "catbp-org1" to "catbp";
Make sure to set up automatic backups so that user action logs are not lost in case of failure.

Catalyst Blockchain Manager supports the Google Cloud SQL Proxy (cloud.google.com/sql/docs/mysql/sql-proxy) authentication mechanism for the database. It is useful when default basic auth is not suitable. To enable Cloud SQL Proxy authentication, the following values must be provided in the Helm chart.

# -- enables the use of google cloud sdk
gcIAM:
  enabled: true

database:
  # -- database type. `postgres` or `mysql` can be specified here
  type: postgres
  # -- example values for postgres database. change them for your env
  host: "postgresql.postgresql"
  tls: true
  port: "5432"
  username: "test1"
  dbname: "test1"
  # -- ignore password and use Cloud SQL Proxy authentication to the database
  sqlProxy: true

It is expected that the Catalyst Blockchain Manager API pod will have a sidecar running the Google Cloud SQL Auth Proxy container.

Setup

Configure helm chart values

## -- Declare variables to be passed into your templates.

# -- address where application will be hosted. All created nodes (peers, orderers, cas) will have <NodeName>.<domainName> address
domainName: ""
# -- available envs: prod, staging, testing, dev. For customer usage suggested only 'prod'
env: prod # use `testing` for test env
logs:
  level: info
# -- auth config
auth:
  # -- enabled auth for api/v1 endpoints
  enabled: true
  # -- available methods are: `basic`, `openid`
  method: basic
  # -- BasicAuth
  basic:
    ## -- BasicAuth username
    username: ""
    ## -- BasicAuth password
    password: ""
    ## -- Or specify secure credentials using Kubernetes secret
    # -- Create a new Secret using these keys:
    # username
    # password
    authSecret: ""
  # -- OpenID authorization mechanism
  openid:
    ## -- OpenID provider confidential client type
    confidential: false
    ## -- OpenID provider endpoint for obtaining access token
    url: ""
    ## -- OpenID configuration is a Well-known URI Discovery Mechanism
    wellKnownURL: ""
    ## - OpenID client ID
    clientID: ""
    ## - OpenID client Secret
    clientSecret: ""
    ## -- Enable role based auth for openId client
    roleBasedAuthEnabled: false
    scope: openid
## -- Identity Store provider defines where to store digital identities of an organization. Supported providers are: k8s(default), vault.
identityStore: k8s
## -- HashiCorp Vault client configuration
vault:
  ## -- Enable usage of Vault client
  enabled: false
  ## -- AppRole ID
  roleId: <approle roleID>
  ## -- AppRole Secret
  secretId: <approle secretID>
  ## - Flag that tells secretId is a wrapped token
  withWrappingToken: false
  ## -- address https://vault.example.com
  address: https://vault.example.com
  ## -- path prefix for secrets
  pathPrefix: secrets/fabric-console
  vaultenv:
    # docker image for vault-env init container, default value is hashicorp/consul-template
    image:
  ## -- TLS configuration
  tls:
    ## -- Do not verify certificate presented by Vault server
    skipVerify: false
    ## - Secret name with CA trust chain
    certsSecretName: vault-tls

## -- Trusted cert pool enforces the application to trust specified certificates
trustedCertPool:
  ## -- secret will be mounted as folder. All certs from inside will be added to /etc/ssl/certs
  secret:
    ## -- secret name
    name:
    keys: []
# -- Whether to parse and send logs to centralised storage
# FluentD Output Configuration. Fluentd aggregates and parses logs
# FluentD is a part of Logging Operator. CRs `Output` and `Flow`s will be created
logOutput:
  # -- This section defines Loki specific configuration
  loki:
    enabled: false
    # -- url of loki instance
    url: http://loki.logging.svc.cluster.local:3100
    # -- labels to set on log streams
    # format `label_name`: `log_field_name`
    labels:
      namespace: namespace
      app_name: app_name
  # -- This section defines logz.io specific configuration
  logzIo:
    enabled: false
  # -- This section defines elasticSearch specific configuration
  elasticSearch:
    enabled: false
    # -- The hostname of your Elasticsearch node
    host: ""
    # -- The port number of your Elasticsearch node
    port: 443
    # -- The index name to write events
    index_name: ""
    # -- Data stream configuration
    data_stream:
      enabled: false
      name: ""
      data_stream_template_name: ""
    # -- The login username to connect to the Elasticsearch node
    user: ""
    # -- Specify secure password with Kubernetes secret
    secret:
      create: true
      password: ""
      annotations: {}
# -- message bus configuration
messageBus:
  queue:
    name: message_bus
    durable: false
    type: classic
  topic:
    name: message_bus_exchange
    durable: false
# -- this module enabled integration with prometheus-operator. Fetches metrics from all the peers, orderers and CAs in the system
monitoring:
  # -- specify whether to create monitoring resources
  # prometheus operator and grafana need to be installed beforehand
  enabled: false
  # -- configuration for ServiceMonitor resource
  serviceMonitor:
    enabled: false
    # -- how often to pull metrics from resources
    interval: 15s
    # -- HTTP path to scrape for metrics
    path: /metrics
    # -- RelabelConfigs to apply to samples before scraping
    relabelings: []
    # -- MetricRelabelConfigs to apply to samples before ingestion
    metricRelabelings: []
  grafana:
    # -- grafana default admin username and email. Grafana is authenticated through default API authentication automatically.
    user: admin
    email: admin@domain.com
    # -- grafana default path to dashboard
    dashboardPath: "/grafana/d/pUnN6JgWz/hyperledger-fabric-monitoring?orgId=1&refresh=30s&kiosk&var-namespace="
    # -- grafana service and port for ingress
    service:
      name: grafana
      namespace: monitoring
      port: 80
# -- Ingress for any ingress controller.
ingressConfig:
  provider:
    # -- currently supported ingress controllers: traefik, openshift, istio
    name: traefik # or openshift, istio
    traefik:
      ingressClass: ""
    traefikCRD:
      tlsStore:
        enabled: false
        name: default
    openshift:
    istio:
      # -- Istio gateway name
      gateway: ""
      # -- match port for tls config
      port: 443
  # -- specify whether to create Ingress resources for API and UI
  enabled: false
  tls:
    enabled: false
    # -- Certificate and Issuer will be created with Cert-Manager. Names will be autogenerated.
    # if `certManager.enabled` `ingressConfig.tls.secretName` will be ignored
    certManager:
      enabled: false
      email: "services.cat-bp@intellecteu.com"
      server: "https://acme-staging-v02.api.letsencrypt.org/directory"
    # -- secret name with own tls certificate to use with ingress
    secretName: ""

# -- Configuration options to control how public and private docker repositories handled by api and operator
imageVerification:
  # -- Do not verify image existence in docker registry for Public type
  disabled: false

rbac:
  # -- Whether to create RBAC Resources (Role, SA, RoleBinding)
  enabled: true
  # -- Service Account Name to use for api, ui, operator, consumer
  serviceAccountName: fabric-console
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: false
# operator component values
operator:
  # -- number of operator pods to run
  replicaCount: 1
  # -- operator image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: "2.7"
  # -- operator image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  labels: {}
  # -- image for init MSP container used in peers and orderer. Default value in application is intellecteu/msp-init:1.0
  mspInitContainerImage:
  # -- Configs for controlled CRDS
  crd:
    serviceAccount:
    # -- PodSecurityContext that will be applied into all pods created from CRDs
    podSecurityContext:
      # runAsNonRoot: true
      # runAsUser: 4444
      # runAsGroup: 5555
      # fsGroup: 4444
    containerSecurityContext:
      # readOnlyRootFilesystem: true
    # -- FabricMetricsProvider is the metrics provider for orderers and peers
    # available options for metrics providers are: prometheus, statsd
    fabricMetricsProvider:
  # -- annotations for operator pods
  podAnnotations: {}
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 4444
    # runAsGroup: 5555
    # fsGroup: 4444
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "300Mi"
    requests:
      cpu: "100m"
      memory: "100Mi"
  # -- Specify Node Labels to place operator pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
  # -- metrics server and Prometheus Operator configuration
  metrics:
    # -- should metrics server be enabled
    enabled: false
    # -- service port for metrics server
    servicePort: 8082
    # -- container port for metrics server
    containerPort: 8082
    # -- HTTP path to scrape for metrics.
    path: /metrics
    serviceMonitor:
      # -- should ServiceMonitor be created
      enabled: false
      # -- how often to pull metrics from resources
      interval: 30s
      # -- RelabelConfigs to apply to samples before scraping
      relabelings: []
      # -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
  # -- Specify Peer Affinity Spec
  peerAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - peer
    #         topologyKey: "kubernetes.io/zone"
    # -- Specify Orderer Affinity Spec
  ordererAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - orderer
    #         topologyKey: "kubernetes.io/zone"
    # -- Specify CA Affinity Spec
  caAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - fabric-ca
    #         topologyKey: "kubernetes.io/zone"
## api component values
api:
  # -- gateway configuration for channel subscription gateway
  gateway:
    events:
      heartbeat:
        enabled: false
        interval: 60 # interval in seconds
  # -- api autoscaling settings
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 5
    targetCPUUtilizationPercentage: 80
    # targetMemoryUtilizationPercentage: 80
  # -- number of api pods to run
  replicaCount: 1
  # -- api image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: "2.7"
  # -- api image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  labels: {}
  # -- api service port and name
  service:
    port: 8000
    portName: http
  # -- annotations for api pods
  podAnnotations: {}
  # -- Automount API credentials for a Service Account.
  automountServiceAccountToken: true
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 4444
    # runAsGroup: 5555
    # fsGroup: 4444
  # -- security context on a container level
  securityContext: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "150m"
      memory: "500Mi"
    requests:
      cpu: "100m"
      memory: "200Mi"
  # -- Specify Node Labels to place api pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
  # -- metrics server and Prometheus Operator configuration
  metrics:
    # -- should metrics server be enabled
    enabled: false
    # -- service port for metrics server
    servicePort: 8082
    # -- container port for metrics server
    containerPort: 8082
    # -- HTTP path to scrape for metrics.
    path: /metrics
    serviceMonitor:
      # -- should ServiceMonitor be created
      enabled: false
      # -- how often to pull metrics from resources
      interval: 30s
      # -- RelabelConfigs to apply to samples before scraping
      relabelings: []
      # -- MetricRelabelConfigs to apply to samples before ingestion.
      metricRelabelings: []
  # -- Specify Peer Affinity Spec
  peerAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - peer
    #         topologyKey: "kubernetes.io/zone"
    # -- Specify Orderer Affinity Spec
  ordererAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - orderer
    #         topologyKey: "kubernetes.io/zone"
    # -- Specify CA Affinity Spec
  caAffinitySpec:
    # affinity:
    #   podAntiAffinity:
    #     requiredDuringSchedulingIgnoredDuringExecution:
    #       - labelSelector:
    #           matchExpressions:
    #             - key: controller
    #               operator: In
    #               values:
    #                 - fabric-ca
    #         topologyKey: "kubernetes.io/zone"
ui:
  # -- number of ui pods to run
  replicaCount: 1
  # -- ui image settings
  image:
    repository: registry.gitlab.com/intellecteu/products/catalyst/cat-bp/fabric/fabric-console-ui
    pullPolicy: Always
    # Overrides the image tag whose default is the chart appVersion.
    tag: "2.7"
  # -- api image pull secrets
  imagePullSecrets:
    - name: intellecteu-gitlab-access
  # -- ui service port and name
  service:
    port: 3001
    portName: http
  # -- annotations for consumer pods
  podAnnotations: {}
  # -- security context on a pod level
  podSecurityContext:
    # runAsNonRoot: true
    # runAsUser: 101
    # runAsGroup: 101
    # fsGroup: 101
  # -- security context on a container level
  securityContext: {}
  labels: {}
  # -- CPU and Memory requests and limits
  resources:
    limits:
      cpu: "100m"
      memory: "100Mi"
    requests:
      cpu: "30m"
      memory: "50Mi"
  # -- Specify Node Labels to place ui pods on
  nodeSelector: {}
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  tolerations: []
  # -- https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
  affinity: {}
# -- external RabbitMQ Message broker parameters
amqp:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- example values for rabbitmq queue. change them for your env
  host: "rabbitmq.rabbitmq"
  tls: false
  port: "5672"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  vhost: "test1"
# -- external database parameters
database:
  readinessCheck:
    # -- Whether to perform readiness check with initContainer. Simple `nc` command
    enabled: true
    # -- which image to use for initContainer performing readiness check
    initContainer:
      image:
        repository: busybox
        pullPolicy: IfNotPresent
        tag: latest
  # -- database type. `postgres` or `mysql` can be specified here
  type: postgres
  # -- example values for postgres database. change them for your env
  host: "postgresql.postgresql"
  tls: false
  port: "5432"
  # -- Specify the Secret name if using Kubernetes Secret to provide credentials
  # -- Create a new Secret using these keys:
  #  username
  #  password
  credentialsSecret:
  # -- Specify username & password if using Helm values file to provide credentials
  username: "test1"
  password: "Abcd1234"
  dbname: "test1"
  # -- Specify postgresSchema only if using postgres and not using the default schema
  postgresSchema:
  # -- ignore password and use AWS IAM authentication to the database
  authAws: false

# -- enables the use of aws sdk
awsIAM:
  enabled: false

# -- license config
license: ""

If you use a single load balancer, change ingressConfig to:
ingressConfig:
  # -- specify whether to create IngressRoute resources
  enabled: true
  tls:
    enabled: true
    certManager:
      enabled: true
      email: "<change to your EMAIL>"
      server: "https://acme-v02.api.letsencrypt.org/directory"

You can configure other helm chart values if needed.

You can see the full list of values here.
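
Instead of plaintext credentials in values.yaml, several values (auth.basic.authSecret, amqp.credentialsSecret, database.credentialsSecret) accept the name of a Kubernetes Secret with username and password keys. A minimal sketch (the secret name is an example):

kubectl create secret generic console-auth \
  --from-literal=username=admin \
  --from-literal=password='s3cret' \
  -n ${ns_name}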

Install the Catalyst Blockchain Manager Hyperledger Fabric service

helm upgrade --install ${fabric_release_name} catbp/fabric-console --values values.yaml -n ${ns_name} --version 2.5

where:

  • ${fabric_release_name} — name of the Catalyst Blockchain Manager Hyperledger Fabric service release. You can choose any name/alias. It is used to reference the release when upgrading or deleting the Helm chart.

  • catbp/fabric-console — the chart name, where “catbp” is the repository name and “fabric-console” is the chart name.

  • values.yaml — a values file.

  • ${ns_name} — name of the namespace you’ve created before.

You can check the status of the installation by using these commands:

  • helm ls — check the “status” field of the installed chart. The status “deployed” should be shown.

  • kubectl get pods — get the status of applications separately. All pod statuses must be “Running”.

  • kubectl describe pod $pod_name — get detailed information about pods.

The following RBAC Role will be created:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "fabric-console.fullname" . }}
  labels:
    {{- include "fabric-console.labels" . | nindent 4 }}
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - pods
  - services
  - persistentvolumes
  - persistentvolumeclaims
  - persistentvolumeclaims/finalizers
  - events
  - secrets
  - customresourcedefinitions
  - deployments
  - peersets
  - peersets/finalizers
  - peers
  - peers/status
  - peers/finalizers
  - orderingservices
  - orderingservices/finalizers
  - orderers
  - orderers/status
  - orderers/finalizers
  - chaincodeservices
  - chaincodeservices/status
  - chaincodeservices/finalizers
  - fabriccas
  - fabriccas/status
  - fabriccas/finalizers
  - configmaps
  {{- if .Values.openshiftRoute.enabled }}
  - routes
  - routes/custom-host
  {{- else }}
  - ingressroutetcps
  - ingressroutes
  {{- end }}
  verbs:
  - get
  - watch
  - list
  - create
  - update
  - patch
  - delete