
Backstage, App template and Crossplane on Amazon EKS [Lab Session]

Published at
1/30/2024
Categories
community
idp
aws
backstage
Author
paulofponciano

Continuing with another lab practicing #platformengineering together with my friend Gabriel Portela, this time digging deeper into Backstage combined with GitHub Actions, Crossplane, and ArgoCD. And, of course, running on Kubernetes.

An open platform for building developer portals
Powered by a centralized software catalog, Backstage restores order to your infrastructure and enables your product teams to ship high-quality code quickly — without compromising autonomy.
https://backstage.io/


Considerations:
All the resources running on Kubernetes are in the same cluster for this lab. The post Serverless com Crossplane composition no EKS + GitOps [Lab Session] can help with setting up an EKS cluster that already has ArgoCD and Crossplane. And of course, this can run on any Kubernetes cluster.


New Backstage app

First, we create the Backstage app locally to initialize the git repository with the required structure and build the first Docker image, which we will later deploy to Kubernetes.

Documentation on creating a Backstage app

Building a Docker image | Backstage Software Catalog and Developer Platform

  • Creating the app:

npx @backstage/create-app@latest

cd backstage

yarn install --frozen-lockfile
yarn tsc

yarn build:backend

Everything is default so far. In this lab we also use some other Backstage plugins, so we add:

yarn add --cwd packages/backend @backstage/plugin-kubernetes-backend
yarn add --cwd packages/app @backstage/plugin-kubernetes
yarn add --cwd packages/app @backstage/integration-aws-node
yarn add --cwd packages/app @backstage/plugin-home

If you stick with the defaults, the plugins above are not required, but the code/config in this lab's repository depends on them. You also need to wire them up in the TypeScript code. We followed these docs:

Installation | Backstage Software Catalog and Developer Platform

Backstage homepage - Setup and Customization | Backstage Software Catalog and Developer Platform

@backstage/integration-aws-node | Backstage Software Catalog and Developer Platform

  • Building the Docker image and pushing it to Docker Hub. A Dockerfile for building the backend already exists; it was created along with the app's initial structure when we ran npx:

docker image build . -f packages/backend/Dockerfile --tag backstage-app

docker images

docker tag db926b361c29 paulofponciano/backstage-app:latest
docker push paulofponciano/backstage-app:latest

Now we just push the code to GitHub. To make our lives easier, we also created a workflow so that this build runs automatically whenever we push a change to the repository:

  • backstage-app/.github/workflows/master.yaml
name: Main Master Build

on:
  push:
    branches: [main]
    paths-ignore:
      - 'k8s/**'
      - 'catalog-entities/**'

jobs:
  build:
    runs-on: ubuntu-latest

    env:
      CI: true
      NODE_OPTIONS: --max-old-space-size=4096

    steps:
      - uses: actions/checkout@v3

      # Beginning of yarn setup
      - name: use node.js 18.x
        uses: actions/setup-node@v3
        with:
          node-version: 18.x
          registry-url: https://registry.npmjs.org/ # Needed for auth

      - name: cache all node_modules
        id: cache-modules
        uses: actions/cache@v3
        with:
          path: '**/node_modules'
          key: ${{ runner.os }}-node_modules-${{ hashFiles('yarn.lock', '**/package.json') }}

      - name: find location of global yarn cache
        id: yarn-cache
        if: steps.cache-modules.outputs.cache-hit != 'true'
        run: echo "dir=$(yarn config get cacheFolder)" >> $GITHUB_OUTPUT

      - name: cache global yarn cache
        uses: actions/cache@v3
        if: steps.cache-modules.outputs.cache-hit != 'true'
        with:
          path: ${{ steps.yarn-cache.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-

      - name: yarn install
        run: yarn install --immutable
      # End of yarn setup

      - name: type checking and declarations
        run: yarn tsc:full

      - name: build
        run: yarn --cwd packages/backend build

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: build image and push to docker hub
        uses: docker/build-push-action@v4
        with: 
          context: .
          file: packages/backend/Dockerfile
          push: true
          tags: paulofponciano/backstage:${{ github.sha }}, paulofponciano/backstage:latest

We use secrets stored as GitHub repository secrets to authenticate with Docker Hub:

GitHub OAuth App

For authentication, we also use GitHub via OAuth Apps.

  • Just create a new application (Settings > Developer settings > OAuth Apps). This gives us a Client ID and a Client secret, which we will use in a Kubernetes secret for Backstage. You must provide the Homepage URL and the Authorization callback URL:
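For reference, the sketch below prints what these two fields can look like, assuming the hostname used throughout this lab; the callback path follows the pattern from Backstage's GitHub auth provider documentation. Adjust for your own domain:

```shell
# Example OAuth App URLs, assuming this lab's hostname.
BASE_URL="https://backstage.pauloponciano.digital"
echo "Homepage URL:               ${BASE_URL}"
# Backstage's GitHub auth provider expects the callback on this path:
echo "Authorization callback URL: ${BASE_URL}/api/auth/github/handler/frame"
```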

k8s manifests

We created a 'k8s' directory in the Backstage repository to hold the manifests we apply to the cluster.

  • Create the namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: backstage
  labels:
    istio-injection: enabled
  • Backstage secrets:
# kubernetes/backstage-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: backstage-secrets
  namespace: backstage
type: Opaque
data:
  GITHUB_TOKEN: '' # sample base 64

This GITHUB_TOKEN is required so Backstage can access the repositories holding the catalogs, templates, and so on. Just generate a PAT (Personal access token) and base64-encode it. On GitHub: Settings > Developer Settings > Personal access tokens (Classic).
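To fill in the data field, the token must be base64-encoded without a trailing newline. A quick sketch, where the PAT value is a made-up placeholder:

```shell
# Hypothetical PAT for illustration; use your real token.
GITHUB_PAT="ghp_exampletoken123"
# printf avoids a trailing newline; -w0 disables line wrapping (GNU base64).
GITHUB_TOKEN_B64=$(printf '%s' "$GITHUB_PAT" | base64 -w0)
echo "$GITHUB_TOKEN_B64"
# Alternatively, let kubectl do the encoding for you:
#   kubectl -n backstage create secret generic backstage-secrets \
#     --from-literal=GITHUB_TOKEN="$GITHUB_PAT"
```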

  • Postgres secrets:
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: backstage
type: Opaque
data:
  POSTGRES_USER: YmFja3N0YWdl # sample base 64
  POSTGRES_PASSWORD: aHVudGVyMg== # sample base 64
  • Postgres — Deployment, PVC, Service:
---
# kubernetes/postgres-storage.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-storage
  namespace: backstage
  labels:
    type: local
spec:
  storageClassName: gp3
  capacity:
    storage: 2G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: '/mnt/data'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-storage-claim
  namespace: backstage
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
---
# kubernetes/postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2-alpine
          imagePullPolicy: 'IfNotPresent'
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secrets
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-storage-claim
---
# kubernetes/postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: backstage
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
  • GitHub Auth Secrets:
apiVersion: v1
kind: Secret
metadata:
  name: github-auth-secrets
  namespace: backstage
type: Opaque
data:
  AUTH_GITHUB_CLIENT_ID: '' # base 64
  AUTH_GITHUB_CLIENT_SECRET: '' # base 64

Here we provide the Client ID and Client secret generated earlier, when we created the GitHub OAuth App.

  • Backstage ingestion secret:

kubectl -n kube-system create serviceaccount backstage-ingestion
kubectl create clusterrolebinding backstage-ingestion --clusterrole=cluster-admin --serviceaccount=kube-system:backstage-ingestion

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: backstage-ingestion
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: backstage-ingestion
type: kubernetes.io/service-account-token
EOF

kubectl -n kube-system get secret backstage-ingestion -o go-template='{{.data.token | base64decode}}'

The output of this command is the service account token, which we use as SA_TOKEN in Backstage's Kubernetes plugin, enabling ingestion of cluster data into Backstage.

  • EKS secrets:
# kubernetes/eks-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: eks-secrets
  namespace: backstage
type: Opaque
data:
  SA_TOKEN: '' # base 64
  CA_DATA: '' # base 64


Here we provide the SA_TOKEN generated earlier and the certificate authority generated for the EKS cluster, stored in the secret as CA_DATA. It can be found in the EKS console:
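Besides the console, the CA bundle can also be pulled from a kubeconfig or from the AWS CLI. The sketch below parses a sample kubeconfig; the file content here is fabricated for illustration:

```shell
# Sample kubeconfig fragment; real files are written by
# 'aws eks update-kubeconfig --name <cluster>'.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: pegasus
  cluster:
    server: https://api-server-endpoint.gr7.us-east-2.eks.amazonaws.com
    certificate-authority-data: Rk9PQkFSLUNBLURBVEE=
EOF
# The value is already base64, exactly what the CA_DATA field expects:
CA_DATA=$(awk '/certificate-authority-data:/ {print $2}' /tmp/sample-kubeconfig)
echo "$CA_DATA"
# Or fetch it straight from AWS:
#   aws eks describe-cluster --name pegasus \
#     --query 'cluster.certificateAuthority.data' --output text
```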

  • Backstage — Deployment, SA, Service:
# kubernetes/backstage.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      serviceAccountName: backstage-service-account
      containers:
        - name: backstage
          image: paulofponciano/backstage:latest
          imagePullPolicy: 'Always'
          ports:
            - name: http
              containerPort: 7007
          envFrom:
            - secretRef:
                name: postgres-secrets
            - secretRef:
                name: backstage-secrets
            - secretRef:
                name: github-auth-secrets
            - secretRef:
                name: eks-secrets
          env:
          - name: POSTGRES_PORT
            value: "5432"
          - name: POSTGRES_HOST
            value: "postgres.backstage.svc.cluster.local"
# Uncomment if health checks are enabled in your app:
# https://backstage.io/docs/plugins/observability#health-checks
#          readinessProbe:
#            httpGet:
#              port: 7007
#              path: /healthcheck
#          livenessProbe:
#            httpGet:
#              port: 7007
#              path: /healthcheck
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backstage-service-account
  namespace: backstage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backstage-cluster-ro
subjects:
- namespace: backstage
  kind: ServiceAccount
  name: backstage-service-account
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:aggregate-to-view
---
# kubernetes/backstage-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backstage
  namespace: backstage
spec:
  selector:
    app: backstage
  ports:
    - name: http
      port: 80
      targetPort: http

  • Backstage ingress — Istio Gateway, Virtual Service:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: backstage-gateway
  namespace: backstage
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts:
        - backstage.pauloponciano.digital
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
        - backstage.pauloponciano.digital
      port:
        name: https-workloads
        number: 443
        protocol: HTTP
      tls:
        mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backstage
  namespace: backstage
spec:
  gateways:
    - backstage-gateway
  hosts:
    - backstage.pauloponciano.digital
  http:
    - route:
        - destination:
            host: backstage.backstage.svc.cluster.local
            port:
              number: 80
          weight: 100

Backstage app-config.yaml

In the Backstage repository, the app-config.yaml file holds the configuration and integrations. Up to this point the file is still default. For this lab we set it as below, then committed and pushed to the repo, which triggered a new build via the GitHub Actions workflow (master.yaml) we created earlier.

app:
  title: Community Backstage App
  baseUrl: https://backstage.pauloponciano.digital

organization:
  name: Community

backend:
  baseUrl: https://backstage.pauloponciano.digital
  listen:
    port: 7007
  csp:
    connect-src: ["'self'", 'http:', 'https:']
  cors:
    origin: https://backstage.pauloponciano.digital
    methods: [GET, HEAD, PATCH, POST, PUT, DELETE]
    credentials: true
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      port: ${POSTGRES_PORT}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}

integrations:
  github:
    - host: github.com
      token: ${GITHUB_TOKEN}
  aws:
    mainAccount:
    accounts:
      - accountId: ${AWS_ACCOUNT_ID}
        accessKeyId: ${AWS_ACCESS_KEY_ID}
        secretAccessKey: ${AWS_SECRET_ACCESS_KEY}
        region: ${AWS_REGION}

proxy:
  '/test':
    target: 'https://example.com'
    changeOrigin: true
techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'docker' # Alternatives - 'local'
  publisher:
    type: 'local' # Alternatives - 'googleGcs' or 'awsS3'. Read documentation for using alternatives.

auth:
  environment: production
  providers:
    github:
      production:
        clientId: ${AUTH_GITHUB_CLIENT_ID}
        clientSecret: ${AUTH_GITHUB_CLIENT_SECRET}

scaffolder:
  defaultAuthor:
    name: ":robot: [backstage-bot]"
    email: [email protected]

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, Group, Resource, Location, Template, API]
  locations:
    - type: url
      target: https://github.com/paulofponciano/backstage-app/blob/main/catalog-entities/locations.yaml

kubernetes:
  serviceLocatorMethod:
    type: 'multiTenant'
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://api-server-endpoint.gr7.us-east-2.eks.amazonaws.com
          name: pegasus # cluster name
          authProvider: 'serviceAccount'
          skipTLSVerify: false
          skipMetricsLookup: true
          serviceAccountToken: ${SA_TOKEN}
          caData: ${CA_DATA}
          customResources:
            - group: 'api.pauloponciano.digital' # Crossplane
              apiVersion: 'v1alpha1'
              plural: 'xcustomdatabases'
            - group: 'api.pauloponciano.digital' # Crossplane
              apiVersion: 'v1alpha1'
              plural: 'xcustomorders'

Backstage

After the previous step finishes with a successful build, we apply all the k8s manifests to the cluster. They can be applied in sequence:

kubectl apply -f backstage_ns.yaml
kubectl apply -f backstage_secrets.yaml
kubectl apply -f postgres_secrets.yaml
kubectl apply -f postgres.yaml
kubectl apply -f github_auth_secrets.yaml
kubectl apply -f eks_secrets.yaml
kubectl apply -f backstage.yaml
kubectl apply -f backstage_istio_ingress.yaml

Since we created a CNAME in public DNS pointing to Istio's ingress NLB, we can already access Backstage:

We already have some components that were registered when the app came up:

This import comes from this block of app-config.yaml. Some of these catalog entities are no longer default, but they serve as examples. You can find them here.

catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, Group, Resource, Location, Template, API]
  locations:
    - type: url
      target: https://github.com/paulofponciano/backstage-app/blob/main/catalog-entities/locations.yaml

GitHub Organization

  • We created a GitHub Organization to centralize the repositories for app templates, gitops (ArgoCD), and reusable workflows (GitHub Actions). The repositories Backstage creates from the templates for new apps also live in the organization:

  • We also defined organization-level secrets for use in the workflows. Here, GH_PAT is the same Personal access token we created earlier for the Kubernetes secret:

ArgoCD

ArgoCD is the central component for deliveries to the EKS cluster, enabling a GitOps strategy. We connect Argo to the organization's 'gitops' repository, which has the structure below:

  • deployed_apps/applications

  • deployed_infra/system:default

We use this manifest to create the applications and the project, and to connect the repository:

apiVersion: v1
kind: Secret
metadata:
  name: public-repo-gitops
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://github.com/paulofponciano-idp/gitops.git
---
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: idp
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  description: Internal Developer Portal
  sourceRepos:
    - 'https://github.com/paulofponciano-idp/gitops.git'
  destinations:
    - namespace: '*'
      server: 'https://kubernetes.default.svc'
      name: 'in-cluster'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'
  orphanedResources:
    warn: true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: idp-infra-aws
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: idp
  source:
    repoURL: https://github.com/paulofponciano-idp/gitops.git
    targetRevision: HEAD
    path: deployed_infra/system:default
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
      allowEmpty: true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: idp-apps
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: idp
  source:
    repoURL: https://github.com/paulofponciano-idp/gitops.git
    targetRevision: HEAD
    path: deployed_apps/applications
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true
      allowEmpty: true

Go Template (Backstage fetch:template)

In the organization's go-template-backstage repository, we keep a standard structure as an example Golang application. This repository serves as a template for Backstage whenever a consumer triggers the creation of a new app.

An important part of this template is the workflow that is also used when Backstage fetches the template and creates the new repository (push). That workflow is responsible for triggering other workflows (workflow_call) that live in the organization's .github repository.
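The caller side of that workflow_call pattern can be sketched roughly like this; the file name, job name, and reusable workflow path are illustrative, not the lab's actual files:

```yaml
# Illustrative caller workflow inside a generated app repository.
name: first-release
on:
  push:
    branches: [main]
jobs:
  release:
    # Reusable workflow kept in the organization's .github repository:
    uses: paulofponciano-idp/.github/.github/workflows/build.yaml@main
    secrets: inherit
```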

On the Backstage side we use the app-go.yaml template below, which in this case we keep in the Backstage repository itself:

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: golang-template
  title: Golang Template
  description: Create a new Golang App

  tags:
    - golang
    - website
    - component
spec:
  owner: group:sre
  type: component

  parameters:
    - title: Provide some simple information
      required:
        - component_id
        - system
        - lifecycle
        - owner
        - type
      properties:
        component_id:
          title: Name
          type: string
          pattern: "^([a-z0-9\\-]+)$"
          description: Unique name of the component
          ui:field: EntityNamePicker
          ui:autofocus: true
        description:
          title: Description
          type: string
          description: Help others understand what this website is for.
        system:
          title: System
          type: string
          description: System of the component
          ui:field: EntityPicker
          ui:options:
            allowedKinds:
              - System
            defaultKind: System
        lifecycle:
          title: Lifecycle
          description: 'Application lifecycle'
          type: string
          default: experimental
          enum:
            - deprecated
            - experimental
            - production
        type:
          title: Type
          description: 'Application type'
          type: string
          default: service
          enum:
            - service
            - website
            - library
        owner:
          title: Owner
          type: string
          description: Owner of the component
          ui:field: OwnerPicker
          ui:options:
            allowedKinds:
              - Group

    - title: Choose a location
      required:
        - repoUrl
      properties:
        repoUrl:
          title: Repository Location
          type: string
          ui:field: RepoUrlPicker
          ui:options:
            allowedHosts:
              - github.com
            allowedOwners:
              - paulofponciano-idp

    - title: Infrastructure
      properties:
        kube:
          title: Create Kubernetes App
          description: Checking this will also create Kubernetes App
          type: boolean
          default: false
        env:
          title: Environment
          description: 'Environment to create resources'
          type: string
          default: dev
          enum:
            - dev
            - stg
            - prd
        dryRun:
          title: Only perform a dry run, don't publish anything
          type: boolean
          default: false

  steps:
    - id: template
      name: Fetch Application Template on GitHub Repo
      action: fetch:template
      input:
        cookiecutterCompat: true
        url: https://github.com/paulofponciano-idp/go-template-backstage
        copyWithoutTemplating:
          - .github/workflows/*
        values:
          component_id: ${{ parameters.component_id }}
          system: ${{ parameters.system }}
          description: ${{ parameters.description }}
          destination: ${{ parameters.repoUrl | parseRepoUrl }}
          owner: ${{ parameters.owner }}
          lifecycle: ${{ parameters.lifecycle }}
          type: ${{ parameters.type }}
          env: ${{ parameters.env}}

    - id: publish
      name: Publish Application
      action: publish:github
      if: ${{ parameters.dryRun !== true }}
      input:
        allowedHosts:
          - github.com
        description: This is ${{ parameters.component_id }}
        repoUrl: ${{ parameters.repoUrl }}
        defaultBranch: main
        repoVisibility: public
        collaborators:
          - team: sre
            access: maintain
          - team: ${{ parameters.owner }}
            access: push

    - id: fetch-kube
      name: Fetch Kubernetes Template
      action: fetch:template
      if: ${{ parameters.kube == true }}
      input:
        targetPath: ./kube
        url: https://github.com/paulofponciano/backstage-app/tree/main/catalog-entities/skeleton/kubernetes/apps/kustomize
        values:
          component_id: ${{ parameters.component_id }}
          description: ${{ parameters.description }}
          destination: ${{ parameters.repoUrl | parseRepoUrl }}
          owner: ${{ parameters.owner }}

    - id: kube-pr
      name: "Open PR in GitOps Repository"
      action: publish:github:pull-request
      if: ${{ parameters.kube == true }}
      input:
        repoUrl: github.com?repo=gitops&owner=paulofponciano-idp
        branchName: create-${{ parameters.component_id }}
        title: ':robot: [backstage-bot] Create new App ${{ parameters.component_id }}'
        description: |
          # New project: ${{ parameters.component_id }}
          ${{ parameters.description if parameters.description }}
        sourcePath: kube
        targetPath: deployed_apps

    - id: register
      name: Register Application in Catalog
      action: catalog:register
      if: ${{ parameters.dryRun !== true }}
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: "/catalog-info.yaml"

  output:
    links:
      - title: Go to Repository
        url: ${{ steps['publish'].output.repoContentsUrl }}
      - title: Go to GitOps Pull Request
        url: ${{ steps.kube-pr.output.remoteUrl }}
      - title: Open in catalog
        icon: catalog
        entityRef: ${{ steps['register'].output.entityRef }}

Skeleton k8s (Backstage fetch:template)

In the previous step we handled the application template and Backstage's actions when creating a new application. Since the app is delivered (via ArgoCD) and runs on Kubernetes, we also need a 'skeleton' for its manifests. We store the skeleton here, with this structure:

Registering the Go template in Backstage

Registering the template is a simple process: just provide the URL and import it:

Creating a new (Go) app through Backstage

Now we validate everything built so far, exercising the flow we designed:

  • Entering the information:

  • Approving the pull request in the GitOps repository:

We can see that the values we entered have replaced the placeholders in the manifests Backstage fetched from the 'skeleton' directory; those manifests now live in the GitOps repo's 'deployed_apps' directory.

  • Checking the new app repository and the first-release workflow:

The first-release workflow is triggered by the push Backstage makes when creating the new repository. It, in turn, calls (workflow_call) the reusable env, build, and deploy workflows:

  • Checking the sync in ArgoCD, we see the new app already being delivered to the cluster:

  • Now our Backstage Catalog has the new component registered. We can view its details:

Details of the Kubernetes resources related to the application:

GitHub Actions CI/CD details:

Accessing the sample app:

Backstage + Crossplane to provision resources

Since Crossplane is already running on the EKS cluster as part of the infrastructure setup, we can use it alongside Backstage to provision resources, in this lab with AWS as the provider. The strategy is the same: use Backstage to collect the inputs, pass them into a manifest (claim), and deliver it with ArgoCD.

  • Claim skeleton for the RDS composition:
# CLAIM
---
apiVersion: api.pauloponciano.digital/v1alpha1
kind: CustomDatabase
metadata:
  name: ${{values.name}}-${{values.env}}-${{values.engine}}-db
  namespace: environment-crossplane
  labels:
    backstage.io/kubernetes-id: rds
spec:
  compositionSelector:
    matchLabels:
      db-engine: ${{values.engine}}
  resourceConfig:
    providerConfigName: aws
    region: ${{values.region}}
    size: ${{values.size}}
    engine: ${{values.engine}}
    tags:
      automation-by: crossplane
      ownerName: ${{values.owner}}

The composition and definition must already be applied to the cluster for Crossplane. You can find them here.
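To make the substitution concrete, the sketch below mimics what the scaffolder's fetch:template step does with placeholders like those in the claim skeleton; sed stands in for the real Nunjucks templating, and the parameter values are made up:

```shell
# Minimal stand-in for a claim skeleton file:
cat > /tmp/claim-skeleton.yaml <<'EOF'
metadata:
  name: ${{values.name}}-${{values.env}}-${{values.engine}}-db
EOF
# Substitute the form inputs, as fetch:template would:
sed -e 's/${{values.name}}/orders/' \
    -e 's/${{values.env}}/dev/' \
    -e 's/${{values.engine}}/postgres/' \
    /tmp/claim-skeleton.yaml
```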

  • Entering the information:

  • Approving the pull request in the GitOps repository:

  • Checking the sync in ArgoCD, in the 'idp-infra-aws' application:

  • In the Backstage catalog, we can check that the Crossplane composite was created in the cluster:

  • Looking at the AWS console, we can confirm that the resource was created:


With this, we can see the great possibilities and conveniences that Backstage, combined with other solutions, can bring to the teams of an organization.

Thanks to everyone who publishes open content to strengthen the community; it helped a lot with this lab.

https://github.com/devxp-tech

https://github.com/diegoluisi


Happy building!
