How to Migrate Your Team's Internal Developer Portal to a Platform Engineering Model Using Backstage 2.0

There is a quiet crisis happening inside engineering organizations right now. Developers are drowning. Not in bad code, but in context switching: toggling between a CI/CD dashboard here, an AI coding assistant there, a Kubernetes console somewhere else, and a Slack thread that is supposed to explain how all of it fits together. The average developer in 2026 touches more than a dozen internal tools per day, and the cognitive overhead of navigating that fragmented landscape is quietly killing productivity, morale, and onboarding velocity.

The answer is not another tool. The answer is a platform engineering model backed by a mature, well-configured Internal Developer Portal (IDP). And in 2026, the gold standard for that portal is Backstage 2.0, Spotify's open-source platform that has evolved from a scrappy service catalog into a full-blown platform engineering operating system.

This guide is written specifically for engineering managers and platform leads who are ready to stop patching the problem and start solving it. We will walk through every step of migrating your existing developer portal (or lack thereof) into a cohesive platform engineering model, with a special focus on standardizing AI toolchain access, reducing cognitive load, and making your platform a product your developers actually want to use.

Let's get into it.

Why "Just Having a Portal" Is No Longer Enough

Many teams already have something that resembles an internal developer portal. It might be a Confluence wiki with links. It might be a legacy Backstage 1.x instance that someone stood up two years ago and nobody has touched since. It might be a README in a monorepo that is already six months out of date.

None of these are platform engineering. They are documentation. There is a critical difference.

A true platform engineering model treats your internal platform as a product, with a dedicated team (the platform team), a defined set of customers (your developers), a roadmap, and measurable outcomes tied to developer experience (DevEx) metrics. The IDP is the interface to that product. Backstage 2.0 is the framework that makes building and maintaining that interface sustainable at scale.

In 2026, this distinction matters more than ever because of one factor: the AI toolchain explosion. Engineering teams are now provisioning access to multiple AI coding assistants, LLM-backed testing tools, AI-driven observability platforms, and model-serving infrastructure. Without a centralized portal to govern and surface all of these, you get shadow AI adoption, inconsistent security posture, and developers spending 30 minutes just figuring out which AI tool is approved for which use case.

Step 1: Audit Your Current Developer Experience Landscape

Before you write a single line of YAML or install a single Backstage plugin, you need a clear picture of what you are migrating from. This audit phase typically takes one to two weeks and is the most important investment you will make.

What to Inventory

  • All internal tools and portals: CI/CD systems (Jenkins, GitHub Actions, Argo), cloud consoles, monitoring dashboards (Grafana, Datadog), secrets managers, and any AI tooling currently in use.
  • All service and software documentation: Where does it live? Who owns it? When was it last updated?
  • Onboarding workflows: How long does it take a new engineer to go from "hired" to "first PR merged"? Benchmark this number. You will use it later to measure success.
  • AI tool inventory: List every AI-powered tool in your stack: coding assistants (GitHub Copilot, Cursor, Windsurf), AI test generation tools, LLM gateways, vector databases, and any internal fine-tuned models. Note who has access, how access is provisioned, and where the documentation lives.
  • Cognitive load hotspots: Run a short survey asking developers to identify the top three things that slow them down or confuse them most. The answers will almost always point directly at fragmented tooling and undiscoverable services.

The Output: A Platform Engineering Gap Analysis

Summarize your audit into a one-page gap analysis that maps current state to desired state across four dimensions: discoverability, self-service, standardization, and observability. This document becomes your migration north star and your justification for platform team investment when you present to leadership.

Step 2: Set Up Your Backstage 2.0 Environment

Backstage 2.0 introduced significant architectural improvements over the 1.x series, including a redesigned plugin framework, improved TypeScript-first APIs, a more robust scaffolder engine, and native support for the Catalog Entity Model v2, which includes first-class support for AI model entities and toolchain components. Here is how to get your environment bootstrapped correctly from the start.

Prerequisites

  • Node.js 22 LTS or later
  • A PostgreSQL 15+ database (do not use SQLite in production)
  • A Kubernetes cluster or a container platform (EKS, GKE, and AKS are all well-supported)
  • An identity provider configured for SSO (Okta, Azure AD, and Google Workspace all have maintained Backstage auth plugins)
  • GitHub, GitLab, or Bitbucket access tokens for SCM integration

Bootstrapping the App

Use the official Backstage CLI to scaffold your new app. The 2.0 CLI introduces a guided setup wizard that replaces the old manual configuration approach:

npx @backstage/create-app@latest --template platform-engineering
cd my-platform-portal
yarn install

The platform-engineering template (new in 2.0) pre-configures the app with the Software Catalog, TechDocs, the Scaffolder, and the new Platform Metrics plugin out of the box. This alone saves several hours of initial configuration compared to the 1.x baseline template.

Configure Your app-config.yaml Thoughtfully

Your app-config.yaml is the heart of your Backstage instance. In 2.0, configuration is split into environment-specific override files, which is a major improvement for teams managing staging and production environments. A minimal but production-ready configuration looks like this:

app:
  title: "Acme Platform Portal"
  baseUrl: https://portal.internal.acme.com

backend:
  baseUrl: https://portal.internal.acme.com
  database:
    client: pg
    connection:
      host: ${POSTGRES_HOST}
      user: ${POSTGRES_USER}
      password: ${POSTGRES_PASSWORD}
      database: backstage

auth:
  providers:
    okta:
      development:
        clientId: ${OKTA_CLIENT_ID}
        clientSecret: ${OKTA_CLIENT_SECRET}
        audience: ${OKTA_DOMAIN}

catalog:
  rules:
    - allow: [Component, System, API, Resource, Location, AIModel, Toolchain]
  locations:
    - type: url
      target: https://github.com/acme-org/catalog-info/blob/main/all-components.yaml

Note the AIModel and Toolchain entity kinds in the catalog rules. These are new in Backstage 2.0's Catalog Entity Model v2 and are central to standardizing AI toolchain access, which we cover in Step 5.
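
To illustrate the environment-specific override mechanism, here is a sketch of what an accompanying app-config.production.yaml might contain, layered on top of the base configuration above. The hostnames and variable names are placeholders for your own environment, and this assumes the same override semantics as the 1.x convention of passing multiple config files to the backend:

```yaml
# app-config.production.yaml -- loaded on top of app-config.yaml in production.
# Only keys that differ from the base configuration need to appear here.
app:
  baseUrl: https://portal.acme.com

backend:
  baseUrl: https://portal.acme.com
  database:
    connection:
      # Production credentials come from the environment, never from the file
      host: ${POSTGRES_PROD_HOST}
      user: ${POSTGRES_PROD_USER}
      password: ${POSTGRES_PROD_PASSWORD}
      ssl:
        rejectUnauthorized: true
```

Keeping production overrides this small makes drift between staging and production easy to spot in code review.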

Step 3: Migrate Your Service Catalog

The Software Catalog is the backbone of Backstage. Getting it right is the difference between a portal that developers bookmark and one they forget exists. The migration from a legacy catalog (or no catalog) is the most labor-intensive part of this process, but Backstage 2.0 makes it more manageable with its new Catalog Ingestion API.

Define Your Entity Ownership Model First

Before importing anything, decide on your ownership model. In 2026, the recommended approach is a two-tier ownership model:

  • Team ownership: Every entity (service, API, database, AI tool) is owned by a named team defined in the catalog.
  • Domain ownership: Teams belong to domains (e.g., "Payments Domain," "ML Platform Domain"), and domains roll up to a system view that executives and architects can use for dependency mapping.
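
The two-tier model maps naturally onto catalog Group entities, with teams declaring a parent domain. A minimal sketch (the payments-domain and payments-team names are illustrative, and the apiVersion follows the v2 convention used elsewhere in this guide):

```yaml
# Domain-level group: rolls teams up for executive and architect views
apiVersion: backstage.io/v2
kind: Group
metadata:
  name: payments-domain
spec:
  type: domain
  children: [payments-team]
---
# Team-level group: the concrete owner referenced by entities
apiVersion: backstage.io/v2
kind: Group
metadata:
  name: payments-team
spec:
  type: team
  parent: payments-domain
  children: []
```

With these in place, an entity's owner: group:payments-team reference resolves through the hierarchy, so dependency views can be rolled up by domain without any extra configuration.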

Use catalog-info.yaml as the Source of Truth

The most durable pattern for catalog population is the distributed catalog-info.yaml approach: each repository contains its own catalog-info.yaml file at the root, and Backstage discovers it via SCM integration. This keeps ownership metadata close to the code and makes it a natural part of the PR review process.
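
Automatic discovery of those per-repository files is driven by app-config.yaml. A minimal sketch, assuming the GitHub discovery provider keeps roughly its 1.x shape (the provider key, schedule values, and branch filter are illustrative and should be tuned to your organization):

```yaml
catalog:
  providers:
    github:
      acmeOrg:
        organization: acme-org        # scans every repo in this org
        catalogPath: /catalog-info.yaml
        filters:
          branch: main                # only trust the default branch
        schedule:
          frequency: { minutes: 30 }  # how often to rescan the org
          timeout: { minutes: 3 }
```

Because discovery runs on a schedule, a merged PR that adds or edits catalog-info.yaml shows up in the portal without anyone touching Backstage itself.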

A well-formed catalog entity for a microservice looks like this:

apiVersion: backstage.io/v2
kind: Component
metadata:
  name: payment-processor
  description: Handles all payment transaction logic for checkout flow
  annotations:
    github.com/project-slug: acme-org/payment-processor
    backstage.io/techdocs-ref: dir:.
    datadoghq.com/service-name: payment-processor-prod
  tags:
    - payments
    - critical-path
spec:
  type: service
  lifecycle: production
  owner: group:payments-team
  system: checkout-system
  dependsOn:
    - component:fraud-detection-service
    - resource:payments-postgres-db
  providesApis:
    - payment-processor-api

Automate the Initial Import

For teams with dozens or hundreds of existing repositories, manual catalog-info.yaml creation is not realistic. Use the Backstage 2.0 Catalog Importer CLI to scan your GitHub organization and auto-generate draft catalog files:

npx @backstage/catalog-importer scan \
  --org acme-org \
  --github-token $GITHUB_TOKEN \
  --output ./catalog-drafts/

Review the generated drafts, assign ownership, and open PRs to the relevant repositories. This process, even for a 200-service organization, can be completed in under a week with one dedicated engineer.

Step 4: Rebuild Your Golden Paths with the Scaffolder

One of the most powerful concepts in platform engineering is the Golden Path: an opinionated, well-maintained template for how to create and deploy a new service, data pipeline, or any other software artifact. The Backstage Scaffolder is the engine that makes Golden Paths self-service.

In Backstage 2.0, the Scaffolder received a major upgrade with Scaffolder v3, which introduces an action execution model based on a directed acyclic graph (DAG), conditional branching in templates, and native support for AI-assisted scaffolding steps.

Start with Your Three Most Common Workflows

Do not try to template everything at once. Identify the three workflows your developers perform most frequently and build Golden Path templates for those first. Common candidates include:

  • Creating a new backend microservice (with CI/CD, observability, and secrets management pre-wired)
  • Spinning up a new data pipeline with an approved AI/ML framework
  • Requesting access to an approved AI tool or LLM API endpoint

A Sample Golden Path Template for a New Service

Here is a condensed example of a Scaffolder v3 template that provisions a new Node.js microservice with GitHub Actions CI, Datadog monitoring, and an auto-generated catalog entry:

apiVersion: scaffolder.backstage.io/v3
kind: Template
metadata:
  name: nodejs-microservice
  title: Node.js Microservice (Golden Path)
  description: Creates a production-ready Node.js service with CI/CD, observability, and catalog registration
  tags:
    - recommended
    - nodejs
spec:
  owner: group:platform-team
  type: service
  parameters:
    - title: Service Details
      required: [name, owner, description]
      properties:
        name:
          type: string
          title: Service Name
          pattern: '^[a-z][a-z0-9-]*$'
        owner:
          type: string
          title: Owning Team
          ui:field: OwnerPicker
        description:
          type: string
          title: Short Description
        enableAIAssistant:
          type: boolean
          title: Enable AI Coding Assistant Integration
          default: true
  steps:
    - id: fetch-template
      name: Fetch Base Template
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
          owner: ${{ parameters.owner }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        repoUrl: github.com?owner=acme-org&repo=${{ parameters.name }}
        defaultBranch: main
    - id: register-catalog
      name: Register in Catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps.publish.output.repoContentsUrl }}
        catalogInfoPath: /catalog-info.yaml
    - id: provision-ai-access
      name: Provision AI Assistant Access
      if: ${{ parameters.enableAIAssistant }}
      action: toolchain:provision-ai-access
      input:
        serviceName: ${{ parameters.name }}
        team: ${{ parameters.owner }}
  output:
    links:
      - title: Open Repository
        url: ${{ steps.publish.output.remoteUrl }}
      - title: View in Catalog
        entityRef: ${{ steps.register-catalog.output.entityRef }}

The toolchain:provision-ai-access action is a custom action: your platform team writes it once, and every developer benefits from it from then on. It can call your internal API to add the new service's team to the approved AI tool access group, generate an API key scoped to that team, and store it in your secrets manager automatically.

Step 5: Standardize AI Toolchain Access Through the Portal

This is the step that separates a 2026 platform engineering setup from a 2023 one. AI tooling is no longer a nice-to-have in your developer platform; it is infrastructure. And like all infrastructure, it needs to be governed, discoverable, and self-service.

Introduce the AIModel and Toolchain Entity Kinds

Backstage 2.0's Catalog Entity Model v2 introduces two new first-class entity kinds specifically designed for this problem:

  • AIModel: Represents any AI model available to developers, whether it is an external API (OpenAI, Anthropic, Google Gemini), an internally hosted open-source model (Llama, Mistral), or a fine-tuned internal model. It captures the model's capabilities, usage policies, cost tier, and access requirements.
  • Toolchain: Represents a developer tool that wraps or integrates AI capabilities, such as a coding assistant, an AI-powered test runner, or an LLM-backed observability tool. It links to the underlying AIModel entities it uses and defines how developers request access.

Here is an example catalog entry for an internal LLM gateway:

apiVersion: backstage.io/v2
kind: AIModel
metadata:
  name: acme-llm-gateway
  description: Internal LLM gateway providing access to approved foundation models
  annotations:
    backstage.io/techdocs-ref: dir:.
  tags:
    - llm
    - approved
    - tier-1
spec:
  owner: group:ml-platform-team
  system: ai-platform
  lifecycle: production
  modelType: gateway
  underlyingModels:
    - gpt-4o
    - claude-3-7-sonnet
    - gemini-2-pro
  accessTier: standard
  costCenter: platform-shared
  usagePolicy: https://wiki.acme.com/ai-usage-policy
  requestAccess:
    type: selfService
    scaffolderTemplate: request-llm-gateway-access

Build a Self-Service AI Access Request Flow

The goal is zero-friction, fully auditable AI tool access. Developers should be able to open the portal, find the AI tool they need, click "Request Access," fill out a short form (use case, team, expected monthly token volume), and have access provisioned automatically within minutes, with a full audit trail.

This is achievable with a Scaffolder template that calls your identity provider's API to add the user to the appropriate access group, creates a scoped API key in your secrets manager, and opens a ticket in your ITSM system for compliance logging. The whole flow can be built in a day by one platform engineer and saves every developer on your team 30 to 60 minutes of back-and-forth per access request.
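
Concretely, the request-llm-gateway-access template referenced by the AIModel entry above might look like the sketch below. The idp:add-to-group, secrets:create-scoped-key, and itsm:open-ticket actions are hypothetical stand-ins for custom actions your platform team would implement against your own identity provider, secrets manager, and ITSM system:

```yaml
apiVersion: scaffolder.backstage.io/v3
kind: Template
metadata:
  name: request-llm-gateway-access
  title: Request LLM Gateway Access
  description: Self-service, audited access to the internal LLM gateway
spec:
  owner: group:ml-platform-team
  type: access-request
  parameters:
    - title: Access Request
      required: [useCase, team, monthlyTokenVolume]
      properties:
        useCase:
          type: string
          title: What will you use the gateway for?
        team:
          type: string
          title: Your Team
          ui:field: OwnerPicker
        monthlyTokenVolume:
          type: string
          title: Expected Monthly Token Volume
          enum: [under-1M, 1M-10M, over-10M]
  steps:
    - id: add-to-group
      name: Add Requester to Access Group
      action: idp:add-to-group           # hypothetical custom action
      input:
        group: llm-gateway-users
        member: ${{ user.entity.metadata.name }}
    - id: create-key
      name: Create Scoped API Key
      action: secrets:create-scoped-key  # hypothetical custom action
      input:
        scope: team:${{ parameters.team }}
    - id: audit-ticket
      name: Log for Compliance
      action: itsm:open-ticket           # hypothetical custom action
      input:
        summary: LLM gateway access for ${{ parameters.team }}
        useCase: ${{ parameters.useCase }}
```

The monthlyTokenVolume enum doubles as a cheap forecasting signal for the ML platform team's capacity planning, and the ticket step gives compliance its audit trail without blocking the request.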

Create an AI Toolchain Hub Page

Use Backstage's Custom Home Page feature to build a dedicated "AI Toolchain Hub" landing page within your portal. This page should surface:

  • All approved AI tools and models, with their current status (operational, degraded, deprecated)
  • Usage dashboards showing token consumption by team (helps with cost allocation)
  • Quick-access buttons for the most common access request flows
  • A "What's New" feed for AI tool updates and policy changes
  • Links to internal documentation and approved use-case examples

Step 6: Instrument for Cognitive Load Reduction

Reducing cognitive load is not a feeling; it is a metric. If you cannot measure it, you cannot improve it. Backstage 2.0's Platform Metrics plugin (included in the platform-engineering template) gives you a starting point, but you need to define what you are measuring before you start collecting data.

The Four Metrics That Matter Most

  • Time to First Contribution (TTFC): How long does it take a new engineer to merge their first PR? This is the single best proxy for onboarding cognitive load. A well-configured platform should drive this below three days for experienced engineers joining a new team.
  • Self-Service Rate: What percentage of common developer tasks (new service creation, access requests, environment provisioning) are completed without opening a ticket or asking in Slack? Track this per workflow type in your Scaffolder analytics.
  • Portal Adoption Rate: What percentage of your engineering organization uses the portal at least once per week? If this is below 60 percent, your portal is not yet the path of least resistance.
  • Tool Discovery Time: How long does it take a developer to find the right tool or documentation for a given task? This is harder to measure directly, but proxy metrics like search query volume and "did you find what you were looking for" survey scores work well.

Set Up the Feedback Loop

Instrument every Scaffolder template with a post-completion survey (Backstage 2.0 supports this natively via the Scaffolder output step). Keep it to two questions: "Did this template do what you expected?" and "What would make it better?" Route responses to a Slack channel your platform team monitors daily. This feedback loop is how your platform gets better every sprint.

Step 7: Organize Your Platform Team for Long-Term Success

The technology is only half the equation. The other half is the team and operating model that sustains it. Many Backstage migrations stall not because of technical problems but because of organizational ones: nobody owns the portal, the catalog goes stale, and developers stop trusting it.

The Platform Team as a Product Team

Your platform team should operate exactly like a product team, with these roles filled (even if part-time in smaller organizations):

  • Platform Product Manager: Owns the roadmap, prioritizes based on developer feedback, and communicates updates to the engineering organization. This role is often underestimated and is the most important hire for platform maturity.
  • Platform Engineers (2 to 4): Build and maintain the portal, plugins, Golden Path templates, and integrations. In 2026, these engineers also own the AI toolchain governance layer.
  • Developer Advocate (optional but high-ROI): Drives adoption, runs internal workshops, collects feedback, and creates documentation. This role pays for itself in adoption rate improvements within one quarter.

Establish a Plugin Governance Model

As your Backstage instance matures, teams will want to contribute their own plugins. This is a feature, not a bug, but it needs governance. Establish a lightweight plugin review process:

  • All plugins must have a named owner and be registered in the catalog.
  • Plugins must pass a security review before being added to the production portal.
  • Plugins that have not been updated in 12 months are flagged for deprecation review.

Step 8: Roll Out in Phases, Not All at Once

The biggest mistake engineering managers make when migrating to a platform engineering model is trying to do everything simultaneously. A phased rollout reduces risk, allows for course correction, and builds trust with your developer community incrementally.

  • Weeks 1-2 (Foundation): Complete the audit (Step 1), provision your Backstage 2.0 environment (Step 2), configure authentication and core plugins.
  • Weeks 3-5 (Catalog): Migrate your service catalog (Step 3). Start with your 20 most critical services. Announce the portal internally and invite early adopters.
  • Weeks 6-8 (Golden Paths): Build and launch your first three Scaffolder templates (Step 4). Collect feedback aggressively. Iterate.
  • Weeks 9-10 (AI Toolchain): Launch the AI Toolchain Hub and self-service access flows (Step 5). This is your flagship feature for 2026 and deserves a proper internal launch event.
  • Weeks 11-12 (Metrics and Hardening): Instrument your metrics (Step 6), complete the catalog for all remaining services, and run your first platform team retrospective.

Common Pitfalls to Avoid

Having guided multiple teams through this migration, here are the failure modes I see most often:

  • Skipping the audit: Teams that skip Step 1 and go straight to installing Backstage end up with a portal that mirrors all their existing confusion in a prettier UI. The audit is not optional.
  • Treating the catalog as a one-time project: The catalog is only valuable if it is accurate. Automate catalog validation in your CI pipelines so that a service with a stale or missing catalog-info.yaml cannot be deployed without a warning.
  • Building too many Golden Path templates too fast: Three excellent, well-tested templates beat fifteen mediocre ones. Quality drives adoption. Quantity drives abandonment.
  • Ignoring the AI governance layer: In 2026, every organization has at least one "shadow AI" problem where developers are using unapproved models or storing sensitive data in third-party AI tools. Your portal is the solution to this problem, but only if you build the governance layer intentionally.
  • No dedicated platform team ownership: If the portal is "everyone's responsibility," it is no one's responsibility. Assign a named owner and protect their time to work on the platform.
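
The catalog-validation pitfall above is cheap to automate. Here is a minimal sketch of a GitHub Actions job that fails a PR when catalog-info.yaml is missing or unparseable; extend it to check ownership fields and staleness against your own policy (the workflow name and file path are illustrative):

```yaml
# .github/workflows/catalog-check.yml
name: catalog-check
on: [pull_request]
jobs:
  validate-catalog-info:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Ensure catalog-info.yaml exists and parses
        run: |
          test -f catalog-info.yaml || { echo "::error::catalog-info.yaml is missing"; exit 1; }
          # yq is preinstalled on GitHub-hosted runners; a parse error fails the job
          yq eval '.metadata.name' catalog-info.yaml > /dev/null
```

Run this as a required status check and a stale or missing catalog entry becomes visible at review time instead of six months later.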

Conclusion: Your Portal Should Be Your Platform's Best Product

The shift from "we have a developer portal" to "we run a platform engineering model" is fundamentally a shift in mindset: from documentation to product, from reactive support to proactive enablement, and from fragmented tooling to a governed, discoverable, self-service experience.

Backstage 2.0 gives you the technical foundation to make that shift at any scale. The AI toolchain governance capabilities alone make the migration worthwhile in 2026, when the cost of unmanaged AI tool sprawl, in both dollars and security risk, is higher than ever.

But the technology is the easy part. The hard part is committing to treating your internal platform as a product your developers deserve. When you do that, the metrics follow: faster onboarding, higher self-service rates, lower cognitive load, and a developer experience that becomes a genuine competitive advantage for hiring and retention.

Start with the audit. Ship the catalog. Build three Golden Paths. Launch the AI hub. Measure everything. Iterate every sprint. That is the entire playbook. Now go build something your developers will actually love using.