Run the following commands to install KEDA on your cluster:
helm repo add kedacore https://kedacore.github.io/charts
helm upgrade --install keda kedacore/keda --namespace keda --create-namespace
KEDA automatically scales deployments based on queue size.
3. Configure an ingress
Configure an ingress, gateway, or Istio for your LangSmith instance. All agents will be deployed as Kubernetes services behind this ingress. See Set up an ingress. You must provide a hostname in your langsmith_config.yaml.
4. Verify cluster capacity
Ensure your cluster has available capacity for multiple deployments. A cluster autoscaler is recommended.
5. Verify storage
Ensure a valid dynamic PV provisioner or PVs are available on your cluster.
kubectl get storageclass
At least one StorageClass should have a PROVISIONER value (not kubernetes.io/no-provisioner) and be marked (default), or you must configure one before proceeding.
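If no suitable default exists, you can mark an existing StorageClass as the cluster default. A sketch using the standard Kubernetes annotation; replace <name> with a StorageClass from the output above:

```shell
# Mark an existing StorageClass as the cluster default.
kubectl patch storageclass <name> \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

If another StorageClass is already annotated as default, set its annotation to "false" first so only one default remains.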
6. Verify egress
Ensure egress to https://beacon.langchain.com is available. See the egress documentation.
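A quick connectivity check from inside the cluster network, using curl (any HTTP client works):

```shell
# Expect an HTTP status code (for example 200 or 404) rather than a timeout or DNS error.
curl -sS -o /dev/null -w "%{http_code}\n" https://beacon.langchain.com
```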
Run the following command to apply the changes. This command is used throughout this guide whenever you are asked to apply changes. Replace <version> and <namespace> with your values:
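The apply step is a standard Helm upgrade against your values file. A sketch, assuming the release is named langsmith and the chart is referenced as langchain/langsmith (your chart reference and release name may differ):

```shell
helm upgrade --install langsmith langchain/langsmith \
  --values langsmith_config.yaml \
  --version <version> \
  --namespace <namespace>
```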
Each feature uses its own Fernet encryption key to encrypt feature-specific secrets such as credentials and tokens. Separate keys allow independent rotation and limit exposure if a key is compromised. Generate one key per feature using Python:
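If the cryptography package is installed, `Fernet.generate_key()` produces the key directly. A dependency-free sketch that emits the same format (a Fernet key is 32 random bytes, URL-safe base64-encoded):

```python
import base64
import os

def generate_fernet_key() -> bytes:
    # A Fernet key is 32 random bytes, URL-safe base64-encoded (44 characters).
    return base64.urlsafe_b64encode(os.urandom(32))

# Generate one key per feature; the feature names mirror the parameters below.
for feature in ("agent_builder", "insights", "polly"):
    print(f"{feature}: {generate_fernet_key().decode()}")
```

Run this once per feature and keep each key somewhere safe; you will reference them in the next step.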
We recommend storing each key in a predefined Kubernetes secret rather than setting them directly in your config file. See Use an existing secret for the relevant parameters: agent_builder_encryption_key, insights_encryption_key, and polly_encryption_key.
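For example, a Secret holding all three keys might look like the following; the Secret name langsmith-encryption-keys is illustrative, while the key names match the parameters above:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: langsmith-encryption-keys  # illustrative name; use your own
type: Opaque
stringData:
  agent_builder_encryption_key: <generated-fernet-key>
  insights_encryption_key: <generated-fernet-key>
  polly_encryption_key: <generated-fernet-key>
```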
Add the configuration to your langsmith_config.yaml
Using Kubernetes secrets (recommended)
Using inline values
Reference your existing secret by name. The chart reads agent_builder_encryption_key, insights_encryption_key, and polly_encryption_key from it automatically.
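A sketch of what this reference might look like in langsmith_config.yaml; the parameter name shown here (existingSecretName) is hypothetical, so check the Use an existing secret reference for the exact field in your chart version:

```yaml
config:
  existingSecretName: langsmith-encryption-keys  # hypothetical parameter name
```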
agentBuilderToolServer and agentBuilderTriggerServer are required for Fleet.
Each feature deploys its own dedicated PostgreSQL and Redis instances by default. To use external databases instead, configure the postgres.external and redis.external sections under each feature. For example:
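A sketch of external database configuration for one feature. The postgres.external and redis.external section names come from the text above; the feature section name (agentBuilder) and the connection fields are assumptions about the schema, so verify them against your chart's values reference:

```yaml
agentBuilder:
  postgres:
    external:
      enabled: true
      host: postgres.internal.example.com  # placeholder host
      port: 5432
      # credentials and database name as required by your chart
  redis:
    external:
      enabled: true
      connectionUrl: redis://redis.internal.example.com:6379  # placeholder URL
```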
(Optional) Enable OAuth tools and triggers for Fleet
To enable OAuth-based tools such as Gmail, Slack, or Linear in Fleet, configure the providerOrgId and add provider IDs for each integration you want to use. You can enable any combination of providers.
Add the following URLs to your OAuth client, replacing <hostname> with your LangSmith hostname and <provider-id> with the provider ID you’ll use (for example, google):

Authorized JavaScript origins:
To enable Microsoft OAuth for Fleet, create an Azure app registration, add the required Microsoft Graph delegated permissions, and configure a Microsoft OAuth provider in LangSmith.
Select the account type that matches your deployment. If you need users from multiple Microsoft Entra tenants to authenticate, choose a multi-tenant option. If your deployment is limited to one tenant, you can use a single-tenant app registration.
3. Add the redirect URI
Add the following web redirect URI, replacing <hostname> with your LangSmith hostname and <provider-id> with your provider ID:
In your app settings, go to the Auth tab. Add the following redirect URI, replacing <hostname> with your LangSmith hostname and <provider-id> with your provider ID:
Add the following redirect URI to your Slack app under OAuth & Permissions > Redirect URLs, replacing <hostname> with your LangSmith hostname and <provider-id> with your provider ID (for example, slack):
Fleet integrates with GitHub through a dedicated GitHub App (not an OAuth app). The GitHub App provides repository access for Fleet’s GitHub tools and supports the user authorization flow required for private repository access.

Setup involves creating a GitHub App, gathering its credentials, storing them as Kubernetes secrets, and referencing them from your langsmith_config.yaml.
You can create the app under a personal account or an organization. If multiple people will manage the integration, an organization-owned app is recommended.
2. Fill in basic details
GitHub App name: Any unique name, for example acme-langsmith-fleet. Make a note of the slug GitHub generates (the lowercased, hyphenated form of the name), as this is the value you’ll use for FLEET_GITHUB_APP_SLUG.
Homepage URL: Your LangSmith hostname, for example https://langsmith.acme.com.
Deselect Active under Webhook for now. You’ll enable it in a later step after generating a webhook secret.
3. Set callback URLs
Under Identifying and authorizing users, add the following Callback URL, replacing <hostname> with your LangSmith hostname:
Paste the generated value into Webhook secret. Save it, as you’ll need the same value when creating the Kubernetes secret in a later step.
5. Set repository permissions
Under Permissions > Repository permissions, grant the following:
Contents: Read and write
Issues: Read and write
Pull requests: Read and write
Metadata: Read-only (automatically selected)
Under Permissions > Account permissions, grant Email addresses: Read-only.
These are the minimum permissions required for Fleet’s built-in GitHub tools (issue management, pull request creation, repository content access). Adjust if you need additional tool capabilities.
6. Choose install visibility
Under Where can this GitHub App be installed?, select the option that matches your distribution needs. For most self-hosted deployments, Only on this account is correct.
7. Create the app
Click Create GitHub App. On the app settings page, note the following values:
| Value | Where to find it | Environment variable |
| --- | --- | --- |
| App ID | Numeric, at the top of the page | FLEET_GITHUB_APP_ID |
| Public link | For example, https://github.com/apps/acme-langsmith-fleet | FLEET_GITHUB_APP_PUBLIC_LINK |
| App slug | Last path segment of the public link | FLEET_GITHUB_APP_SLUG |
| Client ID | Under About | FLEET_GITHUB_APP_CLIENT_ID |
8. Generate a client secret
Under Client secrets, click Generate a new client secret and copy the value. This is FLEET_GITHUB_APP_CLIENT_SECRET. GitHub only shows it once.
9. Generate a private key
Scroll to Private keys and click Generate a private key. GitHub downloads a .pem file. Keep this file secure, as it grants full access to the GitHub App. The PEM contents are FLEET_GITHUB_APP_PRIVATE_KEY.
10. Generate a state JWT secret
LangSmith signs short-lived OAuth state tokens with an HMAC key. Generate one:
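One straightforward way to generate a suitable key, using Python's standard library:

```python
import secrets

# 32 bytes of entropy, hex-encoded: a suitable HMAC signing key.
state_jwt_secret = secrets.token_hex(32)
print(state_jwt_secret)
```

Any cryptographically random value of comparable entropy works; openssl rand -hex 32 produces an equivalent key from the shell.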
FLEET_GITHUB_APP_ENABLED must be set on the tool server so the GitHub tools are registered. The remaining FLEET_GITHUB_APP_* variables are consumed by the platform backend and live under commonEnv.
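A sketch of how this split might look in langsmith_config.yaml. The commonEnv and agentBuilderToolServer section names come from this guide, but the env list shape is an assumption; verify the field paths against your chart's values reference:

```yaml
commonEnv:
  - name: FLEET_GITHUB_APP_ID
    value: "123456"          # placeholder; from the app settings page
  - name: FLEET_GITHUB_APP_CLIENT_ID
    value: "<client-id>"
  # remaining FLEET_GITHUB_APP_* values, ideally sourced from Kubernetes secrets

agentBuilderToolServer:
  env:
    - name: FLEET_GITHUB_APP_ENABLED
      value: "true"          # registers the GitHub tools on the tool server
```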
In LangSmith, open a Fleet agent and go to the GitHub integration in the agent editor.
Click Connect GitHub to install the app on the repositories Fleet should access.
For private repositories, you must explicitly select each repository during installation.
Each user must also authorize the GitHub App against their own GitHub account using the re-auth flow in LangSmith. This allows Fleet to resolve per-user tokens for tools that act on behalf of a user.
If your Agent Server deployments will use images from private container registries (for example, AWS ECR, Azure ACR, or GCP Artifact Registry), configure image pull secrets. This configuration applies to all deployments automatically, allowing them to authenticate with your private registry.
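One common approach is a docker-registry Secret that deployments reference as an image pull secret. A sketch with placeholder registry values:

```shell
kubectl create secret docker-registry langsmith-registry-creds \
  --docker-server=<registry-host> \
  --docker-username=<username> \
  --docker-password=<password> \
  --namespace <namespace>
```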
Retrieving server logs is not supported for self-hosted deployments where the control plane (host-backend) and data plane (listener) are deployed in different Kubernetes clusters.
For deployments where the control plane and data plane are in the same cluster, ensure the control plane Kubernetes deployment (host-backend) has permission to get, list, and watch Kubernetes deployments, pods, replicasets, and logs from the namespace where the Agent Server deployment exists. There are different ways to achieve this. The following example uses Kubernetes RBAC, but use the approach that best fits your use case:
1. Create a Role with the required permissions
Create a Role in the Agent Server namespace. Replace <data_plane_namespace>:
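A sketch of a Role and RoleBinding granting the get, list, and watch verbs listed above. The resource names (langsmith-log-reader) are illustrative, and the host-backend ServiceAccount name and namespace are placeholders you must fill in from your control plane deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: langsmith-log-reader        # any name works
  namespace: <data_plane_namespace>
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: langsmith-log-reader
  namespace: <data_plane_namespace>
subjects:
  - kind: ServiceAccount
    name: <host_backend_service_account>   # the control plane's service account
    namespace: <control_plane_namespace>
roleRef:
  kind: Role
  name: langsmith-log-reader
  apiGroup: rbac.authorization.k8s.io
```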
In this example, the Role and RoleBinding are defined in the same Kubernetes namespace as the Agent Server deployment. You can assign any name to the Role and RoleBinding and customize them as needed.
Once your infrastructure is set up, you’re ready to deploy agent applications. See the deployment guides for instructions on building and deploying your agent applications.