The Digital Landscape is a tool that provides insights into the technologies used across ONS, helping to make informed decisions about technology adoption and usage. The Landscape provides a range of utilities, including:
- Tech Radar: A visual representation of the technologies used across ONS, categorised into Adopt, Trial, Assess or Hold based on their maturity and suitability for use within ONS.
- Projects: A list of projects within ONS and the technologies they use, allowing for better visibility and the ability to identify potential areas for technology consolidation or standardisation.
- Statistics: A collection of statistics about the language breakdown within the ONSDigital GitHub Organisation, providing insights into the most commonly used languages and their usage trends.
- GitHub Copilot Usage Metrics: Provides insights into the usage of GitHub Copilot across ONS, including organisation-wide statistics.
- GitHub Address Book: A mechanism to find ONS staff members based on their GitHub username and vice versa, facilitating communication and collaboration within ONS.
For more information about the project, please refer to our documentation site: Digital Landscape Documentation.
To submit a technology to be changed or added to the Tech Radar, please visit the Tech Radar Submissions repository (internal only).
- Node.js (v24.1.0 recommended)
- It is recommended to use Node Version Manager (nvm) to manage Node.js versions
- Python (v3.10 or higher recommended)
- It is recommended to use Python's built-in venv module to manage virtual environments alongside Poetry for dependency management
- Fly CLI (for Concourse deployments)
- Make (for using the Makefile commands)
This repository uses a Makefile to simplify common tasks. To see the available commands, run the following command:

```shell
make help
```

To run the project locally, do the following:
- Install frontend and backend dependencies:

  ```shell
  make install-dev
  ```
- Export the required environment variables:

  ```shell
  # AWS
  export AWS_ACCESS_KEY_ID=<your_access_key>
  export AWS_SECRET_ACCESS_KEY=<your_secret_key>
  export AWS_SECRET_NAME=<your_secret_name>
  export AWS_REGION=<your_region>

  # GitHub
  export GITHUB_APP_ID=<your_github_app_id>
  export GITHUB_APP_CLIENT_ID=<your_github_app_client_id>
  export GITHUB_APP_CLIENT_SECRET=<your_github_app_client_secret>
  export GITHUB_ORG=<your_github_organisation>
  ```

  Alternatively, you can use the `.env.example` files. Copy them to `.env` in both the frontend and backend directories and fill in the values.

  Security reminder: do not commit secrets, and do not put secrets in the `.env.example` files.
- Run the project:

  ```shell
  make dev
  ```
This will run both the frontend and backend locally on ports 3000 and 5001 respectively.
Sometimes it can be useful to run the frontend and backend separately (e.g. to separate the logs). This can be done with the following commands (each in their own terminal):

```shell
make frontend  # runs the frontend only
make backend   # runs the backend only
```
Note: If running in separate terminals, ensure the environment variables are exported in both terminals.
When running the backend locally, it bypasses the Application Load Balancer (ALB) authentication that is applied within AWS. Instead, the backend uses a developer user defined in the backend/src/services/cognitoService.js file via the helper function getDevUser().
This defaults the dev user to dev@ons.gov.uk with both the admin and reviewer groups, giving access to each respective page on the frontend. Should you want to run locally with the dev user in different groups, you can use the following environment variable:
```shell
export DEV_USER_GROUPS=group1,group2
```

The application supports sending alerts from the frontend to the backend, which then authenticates to Azure and forwards the alert payload to an Azure webhook. Additional information can be found in the alerts documentation: Alerts Documentation.
Note: These are optional to set up when running the project locally.
`POST /alerts/api/alert`

- Content-Type: `application/json`
- Body: JSON object. The backend forwards this object to the Azure webhook as JSON.
Example request:

```shell
curl -X POST http://localhost:5001/alerts/api/alert \
  -H "Content-Type: application/json" \
  -d '{"channel":"<channel-id>","message":"Radar page failed to load"}'
```

The backend needs Azure credentials and the webhook target.
The required variables:

- `AZURE_TENANT_ID`
- `AZURE_CLIENT_ID`
- `AZURE_CLIENT_SECRET`
- `WEBHOOK_SCOPE`
- `WEBHOOK_URL` (the URL of the Azure webhook endpoint)

Set these in backend/.env (see backend/.env.example). Security reminder: do not commit secrets.
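For reference, a filled-out backend/.env might look like the following (placeholder values only; the real values come from your Azure configuration):

```
AZURE_TENANT_ID=<your_tenant_id>
AZURE_CLIENT_ID=<your_client_id>
AZURE_CLIENT_SECRET=<your_client_secret>
WEBHOOK_SCOPE=<your_webhook_scope>
WEBHOOK_URL=<your_webhook_url>
```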
The frontend needs to know which channel to send the alert to, and the URL of the backend to post it to.
The required variables:

- `VITE_BACKEND_URL` (e.g. `http://localhost:5001` for local development)
- `VITE_ALERTS_CHANNEL_ID` (the channel identifier used by the Azure webhook to route the alert to the correct channel)

Set these in frontend/.env (see frontend/.env.example). Security reminder: do not commit secrets.
Frontend pages call the alert endpoint using the helper:
`frontend/src/components/Alerts/Alerts.js`
Example:
```javascript
import sendAlert from '../components/Alerts/Alerts';

// ...

try {
  // ...
} catch (err) {
  await sendAlert(
    'Error 🚨',
    err?.message || String(err),
    'Failed to fetch data for the Radar page'
  );
}
```

To set up the deployment pipeline with Concourse, you must first allowlist your IP address on the Concourse server. IP addresses are flushed every day at 00:00, so this must be done at the beginning of every working day on which the deployment pipeline is needed.
Instructions on this are available within KEH's Confluence Space.
All pipelines run within the sdp-pipeline-prod AWS account, whereas sdp-pipeline-dev is the account used for testing changes to the Concourse instance itself (i.e. configuration changes, not pipeline changes).
Our pipelines use the ecs-infra-user IAM user within AWS to interact with our infrastructure.
Credentials/secrets for pipelines are stored within AWS Secrets Manager on the sdp-pipeline-prod account, so you do not need to set up anything yourself.
To set the pipeline, run the following script:

```shell
chmod u+x ./concourse/scripts/set_pipeline.sh
./concourse/scripts/set_pipeline.sh
```

Note: You only have to run chmod the first time you run the script, in order to grant execute permissions.
This script will set the branch and pipeline name to whatever branch you are currently on.
It will also set the ECR image tag to the first 7 characters of the current branch name when running on a branch other than main.
For main, the ECR tag will be the latest release tag on the repository that follows semantic versioning (vX.Y.Z).
The pipeline name itself will usually follow this pattern:

- `digital-landscape-<branch-name>` for any non-main branch. When following our branching strategy, pipelines are normally postfixed with the Jira ticket number, e.g. `digital-landscape-KEH-1234`.
- `digital-landscape` for the main/master branch.
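As a rough illustration, the naming and tagging rules above could be expressed as the following hypothetical helper functions (these are not part of the repository; `set_pipeline.sh` is the authoritative implementation):

```shell
# Hypothetical sketch of the pipeline-name rule described above.
pipeline_name() {
  if [ "$1" = "main" ]; then
    echo "digital-landscape"
  else
    echo "digital-landscape-$1"
  fi
}

# Non-main branches: the ECR tag is the first 7 characters of the branch name.
ecr_tag() {
  printf '%s\n' "$1" | cut -c1-7
}

pipeline_name "KEH-1234"  # digital-landscape-KEH-1234
ecr_tag "KEH-1234-radar"  # KEH-123
```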
To deploy to prod, a GitHub Release must be made on GitHub. The release is required to follow semantic versioning of the form vX.Y.Z.
A manual trigger is then made on the digital-landscape > deploy-after-github-release job through the Concourse CI UI. This creates a github-create-tag resource that is required by the digital-landscape > build-and-push-prod job. The prod deployment job is then also triggered manually, ensuring that prod is only ever deployed from the latest GitHub release tag (in the form vX.Y.Z) and is manually controlled.
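The required vX.Y.Z tag format can be checked with a small sketch like this (illustrative only; the pipeline performs its own validation):

```shell
# Check that a release tag matches the required vX.Y.Z semantic-versioning form.
is_semver_tag() {
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}

is_semver_tag "v1.4.2" && echo "valid"    # valid
is_semver_tag "1.4.2" || echo "rejected"  # rejected
```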
More information on our typical deployment patterns in Concourse can be found in our Confluence space.
Once the pipeline has been set, you can manually trigger a dev build in the Concourse UI, or run the following command for a non-main branch deployment:

```shell
fly -t aws-sdp trigger-job -j digital-landscape-<branch-name>/build-and-push-dev
```

and for a main branch deployment:

```shell
fly -t aws-sdp trigger-job -j digital-landscape/build-and-push-dev
```

To destroy the pipeline, run the following command:

```shell
fly -t aws-sdp destroy-pipeline -p digital-landscape-<branch-name>
```

It is unlikely that you will need to destroy a pipeline, but the command is here if needed.
Note: This will not destroy any resources created by Terraform. You must manually destroy these resources using Terraform.
Note: All deployments of the Digital Landscape should be done through Concourse. Manual deployments should only be done in exceptional circumstances.
There are 3 Terraform configurations that need to be applied in order to deploy the service:
- `terraform/storage`: This creates the S3 bucket for storing the frontend assets and the Terraform state file.
- `terraform/authentication`: This creates the Cognito User Pool for user authentication.
- `terraform/service`: This creates the backend and frontend services, as well as the necessary AWS resources for the service to run (e.g. ECS cluster, ALB, etc.).
When deploying the new service and its resources, the configurations must be applied in the above order (storage, authentication, service).
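The required order can be sketched as a simple loop (illustrative only; the exact init/apply flags differ per module and environment):

```shell
# Apply the three Terraform configurations in their required order.
APPLIED=""
for module in storage authentication service; do
  APPLIED="${APPLIED:+$APPLIED }$module"
  echo "would apply terraform/${module}"
  # e.g. (cd "terraform/${module}" && terraform init && terraform apply)
done
echo "$APPLIED"  # storage authentication service
```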
There are 2 main steps to deploying the service:
- Updating the ECR image.
- Applying the Terraform configuration.
For other modules (i.e. storage, authentication), only step 2 is required.
When changes are made within the codebase, the code needs to be containerised and pushed to ECR for the Terraform configuration to pull the latest image and deploy it.
Dockerfiles for both the frontend and backend are located in the root of each respective directory (/frontend/Dockerfile and /backend/Dockerfile).
All of the push commands below are available for your environment within the AWS Console. Navigate to ECR > repository-name > View push commands.
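The console's push commands generally follow the shape below. All values here are hypothetical placeholders; use the exact commands shown for your repository in the AWS Console:

```shell
# Hypothetical values; substitute the ones from your ECR console.
ACCOUNT_ID=000000000000
REGION=eu-west-2
REPO=digital-landscape-backend
TAG=abc1234
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
echo "$IMAGE_URI"

# The push sequence then typically looks like:
# aws ecr get-login-password --region "$REGION" \
#   | docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
# docker build -t "$IMAGE_URI" ./backend
# docker push "$IMAGE_URI"
```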
Note: Before running the push commands from ECR, ensure that you have exported the AWS credentials:

```shell
export AWS_ACCESS_KEY_ID=<your_access_key>
export AWS_SECRET_ACCESS_KEY=<your_secret_key>
```

With the image updated in ECR, the Terraform configuration can be applied. Navigate to the terraform/service directory and do the following:
- Fill out the `.tfvars` files:

  - `env/dev/dev.tfvars` for the dev environment.
  - `env/prod/prod.tfvars` for the prod environment.

  These files can be created based on the respective `example_tfvars.txt` files in the same directories.

  Note: Do not commit the `.tfvars` files to the repository, and do not put secrets in the `example_tfvars.txt` files.

- Initialise Terraform:

  ```shell
  terraform init -backend-config=env/<environment>/backend-<environment>.tfbackend -reconfigure
  ```
- Refresh the Terraform state:

  ```shell
  terraform refresh -var-file=env/<environment>/<environment>.tfvars
  ```
- Apply the Terraform configuration:

  ```shell
  terraform apply -var-file=env/<environment>/<environment>.tfvars
  ```
Note: Replace `<environment>` with either `dev` or `prod` depending on which environment you are deploying to.
Production deployments should only be done via Concourse; manual deployments should only be performed when absolutely necessary.
This repository uses MkDocs for documentation. The documentation source files are located in the docs directory.
MkDocs gets deployed to GitHub Pages using GitHub Actions. The workflow for this is located at .github/workflows/deploy-docs.yml.
Before deployment, another GitHub Action workflow runs to check that the documentation builds correctly and has no linting or formatting issues.
This workflow is located at .github/workflows/ci-docs.yml.
To run the documentation locally:

- Create and activate a Python virtual environment (optional but recommended):

  ```shell
  python -m venv venv
  source venv/bin/activate
  ```

- Install the required dependencies:

  ```shell
  make install-docs
  ```
Note: This will install the dependencies for MkDocs and any MkDocs plugins we use. If a virtual environment is not activated, Poetry will configure its own virtual environment.
- Run the MkDocs development server:

  ```shell
  make serve-docs
  ```
This repository has GitHub Actions workflows set up for linting, testing and other CI jobs. The workflows are located at:
- `.github/workflows/ci-docs.yml`: For documentation linting and build checks.
- `.github/workflows/ci-fmt.yml`: For code formatting checks using Prettier (and `terraform fmt` for Terraform).
- `.github/workflows/ci-lint.yml`: For code linting checks using ESLint.
- `.github/workflows/deploy-docs.yml`: For deploying the documentation to GitHub Pages.
- `.github/workflows/mega-linter.yml`: For running MegaLinter checks across the repository.
The application has the following tests:
- Unit tests for both the frontend and backend (Vitest).
- UI tests for the frontend (Playwright).
- Accessibility tests for the frontend (Playwright / axe-core).
Unit tests for both the frontend and backend are written using Vitest. To run the unit tests, do the following:
- Ensure you have installed the development dependencies:

  ```shell
  make install-dev
  ```

- Run the unit tests:

  ```shell
  make test-unit
  ```
This will run all unit tests for both the frontend and backend. If you want to run the unit tests separately, you can use the following commands:
```shell
make test-unit-frontend  # runs frontend unit tests only
make test-unit-backend   # runs backend unit tests only
```

Within the frontend/backend directories, the tests are located in the src directory alongside the code they are testing.
This keeps tests easily discoverable and maintainable, as they are co-located with the code they cover. It also encourages writing tests for new code, as the test files sit right alongside the new code files.
For example:
```
frontend/
├── src/
│   ├── components/
│   │   ├── MyComponent.js
│   │   └── MyComponent.test.js
│   ├── pages/
│   │   ├── MyPage.js
│   │   └── MyPage.test.js
│   ├── App.js
│   └── App.test.js
etc...

backend/
├── src/
│   ├── routes/
│   │   ├── myRoute.js
│   │   └── myRoute.test.js
│   ├── services/
│   │   ├── myService.js
│   │   └── myService.test.js
│   ├── index.js
│   └── index.test.js
```
Further information on how to run the UI and Accessibility tests can be found in the README files within the respective directories:

- UI Tests: `./testing/ui/README.md`
- Accessibility Tests: `./testing/accessibility/README.md`
Both can be run from the root of the repository with the following commands:

```shell
make test-ui
make test-accessibility
```

This will simply run the setup and test commands for the UI and Accessibility tests respectively, as defined in their own Makefiles.
The application uses ESLint for linting and Prettier for formatting. To run the linters and formatters, do the following:
- Ensure you have installed the development dependencies:

  ```shell
  make install-dev
  ```

- Run the linters and formatters:

  ESLint:

  ```shell
  make lint      # This will only check for linting issues and not fix them.
  make lint-fix  # This will check for linting issues and fix them where possible.
  ```

  Prettier:

  ```shell
  make format        # This will automatically format the code using Prettier.
  make format-check  # This will check if the code is formatted correctly without making any changes.
  ```
This repository uses MegaLinter for comprehensive linting across multiple languages and file types. We use this so that all additional assets in the repository (e.g. YAML files, Markdown files, etc.) are also linted and checked for formatting issues, without having to set up specific linters for each file type.
To run MegaLinter, do the following:
```shell
make megalint-check  # This will run MegaLinter and check for any linting or formatting issues without making any changes.
make megalint-fix    # This will run MegaLinter and attempt to fix any linting or formatting issues where possible.
```

Note: MegaLinter can be quite slow to run, especially the fix command, as it runs multiple linters and formatters under the hood. It is recommended to let MegaLinter run via GitHub Actions and address any issues that arise there, rather than running it locally.
This repository uses Markdownlint for linting the documentation. To run Markdownlint, do the following:

- Install Markdownlint:

  ```shell
  npm install -g markdownlint-cli
  ```

- Run Markdownlint:

  ```shell
  make lint-docs      # This will only check for linting issues and not fix them.
  make lint-docs-fix  # This will check for linting issues and fix them where possible.
  ```
To test that the documentation builds correctly with MkDocs, run the following command:
```shell
make build-docs
```

Note: This depends on MkDocs being set up for the repository. Instructions for setting up MkDocs can be found in the Documentation section of this README.