| Staff vetting | None |
| --- | --- |
| Staff security clearance | Internally access-controlled and trained (see below) |
| Government security clearance | Not required |
| Approach to internal security control | We tightly control who has access to the database containing user-submitted planning application data. Access is usually restricted to 3 service engineers, all of whom are fully trained and aware of the sensitivity and security requirements around that data.<br>The only reason for engineers to access that data will be:<br>Read our team IT and Cybersecurity policy |
| Data protection between buyer and supplier networks | All web traffic is encrypted using TLS 1.3+ (see the sketch below this table).<br>We use industry-standard certificate authorities: https://letsencrypt.org and https://www.cloudflare.com/ssl.<br>There are no extra costs for any TLS/SSL purchases. |
| --- | --- |
| Data protection within supplier network | As above |
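As a minimal illustration of the TLS floor described above, the sketch below shows a Node.js HTTPS server configured to refuse handshakes below TLS 1.3. In practice, TLS for PlanX terminates at Cloudflare rather than in application code, so the certificate paths, port, and handler here are purely illustrative assumptions.

```typescript
// Minimal sketch: a Node.js HTTPS server that rejects anything below TLS 1.3.
// Certificate paths and port are placeholders, not real deployment values.
import * as https from "node:https";
import * as fs from "node:fs";

const server = https.createServer(
  {
    minVersion: "TLSv1.3", // refuse TLS 1.2 and older handshakes
    cert: fs.readFileSync("/path/to/fullchain.pem"), // e.g. issued by Let's Encrypt
    key: fs.readFileSync("/path/to/privkey.pem"),
  },
  (req, res) => {
    res.writeHead(200, { "content-type": "text/plain" });
    res.end("hello over TLS 1.3\n");
  },
);

server.listen(443);
```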
| Secure storage | Data is held at data centres (AWS) with all appropriate security measures in place. |
| --- | --- |
| Encryption | Amazon RDS encryption uses the industry-standard AES-256 encryption algorithm. |
| Access controls | We follow Cyber Essentials best practices for access control to our AWS accounts.<br>Access is limited to authorised developers, with mandatory Multi-Factor Authentication (MFA) in place. Once authenticated, developers use an IAM role with a limited permission set, following the principle of least privilege. |
| Secure key management | We follow Cyber Essentials best practices for secure key management.<br>All services use separate, named accounts, and all “root” passwords are stored in a secure vault system with limited access.<br>Please see our IT & Security Policy for further details. |
| IP ‘allow list’ | We do not maintain and operate an IP allow list.<br>This is mitigated through the use of:<br>• Enforced MFA to access all services<br>• Restricting key services via Cloudflare’s Zero Trust “WARP” platform |
| Separating backups: how do we prevent ransomware attacks from deleting or encrypting backup data? | We do not currently separate our backups from other AWS resources, or employ an immutable storage facility.<br>This is mitigated through the use of:<br>• Enforced MFA to access all services<br>• Named user accounts, with IAM roles operating under the principle of least privilege |
| Auditing and monitoring | We do not currently audit access by developers to AWS resources, such as our RDS databases.<br>This is mitigated through the use of:<br>• Enforced MFA to access all services<br>• Named user accounts, with IAM roles operating under the principle of least privilege |
| Data sanitisation and equipment disposal | Please see our Data Retention Policy for information on how data is handled within PlanX.<br>Our IT & Security Policy details how we handle equipment. |
| Can information be shared externally directly from within the platform (e.g. via email or URL), and if so, are there measures in place to ensure sensitive data is not shared inappropriately? | The only way of sharing data externally from within the public interface is via the ‘Save and Return’ feature. ‘Magic link’ is an established, tested method and we have followed all security best practices around it.<br>The only way of using the editor interface to share data externally would be to change the email address on the ‘Send to Email’ function. In future, the ability to change this address will be limited to admins. Currently, council admins have to provide this email address to us directly. |
| Separation of customers’ data | PlanX employs a multi-tenant software architecture. While the data resides within a shared database, we have established a robust system to maintain separation between distinct instances of council data and services.<br>PlanX manages the balance between multi-tenancy and data segregation by focusing on Role-Based Access Control (RBAC) mechanisms. These mechanisms are designed to ensure that only authorised team members can access and manipulate data, mitigating the potential risks associated with shared infrastructure. This safeguard operates not only at the user-interface level but also at the API and database level, through our use of Hasura’s user permissions model (see the sketch below this table). |
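To illustrate how Hasura-style RBAC keeps tenants separated, the sketch below mints a JWT carrying Hasura session variables; a row-level permission then filters every query by the tenant key in those claims. The role name and the `x-hasura-team-id` variable are hypothetical stand-ins, not PlanX's actual claim names or roles.

```typescript
// Sketch: minting a JWT whose claims drive Hasura's row-level permissions.
// The role and the `x-hasura-team-id` session variable are hypothetical.
import jwt from "jsonwebtoken";

function mintHasuraToken(userId: string, teamId: string): string {
  const payload = {
    sub: userId,
    // Hasura reads its session variables from this well-known claims namespace.
    "https://hasura.io/jwt/claims": {
      "x-hasura-default-role": "teamEditor",
      "x-hasura-allowed-roles": ["teamEditor"],
      "x-hasura-user-id": userId,
      "x-hasura-team-id": teamId, // hypothetical tenant key
    },
  };
  return jwt.sign(payload, process.env.HASURA_JWT_SECRET!, { expiresIn: "1h" });
}

// On the Hasura side, a select permission on a tenant-scoped table would use a
// row filter such as { "team_id": { "_eq": "X-Hasura-Team-Id" } }, so a
// teamEditor can only ever read rows belonging to their own council.
```

Because a filter of this kind is enforced for every query the database receives, a bug in the user interface or a handcrafted API call cannot reach another council's rows.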
|  | Public interface | Editor interface | API |
| --- | --- | --- | --- |
| User authentication | No login required.<br>Save and Return uses passwordless authentication, with a ‘magic link’ sent to the user’s email inbox (see the sketch below this table). | Single sign-on (SSO) using Google federated ID only.<br>SSO with Microsoft ID is now possible. | Single sign-on (SSO) using Google federated ID only.<br>Certain endpoints can only be accessed by internal services using API keys.<br>Restricted “admin” endpoints can only be accessed when signed in to Cloudflare’s WARP client.<br>SSO with Microsoft ID is on the roadmap. |
| Access restrictions in management interfaces and support channels | N/A | The Plan✕ editor uses role-based access control for admins and editors. Users are authenticated using federated identities (e.g. Google or Microsoft, via the OAuth 2.0 standard). Attempts to circumvent these restrictions (e.g. via the API) return an error, and the request is logged.<br>Third-party support channels used by OSL enforce industry-standard authentication and require two-factor authentication whenever possible.<br>We use Hasura to handle authenticating users for data access. An access log is kept centrally, detailing permission levels.<br>Access permissions are based on roles and controlled by customer admins. Users can be revoked by admins. | N/A |
| Access restriction testing frequency | Integration tests and end-to-end tests run multiple times per day (and always at least once), checking that access policies work as expected.<br>If a code change were to break existing policies, it would not be possible to deploy that change.<br>Additionally, this is checked annually as part of a security check by an independent third party. | As public interface | As public interface |
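For context, the sketch below shows the general 'magic link' pattern referred to above: a single-use, expiring token is emailed to the user, and only a hash of it is stored server-side. The URL, expiry window, and in-memory store are illustrative assumptions, not PlanX's actual implementation.

```typescript
// Sketch of the "magic link" pattern: a single-use, expiring token is emailed
// to the user; the server keeps only its hash. All names here are illustrative.
import { randomBytes, createHash } from "node:crypto";

const TOKEN_TTL_MS = 15 * 60 * 1000; // hypothetical 15-minute expiry

// In-memory stand-in for a database table of pending tokens.
const pendingTokens = new Map<string, { email: string; expiresAt: number }>();

const hash = (token: string) => createHash("sha256").update(token).digest("hex");

export function createMagicLink(email: string): string {
  const token = randomBytes(32).toString("hex"); // 256 bits, unguessable
  // Store only the hash, so a database leak does not expose usable links.
  pendingTokens.set(hash(token), { email, expiresAt: Date.now() + TOKEN_TTL_MS });
  return `https://example.com/resume?token=${token}`; // sent to the user's inbox
}

export function redeemMagicLink(token: string): string | null {
  const key = hash(token);
  const record = pendingTokens.get(key);
  pendingTokens.delete(key); // single use: valid or not, the token is spent
  if (!record || record.expiresAt < Date.now()) return null;
  return record.email; // caller can now restore the saved application
}
```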
| Access to user activity audit information | Available on request.<br>Authenticated users: the following operations retain a foreign-key relationship with the authenticated user who executed them. This is retained indefinitely (for the lifetime of the resource).<br>• Any CRUD operation on a flow<br>• Publishing a flow<br>Any queries or mutations made by an authenticated user are captured in our logs and retained for 1 month.<br>We do not store user login/logout activity.<br>Public users: we do not retain user activity audit information for any guidance services (e.g. “Find out if…”). Anonymised analytics logs are collected on these services.<br>Submission services (where “Save & Return” is implemented) maintain user activity audit information in the form of “breadcrumbs”, which describe an applicant’s journey through the service. |
| --- | --- |
| How long is user audit data stored for? | Authenticated users: please see above.<br>Public users: please see our data retention policy. |
| Access to supplier activity audit information | We do not currently audit access by developers specifically.<br>Any activity conducted through the PlanX application requires developers to log in, meaning they are audited as authenticated users; see above for further details. |
| How long supplier audit data is stored for | See above. |
| How long system logs are stored for | 1 month |
| How often are system logs checked for alerts or suspicious activity? | We do not currently have a standardised process or schedule for manually monitoring our logs.<br>We mitigate against this via the following methods:<br>• Use of Cloudflare WAF and Managed Rules to automatically monitor and protect our services<br>• Automated error logging and reporting via Airbrake<br>• Rate limiting our API (see the sketch below this table) |
| What types of information can be contained within the access audit? | See above. |
| What format can the access audit be in? | Within reason, we could export this data to a variety of formats. |
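As an example of the API rate limiting listed above, the sketch below uses Express with the express-rate-limit middleware. The window, request cap, and route are assumed values for illustration, not PlanX's production configuration.

```typescript
// Sketch of API rate limiting with Express and express-rate-limit.
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

const apiLimiter = rateLimit({
  windowMs: 60 * 1000,   // 1-minute window (illustrative)
  max: 100,              // at most 100 requests per IP per window (illustrative)
  standardHeaders: true, // send RateLimit-* headers so clients can back off
  legacyHeaders: false,  // omit the deprecated X-RateLimit-* headers
});

app.use("/api/", apiLimiter); // throttle only API routes
app.get("/api/health", (_req, res) => res.json({ ok: true }));

app.listen(3000);
```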
| Named board-level person responsible for service security | Alastair Parvin |
| --- | --- |
| Security governance certified | No |
| Security governance approach | The Plan✕ senior developers are tasked with ensuring OSL security policies are complied with. The operations lead is tasked with putting in place reviews and procedures to ensure appropriate reviews and audits are carried out, as far as is reasonable.<br>We seek external advice from trusted experts whenever possible. |
| Information security policies and processes | The CEO is a director of OSL and is ultimately responsible for ensuring policies and processes are well designed and followed. Directors receive a report from the Plan✕ tech lead at board meetings. OSL maintains a risk register and an issue identification and escalation process. Company procedures are regularly reviewed to ensure best-practice compliance. |
| ICO registration | ZA487800 |
| Data protection policy in place | Yes. Our data protection policy applies to all staff (including freelancers) and any third-party provider who may process personal or sensitive data (as set out in the Data Processing Agreement / Statement of Processing Activity). |
| Managing access to cloud infrastructure | Our approach to managing access to cloud infrastructure aligns with Cyber Essentials compliance standards.<br>Access is limited to authorised developers, with mandatory Multi-Factor Authentication (MFA) in place. Once authenticated, developers use an IAM role with a limited permission set, following the principle of least privilege.<br>Infrastructure as Code (IaC) operations through Pulumi are channelled exclusively through our Continuous Integration (CI) process, which ensures rigorous testing and validation before any deployment occurs, particularly in production environments (see the sketch below this table). |
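To make the least-privilege model above concrete, here is a sketch in Pulumi's TypeScript SDK of an IAM role that can only be assumed when MFA is present and that grants a deliberately narrow, read-only permission set. The account ID, role name, and bucket are placeholders, not resources from PlanX's actual stack.

```typescript
// Sketch: a least-privilege IAM role defined with Pulumi. MFA is required to
// assume the role, and it can only read one hypothetical S3 bucket.
import * as aws from "@pulumi/aws";

const developerRole = new aws.iam.Role("developer-role", {
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::123456789012:root" }, // placeholder account
      Action: "sts:AssumeRole",
      // Only sessions authenticated with MFA may assume this role.
      Condition: { Bool: { "aws:MultiFactorAuthPresent": "true" } },
    }],
  }),
});

new aws.iam.RolePolicy("developer-role-policy", {
  role: developerRole.id,
  policy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: ["s3:GetObject", "s3:ListBucket"], // narrow, read-only permissions
      Resource: [
        "arn:aws:s3:::example-app-data",   // hypothetical bucket
        "arn:aws:s3:::example-app-data/*",
      ],
    }],
  }),
});
```

Routing changes like this exclusively through CI means every modification to the role is tested and reviewed before it reaches production.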
| How do we prevent attacks on our services and data? | Our primary method of prevention is routing incoming traffic to our services via Cloudflare.<br>Cloudflare offers an integrated suite of security, performance, and reliability solutions, effectively shielding our web application from threats. It provides robust DDoS protection and a web application firewall (WAF), which protects us from a range of potential vulnerabilities.<br>Our frontend uses the AWS CloudFront CDN, which has AWS Shield (a comparable service) enabled.<br>In future, we aim to simplify this by also using Cloudflare for our frontend application. |
| --- | --- |
| How do we detect attacks on our services and data? | Cloudflare’s WAF has the following rulesets enabled:<br>• Cloudflare Managed Ruleset<br>• Cloudflare OWASP Core Ruleset<br>• Cloudflare Exposed Credentials Check Ruleset<br>These allow us to effectively detect attacks by monitoring incoming, blocked, or suspicious traffic. |
| How do we respond after identifying an attack? | Please see our PlanX Incident Response Plan for full details. |
| Continuous monitoring | We always update images when they are built (e.g. `apt-get update`). We deploy to staging and production at a high frequency, which ensures that images are refreshed often.<br>In addition, we have a service configured via GitHub (Dependabot) which scans our software (including Docker images) for vulnerabilities on a nightly basis. |
| Reviewing security practices and compliance | Ongoing.<br>We will formally review these at least annually, following an external security audit. |
| How often do we carry out external penetration testing by suitably qualified teams? | Annually |
| Incident management approach | A risk log is maintained and mitigation actions are captured. Incidents are checked against this log to ensure we are constantly learning to prevent recurrence. Many incidents can be automatically detected and logged. Customers can report incidents via their Account Manager or through an issue report. |
| --- | --- |
| Incident reporting | Any breaches or significant incidents will be logged and reported to the service admin within 24 hours of discovery. |
| Incident response | On being informed of a vulnerability, we would expect to respond within 24 hours, either confirming that the vulnerability has been fixed or providing a plan and timeline for fixing it. |