
Top 5 fails when developing a DiGA backend (German Digital Therapeutics – DTx)

Navigating the jungle of ISO norms and DiGAV annexes can be frustratingly hard, and sometimes you still feel you didn't get a real direction or answer. Legal texts are mostly quite vague when it comes to concrete technical implications.

Since we work with DiGAs on a daily basis and know their challenges, we will show some of the most important pitfalls you will encounter when developing a DiGA, along with our solutions and technology stacks for avoiding them.

1. Choose the correct storage location

In a nutshell, we do not recommend using any major cloud service provider or BaaS offering (like Firebase, AWS, Azure or GCP), so you will have to find an alternative to host your personal data.

Many mobile developers starting from scratch will choose Flutter as their platform of choice (as do we). And with Flutter, Firebase is heavily marketed and well-supported as the Backend-As-A-Service solution.

Sadly, DiGAV Annex I, data protection item 38, states that personal data may only be processed in the following locations:

  • In Germany
  • In another member state of the EU
  • In a country covered by an adequacy decision under Article 45 of the GDPR (Regulation (EU) 2016/679)

Keeping it short: the USA, and any services originating from there, are not covered. The only way to store data outside of the aforementioned locations is to either encrypt or anonymize the personal data before it leaves the EU.

Should we just encrypt?

Encrypting opens up a completely new problem space for backend developers, as even simple database operations like selections, joins or ordering may become unfeasible. The encryption key must not be used in systems that are not EU-based, and of course decrypting the personal data, no matter where, drastically reduces the performance of operations that should finish within milliseconds.
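To see why encrypted columns break ordinary queries, here is a minimal sketch. The `pseudo_encrypt` helper is purely illustrative (it uses a hash as a stand-in for a real cipher; a production system would use a proper encryption library):

```python
import hashlib

def pseudo_encrypt(value: str) -> str:
    # Illustrative stand-in for a deterministic cipher; NOT real
    # encryption. It only demonstrates what the database sees.
    return hashlib.sha256(value.encode()).hexdigest()

names = ["Anna", "Bert", "Clara"]
ciphertexts = [pseudo_encrypt(n) for n in names]

# Exact-match lookups can still work against deterministic ciphertext...
assert pseudo_encrypt("Bert") in ciphertexts

# ...but the ciphertexts' sort order has nothing to do with the
# plaintext order, so ORDER BY, range queries and joins on encrypted
# columns become useless without decrypting first.
```

In other words, the backend can no longer let the database do the heavy lifting; every sort or range filter would require decrypting the data inside the EU first.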

Anonymizing is likely not what you want, as the personal data would be lost to the user.

Performing the encryption in-app leaves the possibility for “skilled users” to get access to the encryption key.

So if you want to store unencrypted personal or person-relatable data in a backend, you need to make sure that:

  • Personal data is stored in the EU
  • The company providing the storage solution is not a subsidiary of any US-based company

This rules out all major cloud services like Firebase, AWS, Azure and Google Cloud, even if you choose a European storage location.

What you will choose as an alternative will depend on your technology stack. Do you only run everything on one Virtual Machine? Do you prefer a Kubernetes cluster? Would you like to go fully serverless?

For our customers, we have chosen https://www.scaleway.com/ as our provider of choice. Located in France, it offers a fully managed Kubernetes service and everything else you might need to develop a DiGA.

Other EU-based alternatives exist as well.

2. Mitigate Broken Access Control

Especially for a DiGA, it is vital that the stored personal and health data is secure. The data should only be accessible to those who are authorized to use it.

It seems like a no-brainer, but in the past there have been reports of existing DiGAs where one user could read the (quite sensitive) personal data of another user (https://zerforschung.org/posts/datenabfluss-auf-rezept/), just by incrementing or decrementing a number in the request.

There are several ways to mitigate this, even before the app reaches the public:

Solve authorization by reusing code

from fastapi import Depends

def authorize_user(
    user_id: str,
    user: auth_schemas.AuthenticatedUser = Depends(authenticated_user),
) -> auth_schemas.AuthorizedUser:
    # A user may only act on their own resources.
    if user.user_id == user_id:
        return user
    raise authorization_exception()

If you refactor authorization code into building blocks that can be reused, the chances of 'forgetting' pieces of the puzzle are reduced.

Have Peer Reviews

By having a second developer review the code, flaws are much more likely to be discovered. The reviewer should also check whether there are tests securing the behavior and whether the newly introduced feature covers all edge cases.

We also recommend automatically adding a checklist for the reviewer to every Merge/Pull Request.

No excuses, every branch should be reviewed before being merged to the main line of code!

Introduce System Tests

Unlike unit tests and integration tests, system tests secure the behavior of your system as a whole.

Usually this is achieved by starting up a new instance of the whole backend system containing the feature to be introduced. A piece of software then acts as the client and exercises the backend.

Useful tests answer the following questions for every introduced endpoint:

  • Is the data of the test user accessible without authentication?
  • Is the data accessible with authentication?
  • Can the data of another user be fetched/manipulated/deleted without being accredited to do so?
  • Do aggregated endpoints only return data for the current user or do they leak information that should not be accessible to that user?
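The questions above can be expressed as executable checks. The following sketch uses a hypothetical in-memory stand-in for the backend (`get_user_data`, the user IDs and the token scheme are all invented for illustration); a real system test would boot the full stack and talk to it over HTTP:

```python
# Minimal in-memory stand-in for a deployed backend. The hypothetical
# function mirrors an endpoint like GET /v1/user/{id}.
USERS = {"id-a": {"diary": "private"}, "id-b": {"diary": "secret"}}

def get_user_data(requested_id, token):
    """Returns (status_code, body), like an HTTP endpoint would."""
    if token is None:
        return 401, None          # no authentication at all
    if token != requested_id:
        return 403, None          # authenticated, but not authorized
    return 200, USERS[requested_id]

# The checklist from above, expressed as executable system-test checks:
assert get_user_data("id-a", None)[0] == 401      # no auth -> rejected
assert get_user_data("id-a", "id-a")[0] == 200    # own data -> allowed
assert get_user_data("id-b", "id-a")[0] == 403    # foreign data -> rejected
```

The value of such tests is that they run against the system from the outside, so a forgotten authorization check fails loudly before the release ships.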

Define guidelines

One part of these guidelines should be that no enumerable ID is handed out by the backend or accepted as input to retrieve a selection of data.

Not good:

http://backend.com/v1/user/1

Better:

http://backend.com/v1/user/ADDC955C-E420-48D2-941E-70A9C45CB9E4
http://backend.com/v1/user/hello@mail.com

In short, opt for a UUID or a similar non-enumerable ID. Reasons:

  • If your API contains enumerable IDs, it is quite easy to fetch all of the data just by incrementing the number towards infinity
  • It's also easy to guess your total number of users, the number of entries per user and so on, because numerical IDs usually start at 1.
  • If your access control is broken, your API is instantly attackable by incrementing/decrementing IDs. Guessing UUIDs is much harder than incrementing a number.
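Generating such IDs is a one-liner with Python's standard library; the route shown is just a hypothetical example path:

```python
import uuid

# Auto-incremented IDs (1, 2, 3, ...) are trivially enumerable; UUIDv4
# identifiers are drawn from a 122-bit random space and are not.
user_id = uuid.uuid4()
print(f"/v1/user/{user_id}")   # a random, non-guessable resource path

# A fresh UUID reveals nothing about how many users exist, and guessing
# another user's UUID is practically impossible.
assert uuid.uuid4() != uuid.uuid4()
```

Most databases also support UUIDs as native primary-key types, so switching away from serial integers is usually a small schema change rather than a rewrite.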

3. Use an approved Identity Provider

Lots of developers are tempted to go with the usual solutions for user login and authentication, for example Firebase or Auth0. Some are even convinced to develop their own user management, built directly into the backend.

No developer wants to spend weeks developing a feature or software package that has been solved thousands of times before. If we are developing a DiGA, though, there are rules an Identity Provider must fulfil, and there are even open-source projects that are recommended for use.

DiGAV Annex I, data security 11 states:

  • The DiGA should use a centralized authentication/authorization component
  • It is built using established standard components
  • The DiGA can verify trustworthiness of the auth component

An identity provider also uses some piece of personal data (for example an e-mail address or phone number) to identify the user. This in turn means that we have to look into the storage location of our chosen Identity Provider, as personal data receives special treatment in the DiGA and GDPR context.

Their storage location rules out Firebase, Auth0 and comparable products instantly. 'Homegrown' implementations are usually ruled out because they don't fulfil the 'established standard component' rule.

Thankfully, in the newest DiPA FastTrack Leitfaden, BfArM acknowledges three Identity Providers as established and proven components:

  • Keycloak
  • OpenIAM
  • CAS

These are battle-proven over years and constantly being updated to ensure that no vulnerabilities are left open for attackers.

Our choice is Keycloak, which also fulfils lots of other DiGA requirements all in one go, for example:

  • Password policy
  • Brute Force protection
  • Two-Factor authentication
  • Audit logs
  • Established components like OpenID Connect and OAuth2
  • Configurable JWT token expiry times
  • And many more
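One of those requirements, configurable token expiry, is easy to reason about once you see what is inside a JWT. The sketch below builds a toy token and decodes its payload with the standard library only; the claims and the placeholder signature are invented for illustration, and a real client must of course also verify the signature (e.g. against the identity provider's published keys) before trusting any claim:

```python
import base64
import json
import time

def decode_jwt_payload(token: str) -> dict:
    # A JWT is header.payload.signature, each base64url-encoded.
    # We decode only the payload here; signature verification is
    # deliberately omitted and MUST be done in production.
    payload_b64 = token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

def is_expired(payload: dict) -> bool:
    # The 'exp' claim is a Unix timestamp (RFC 7519).
    return payload.get("exp", 0) < time.time()

# Toy token, valid for 5 minutes from now.
payload = {"sub": "user-1", "exp": int(time.time()) + 300}
token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("="),
    "signature-placeholder",
])

assert not is_expired(decode_jwt_payload(token))
```

Keycloak lets you configure these lifetimes per realm and per client, so short-lived access tokens can be combined with longer-lived refresh tokens.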

4. Implement silent fails

Imagine you are a user of a DiGA whose mere usage might affect the social expectations placed upon you. Would you tell your colleagues about it?

Examples could be apps where the focus is on sexual or psychological topics.

Of course we do not welcome the fact that certain diagnoses attract more attention than others, but given the current state of society, the majority would surely not be pleased if a diagnosis became public. That's exactly why we want to implement silent fails to stop leaking personal information.

The usual places where this kind of leak can happen are the backend or identity provider endpoints for user creation, password reset and login.

Example for an existing user

Request:

POST https://backend.com/auth/login
{
  "email": "marc-geheim@deyan7.de",
  "password": "asdfqwertz"
}

Response:

{
  "error": "Unauthorized",
  "message": "Username and password do not match",
  "statusCode": 401
}

Example for a nonexistent user

Request:

POST https://backend.com/auth/login
{
  "email": "marcus.roedderus@deyan7.de",
  "password": "asdfqwertz"
}

Response:

{
  "error": "Bad Request",
  "message": "Account does not exist.",
  "statusCode": 400
}

As you can see, if your backend or Identity Provider responds with overly detailed error information, it is quite easy to crawl your backend and find out who your customers are.

The solution here is to return the exact same response for all error cases.

This makes things harder for the user experience of the app, but it won't leak information.
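A minimal sketch of such a login handler, assuming a hypothetical in-memory user store (a real backend would store salted password hashes, never plaintext, and would delegate to the identity provider):

```python
import hmac

# Hypothetical user store: email -> password, for illustration only.
USERS = {"alice@example.com": "correct-horse"}

# One generic response for ALL failure modes: wrong password,
# unknown account, disabled account, ...
GENERIC_ERROR = {
    "error": "Unauthorized",
    "message": "Username and password do not match",
    "statusCode": 401,
}

def login(email: str, password: str) -> dict:
    stored = USERS.get(email, "")
    # hmac.compare_digest keeps the comparison constant-time, so the
    # response time also leaks nothing about where the check failed.
    if stored and hmac.compare_digest(stored, password):
        return {"statusCode": 200}
    return GENERIC_ERROR

# Wrong password and unknown account yield byte-identical responses.
assert login("alice@example.com", "wrong") == login("nobody@example.com", "wrong")
```

Note that the same principle applies to sign-up ("if this address is registered, you will receive an e-mail") and password reset, which should also never confirm account existence.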

5. Do not underestimate or misinterpret the interoperable export requirement

One DiGA on its own might already be beneficial for the patient, but the real power comes into play when the data of all the patient's apps, diagnoses etc. can be inspected together as a whole.

For this to happen, a data format needs to be introduced that also includes medical terminologies like SNOMED CT and LOINC to fit into the whole context of medical data. HL7 FHIR R4 is the weapon of choice here, and although it is not explicitly mentioned in the DiGAV, it's factually the only export format accepted by BfArM.

It is explicitly not sufficient to export your own format as JSON, XML or CSV by serializing your objects, although the DiGAV might sound a bit like it!

If you don't have experience with HL7 FHIR R4 profiling, it may be time to either look for an external expert (shameless plug: we can help!) or reserve a fair bit of time to learn.

We recommend using https://github.com/FHIR/sushi and https://hl7.org/fhir/uv/shorthand/ to create your Implementation Guides. An example of an ImplementationGuide that was created by Deyan7 can be found here: https://simplifier.net/medipee-uroli-export.
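To give a feel for the format, here is a hand-written minimal FHIR R4 Observation carrying a LOINC code (heart rate), serialized as JSON. This is only a sketch of the resource shape; in practice the concrete fields and constraints come from your Implementation Guide profiles (authored with FHIR Shorthand and SUSHI), not from ad-hoc dictionaries:

```python
import json

# Minimal FHIR R4 Observation. The LOINC code 8867-4 ("Heart rate")
# and the UCUM unit "/min" are taken from the respective terminologies.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8867-4",
            "display": "Heart rate",
        }]
    },
    "valueQuantity": {
        "value": 72,
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```

Even a small export will contain many such resources bundled together, which is exactly why learning the profiling tooling (or bringing in an expert) pays off early.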


Of course there is a lot more ground to cover to reach the state of a DiGA approved by Germany's BfArM (Federal Institute for Drugs and Medical Devices), e.g. proving medical efficacy or acquiring several certifications, but in this article we focus only on what you need to keep in mind when writing software that should later become a DiGA.

Privacy
We, Deyan7 GmbH & Co. KG (registered office: Germany), process personal data for the operation of this website only to the extent technically necessary. All details can be found in our privacy policy.