Tools and templates

This section compiles the practical tools and templates developed through the research behind this playbook. They are designed to support the implementation of human-centered design in crisis contexts, helping teams apply participatory, ethical, and responsible approaches when working with people affected by crisis.

These resources are flexible and adaptable. They are intended to guide action, not replace critical thinking.

  • Provide on-the-ground knowledge of cultural, linguistic, and technological norms and infrastructure.

  • Enable the digital tool to be customized to local needs, context, and technological realities, and promote sustainability after a project ends.

Trust

Core considerations

Meaningful two-way communication

Core considerations

Support for digital skills

Core considerations

Data protection and security

Core considerations

Inclusiveness

Core considerations

Understand the technology

Informed consent

Understand the people affected by the crisis

Informed consent

Discover and understand

Human-centered design

Explore and ideate

Human-centered design

Prototype and test

Human-centered design

Implement and evaluate

Human-centered design

Examples of the benefits and risks of digital technology

Integrating technology into humanitarian programming is an opportunity for increased reach and efficiency. It also comes with risks and unintended outcomes, which must be proactively identified and mitigated in partnership with crisis-affected communities.

Below are a few examples of how technology may solve some problems while also introducing critical new challenges.

Current state

  • A mobile-based service answers frequently asked questions about gender-based violence.
  • Users navigate via keypad, listen to pre-recorded messages, and leave voice messages.
  • Aggregated data shows popular topics, but transcription and analysis require human effort.
  • The service is available offline, but its scope is limited and feedback loops are slow or impossible to close.

Future state

With language AI

  • A fully speech-enabled and AI-powered service allows dynamic conversations.
  • Users get personalized, real-time responses from the AI tool.
  • Transcription and analysis are automated.

Risks and design considerations

  • Shared phones: sensitive complaints, feedback, and questions that should be confidential could be exposed to family members, service staff, or a wider audience.
  • AI risks: users receive inaccurate, biased, or culturally inappropriate responses.
  • Consent: existing informed consent agreements may no longer apply, requiring revised processes and means for giving consent.
  • Marginalised users: greater accessibility means greater responsibility to protect people’s privacy.
  • Service providers must find ways to inform users about new risks associated with the new technology.

Current state

  • Aid distribution relies on ID cards or manual registration.
  • Some marginalised groups (for example, displaced or older people) lack proper documentation, which leads to exclusion.

Future state

With biometrics (for example, facial recognition, fingerprints, iris scanning)

  • Faster, more accurate identification methods reduce fraud.
  • Access to aid is improved because physical documents are no longer needed.

Risks and design considerations

  • Storing biometric data increases the risk of misuse or surveillance.
  • Crisis-affected people might need to learn about biometric tracking and its implications.
  • False rejections (for example, “worn-out” fingerprints and facial recognition bias) can deny aid to people who are eligible to receive it.
  • Data sharing with authorities could place internally displaced people, refugees, minority language speakers and members of other marginalised groups at risk.

Current state

  • Many migrants rely on word-of-mouth and social media for information on safe routes, shelters, and legal rights.
  • Smugglers and traffickers exploit information and communication vacuums that leave space for rumours and misinformation to flourish. This can raise tensions between migrants and people living in host communities.
  • Without verified sources of information found on mobile apps, migrants risk unsafe crossings, detention, fraud and other harm.

Future state

  • Mobile apps provide real-time updates on safe routes, shelters, legal aid and other key topics.
  • Migrants can access verified, trustworthy information to reduce reliance on smugglers.
  • The service enables migrants to make informed decisions about their journey and their family’s well-being.

Risks and design considerations

  • Surveillance and tracking risks
    • Using the app could expose migrants to authorities, traffickers, and criminal groups if phone data is compromised.
    • Border authorities or hostile actors could monitor app activity, leading to detentions or pushbacks.
  • Device and connectivity challenges
    • Migrants often use shared or second-hand phones; data breaches could expose their travel history and past or future routes.
    • Patchy Internet access limits real-time functionality.
  • Device theft and exploitation
    • Phone theft may leave migrants vulnerable to identity theft or fraud.
    • Stolen phones could be used to track family members or blackmail individuals, especially if sensitive information is stored.
    • Traffickers could use stolen phones to extort money from families back home and threaten harm if their demands are not met.
  • Safety in emergency situations
    • If a migrant is stopped or detained, having the app on their phone could raise suspicion or place them at risk.
    • Some apps require the user to create an identifiable profile that could be used against a migrant.
  • Digital understanding and trust
    • Migrants may be uninformed about data privacy risks and assume the app does not record their identity.
    • If the app is linked to a government or to a large, unpopular organisation, some migrants may distrust it and avoid using it.