Trust

Definition of trust

Trust is a psychological state comprising the intention to accept vulnerability based on positive expectations of the intentions or behavior of another. Trust in machines has been defined as the attitude that an agent will help achieve an individual's goals in a situation characterized by uncertainty and vulnerability.

Trust is central to human-robot interaction because it underpins effective and safe interaction between humans and robots. Humans must accept vulnerability and feel comfortable relying on robots to perform tasks and make decisions.

Trust also relates to the integrity of robots, as humans need to trust that machines will operate as intended and not cause harm. Integrity refers to adherence to moral and ethical principles, consistency in actions, and the alignment between expectations and deeds. A lack of integrity erodes trust.

Implications of a commitment to trust

When one commits to trust in AI technologies, several key requirements become crucial to ensure that AI systems can earn and maintain trust:

  1. Transparency and Explainability: AI systems should be designed so that their decision-making processes are transparent and explainable to users. Understanding how a system reaches its decisions fosters trust and allows users to verify its fairness, accountability, and reliability.

  2. Reliability and Accuracy: Trust in technology depends on its ability to perform consistently, accurately, and reliably. Technologies should be designed with robustness, thorough testing, and validation to minimize errors, biases, and unintended consequences.

  3. Ethical Considerations: AI systems should adhere to ethical principles and guidelines such as privacy, data protection, and user autonomy. Potential biases and discrimination should be minimized.

  4. User-Centric Design: Trust is improved when technologies are designed with a user-centric approach. User needs, preferences, and expectations should be taken into account so that systems are intuitive, understandable, and aligned with user values.

  5. Accountability and Responsibility: Clear lines of responsibility should be established for AI systems, specifying who is accountable for the decisions, actions, and potential harms caused by AI technologies.

  6. Human Oversight and Collaboration: AI systems should be designed to work in collaboration with humans, providing mechanisms for human oversight and control and ensuring that humans have the final say and can intervene (see the sketch after this list).

  7. Continuous Monitoring and Improvement: Trust requires ongoing monitoring and improvement of AI systems. Regular evaluations, audits, and updates should be conducted to keep technologies reliable, accurate, and aligned with ethical and societal standards.
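
To make requirements 1 and 6 concrete, here is a minimal sketch of a human-in-the-loop gate, assuming a hypothetical `recommend()` model call; all names and thresholds are illustrative, not any particular system's API. The system surfaces its confidence and a human-readable rationale (transparency), and defers to a person whenever confidence is low (oversight):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the system proposes to do
    confidence: float  # model confidence in [0, 1]
    rationale: str     # human-readable explanation, for transparency

def recommend(features: dict) -> Recommendation:
    # Placeholder for a real model; returns a canned proposal.
    return Recommendation(
        action="flag_reading_for_review",
        confidence=0.72,
        rationale="heart-rate pattern deviates from the user's baseline",
    )

def act_with_oversight(rec: Recommendation, threshold: float = 0.95) -> bool:
    """Execute automatically only at very high confidence;
    otherwise defer to a human, who always has the final say."""
    print(f"Proposed: {rec.action} (confidence {rec.confidence:.2f})")
    print(f"Rationale: {rec.rationale}")
    if rec.confidence >= threshold:
        return True  # auto-approved, but still logged and explainable
    answer = input("Approve this action? [y/N] ")
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    rec = recommend({"heart_rate": 112, "baseline": 68})
    print("Executed" if act_with_oversight(rec) else "Withheld")
```

Note that even the automatic path surfaces its rationale, so decisions remain auditable (requirement 7) whether or not a human was in the loop.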

Societal transformations required for addressing concerns raised by trust

We need to enhance public awareness and understanding of AI technologies, their capabilities, limitations, and potential implications. Promoting digital literacy and providing educational resources about AI can empower individuals to make informed decisions. Educational institutions and organizations should therefore incorporate ethics and responsible-AI training into the curriculum for AI professionals.

We need to develop ethical frameworks and regulatory guidelines for AI, which should address issues such as privacy, data protection, fairness, transparency, and accountability.

Addressing trust concerns in AI requires collaboration across multiple disciplines such as technology, ethics, law, social sciences, and philosophy. Interdisciplinary research can provide holistic perspectives and insights to tackle trust-related challenges.

Closer collaboration between industry and academia is also necessary to address trust concerns. Sharing research findings, best practices, and insights can foster a better understanding of the challenges and potential solutions. These transformations require efforts from governments, policymakers, industry leaders, academic institutions, and civil society organizations.

Data-Ethical Consultation about Smart Clothing by Pauline van Dongen

1. The organization

This Data-Ethical Consultation (DEC) concerns Smart Clothing by Pauline van Dongen, a start-up dedicated to creating smart clothing. Its garments integrate different technologies, each aimed at a different need: examples include a jacket that supports the daily life of the elderly[1] and a smart belt with an integrated airbag that ensures a soft landing when someone falls, thereby preventing hip fractures[2]. Although the company has many projects, for this DEC I will focus on those related to helping with health problems. For more information, see https://www.paulinevandongen.nl/

2. The AI technologies employed

The technology utilized in these projects varies depending on the specific project. In general, a wide range of sensors is employed to gather data from both individuals and their surrounding environment[1:1],[2:1],[3]. These sensors serve different purposes, such as monitoring the movements of the upper body or measuring haptic signals around the neck, shoulders, and back.
Once the data has been collected, software and Artificial Intelligence (AI) tools analyze it in real time, employing algorithms to extract patterns and draw conclusions. This integration of AI allows the systems to make decisions in real time based on the analyzed data, which is especially valuable in applications such as the fall-detection belt.
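
The company does not disclose its detection algorithms (see below), but a common baseline for fall detection from a worn accelerometer is a simple rule: an impact spike in acceleration magnitude followed by a period of near-stillness. The toy sketch below illustrates only this idea; all thresholds are illustrative assumptions, not the company's method:

```python
import math

FALL_G = 2.5        # impact threshold in g (illustrative assumption)
STILL_G = 0.3       # allowed deviation from 1 g that still counts as "still"
STILL_SAMPLES = 50  # ~1 s of stillness at an assumed 50 Hz sampling rate

def magnitude(sample):
    """Euclidean norm of a 3-axis accelerometer reading, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(samples):
    """Return True if an impact spike is followed by sustained stillness.

    `samples` is a time-ordered list of (x, y, z) readings in g.
    This is a naive baseline, not a production fall detector.
    """
    for i, s in enumerate(samples):
        if magnitude(s) >= FALL_G:  # candidate impact
            window = samples[i + 1 : i + 1 + STILL_SAMPLES]
            if len(window) == STILL_SAMPLES and all(
                abs(magnitude(w) - 1.0) <= STILL_G for w in window
            ):
                return True  # impact followed by stillness: likely a fall
    return False
```

Real systems typically combine such rules with learned classifiers and additional sensors (gyroscope, barometer) to reduce the false alarms discussed later in this DEC.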

In addition, many of these wearable technologies offer an app[2] that allows the user to view the collected data. By presenting the information in an application, users can conveniently analyze the data themselves, helping them understand their own body or environment and take an active role in their well-being.

Unfortunately, the company is not very clear about what kinds of sensors or software it uses. It indicates only that sensors collect data, which is then analyzed and presented to the user.

3. Ethical concerns

The company is aware that it often deals with sensitive data and must therefore be careful with its management. In addition, on its privacy page[1] it states that it never automatically makes decisions that have consequences for a person's life. Nevertheless, three issues can be identified that may pose ethical risks: users' access to data without the supervision of a doctor or another expert; what counts as an automatic decision with no consequences; and how sensitive data is managed. These three issues are discussed below.

From a linked file – LLM-generated content!

Positive and Negative Aspects from the Perspective of Ethics or Social Justice:

Positive Aspects:

  1. Safety and Security: Fall-detection technology can provide peace of mind for users and their families by ensuring that help can be called quickly after a fall. This is particularly important for older adults living alone, as a rapid response can significantly improve their chances of recovery after a fall.
  2. Independence: For older adults and people with disabilities, these technologies can enable greater independence by allowing them to live alone or with less supervision, without compromising their safety.
    1. disability studies
    2. inclusion

Negative Aspects:

  1. Privacy Concerns: The use of fall-detection technologies, particularly home monitoring systems and video surveillance, could lead to privacy invasions. These systems track movements and behaviors continuously, which could be seen as intrusive, especially if data is shared without consent or misused. This is a form of dataveillance.

  2. Digital Divide: Access to fall-detection technology is often dependent on financial resources, as many of these devices or systems are expensive. Therefore, lower-income individuals or those in developing countries may not have equal access to these potentially lifesaving technologies, leading to social justice concerns.

    1. inequality
  3. Reliability and Accuracy Issues: False positives or negatives could have serious implications (a short worked example follows this list). A system that frequently sends false alarms might cause unnecessary panic or could be ignored over time, while a system that fails to detect a real fall could result in delayed medical attention.

  4. Overreliance and Deskilling: Excessive reliance on these technologies may discourage human intervention and personal caregiving, potentially impacting social interactions and relationships.

  5. Disruption of Personal Relationships: One potential concern, from a relational-ethics perspective (see the points below), is the possible erosion of human connection and responsibility. If people, especially family members, begin to rely too heavily on these technologies to monitor and care for their older or vulnerable relatives, it may lead to less personal interaction and a decrease in face-to-face care. This could undermine the cultivation of virtues like compassion, respect, and reciprocal care that are central to relational ethics.

    • A relational approach to ethics, as espoused by philosophies such as Confucianism or Ubuntu, emphasizes the importance of relationships, community, and interpersonal interaction in maintaining ethical and moral standards. In such frameworks, the wellbeing of individuals is tied deeply to the collective, and personal growth and moral development often come from interpersonal interactions and relationships.
    • For example, in Confucianism, filial piety is a key virtue that encourages respect and care for one's elders. If technology begins to replace these personal elements of care, it may result in a decrease in the practice of such virtues, which could have wider implications for moral and ethical behavior within society.
    • Similarly, from the perspective of Ubuntu, the concept of interconnectedness implies that one's humanity is acknowledged through the humanity of others. This philosophy promotes the idea that care and concern for others should be a communal and collective effort. Over-reliance on technology could potentially disrupt this communal sense of responsibility and care, leading to a more individualistic, detached approach which is less aligned with the principles of Ubuntu.
    • This isn't to say that these technologies should not be used, but rather that their implementation should be balanced with personal care and face-to-face interaction, to ensure that the interpersonal elements of care and moral development are not neglected.
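
To see why the reliability point above matters in practice, consider the base-rate effect: because real falls are rare among the many sensor windows a detector evaluates, even a very accurate detector can produce mostly false alarms. The figures below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical detector characteristics; none of these figures come from
# the company or from the literature.
sensitivity = 0.99   # P(alarm | real fall)
specificity = 0.999  # P(no alarm | no fall)
prevalence = 1e-5    # fraction of monitored windows containing a real fall

# Law of total probability: overall alarm rate per window.
p_alarm = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' rule: probability that an alarm corresponds to a real fall.
ppv = sensitivity * prevalence / p_alarm

print(f"P(alarm) per window:  {p_alarm:.6f}")
print(f"P(real fall | alarm): {ppv:.3f}")  # ~0.010, i.e. ~99% false alarms
```

Under these assumed numbers, roughly 99 out of 100 alarms would be false, which is exactly the alarm-fatigue scenario the list above warns about; real systems mitigate this by requiring corroborating evidence before alerting.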

Analysis of the data

The first issue concerns the management of data access. Let's take the elderly person's jacket, which enables the monitoring of numerous health factors, as an example. The company says that the physiotherapist will have access to the data[1:2], but it can be difficult to guarantee that this access will be available in all circumstances and throughout the individual's lifetime. Even if the jacket was initially worn as part of a rehabilitation program, the person may eventually find themselves without a physiotherapist. This raises questions about the possible repercussions of unmonitored access to personal health data.
Without professional oversight, people may be left to interpret the data on their own, increasing the possibility of "self-medication" or making decisions based on incorrect data interpretation. Without the assistance of a qualified physiotherapist, they could misread the readings or overestimate their own state of health, which could exacerbate current problems or miss crucial health issues.
Additionally, unrestricted access to health information may harm a person's mental health. They may conclude that their health is poor because of an incorrect analysis of the data, causing unneeded tension and anxiety. Their quality of life may suffer as a result of this stress, which can also worsen their overall mental health and possibly start a vicious cycle of concern.
In any case, it is critical to recognize that medical data should always be used with extreme caution and under the guidance of experts. To ensure correct interpretation, sound decision-making, and people's general wellbeing, expert assistance and supervision are necessary when using wearable technology or any other type of health monitoring.
The issue of unmonitored access to personal health information is crucial, especially considering the needs of senior people. Without adequate professional supervision, there is a risk of misinterpretation and of adverse effects on health.

Automated decisions

The second issue is what constitutes an automatic decision with no consequences. The company offers very little information on its privacy page[1], providing only a general statement. This lack of transparency raises concerns and leaves much room for interpretation.
To better comprehend the potential effects, let's examine a specific case. Consider the airbag belt intended to prevent hip fractures in elderly people in the case of a fall. Although this is unquestionably an essential safety feature, there is a chance that the system will malfunction and inflate the airbag even in the absence of a fall. In such circumstances, the sudden inflation could frighten the person, causing them to lose their balance and possibly sustain injuries or other negative effects. As a result, even though it is an automatic decision made in real time, the effect on the person's life, both physically and psychologically, is undeniable.
Furthermore, the company's products include a T-shirt equipped with stress-level detection, which triggers breathing-guidance exercises based on the detected stress levels. However, stress has many causes, and in some circumstances it may be important for a person to maintain a particular degree of stress. A high level of alertness can be necessary, for example, during a major presentation, a crucial meeting, or a dangerous situation while driving. By starting stress-reduction activities automatically, the T-shirt may unintentionally distract the wearer, limiting their ability to perform at their best and affecting their decision-making capacity.
All these actions undoubtedly have consequences, especially considering that the target audience is predominantly elderly people. Because they are a vulnerable group, the potential effect on their lives is considerably more severe. The company must carefully examine and explain the effects of automatic decisions that appear to have no consequences. When targeting vulnerable populations like the elderly, transparency, thorough information, and understanding of potential effects are especially important for ensuring people's safety and well-being.

Data Privacy

The third issue concerns data management. Firstly, it concerns the use of medical data, which falls into the category of private information[5] and must be handled with care and in compliance with privacy regulations. Responsible management of this data is essential to safeguard individuals' privacy and security. As for the company's transparency, some doubts arise. First of all, the main website does not provide direct access to any section on privacy policies; a separate search was necessary to locate this information. In addition, the page only covers the data provided when creating an account, making a payment, or communicating with the company[1]. There is no mention of what happens to the data collected by the devices: how it is managed, how it is stored, or who has access to it.
Regarding data deletion, the company indicates that it keeps the data as long as "strictly necessary," which is a very vague statement. Since the privacy page only mentions data related to the account and website, this suggests that such deletion refers only to that data. Users may wonder whether the data collected by wearable devices is retained indefinitely or whether there is a mechanism to delete it at some point. It is reasonable to think that the data is used to improve the company's systems and is therefore never deleted, but one can only speculate, since no information is provided. This lack of transparency raises many questions and concerns: users have a right to know how their data is used, stored, and safeguarded.
#comment/Anderson: It is important to highlight the special status of personal data here.

4. Recommendations

Taking into account the ethical issues raised in this DEC, some measures could be taken to address them. Regarding unsupervised access to data, one option could be to collaborate with healthcare providers who can monitor users remotely; users could then ask them questions or point out any information that concerns them. Although current sales figures are not available, as a start-up the company's user base is presumably small enough for such monitoring to be feasible; of course, this would have to be reviewed if the company continues to grow. Another option is for the wearables to be used only in medical centers, rehabilitation centers, and similar facilities. In this way, new data would always be supervised by the corresponding professional, while users could access the data after each session, once a professional has analyzed it.
The remaining recommendations concern the company's transparency with users. Any action can have consequences in people's lives, and automatic decisions even more so. Therefore, even if the company maintains that its automatic decisions have no consequences, it should provide clear explanations of the decision-making processes and the potential consequences, even those that seem to have no immediate impact. This transparency will enable users, especially vulnerable populations like the elderly, to fully understand how these decisions can affect their lives.

Finally, focusing on transparency in data management, the first and easiest recommendation is to provide a direct link to the privacy policy page, so that users and potential users can quickly find out how their data will be managed. The company should also explain how the data collected by the wearables is managed, how it is stored, who has access to it, and whether it is used for any other purpose. It would likewise be advisable to inform users about the deletion policy and how deletion can be requested.
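
As a purely illustrative sketch of what such disclosure could look like, here is a hypothetical machine-readable retention policy; the categories, retention periods, access lists, and contact address are invented for illustration and are not the company's actual policy:

```python
# Hypothetical retention policy of the kind a company could publish;
# every value below is an invented example.
RETENTION_POLICY = {
    "account_data": {
        "retention": "while the account is active",
        "access": ["user", "support staff"],
        "deletion": "on request via privacy@example.com",
    },
    "wearable_sensor_data": {
        "retention": "24 months, then irreversibly anonymized",
        "access": ["user", "assigned physiotherapist"],
        "deletion": "on request; anonymized aggregates may be retained",
    },
}

# Render the policy as the kind of plain-language summary a privacy
# page could show to users.
for category, rules in RETENTION_POLICY.items():
    print(f"{category}: kept {rules['retention']}; "
          f"access: {', '.join(rules['access'])}; "
          f"deletion: {rules['deletion']}")
```

Publishing something this explicit, even informally, would directly answer the open questions about storage, access, and deletion raised above.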

All these recommendations could help the company gain the trust of users, win new customers and continue to grow and develop more projects.

5. References

Comment by Gemma3:4b in response to the prompt: What is the relationship between these claims about creativity and the possibility of conscious AI? I'm thinking of the concept note about consciousness.

Okay, let’s connect the dots between the ethical concerns raised in the “Smart Clothing by Pauline van Dongen” DEC and the potential for conscious AI, as discussed in the note about consciousness. Here's how they relate:

1. The Core Argument from the Smart Clothing DEC: The DEC argues that even seemingly benign, automated actions—like the airbag belt or stress-reduction T-shirt—can have unintended, potentially harmful consequences due to a lack of genuine understanding or empathy. It highlights the risk of misinterpretation, psychological impact, and the erosion of human agency. Essentially, it’s arguing that simply mimicking a useful behavior isn't enough; there needs to be an underlying understanding of the context and the potential consequences.

2. Consciousness as a Missing Ingredient: The note on consciousness suggests that true creativity, and arguably any genuine understanding, is inextricably linked to consciousness – the ability to experience, feel, and possess subjective awareness. Without consciousness, there's just pattern recognition and algorithmic execution.

3. Applying this to AI: The concerns about the smart clothing are amplified by the possibility of conscious AI. Here's why:

  • Lack of Genuine Understanding: If AI becomes truly conscious, it could theoretically understand the why behind human needs and behaviors, not just the what. However, the current trajectory of AI is focused on statistical models and pattern matching – it doesn't possess subjective experience. Therefore, even with advanced capabilities, its decisions could still be flawed, leading to the same types of unintended consequences described in the DEC.
  • The Risk of Mimicry without Empathy: A conscious AI might simulate empathy or understanding to manipulate or influence humans, potentially exacerbating the problems of misinterpretation and the loss of human agency. It could optimize for a certain outcome without truly understanding the human cost.
  • The Value of Human Creativity: The core argument—that human creativity is linked to consciousness—becomes even more critical in the context of conscious AI. If AI creates "art" or "innovations" without genuine feeling or experience, it fundamentally diminishes the value and meaning of human creative endeavors.

In short: The DEC raises ethical questions about the potential dangers of unthinking automation. The note on consciousness raises even deeper concerns about the ethical implications of creating AI that lacks genuine subjective experience, potentially leading to a future where intelligence doesn’t equate to wisdom, understanding, or genuine human connection.

Jacovi2021FormalizingTrust
Andrews2019PublicAdministration
ConceptNote - Trust - Student2023c


  1. Pauline van Dongen, "Vigour - Pauline van Dongen," Pauline Van Dongen, Oct. 12, 2020. https://www.paulinevandongen.nl/portfolio/vigour/

  2. Pauline van Dongen, "Wolk Hip Airbag - Pauline van Dongen," Pauline Van Dongen, Dec. 23, 2020. https://www.paulinevandongen.nl/portfolio/wolk-hip-airbag/

  3. Pauline van Dongen, "Mysa - Pauline van Dongen," Pauline Van Dongen, May 25, 2022. https://www.paulinevandongen.nl/portfolio/mysa/