Clarifying digital rights vs wrongs
With revolution comes change, and the digital revolution is no exception. But how disruptive is it, and what does it mean for society?
At the Fundamental Rights Forum 2018, business and tech experts together with human rights advocates took a hard look at technology and its impact on human rights during the dedicated #RightsTech track.
Recognising the ever-growing use of technology in all walks of life, they sought to examine the risks, challenges and opportunities that digitalisation brings when it comes to human rights.
Take artificial intelligence (AI). Computer-controlled machines are now carrying out tasks, and taking decisions, that were once the preserve of humans.
From recommending what film to watch to deciding who to invite to interview, that’s computer-based decision-making at work.
But how does AI impact human rights?
During a session hosted by Microsoft, participants focused on AI’s impact on inclusion. They recognised the benefits AI can bring to society. Just think of how voice recognition has advanced, opening up the digital world to people with visual impairments.
However, they also spoke of digital divides. Because not all individuals have equal access to technology, digitalisation can deepen existing divisions. In addition, computer-based decision-making can have in-built biases that result in discrimination.
To limit the risk of deepening the digital divide, participants felt that AI systems are best used when they are indispensable to achieve the desired outcome.
They also believed in the need to educate stakeholders about the human rights risks and opportunities that the use of AI brings.
Keeping children safe online was another dimension explored at the Forum, during a session hosted by the European Data Protection Supervisor (EDPS).
Participants acknowledged the potential dangers to children of exposure to harmful content online. However, they also recognised that children are increasingly aware of the challenges that technology brings.
Not only that, many are also developing the skills they need to deal with such challenges.
Participants underlined the importance of establishing trust and transparency between parents and children. Keeping digital rules clear, including how parental control tools are configured, creates a fruitful setting for open dialogue between parents and children.
This in turn will help foster a healthier digital environment to keep children safe online.
One of the recommendations that emerged during discussions was to work with service providers to give users more control over the services they use. This could be achieved either through community-based moderation or by giving users more control over the unfiltered information they see.
The human rights implications of using biometrics in EU IT systems were yet another aspect explored during the Forum.
It was covered during a joint session hosted by FRA, the European Commission and AE Nyströms Advokatbyrå, a Swedish law firm specialising in human rights.
One of the key messages that emerged from discussions was that data quality, including the quality of biometric data, must be at the core of the development of EU IT systems. False fingerprint matches, missed matches, or inaccurately stored data can have severe consequences for the person concerned.
Participants also pointed to low awareness among public officials and civil society actors of the implications of storing and using data in EU IT systems, a gap that needs to be addressed.
Over the course of three days, participants spoke about the pace of digital change and what this means for human rights. While recognising the many benefits, they also argued for greater awareness and discussion about the potential pitfalls of digital technologies.
In this way, we can minimise the dangers of the fast-moving digital revolution while maximising the benefits for all.