Do science and innovative achievement take moral implications into account? Should we limit the success of our scientific and technological projects for the sake of avoiding presumptive moral and political harm?
I don't have the knowledge to critically assess the label of Pegasus as the 'world's most dangerous spyware', but as widely as it was featured in the media, this system developed by the Israeli company NSO was definitely sold to state actors and institutions that used it in ways contrary to human rights, media freedom and democratic principles.
In general, the overall discussion about Pegasus inevitably took a biased direction that does not necessarily reflect the results of the investigation, targeting NSO beyond the strict scope of the inquiry itself. However, after reading the investigation by Forbidden Stories reporters Laurent Richard & Sandrine Rigaud, I couldn't stop thinking in terms of responsibility and human risks - privacy, and well beyond that.
Should companies, any of them, give more weight to the human element in their risk assessments? We are living in an era when social responsibility matters more than ever before. Companies are taking responsibility and trying to behave responsibly when it comes to protecting the environment, for example. Why not take a similar approach to issues pertaining to digital surveillance and human rights?
As humans who create machines and apps and push forward for technological achievement, maybe we can sometimes try to temper our enthusiasm in order to leave room for human and moral considerations. From an academic point of view, this is a discussion that belongs to the ongoing reshaping of our mentalities. Each era has its own priorities, and until the transition fully takes place, conflicts between new and old mindsets will prevail.