Tag: facial-recognition

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed to the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn to what is already a brutal conflict.

The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology, saying it could uncover infiltrators – including Russian military personnel – combat misinformation, identify the dead and reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement concerning the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology, which could in the near future be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.

    The promise of facial recognition in law enforcement and on the battlefield is to increase precision, lifting the proverbial fog of war with automated precise targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in. We worry that for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly ‘blackboxed’ – taken by a machine whose workings and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards such as this are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that collect the sensitive biometric data essential to facial recognition, the data subject knows that it is happening and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data protection agencies in Italy and France.

If privately owned facial recognition technologies were used to identify Ukrainian citizens within the EU, or in border zones, in order to offer them some form of protective status, a grey area would open up between military and civilian use within the EU itself. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then it would raise the question of how a single company can be entrusted with civil data that it can easily repurpose for military use. That is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media records.

Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already happened in the US, where members of the New York Police Department reportedly used Clearview’s tool to circumvent data protection and privacy rules within the department, and installed Clearview’s app on private devices in violation of NYPD policy.

This is a particular risk in relation to the technology’s roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many are arguing it should be, Clearview’s AI will enter the EU as an established practice for military and potentially civilian use – both initially conceived without malice or intention of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement; this is particularly so when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so ensuring the rule of law and the protection of citizens.

This article was first published in openDemocracy and is republished under a Creative Commons licence.

    Feature Image Credit: www.businessinsider.in

  • COVID-19: Need for technology intervention in India

Much of the globalized world is at a standstill due to the COVID-19 pandemic. While world leaders establish measures to cope with the large-scale outbreak, technology has been at the forefront as a crucial part of the response. From sanitizer drones to virtual workspaces — the adoption of computing technology in healthcare, business and governance has seen an unprecedented rise.

However, due to India’s dense population and widespread poverty, the country’s response to this crisis will be an important case study. The World Health Organization’s guidelines insist that people should wash their hands regularly, but over 163 million people in India do not have access to clean water. When access to fundamental resources is limited, one can only assume that access to robust healthcare facilities is also limited. The country’s capacity to handle this crisis will be tested when local transmission of COVID-19 reaches second- and third-tier cities. As a primary effort to flatten the curve, the government has announced a 21-day country-wide lockdown. Although the measure has been welcomed, if the country fails to control the spread, the lack of modern infrastructure and medical professionals will have catastrophic consequences.

This is reflected in the limited adoption of technology in primary healthcare centres. Medical professionals say there is a shortage of around 70,000 ventilators, and existing resources are being used by critical at-risk patients. The surging requirement for intensive care medical devices, including ventilators and high-end diagnostic and robotic surgery instruments, is a growing concern. Domestic manufacturing and innovation have been scarce, and Indian companies like Skanray Technologies are struggling to meet the immediate demand because the international airline ban makes it difficult to import crucial components such as chips, controllers and sensors from China — hindering their ability to produce this equipment on time.

Globally, innovative technologies that seemed gimmicky in the past are being brought into mainstream practice. Drones have been deployed to carry medical samples and to spray disinfectants across the country. Robots placed in hospitals aid in remote diagnosis and thermal sensing of patients, and also serve as service bots that bring food and toiletries to people.

Facial recognition cameras are commonplace in China and a growing trend in other countries. Technology companies like SenseTime have built contactless temperature detection software that has been integrated into these cameras for wider coverage of people with fever. Big data analytics on these massive feeds has produced prediction algorithms that can determine whether a person has come into contact with an infected person. This data is then relayed via telecom companies to tell the individuals concerned to self-quarantine.
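The core of such contact-prediction systems is co-location: flagging anyone whose recorded position was close, in space and time, to that of a known case. The sketch below is a toy illustration of that idea only — the data model (`Ping`), thresholds and function names are our own assumptions, not any vendor’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Ping:
    person: str
    t: float   # timestamp in minutes
    x: float   # position in metres (east)
    y: float   # position in metres (north)

def likely_contacts(pings, infected, max_dist=2.0, max_gap=15.0):
    """Flag anyone whose ping falls within max_dist metres and
    max_gap minutes of a ping from someone in `infected`."""
    hot = [p for p in pings if p.person in infected]
    flagged = set()
    for p in pings:
        if p.person in infected:
            continue
        for h in hot:
            close = (p.x - h.x) ** 2 + (p.y - h.y) ** 2 <= max_dist ** 2
            recent = abs(p.t - h.t) <= max_gap
            if close and recent:
                flagged.add(p.person)
    return flagged

pings = [Ping("carrier", 0, 0, 0),
         Ping("neighbour", 5, 1, 1),    # 1.4 m away, 5 min later
         Ping("passer-by", 500, 1, 1)]  # same spot, hours later
print(likely_contacts(pings, {"carrier"}))  # → {'neighbour'}
```

Real deployments work on vastly larger feeds and cross-reference telecom, camera and payment records, which is precisely what raises the privacy concerns discussed below.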

Complex surveillance systems come with their share of privacy concerns. While the lines between responsible surveillance and invasion of privacy become blurred, one cannot overlook the fact that some of these drastic measures are working. In China, official reports indicate that domestic cases are under control and newer cases of the virus are classified as imported. In a time of crisis, an open-minded analysis of these “draconian” measures would seem justified. However, this pandemic has not provided any justification for collecting such sensitive data in secrecy.

Implementing such systems in India would mean clearing multiple policy hurdles and settling on comprehensive definitions of data privacy. However, inexpensive technologies such as drones and robotics should spark interest in the country. Medical professionals at the forefront of this battle could benefit from technology that reduces their risk of contracting the virus. Alongside such technology, modern practices of preliminary diagnosis such as telemedicine should be encouraged.

Information and communication technologies across the country have made this battle a lighter burden than it could have been. While the rate of awareness is significantly higher in the age of social media, it is important to note its duality. Online medical information and guidelines are now accessible to at least 34% of the total population, compared with 7.5% in 2010. However, this information influx has also resulted in rumour-mongering and exaggeration of outlier incidents — causing undue worry and needless panic. In the past five years the internet has rapidly penetrated all sections of society, but this has not ensured awareness of how to use the technology responsibly.

On the other hand, the quarantined lifestyle has increased the need for virtual workspaces. Facebook’s CEO, Mark Zuckerberg, reported that traffic on the company’s video streaming and messaging platforms has grown multifold. Microsoft also reported a 40% increase in the active user base of its collaboration software. Extending high-speed fiber internet throughout India would help distribute this dense working population across multiple locations. With virology experts anticipating an effective vaccine in 18 months at the earliest, some of these altered lifestyles could become the new norm.

In the years following the Second World War, measures were actively put in place to prevent another global conflict. The COVID-19 crisis could leave a similar mark on the world, with pandemic response and technology undergoing drastic reforms. India, however, should treat this wake-up call as pointing towards something more fundamental — uniformity in primary healthcare, civic infrastructure and technology intervention.

    Views expressed are author’s own.