Category: Big Data & Data Analytics

• Using Artificial Intelligence to address Corruption: A proposal for Tamil Nadu


Nations must adopt Artificial Intelligence as a mechanism to build the transparency, integrity, and trustworthiness needed to fight corruption. Without effective public scrutiny, the risk of money being lost to corruption and misappropriation is vast. Dr Chris Kpodar, a global Artificial Intelligence specialist, has advocated the use of artificial intelligence as an anti-corruption tool, by redesigning systems that were previously prone to bribery and corruption.

    Artificial Intelligence Tools

Artificial Intelligence has become popular due to its expanding applications across many fields. Recently, IIT Madras opened a B.Tech Data Science programme in Tanzania, a sign of this popularity. The history of Artificial Intelligence goes back to the 1950s, when computing power was limited and hardware was bulky. Since then, computing power has grown exponentially alongside the miniaturisation of hardware, allowing algorithms to process far larger datasets. The field of AI has nevertheless gone through ups and downs in popularity.

Researchers have long worked on neural networks: mathematical models inspired by neurons in the brain, whose basic unit, the artificial neuron, is one of the foundations of state-of-the-art AI.
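
To make this concrete, here is a minimal Python sketch of a single artificial neuron: a weighted sum of inputs plus a bias, passed through a non-linear activation. The inputs, weights, and bias below are arbitrary illustrative values, not learned parameters.

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a non-linear activation (here, a sigmoid)."""
    z = np.dot(w, x) + b             # weighted sum, analogous to synaptic strengths
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid squashes the output into (0, 1)

x = np.array([0.5, -1.2, 3.0])   # input signals
w = np.array([0.8, 0.1, -0.4])   # weights (learned from data in practice)
b = 0.2                          # bias term
print(neuron(x, w, b))           # a value between 0 and 1
```

Stacking many such units in layers, and learning the weights from data, yields the neural networks behind modern AI systems.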

Artificial intelligence (AI), machine learning, deep learning, and data science are popular terms describing computing fields that teach machines how to learn. AI is a catch-all term for computing systems designed to understand and replicate human intelligence. Machine learning is a subfield of AI in which algorithms are trained on datasets to make predictions or decisions without being explicitly programmed. Deep learning is a subfield of machine learning that uses multi-layer neural networks to learn from large datasets, loosely mimicking the behaviour of neurons in the brain. The field of AI resurged in popularity after a neural network architecture called AlexNet achieved impressive results in the ImageNet image recognition challenge in 2012. Since then, neural networks have entered industrial applications, with colossal research funding mobilised.

    Breakthroughs that can aid Policy Implementation

There are many types of neural networks, each designed for a particular application. The recent popularity of applications like ChatGPT is due to a class of neural networks called language models. Language models are probability models that ask: what is the best next token to generate, given the previous tokens?
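
To illustrate the idea in its simplest form, the sketch below builds a bigram model over a toy, invented corpus: it estimates the probability of each next token given only the previous one. Modern language models replace these counts with neural networks over far longer contexts, but they answer the same question.

```python
from collections import Counter, defaultdict

# Toy corpus, invented purely for illustration.
corpus = "the land record is valid . the land record is forged .".split()

# Count how often each token follows each previous token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Estimate P(next token | previous token) from the counts."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

print(next_token_distribution("land"))  # {'record': 1.0}
print(next_token_distribution("is"))    # {'valid': 0.5, 'forged': 0.5}
```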

Two significant breakthroughs led towards ChatGPT. The first was machine translation between languages using a technique called the attention mechanism. The second was building language models around this technique in the transformer architecture, which raised state-of-the-art performance across many tasks in artificial intelligence.
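
The core of the attention mechanism can be written in a few lines. Below is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of the transformer: each query is compared against all keys, and the resulting softmax weights mix the values. The random matrices stand in for learned token representations.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention as in 'Attention Is All You Need':
    each query attends to all keys; the weights mix the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # weighted sum of values

# Toy example: 3 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))                   # self-attention: same source
print(scaled_dot_product_attention(Q, K, V).shape)    # (3, 4)
```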

The transformer, a robust neural network architecture, was introduced in 2017 by Google researchers in the paper “Attention Is All You Need”; it is what enables the human-like text generation seen in ChatGPT. Large language models have taken a big step in the technology landscape. As machine learning applications are deployed rapidly and research advances with innumerable breakthroughs, a governance model for these systems becomes necessary. As recently as 2019, GPT-2, a transformer-based model, could not solve elementary mathematical problems. Within a few years, advances in the GPT family produced models that could achieve high scores on exams such as the SAT and GRE. Another breakthrough was the ability of machine-learning programs to generate code automatically, which has increased developer productivity.

Moreover, many researchers are working towards AGI (Artificial General Intelligence), and nobody knows precisely when, or whether, such capabilities might be achieved. Researchers have not yet settled on a definition of AGI agreeable to everyone in the AI research community. The rate of advancement and investment in AI research is staggering, which raises ethical concerns and calls for governance of these large language models. India is an emerging economy in which all sectors are growing rapidly. India’s economy grows at nearly 10% yearly, with the services sector making up almost 50% of the total; this translates into high tax revenues for the government and high-paying jobs, even though most of the Indian workforce is employed in the industrial and agricultural sectors.

    Using AI to deal with Corruption and enhance Trust

The primary issue in India has been corruption at all levels of government, from the panchayat, district, and state levels to the central machinery. Corruption is attributed mainly to excessive regulation, rent-seeking behaviour, lack of accountability, and the need for government permits. The Indian bureaucratic system and government employees are among the least efficient across sectors such as infrastructure, real estate, metals and mining, aerospace and defence, and power and utilities, which are also the most susceptible to corruption. Due to this inefficiency, the productivity of the public sector is low, which weighs on the Indian economy.

India ranked 85th out of 180 countries on the 2022 Corruption Perceptions Index, with close to 62% of Indians reporting that they encountered corruption or paid bribes to government officials to get things done. There are many reasons for corruption in India: excessive regulation, a complicated tax system, bureaucratic hurdles, lack of ownership of work, and low public-sector productivity. Corruption is dishonest or fraudulent conduct by those in power, typically involving bribery. Bribery is generally defined as the corrupt solicitation, acceptance, or transfer of something of value in exchange for official action. Bribery involves two actors, a giver and a receiver, whereas corruption primarily involves one actor who abuses a position of power for personal gain; bribery is a singular act, while corruption may be an ongoing abuse of power to benefit oneself.

Trust is a critical glue in financial transactions. Where trust between individuals is higher, economic transactions are faster and the economy grows, with more businesses moving in, bringing capital, and increasing the production and exchange of goods. When trust is low, businesses hesitate, and the economy stagnates or declines. High-trust societies like Norway have advanced financial systems, where credit and financial instruments are more developed, compared with lower-trust societies such as Kenya and India, where many financial instruments and capital markets for raising finance are unavailable. Public policymakers must therefore seek ways to increase trust in their local economies by forming policies conducive to business transactions.

The real-estate sector in Tamil Nadu: a fit case for the use of AI

Tamil Nadu has India’s second-largest state economy and is the country’s most industrialised and urbanised state. Real estate is an engine of economic growth and a prime mover of monetary transactions, and it is a prime financial asset for Tamils across many social strata. However, real estate in Tamil Nadu is prone to corruption at many levels. One common method is the forgery of land registration documents, which has eroded the trust of investors at all levels in Tamil Nadu.

To address this lack of trust, we can use technology to empower the public and create an environment of accountability, resulting in greater confidence. Machine learning can provide algorithms to detect forgeries and prevent land grabbing: tools such as identity analysis, document analysis, and transaction pattern analysis can all strengthen accountability. Beyond these, machine learning offers many methods, or combinations of methods, that can be used. One advanced approach uses transformer-based models, the foundation of language models such as BERT and the generative pre-trained (GPT) family, for text-based applications. A language model could be trained on verified original documents as a baseline against which records are routinely checked for forgeries; documents can be encoded as embeddings and compared to surface semantic anomalies, as sketched below.
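
As a hypothetical sketch of the embedding-comparison idea, the snippet below encodes land-record texts with an off-the-shelf sentence encoder (the sentence-transformers library is assumed here; any BERT-style encoder would do) and flags a record whose best similarity to verified originals falls below a threshold. The records, model choice, and threshold are all illustrative, not a vetted pipeline.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # one possible encoder

# Illustrative model choice; any sentence-level encoder could substitute.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Invented example records, standing in for verified land documents.
verified_records = [
    "Sale deed registered at Chennai sub-registrar office, survey no. 12/3.",
    "Patta transfer approved for survey no. 45/1, Coimbatore district.",
]
incoming_record = "Sale deed registered, survey no 12/3, duplicate stamp seal."

ref = model.encode(verified_records)      # embeddings of verified originals
doc = model.encode([incoming_record])[0]  # embedding of the new document

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best_match = max(cosine(doc, r) for r in ref)
if best_match < 0.8:                      # illustrative threshold
    print("Flag for manual review: semantic anomaly detected")
```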

Once a forgery is detected, the case can be automatically forwarded to civil magistrates or other pertinent authorities. Additionally, software repository platforms allow the public to be notified of any change in status or activity. Customised public repositories modelled on GitHub could create immense value for Tamil Nadu’s Department of Revenue, creating accountability, increasing productivity, and reducing workload. Public repositories displaying land-transaction activity might alert the public to such forgeries, creating an environment of greater accountability and trust. Another approach is to introduce computer vision algorithms, such as convolutional neural networks combined with BERT, which can validate signatures, detect document tampering, and verify time-frames to flag forgeries. This can be done by training such algorithms on original documents and then checking any document where forgery is reasonably suspected; a sketch follows.
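
The following is a minimal PyTorch sketch of the computer-vision side: a small convolutional network that takes scanned signature crops and outputs genuine-versus-forged logits. The architecture, input size, and labels are assumptions for illustration; a real system would need curated training data and careful evaluation.

```python
import torch
import torch.nn as nn

class SignatureNet(nn.Module):
    """Toy CNN for signature verification, assuming 1-channel 128x128 crops
    labelled genuine/forged. Sizes are illustrative, not a production design."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 64x64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # logits: genuine vs forged
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SignatureNet()
batch = torch.randn(4, 1, 128, 128)  # stand-in for scanned signature crops
print(model(batch).shape)            # torch.Size([4, 2])
```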

Another primary concern in Tamil Nadu’s government has been people in positions of power or close to financial oversight, who are more prone to corruption. Such individuals can be flagged or monitored using graph neural networks, which map individuals, their connections, and their financial transactions as a network to identify who is most at risk of corrupt behaviour (a sketch follows below). Another method to reduce corruption is to remove personal discretion from the process: machine learning can automate the tasks and documents involved in land registration, and digitisation itself might help reduce corruption. Large language models can also be used as classifiers and released to the public to keep the Tamil Nadu government’s spending accountable, so that the public is informed and the diversion of government money for personal gain is further reduced.

Another central area of corruption is the tender, or bidding, process for government contracts in Tamil Nadu, such as public development works or engineering projects. This process can be made more public, and machine learning algorithms can check whether norms, contracts, and procedures are followed in awarding tenders for government projects. To save wasteful expenditure, algorithms can check whether objective conditions are met, with any deviations flagged and placed in the public domain. Given any suspicion, the public can file a public interest litigation (PIL) in Tamil Nadu’s court system.
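
As a toy illustration of the graph idea, the sketch below performs one round of message passing (a simplified graph-convolution step) over a hypothetical network of four officials: each individual’s risk indicator is averaged with those of connected parties, so risk concentrated in a cluster becomes visible. All names, edges, and scores are invented.

```python
import numpy as np

officials = ["A", "B", "C", "D"]
# Adjacency matrix: 1 where two parties share transactions or connections.
A_mat = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
X = np.array([[0.9], [0.1], [0.8], [0.2]])  # per-node risk indicators

A_hat = A_mat + np.eye(4)                   # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # degree normalisation
W = np.array([[1.0]])                       # trivial stand-in for a learned weight

H = D_inv @ A_hat @ X @ W                   # one round of message passing
for name, score in zip(officials, H.ravel()):
    print(name, round(float(score), 2))     # smoothed risk per individual
```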

We can conclude that with more machine learning tools deployed as part of Tamil Nadu’s state machinery, corruption can be significantly reduced by releasing information to the public and creating an environment of greater accountability.

    References:

1. Russell, S. J., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach.

2. Bau, D., Elhussein, M., Ford, J. B., Nwanganga, H., & Sühr, T. (n.d.). Governance of AI models. Managing AI Risks. https://managing-ai-risks.com/

3. U.S. Department of State. (2021). 2021 Country Reports on Human Rights Practices: India. https://www.state.gov/reports/2021-country-reports-on-human-rights-practices/india/

4. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (pp. 4171–4186). https://arxiv.org/abs/1810.04805

5. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8). https://openai.com/blog/better-language-models/

6. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI Blog. https://openai.com/blog/language-unsupervised/

7. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/abs/2212.08073; see also https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback

8. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback (RLHF). arXiv preprint arXiv:2203.02155. https://arxiv.org/abs/2203.02155

    Feature Image: modernghana.com

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?


    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed to the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn to what is already a brutal conflict.

The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology. It offered to help uncover infiltrators – including Russian military personnel – to combat misinformation, to identify the dead and to reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement concerning the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology, which could in the near future be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.


    The promise of facial recognition in law enforcement and on the battlefield is to increase precision, lifting the proverbial fog of war with automated precise targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

    But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in. We worry that for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly ‘blackboxed’ – taken by a machine whose working and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards such as this are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that collect the sensitive biometric data essential to facial recognition; the data subject knows that it is happening, and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data protection agencies in Italy and France.

    If privately owned facial recognition technologies are used to identify Ukrainian citizens within the EU, or in border zones, to offer them some form of protective status, a grey area would be established between military and civilian use within the EU itself. Any such facial recognition system would have to be used on civilian populations within the EU. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then would pose the question as to how a single company can be entrusted with civil data which it can easily repurpose for military use. That is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media records.

Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already happened in the US, where members of the New York Police Department are reported to have used Clearview’s tool to circumvent data protection and privacy rules within the department, and to have installed Clearview’s app on private devices in violation of NYPD policy.

    This is a particular risk with relation to the roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many are arguing it should be, it will carry into the EU the use of Clearview’s AI as an established practice for military and potentially civilian use, both initially conceived without malice or intention of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement; this is particularly so when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so ensuring the rule of law and the protection of citizens.

    This article was published earlier in openDemocracy, and is republished under Creative Commons Licence

    Feature Image Credit: www.businessinsider.in