Tag: Artificial Intelligence

  • Artificial Intelligence vs The Indian Job Market

    Artificial Intelligence vs The Indian Job Market

    Artificial intelligence (AI) has become a ubiquitous presence in our daily lives, transforming the way we operate in the modern era. From the development of autonomous vehicles to facilitating advanced healthcare research, AI has enabled the creation of groundbreaking solutions that were once thought to be unattainable. As more investment is made in this area and more data becomes available, it is expected that AI will become even more powerful in the coming years.

    AI, often referred to as the pursuit of creating machines capable of exhibiting intelligent behaviour, has a rich history that dates back to the mid-20th century. During this time, pioneers such as Alan Turing laid the conceptual foundations for AI. The journey of AI has been marked by a series of intermittent breakthroughs, periods of disillusionment, and remarkable leaps forward. It has also been a subject of much discussion over the past decade, and this trend is expected to continue in the years to come.

    According to a report by Precedence Research, the global artificial intelligence market was valued at USD 454.12 billion in 2022 and is expected to hit around USD 2,575.16 billion by 2032, progressing with a compound annual growth rate (CAGR) of 19% from 2023 to 2032. The Asia Pacific is expected to be the fastest-growing artificial intelligence market during the forecast period, expanding at the highest CAGR of 20.3% from 2023 to 2032. The rising investments by various organisations towards adopting artificial intelligence are boosting the demand for artificial intelligence technology.[1]
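
    As a quick sanity check on these figures (the arithmetic here is ours, using the report's own numbers), the stated growth rate follows from the standard CAGR formula over the ten-year forecast window:

    $$\text{CAGR} = \left(\frac{2{,}575.16}{454.12}\right)^{1/10} - 1 \approx 0.19,$$

    i.e., roughly the 19% per year the report cites.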

    Figure 1 illustrates a bar graph displaying the upward trajectory of the AI market in recent years, sourced from Precedence Research.

    The Indian government has invested heavily in developing the country’s digital infrastructure. In 2020, the Government of India increased its spending on Digital India to $477 million to boost AI, IoT, big data, cyber security, machine learning, and robotics. The artificial intelligence market is expected to witness significant growth in the BFSI (banking, financial services, and insurance) sector on account of data mining applications, as adoption of artificial intelligence solutions is increasing in data analytics, fraud detection, cybersecurity, and database systems.

    Figure 2 illustrates a pie chart displaying the distribution of the Artificial Intelligence (AI) market share across various regions in 2022, sourced from Precedence Research.

    Types of AI Systems and Impact on Employment

    AI systems can be divided primarily into three types:

    Narrow AI: This is a specific form of artificial intelligence that executes dedicated tasks with intelligence. It represents the prevailing and widely accessible type of AI in today’s technological landscape.

    General AI: This represents an intelligence capable of efficiently undertaking any intellectual task akin to human capabilities. The aspiration driving the development of General AI is to create a system with human-like cognitive abilities that can think autonomously and adaptably. However, as of now, the realisation of a General AI system that comprehensively emulates human cognition remains elusive.

    Super AI: It is a level of intelligence within systems where machines transcend human cognitive capacities, exhibit superior performance across tasks, and possess advanced cognitive properties. This extends from the culmination of the General AI.

    Artificial intelligence has been incorporated into various aspects of our lives, ranging from virtual assistants on our mobile devices to advancements in customisation, cyber protection, and more. The growth of these systems is swift, and it is only a matter of time before the emergence of general artificial intelligence becomes a reality.

    According to a report by PwC, the global GDP is estimated to be 14% higher in 2030 due to the accelerating development and utilisation of AI, which translates to an additional $15.7 trillion. This growth can be attributed to:

    1. Improvements in productivity resulting from the automation of business processes (including the use of robots and autonomous vehicles).
    2. Productivity gains from businesses integrating AI technologies into their workforce (assisted and augmented intelligence).
    3. Increased consumer demand for AI-enhanced products and services, resulting in personalised and/or higher-quality offerings.

    The report suggests that the most significant economic benefits from AI will likely come from increased productivity in the near future. This includes automating mundane tasks, enhancing employees’ capabilities, and allowing them to focus on more stimulating and value-added work. Capital-intensive sectors such as manufacturing and transport are likely to experience the most significant productivity gains from AI, given that many operational processes in these industries are highly susceptible to automation.[2]

    AI will disrupt many sectors and lead to the creation of many more. A compelling aspect to observe is how the Indian Job Market responds to AI and its looming threat to job security in the future.

    The Indian Job Market

    As of 2021, around 487.9 million of India’s 950.2 million people aged 15-64 were part of the workforce, the second-largest labour force after China’s. Of China’s 986.5 million people aged 15-64, 747.9 million were part of the workforce.

    India’s labour force participation rate (LFPR) at 51.3 per cent was less than China’s 76 per cent and way below the global average of 65 per cent.[3]
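
    These rates follow directly from the headcounts above (labour force divided by working-age population):

    $$\text{LFPR}_{\text{India}} = \frac{487.9}{950.2} \approx 51.3\%, \qquad \text{LFPR}_{\text{China}} = \frac{747.9}{986.5} \approx 75.8\%$$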

    The low LFPR can be primarily attributed to two reasons:

    Lack of Jobs

    To reach its growth potential, India is expected to generate approximately 9 million nonfarm jobs annually until 2030, as per a report by McKinsey & Company. However, analysts suggest that the current rate of job creation falls significantly below this target, with only about 2.9 million nonfarm jobs being added each year from 2013 to 2019. [4]

    During the COVID-19 pandemic, urban unemployment in India surged dramatically, peaking at 20.9% in the April-June 2020 quarter, coinciding with wage decline. Although the unemployment rate has decreased since then, full-time employment opportunities are scarce. Economists highlight a concerning trend where an increasing number of job-seekers, particularly the younger demographic, are turning towards low-paying casual jobs or opting for less stable self-employment options.[5]

    This shift in employment patterns is occurring even as the broader Indian economy is projected to grow at an impressive 6.5% in the fiscal year ending March 2025. Despite this optimistic forecast, the employment landscape appears to be steering individuals towards less secure and lower-paying work. This raises pertinent concerns about the job market’s quality, stability, and inclusivity, particularly in accommodating the aspirations and needs of India’s burgeoning young workforce.

    Low female labour participation

    In 2021, China boasted an estimated female population of 478.3 million within the 15-64 age bracket, with an active female labour force of approximately 338.6 million. In stark contrast, despite India having a similar demographic size of 458.2 million women in that age group, its female labour force was significantly smaller, numbering only 112.8 million.[6]

    This discrepancy underscores a notable disparity in India’s female labour force participation rate compared to China, despite both countries having sizeable female populations within the working-age bracket.[7]

    Along with unemployment, there was also a crisis of under-employment and the collapse of small businesses, which has worsened since the pandemic.

    AI vs the Indian Job Market

    The presence and implications of AI cast a significant shadow on a country as vast and diverse as India. Amidst the dynamic and often unpredictable labour market, where employment prospects have been uncertain, addressing the impact of AI poses a considerable challenge for employers. Balancing the challenges and opportunities presented by AI while prioritising job security for the workforce is a critical obstacle to overcome.

     The diverse facets of artificial intelligence (AI) and its capacity to transform industries across the board amplify the intricacy of the employment landscape in India. Employers confront the formidable challenge of devising effective strategies to incorporate AI technologies without compromising the livelihoods of their employees.

    As per the findings of the Randstad Work Monitor Survey, a staggering 71% of individuals in India exhibit an inclination towards altering their professional circumstances within the next six months, either by transitioning to a new position within the same organisation or by seeking employment outside it. Furthermore, 23% of the workforce can be classified as passive job seekers, who are neither actively seeking new opportunities nor applying for them but remain open to considering job prospects if a suitable offer arises.

    It also stated that at least half of Indian employees fear losing their jobs to AI, whereas the figure is one in three in developed countries. The growing concern among Indian workers stems from the substantial workforce employed in Business Process Outsourcing (BPO) and Knowledge Process Outsourcing (KPO), which are notably vulnerable to AI automation. Adding to this concern is India’s rapid uptake of AI technology, further accentuating the apprehension among employees.[8]

    India’s role as a global hub for outsourcing and its proficiency in delivering diverse services have amplified the impact of AI adoption. The country has witnessed a swift embrace of AI technologies across various industries, magnifying workers’ concerns about the potential ramifications for their job security.

    Goldman Sachs’ report highlights the burgeoning emergence of generative artificial intelligence (AI) and its potential implications for labour dynamics. The rapid evolution of this technology prompts questions regarding a possible surge in task automation, leading to cost savings in labour and amplified productivity. [9]

    The labour market could confront significant disruptions if generative AI delivers its pledged capabilities. Analysing occupational tasks across the US and Europe revealed that approximately two-thirds of the current jobs are susceptible to AI automation. Furthermore, the potential of generative AI to substitute up to one-fourth of existing work further underscores its transformative potential.

     Expanding these estimates on a global scale suggests that generative AI might expose the equivalent of 300 million full-time jobs to automation, signifying the far-reaching impact this technology could have on global labour markets.

    Recent advancements in artificial intelligence (AI) and machine learning have exerted substantial influence across various professions and industries, particularly impacting job landscapes in sectors such as Indian IT, ITeS, BPO, and BPM. These sectors collectively employ over five million people and are India’s primary source of white-collar jobs. [10]

    In a recent conversation with Business Today, Vardhman Jain, the founder and Vice Chairman of Access Healthcare, a Chennai-based BPO, highlighted the forthcoming impact of AI integration on the workplace. Jain indicated that customer service may be the sector most vulnerable to the initial disruptions of AI implementation.

    Jain pointed out that a substantial portion of services provided by the Indian BPO industry is focused on customer support, including voice and chat functions, data entry, and back-office services. He expounded upon how AI technologies, such as Natural Language Processing, Machine Learning, and Robotic Process Automation, possess the potential to significantly disrupt and automate these tasks within the industry.

    While the discourse surrounding AI often centres on the potential for job displacement, several industry leaders argue that AI will not supplant human labour, but rather augment worker output and productivity.

    At the 67th Foundation Day celebration of the All India Management Association (AIMA), N. R. Narayana Murthy, as reported by Business Today, conveyed a noteworthy message by asserting that AI is unlikely to supplant human beings, as humans will not allow it to happen.

    Quoting Murthy’s statement from the report, “I think there is a mistaken belief that artificial intelligence will replace human beings; human beings will not allow artificial intelligence to replace them.” The Infosys founder stressed that AI has functioned as an assistive force rather than an outright replacement, enhancing human lives and making them more comfortable.[11]

    McKinsey Global Institute’s study, “Generative AI and the Future of Work in America,” highlighted AI’s capability to expedite economic automation significantly. The report emphasised that while generative AI wouldn’t immediately eliminate numerous jobs, it would enhance the working methods of STEM, creative, business, and legal professionals.[12]

     However, the report also underscored that the most pronounced impact of automation would likely affect job sectors such as office support, customer service, and food service employment.

    While the looming threats posed by AI are undeniable, its evolution is expected to usher in a wave of innovation, leading to the birth of new industries and many job opportunities. This surge in new industries promises employment prospects and contributes significantly to economic growth by leveraging AI capabilities.

    Changing Employment Landscape

    Having explored different perspectives and conversations on AI, it has become increasingly evident that the employment landscape is poised for significant transformation in the years ahead. This prompts a crucial enquiry: will there remain a necessity for human jobs, and are our existing systems equipped to ensure equitable distribution of the benefits fostered by these technological developments?

    • Universal Basic Income

    Universal basic income (UBI) is a social welfare proposal in which all citizens of a given population regularly receive a minimum income in the form of an unconditional transfer payment, with no means test or work requirement; if the payment were conditional, it would instead be called a guaranteed minimum income.

    Supporters of Universal Basic Income (UBI) now perceive it not only as a solution to poverty, but also as a potential answer to several significant challenges confronting contemporary workers: wage disparities, uncertainties in job stability, and the looming spectre of job losses due to advancements in AI.

    Karl Widerquist, a professor of philosophy at Georgetown University-Qatar and an economist and political theorist, posits that the influence of AI on employment does not necessarily result in permanent unemployment. Instead, he suggests a scenario in which displaced workers shift into lower-income occupations, leading to increased competition and saturation in these sectors.

    According to Widerquist, the initial effects of AI advancements might force white-collar workers into the gig economy or other precarious and low-paying employment. This shift, he fears, could trigger a downward spiral in wages and job security, exacerbating economic inequality.

     He advocates for a Universal Basic Income (UBI) policy as a response to the challenges posed by AI and automation. Widerquist argues that such a policy would address employers’ failure to equitably distribute the benefits of economic growth, fuelled in part by automation, among workers. He sees UBI as a potential solution to counter the widening disparity in wealth distribution resulting from these technological advancements.[13]

    A study conducted by researchers at Utrecht University, Netherlands, from 2017 to 2019 implemented a basic income for unemployed individuals who previously received social assistance. The findings showed an uptick in labour market engagement. This increase wasn’t attributable solely to the financial support offered by Universal Basic Income (UBI) but also to the removal of the conditions, and the sanctions for non-compliance, typically imposed on job seekers.[14]

    Specifically, participants exempted from the obligation to actively seek or accept employment demonstrated a higher likelihood of securing permanent contracts, as opposed to the precarious work arrangements highlighted by Widerquist.

     While UBI experiments generally do not demonstrate a significant trend of workers completely exiting the labour market, instances of higher payments have resulted in some individuals reducing their working hours. This nuanced impact showcases the varying effects of UBI on labour participation, highlighting both increased job security for some and a choice for others to adjust their work hours due to enhanced financial stability.

    In exploring the potential for Universal Basic Income (UBI), it becomes evident that while the concept holds promise, its implementation and efficacy are subject to multifaceted considerations. The diverse socioeconomic landscape, coupled with the scale and complexity of India’s population, presents both opportunities and challenges for UBI.

     UBI’s potential to alleviate poverty, enhance social welfare, and address economic disparities in a country as vast and diverse as India is compelling. However, the feasibility of funding such a program, ensuring its equitable distribution, and navigating its impact on existing welfare schemes requires careful deliberation.

    Possible Tax Solutions

    • Robot Tax

    The essence of a robot tax lies in the notion that companies integrating robots into their operations should bear a tax burden given that these machines replace human labour.

    There exist various arguments advocating for a robot tax. First, it aims to safeguard human employment by dissuading firms from substituting humans with robots. Second, while companies may prefer automation, a robot tax can generate government revenue to offset the decline in funds from payroll and income taxes. A third argument is rooted in allocative efficiency: robots contribute neither payroll nor income taxes, so taxing robots at a rate similar to human labour aligns with economic efficiency and prevents the tax system from distorting resource allocation.
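
    As a stylised, purely illustrative example of the allocative argument (the numbers here are ours, not from the source): if a displaced worker earned a wage $w$ on which payroll and income taxes were collected at a combined rate $t$, a revenue-neutral robot tax $R$ on the machine replacing that worker would be

    $$R = t \cdot w, \qquad \text{e.g. } R = 0.20 \times 500{,}000 = 100{,}000 \text{ per year.}$$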

    In various developed economies, such as the United States, the prevailing taxation system is biased toward artificial intelligence (AI) and automation over the human workforce. This inclination, fuelled by tax incentives, may lead to investments in automation solely for tax benefits rather than for any actual increase in profitability. Furthermore, the failure to tax robots can exacerbate income inequality as labour’s share of national income diminishes.

    One possible solution to address this issue is the implementation of a robot tax, which could generate revenue that could be redistributed as Universal Basic Income (UBI) or as support for workers who have lost their jobs due to the adoption of robotic systems and AI and are unable to find new employment opportunities.

    • Digital Tax

    The discourse surrounding digital taxation primarily centers on two key aspects. Firstly, it grapples with the challenge of maintaining tax equity between traditional and digital enterprises. Digital businesses have benefited from favorable tax structures, such as advantageous tax treatment for income derived from intellectual property, accelerated amortization of intangible assets, and tax incentives for research and development. However, there is a growing concern that these preferences may result in unintended tax advantages for digital businesses, potentially distorting investment trajectories instead of promoting innovation.

    Secondly, the issue arises from digital companies operating in countries with no physical presence yet serving customers through remote sales and service platforms. This situation presents a dilemma regarding traditional corporate income tax regulations. Historically, digital businesses paid corporate taxes solely in countries where they maintained permanent establishments, such as headquarters, factories, or storefronts. Consequently, countries where sales occur or online users reside have no jurisdiction over a firm’s income, leading to taxation challenges.

    Several approaches have been suggested for taxing digital profits. One involves expanding existing frameworks: a country may extend its Value-Added Tax (VAT) or Goods and Services Tax (GST) to encompass digital services, or broaden the tax base to include revenues generated from digital goods and services. Alternatively, a separate Digital Services Tax (DST) can be implemented.

    While pinpointing the ultimate solution remains elusive, ongoing experimentation and iterative processes are expected to guide us toward a resolution that aligns with the need for a larger consensus. With each experiment and accumulated knowledge, we move closer to uncovering an approach that best serves the collective requirements.[15]

    Reimagining the Future

    The rise of Artificial Intelligence (AI) stands as a transformative force reshaping the industry and business landscape. As AI continues to revolutionise how we work and interact, staying ahead in this rapidly evolving landscape is not just an option, but a necessity. Embracing AI is not merely about adapting to change; it is also about proactive readiness and strategic positioning. Whether you’re a seasoned entrepreneur or a burgeoning startup, preparing for the AI revolution involves a multifaceted approach encompassing automation, meticulous research, strategic investment, and a keen understanding of how AI can augment and revolutionise your business. PwC’s report lists some crucial steps to prepare one’s business for the future and stay ahead. [16]

    Understand AI’s Impact: Start by evaluating the industry’s technological advancements and competitive pressures. Identify the operational challenges AI can address, along with disruptive opportunities available now and on the horizon.

    Prioritise Your Approach: Determine how AI aligns with business goals. Assess your readiness for change: are you an early adopter or a follower? Consider feasibility, data availability, and barriers to innovation. Prioritise automation and decision-augmentation processes based on potential savings and data utilisation.

    Talent, Culture, and Technology: While AI investments might seem high, costs are expected to decrease over time. Embrace a data-driven culture and invest in talent like data scientists and tech specialists. Prepare for a hybrid workforce, combining AI’s capabilities with human skills like creativity and emotional intelligence.

    Establish Governance and Trust: Trust and transparency are paramount. Consider the societal and ethical implications of AI. Build stakeholder trust by ensuring AI transparency and unbiased decision-making. Manage data sources rigorously to prevent biases and integrate AI management with overall technology transformation.

    Getting ready for Artificial Intelligence (AI) is not just about new technology; it is about intelligent strategy. Understanding how AI fits one’s goals is crucial, and prioritising where it can help, building the right skills, and setting clear rules are essential. As AI becomes more common, it is not about robots taking over but about humans and AI working together. By planning and embracing AI wisely, businesses can stay ahead and create innovative solutions for the future.

    References:

    [1] Precedence Research. “Artificial Intelligence (AI) Market.” October 2023. Accessed November 14, 2023. https://www.precedenceresearch.com/artificial-intelligence-market

    [2] PricewaterhouseCoopers (PwC). “Sizing the prize, PwC’s Global Artificial Intelligence Study.” October 2017. Accessed November 14, 2023. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html#:~:text=The%20greatest%20economic%20gains%20from,of%20the%20global%20economic%20impact.

    [3] World Bank. “Labor force, total – India 2021.” Accessed November 12, 2023. https://data.worldbank.org/indicator/SL.TLF.TOTL.IN?locations=IN

    [4] McKinsey & Company. “India’s Turning Point.” August 2020. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/India/Indias%20turning%20point%20An%20economic%20agenda%20to%20spur%20growth%20and%20jobs/MGI-Indias-turning-point-Executive-summary-August-2020-vFinal.pdf

    [5] Dugal, Ira. “Where are the jobs? India’s world-beating growth falls short.” Reuters, May 31, 2023. Accessed November 14, 2023. https://www.reuters.com/world/india/despite-world-beating-growth-indias-lack-jobs-threatens-its-young-2023-05-30/

    [6] Government of India. Ministry of Labour and Employment. “Labour and Employment Statistics 2022.” July 2022. https://dge.gov.in/dge/sites/default/files/2022-08/Labour_and_Employment_Statistics_2022_2com.pdf

    [7] Deshpande, Ashwini, and Akshi Chawla. “It Will Take Another 27 Years for India to Have a Bigger Labour Force Than China’s.” The Wire, July 27, 2023. https://thewire.in/labour/india-china-population-labour-force

    [8] Randstad. “Workmonitor Pulse Survey.” Q3 2023. https://www.randstad.com/workforce-insights/future-work/ai-threatening-jobs-most-workers-say-technology-an-accelerant-for-career-growth/

    [9] Briggs, Joseph, and Devesh Kodnani. “The Potentially Large Effects of Artificial Intelligence on Economic Growth.” Goldman Sachs, March 26, 2023. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

    [10] Chaturvedi, Aakanksha. “‘Might take toll on low-skilled staff’: How AI can cost BPO, IT employees their jobs.” Business Today, April 5, 2023. https://www.businesstoday.in/latest/corporate/story/might-take-toll-on-low-skilled-staff-how-ai-can-cost-bpo-it-employees-their-jobs-376172-2023-04-05

    [11] Sharma, Divyanshi. “Can AI take over human jobs? This is what Infosys founder NR Narayan Murthy thinks.” India Today, February 27, 2023. https://www.indiatoday.in/technology/news/story/can-ai-take-over-human-jobs-this-is-what-infosys-founder-nr-narayan-murthy-thinks-2340299-2023-02-27

    [12] McKinsey Global Institute. “Generative AI and the future of work in America.” July 26, 2023. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america

    [13] Kelly, Philippa. “AI is coming for our jobs! Could universal basic income be the solution?” The Guardian, November 16, 2023. https://www.theguardian.com/global-development/2023/nov/16/ai-is-coming-for-our-jobs-could-universal-basic-income-be-the-solution

    [14] Utrecht University. “What works (Weten wat werkt).” March 2020. https://www.uu.nl/en/publication/final-report-what-works-weten-wat-werkt

    [15] Merola, Rossana. “Inclusive Growth in the Era of Automation and AI: How Can Taxation Help?” Frontiers in Artificial Intelligence 5 (2022). Accessed November 23, 2023. https://www.frontiersin.org/articles/10.3389/frai.2022.867832

    [16] Rao, Anand. “A Strategist’s Guide to Artificial Intelligence.” PwC, May 10, 2017. https://www.strategy-business.com/article/A-Strategists-Guide-to-Artificial-Intelligence

     

  • Using Artificial Intelligence to address Corruption: A proposal for Tamil Nadu

    Using Artificial Intelligence to address Corruption: A proposal for Tamil Nadu

    Nations must adopt Artificial Intelligence as a mechanism to build the transparency, integrity, and trustworthiness necessary to fight corruption. Without effective public scrutiny, the risk of money being lost to corruption and misappropriation is vast. Dr Chris Kpodar, a global Artificial Intelligence specialist, has advocated the use of artificial intelligence as an anti-corruption tool, redesigning systems that were previously prone to bribery and corruption.

     

    Artificial Intelligence Tools

    Artificial Intelligence has become popular due to its increasing applications in many fields; recently, IIT Madras opened a B.Tech Data Science programme in Tanzania, a sign of this popularity. The history of Artificial Intelligence goes back to the 1950s, when computing power was limited and hardware was huge. Since then, computing power has increased exponentially alongside the miniaturisation of hardware, allowing algorithms to compute over far larger datasets. The field of AI, however, has gone through ups and downs in popularity.

    Researchers have worked on Neural Networks (see figure below), mathematical models inspired by neurons in the brain that serve as a foundational unit of state-of-the-art AI.

    Artificial intelligence (AI), machine learning, deep learning, and data science are popular terms describing computing fields that teach a machine how to learn. AI is a catch-all term that broadly means computing systems designed to understand and replicate human intelligence. Machine Learning is a subfield of AI in which algorithms are trained on datasets to make predictions or decisions without being explicitly programmed. Deep Learning is a subfield of Machine Learning that specifically refers to using multiple layers of neural networks to learn from large datasets, loosely mimicking the cognition of neurons in the brain. Recently, the field of AI has resurged in popularity after AlexNet, a neural network architecture, achieved impressive results in the ImageNet image recognition challenge in 2012. Since then, neural networks have entered industrial applications, with colossal research funding mobilised.

    Breakthroughs that can aid Policy Implementation

    There are many types of neural networks, each designed for a particular application. The recent popularity of applications like ChatGPT is due to a class of neural networks called language models. Language models are probability models that repeatedly ask: given the previous tokens, what is the best next token to generate?
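
    Formally, a language model factorises the probability of a token sequence autoregressively, and generation repeatedly samples or selects the most likely next token:

    $$P(x_1, x_2, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1})$$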

    Two significant breakthroughs led towards ChatGPT. The first came from machine translation, where a technique called the attention mechanism lets a model learn which parts of its input to focus on. The second was the introduction of this technique in transformer-type language models, which raised state-of-the-art performance across many tasks in artificial intelligence.
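
    For reference, the scaled dot-product attention at the core of these models computes a weighted mixture of values $V$ using the match between queries $Q$ and keys $K$ (with key dimension $d_k$):

    $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$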

    The transformer, a robust neural network architecture, was introduced in 2017 by Google researchers in “Attention Is All You Need”, and it is what enables the human-like text generation of ChatGPT. Large language models have taken a big step in the technology landscape. As machine learning applications are deployed rapidly and research advances with innumerable breakthroughs, these models call for a governance framework. As recently as 2019, GPT-2, a transformer-based machine learning model, could not reliably solve fundamental mathematical problems, such as simple arithmetic with numbers from 0-100. Within a few years, more advanced GPT models were achieving high scores on exams such as the SAT and the GRE. Another breakthrough was the ability of machine-learning programs to automatically generate code, which has increased developer productivity.

    Moreover, many researchers are working on AGI (Artificial General Intelligence), and nobody knows precisely when such capabilities might be developed; researchers have not even settled on a definition of AGI agreeable to everyone in the AI research community. The rate of advancement and investment in AI research is staggering, which calls for ethical scrutiny and governance of these large language models. India is an emerging economy where all sectors are growing rapidly. India’s economy grows nearly 10% yearly, with the services sector making up almost 50% of the entire economy, which translates into high tax revenues for the government and high-paying jobs, even though most of the Indian workforce is employed in the industrial and agricultural sectors.

    Using AI to deal with Corruption and enhance Trust

    The primary issue in India has been corruption at all levels of government, from the panchayat and district levels through the state level to the central machinery. Corruption is attributed mainly to over-regulation, rent-seeking behaviour, lack of accountability, and the need for government permits. The Indian bureaucratic system and government employees are among the least efficient across sectors such as infrastructure, real estate, metals and mining, aerospace and defence, and power and utilities, which are also the sectors most susceptible to corruption. Due to this inefficiency, public sector productivity is low, impacting the local Indian economy.

    India ranked 85 out of 180 countries on the Corruption Perceptions Index in 2022, with close to 62% of Indians reporting having encountered corruption or paid bribes to government officials to get a job done. There are many reasons for corruption in India: excessive regulation, a complicated tax system, bureaucratic hurdles, lack of ownership of work, and an unproductive public sector. Corruption is dishonest or fraudulent conduct by those in power, typically involving bribery; bribery, in turn, is generally defined as the corrupt solicitation, acceptance, or transfer of value in exchange for official action. Bribery involves two actors, a giver and a receiver, and is a singular act, whereas corruption primarily involves one actor abusing a position of power for personal gain, often as an ongoing practice.

    Trust is a critical glue in financial transactions; where trust between individuals is higher, the economic transactions are faster, and the economy grows, with more businesses moving, bringing capital, and increasing the production and exchange of goods. However, when trust is low, businesses hesitate, and the economy either stagnates or declines. High-trust societies like Norway have advanced financial systems, where credit and financial instruments are more developed, compared with lower-trust societies such as Kenya and India, where many financial instruments and capital markets to raise finances are unavailable. Therefore, public policymakers must seek ways to increase trust in their local economies by forming policies conducive to business transactions.

    The real-estate sector in Tamilnadu: a fit case for the use of AI

    Tamil Nadu is India’s second-largest economy and its most industrialised and urbanised state. Real estate is an economic growth engine and a prime mover of monetary transactions, and it is a prime financial asset for Tamils across many social strata. However, real estate in Tamil Nadu is prone to corruption at many levels. One particularly widespread method is the forgery of land registration documents, which has resulted in a lack of trust among investors at all levels in Tamil Nadu.

    To address this lack of trust, we can use technology tools to empower the public and create an environment of accountability, resulting in greater confidence. Machine learning provides algorithms to detect these forgeries and prevent land grabbing; tools such as identity analysis, document analysis, and transaction-pattern analysis can provide more accountability. Beyond these, machine learning offers many methods, or combinations of methods, that can be used. One advanced approach uses transformer-based models, the foundation for language models such as BERT and the Generative Pre-trained Transformer (GPT) family, for text-based applications. A large language model could be trained on the original documents to form a baseline against which records are frequently checked for forgeries, and documents can be encoded as vectors to detect semantic anomalies between different versions of a document.
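
    A minimal sketch of the text-side idea, assuming the open-source sentence-transformers library; the model name, deed strings, and threshold below are illustrative placeholders, not a production pipeline:

    ```python
    # Encode registration documents as vectors and flag semantic outliers.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

    # Baseline corpus of verified registration documents (placeholder text).
    verified_deeds = [
        "Sale deed registered at SRO Chennai South, survey no. 112/3 ...",
        "Sale deed registered at SRO Madurai North, survey no. 45/1 ...",
    ]
    suspect_deed = "Sale deed registered at SRO Chennai Sowth, survey no. 112/3 ..."

    baseline = model.encode(verified_deeds, normalize_embeddings=True)
    query = model.encode([suspect_deed], normalize_embeddings=True)[0]

    # Cosine similarity to the nearest verified document; a semantic outlier
    # scores low and is flagged for manual scrutiny, not auto-judged a forgery.
    similarity = float(np.max(baseline @ query))
    if similarity < 0.8:  # threshold chosen for illustration only
        print(f"Semantic anomaly (max similarity = {similarity:.2f}); flag for review")
    ```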

    Once a forgery is detected, the case can be automatically forwarded to civil magistrates or the pertinent authorities. Additionally, software repository sites now allow the public to be informed of, and notice, any change in a record’s status or activity. Customised public repositories modelled on GitHub could create immense value for Tamil Nadu’s Department of Revenue, creating accountability, increasing productivity, and reducing workload; public repositories displaying land-transaction activity would inform the public of such forgeries, creating an environment of greater accountability and trust. Computer vision algorithms, such as convolutional neural networks combined with BERT, offer another method: validating signatures, detecting document tampering, and checking time-frames to flag forgeries. This can be done by training on original documents and then checking documents over which there is reasonable doubt of forgery.
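
    The computer-vision side could look like the following sketch, which uses a pretrained ResNet-18 from torchvision as a generic feature extractor to compare a suspect signature against a registered reference; the file paths and the 0.85 threshold are hypothetical assumptions:

    ```python
    # Compare signature crops via features from a pretrained CNN.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the classifier; keep 512-d features
    backbone.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(path: str) -> torch.Tensor:
        """Return an L2-normalised feature vector for a signature image."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            feat = backbone(img).squeeze(0)
        return feat / feat.norm()

    reference = embed("registered_signature.png")  # hypothetical file
    suspect = embed("suspect_signature.png")       # hypothetical file
    similarity = float(reference @ suspect)

    # Low similarity flags the deed for human review, not an automatic verdict.
    if similarity < 0.85:  # illustrative threshold
        print(f"Possible forgery (similarity = {similarity:.2f}); escalate")
    ```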

    Another primary concern in Tamil Nadu has been people in positions of power in the government, or close to financial oversight, who are more prone to corruption. Graph neural networks, which map individuals, their connections, and their economic transactions in a network, can flag which individuals are more likely to be involved in corruption. Corruption can also be reduced by removing personal discretion from processes: machine learning can automate the tasks and documents involved in land registration, and digitisation itself helps. Large language models can further be used as classifiers and released to the public to keep the Tamil Nadu Government’s spending accountable, reducing the scope for the private capture of government money. A further central area of corruption is tendering, the bidding process for government contracts in Tamil Nadu such as public development works and engineering projects. The tender process can be made more public, and machine learning algorithms can check whether norms, contracts, and procedures are followed when bids are awarded. To save wasteful expenditure, algorithms can check whether objective conditions are met, with any deviations flagged and placed in the public domain; given any suspicion, the public can file a PIL in Tamil Nadu’s court system.
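
    As a simplified stand-in for the graph-based idea (plain centrality analysis with networkx rather than a trained graph neural network; all names and records below are hypothetical):

    ```python
    # Score officials by how central they are in a network of approvals
    # and payments; unusually central nodes warrant closer audit.
    import networkx as nx

    G = nx.DiGraph()
    records = [  # (payer/applicant, recipient, transaction value in INR)
        ("Applicant_A", "Official_X", 50_000),
        ("Applicant_B", "Official_X", 75_000),
        ("Applicant_C", "Official_Y", 20_000),
        ("Official_X", "Contractor_Z", 90_000),
    ]
    for src, dst, value in records:
        G.add_edge(src, dst, weight=value)

    # Betweenness centrality (unweighted here) approximates how much of the
    # flow of approvals passes through a node; it prioritises audits, it does
    # not by itself establish wrongdoing.
    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
        print(f"{node}: centrality = {score:.3f}")
    ```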

    With more machine learning tools deployed as part of Tamil Nadu’s state machinery, releasing information to the public and creating an environment of greater accountability, we can conclude that corruption can be significantly reduced.

    References:

    1. Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach.

    2. Bau, D., Elhussein, M., Ford, J. B., Nwanganga, H., & Sühr, T. (n.d.). Governance of AI models. Managing AI risks. https://managing-ai-risks.com/

    3. U.S. Department of State (2021). 2021 Country Reports on Human Rights Practices: India. https://www.state.gov/reports/2021-country-reports-on-human-rights-practices/india/

    4. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (pp. 4171-4186). https://arxiv.org/abs/1810.04805

    5. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8). https://openai.com/blog/better-language-models/

    6. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI blog. https://openai.com/blog/language-unsupervised/

    7. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/pdf/2212.08073.pdf; https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback

    8. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback (RLHF). arXiv preprint arXiv:2203.02155. https://arxiv.org/abs/2203.02155

    Feature Image: modernghana.com

  • Is Singularity here?

    Is Singularity here?

    One of the most influential figures in the field of AI, Ray Kurzweil, has famously predicted that the singularity will happen by 2045. Kurzweil’s prediction is based on his observation of exponential growth in technological advancements and the concept of “technological singularity” proposed by mathematician Vernor Vinge.

    The term Singularity alludes to the moment in which artificial intelligence (AI) becomes indistinguishable from human intelligence. Ray Kurzweil, one of AI’s fathers and top apologists, predicted in 1999 that Singularity was approaching (Kurzweil, 2005). In 2011, he even provided a date for that momentous occasion: 2045 (Grossman, 2011). However, in a book in progress, initially estimated to be released in 2022 and then in 2024, he announces the arrival of Singularity for a much closer date: 2029 (Kurzweil, 2024). Last June, though, a report by The New York Times argued that Silicon Valley was confronted by the idea that Singularity had already arrived (Streitfeld, 2023). Shortly after that report, in September 2023, OpenAI announced that ChatGPT could now “see, hear and speak”. That implied that generative artificial intelligence, meaning algorithms that can be used to create content, was speeding up.

    Is, thus, the most decisive moment in the history of humankind materialising before our eyes? It is difficult to tell: despite the precise dates Kurzweil offers, Singularity won’t be a single, noticeable event. It will not be a discovery-of-America kind of thing. On the contrary, as Kevin Kelly argues, AI’s very ubiquity allows its advances to stay hidden. Silently, its incorporation into a network of billions of users, its absorption of unlimited amounts of information, and its ability to teach itself will make it grow by leaps and bounds. And suddenly, it will have arrived (Kelly, 2017).

    The grain of wheat and the chessboard

    What really matters, though, is the gigantic gap that will begin to open after its arrival. Locked in its biological prison, human intelligence will remain static at the point it has reached, while AI keeps advancing at exponential speed. The human brain has a limited memory capacity and a slow information-processing speed of about 10 hertz (Cordeiro, 2017). AI, for its part, will continue to double its capacity over short periods of time. This is reminiscent of the symbolic tale of the grain of wheat and the chessboard, set in India. According to the story, if we place one grain of wheat on the first square of the chessboard, two on the second, four on the third, and keep doubling until square 64, the total number of grains on the board would exceed 18 quintillion (IntoMath). The same will happen with the advance of AI.
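
    The arithmetic behind the tale is a simple geometric series:

    $$\sum_{k=0}^{63} 2^{k} = 2^{64} - 1 = 18{,}446{,}744{,}073{,}709{,}551{,}615 \approx 1.8 \times 10^{19}$$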

    The initial doublings, of course, will not be all that impressive; two to four, or four to eight, won’t say much. However, according to Ray Kurzweil, the moment of transcendence will come 15 years after Singularity itself, when the explosion of non-human intelligence will have become overwhelming (Kurzweil, 2005). And that will be only the very beginning: centuries of progress could materialise in years or even months. By the same token, centuries of regression in the relevance of the human race could also occur in years or even months.

    Humans equaling chickens

    As Yuval Noah Harari points out, the two great attributes that separate Homo sapiens from other animal species are intelligence and the flow of consciousness. While the first has allowed humans to become the owners of the planet, the second gives meaning to human life, translating into a complex interweaving of memories, experiences, sensations, sensitivities, and aspirations: the vital expressions of a sophisticated mind. According to Harari, though, human intelligence will be utterly negligible compared to the levels AI will reach, and the flow of consciousness will be of capital irrelevance in the face of algorithms’ ability to penetrate the confines of the universe. Not in vain, in his terms, human beings will be to AI the equivalent of what chickens are to human beings (Harari, 2016).

    Periodically, humanity goes through transitional phases of immense historical significance that shake everything in their path. During these, values, beliefs and certainties are eroded to their foundations and replaced by new ones. All great civilisations have had their own experiences in this regard. In the case of the Western world, there have been three significant periods of this kind in the last six hundred years: the Renaissance of the 15th and 16th centuries, the Enlightenment of the 18th, and Modernism, which began at the end of the 19th century and reached its peak in the 20th.

    Renaissance, Enlightenment and Modernism

    The Renaissance is understood as a broad-spectrum movement that led to a new conception of the human being, making him the measure of all things. At the same time, it expressed a significant leap in scientific matters: beyond great advances in several areas, the Earth ceased to be seen as the centre of the universe. The Enlightenment placed reason as the defining element of society, not only in terms of the legitimacy of political power but also as the source of liberal ideals such as freedom, progress, and tolerance. It was, concurrently, the period in which the notion of harmony was projected onto all orders, including the understanding of the universe, and the scientific method came to rest on verification and evidence. The Enlightenment represented a new milestone in the self-gratifying vision human beings had of themselves.

    Modernism, understood as a movement of movements, overturned prevailing paradigms in almost every area of existence. Among its numerous expressions were abstract art in its multiple variants, an introspective narrative that gave free rein to the stream of consciousness, psychoanalysis, and the theatre of the absurd. In sum, reason and harmony were turned upside down at every step. Following its own dynamic while feeding back into the former, science toppled the pillars of certainty, including the conception of the universe built by Newton during the Enlightenment. The conventional notions of time and space lost all meaning under the theory of Relativity while, going even further, quantum physics made the universe a place dominated by randomness. Unlike the previous two periods of great change, Modernism eroded to its bones the self-gratifying vision human beings had of themselves.

    The end of human centrality

    Renaissance, Enlightenment and Modernism unleashed and symbolised new ways of perceiving the human being and the surrounding universe. Each of these movements placed humanity before new levels of consciousness (including the subconscious, in the case of Modernism). In each of them, humans could feel more or less valued, more secure or insecure, with respect to their own condition and their position in relation to the universe. However, a fundamental element was never altered: humans were always the ones studying themselves and their surroundings. Even while questioning their nature and motives, they reaffirmed their centrality on the planet. As had been established since the Renaissance, humans remained the measure of all Earthly things.

    Singularity, however, is called to destroy that human centrality in a radical, dramatic, and irreversible way. As a result, human beings will not only confront their obsolescence and irrelevance but will embark on the path towards becoming the equals of chickens. Everything previously experienced in the march of human development, including the three groundbreaking periods above, will pale drastically by comparison.

    The countdown towards the end

    We are, thus, within the countdown towards the henhouse grounds. Or, worse still, towards the destruction of the human race itself. That is what Stephen Hawking, one of the most outstanding scientists of our time, believed would result from the advent of AI’s dominance. It is also what hundreds of top-level scientists and CEOs of high-tech companies expressed when, in May 2023, they signed an open letter warning of the risk to human subsistence posed by an uncontrolled AI. For them, the risk this technology poses to humanity is on a par with those of a nuclear war or a devastating pandemic. Furthermore, at a “summit” of bosses of large corporations held at Yale University in mid-June 2023, 42 percent indicated that AI could destroy humanity within five to ten years (Egan, 2023).


    In the short to medium term, although at the cost of massive and increasing unemployment, AI will spur gigantic advances in multiple fields. Inevitably, though, at some point this superior intelligence will escape human control and pursue its own ends. This may happen if some interested hand frees it from the “jail” imposed by its programmers. The natural culprits would come from what Harari labels the community of experts, many of whose members believe that if humans can no longer process the overwhelming volumes of information available, the logical solution is to pass the commanding torch to AI (Harari, 2016). The followers of the so-called Transhumanist Party in the United States are a perfect example: they aspire to have a robot as President of that country within the next decade (Cordeiro, 2017). However, AI might also free itself of human constraints without any external help; along the road, its own self-learning process would certainly allow it. One way or the other, when this happens, humanity will be doomed.

    As a species, humans do not seem to have much of an instinct for self-preservation. If nuclear war or climate change doesn’t get rid of us, AI will probably take care of it. The apparently imminent arrival of Singularity should thus be seen with frightful eyes.

    References

    Cordeiro, José Luis (2017). “En 2045 asistiremos a la muerte de la muerte”. Conversando con Gustavo Núñez, AECOC, noviembre.

    Egan, Matt (2023). “42% of CEOs say AI could destroy humanity in five to ten years”, CNN Business, June 15.

    Harari, Yuval Noah (2016). Homo Deus. New York: Harper Collins.

    Grossman, Lev (2011). “2045: The Year Man Becomes Immortal”, Time, February 10.

    IntoMath. “The Wheat and the Chessboard: Exponents.”

    Kelly, Kevin (2017). The Inevitable. New York: Penguin Books.

    Kurzweil, Ray (2005). The Singularity is Near. New York: Viking Books.

    Kurzweil, Ray (2024). The Singularity is Nearer. New York: Penguin Random House.

    Streitfeld, David (2023). “Silicon Valley Confronts the Idea that Singularity is Here”, The New York Times, June 11.

    Feature Image Credit: Technological Singularity https://kardashev.fandom.com

    Text Image: https://ts2.space

  • The Geopolitical Consolidation of Artificial Intelligence

    The Geopolitical Consolidation of Artificial Intelligence

    Key Points

    • IT hardware and semiconductor manufacturing have become strategically important, critical geopolitical tools of the dominant powers. Ukraine-war-related sanctions and Wassenaar Arrangement regulations have been invoked to ban Russia from importing or acquiring electronic components over 25 MHz.
    • Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery.
    • Taiwan, the USA, South Korea, and the Netherlands dominate global semiconductor manufacturing and its supply chain. Taiwan leads the global market, with a 60% share in 2021; a single Taiwanese company, TSMC (Taiwan Semiconductor Manufacturing Co), the world’s largest foundry, alone accounted for 54% of total global foundry revenue.
    • China controls two-thirds of all silicon production in the world.
    • Monopolisation of semiconductor supply by a singular geopolitical bloc poses critical challenges for the future of Artificial Intelligence (AI), exacerbating the strategic and innovation bottlenecks for developing countries like India.
    • Developing a competitive advantage over existing leaders would require not just technical breakthroughs but also some radical policy choices and long-term persistence.
    • India should double down on research programmes in non-silicon-based computing with national urgency, instead of pursuing a catch-up strategy.

    Russia was recently restricted, under categories 3 to 9 of the Wassenaar Arrangement, from purchasing any electronic components over 25 MHz from Taiwanese companies; that covers pretty much all modern electronics. Yet the tangibles of these sanctions must not deceive us into overlooking the wider impact that hardware access and its control have on AI policies and software-based workflows the world over. As Artificial Intelligence technologies reach a more advanced stage, the capacity to fabricate high-performance computing resources, i.e., semiconductor production, becomes key strategic leverage in international affairs.

    Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery. In fact, most of the supply of semiconductors relies on a single country: Taiwan. The Taiwan Semiconductor Manufacturing Corporation (TSMC) manufactures Google’s Tensor Processing Unit (TPU), Cerebras’s Wafer Scale Engine (WSE), as well as Nvidia’s A100 processor. The following table provides a more detailed¹ assessment:

    | Hardware Type | AI Accelerator/Product Name | Manufacturing Country |
    |---|---|---|
    | Application-Specific Integrated Circuits (ASICs) | Huawei Ascend 910 | Taiwan |
    | | Cerebras WSE | Taiwan |
    | | Google TPUs | Taiwan |
    | | Intel Habana | Taiwan |
    | | Tesla FSD | USA |
    | | Qualcomm Cloud AI 100 | Taiwan |
    | | IBM TrueNorth | South Korea |
    | | AWS Inferentia | Taiwan |
    | | AWS Trainium | Taiwan |
    | | Apple A14 Bionic | Taiwan |
    | Graphics Processing Units (GPUs) | AMD Radeon | Taiwan |
    | | Nvidia A100 | Taiwan |
    | Field-Programmable Gate Arrays (FPGAs) | Intel Agilex | USA |
    | | Xilinx Virtex | Taiwan |
    | | Xilinx Alveo | Taiwan |
    | | AWS EC2 F1 | Taiwan |

    As the table shows, computing hardware is divided in such a way that the largest shareholders also happen to form a singular geopolitical bloc vis-a-vis China, which further shapes the evolution of territorial contests in the South China Sea. This monopolisation of semiconductor supply by a single geopolitical bloc poses critical challenges for the future of Artificial Intelligence, especially exacerbating the strategic and innovation bottlenecks for developing countries like India. Despite the transistor having been invented in 1947, the year of her independence, India finds herself in the unenviable position of having zero commercial semiconductor manufacturing capacity after all these years, even as her office-bearers continually promise leadership in the fourth industrial revolution.

    Bottlenecking Global AI Research

    There are two aspects to developing these AI accelerators: designing the specifications, and fabricating the chips. AI research firms first design chips that optimise hardware performance for specific machine learning calculations. Semiconductor firms, each operating in a specialised niche of fabrication, then manufacture those chips, raising the performance of computing hardware by packing ever more transistors onto pieces of silicon. This combination of specific design choices and advanced fabrication capability – not the amount of data a population generates and localises – forms the bedrock that will decide the future of AI.
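    To see why design choices matter so much, consider the operation at the heart of most machine learning workloads: matrix multiplication. The toy Python model below is a purely illustrative sketch under idealised assumptions (no memory stalls, perfect utilisation; it describes no real chip’s microarchitecture). It contrasts a single sequential multiply-accumulate (MAC) unit with an N×N systolic array, the style of design Google’s TPU popularised:

    ```python
    # Toy cost model: why accelerator design matters for ML workloads.
    # An N x N matrix multiplication needs N^3 multiply-accumulate (MAC)
    # operations. One MAC unit doing them sequentially takes ~N^3 cycles;
    # an N x N systolic array streams data through N^2 MAC units and,
    # under idealised assumptions, drains its results in roughly 3N - 2
    # cycles (fill + compute + drain).

    def sequential_cycles(n: int) -> int:
        # A single MAC unit, one operation per cycle.
        return n ** 3

    def systolic_cycles(n: int) -> int:
        # Idealised N x N systolic array latency for an N x N matmul.
        return 3 * n - 2

    for n in (8, 256, 1024):
        seq, sys_cyc = sequential_cycles(n), systolic_cycles(n)
        print(f"N={n:5d}: sequential ~{seq:>13,} cycles, "
              f"systolic ~{sys_cyc:>5,} cycles (~{seq / sys_cyc:,.0f}x fewer)")
    ```

    The orders-of-magnitude gap in cycle counts is what a well-designed accelerator buys – and why the ability to fabricate such designs at advanced process nodes is so strategically valuable.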

    However, owing to the very high fixed costs of semiconductor manufacturing, AI research has had to focus on data and algorithms, so innovations in algorithmic efficiency and model scaling must compensate for the lack of equivalent gains on the hardware side. The aggressive consolidation and costs of hardware fabrication mean that AI research firms are forced to outsource their fabrication requirements. In fact, as per DARPA,[2] because of the high costs of getting their designs fabricated, AI hardware startups do not even attract much private capital: merely 3% of all venture funding in AI/ML between 2017 and 2021 went to startups working on AI hardware.

    But TSMC’s resources are limited and not everyone can afford them. To get TSMC’s services, companies across the globe have to compete with the likes of Google and Nvidia, so demand-side competition pushes prices even higher. Consequently, only the best and the biggest work with TSMC, and the rest have to settle for its competitors. This has allowed a single company to turn into a gatekeeper of AI hardware R&D. And as the recent sanctions on Russia demonstrate, it is now effectively the pawn that has turned wazir in a tense geopolitical endgame.

    Taiwan’s AI policy also reflects this dominance in ICT and semiconductors, aiming to develop “world-leading AI-on-Device solutions that create a niche market and… (make Taiwan) an important partner in the value chain of global intelligent systems”.[3] This foundation – strong control over the supply of AI hardware, coupled with being ranked #1 in the Global Open Data Index – not only gives Taiwan negotiating leverage in geopolitical competition, but also allows it to build its AI policy around hardware-software collaboration. Most other countries’ AI policy and discourse, by contrast, revolve around managing the adoption and effects of AI, not around shaping the trajectory of its engineering and conceptual development the way countries with a hardware advantage can.

    Now, to be fair, R&D is a time-consuming, long-term activity with a high chance of failure. Research focus thus naturally shifts towards low-hanging fruit: projects that can be completed in the short term, before the commissioning bureaucrats are rotated. That is why we cannot have a nationalised AGI research group – nobody will commit to a 15-20-year enterprise when there are promotions and election cycles to worry about. This applies to funding for all bleeding-edge technology research everywhere: quantum communications will be prioritised over quantum computing, building ever-larger datasets over more intelligent algorithms, and silicon-based electronics over research into newer computing substrates and storage – because those choices are friendlier to short-term outcome pressures, and bureaucracies are not exactly known to be risk-taking institutions.

    Options for India

    While China controls two-thirds of all silicon production in the world and wants to control the whole of Taiwan too (and, with it, TSMC and its 54% share of logic foundries), the wider semiconductor supply chain is a little too spread out for any one actor’s comfort. The leaders mostly control a specialised niche of the supply chain: the US maintains a near-total monopoly on Electronic Design Automation (EDA) software, the Netherlands has monopolised Extreme Ultraviolet (EUV) and Argon Fluoride (ArF) lithography scanners, and Japan supplies the 300 mm wafers used to manufacture more than 99 per cent of chips today.[4] The end-to-end delivery of one chip can see it crossing international borders over 70 times.[5] Since this is a matured ecosystem, developing a competitive advantage over the existing leaders would require not just proprietary technical breakthroughs but also some radical policy choices and long-term persistence.

    Needless to say, the leaders are also able to attract and retain the highest-quality talent from across the world. We, on the other hand, have regional politicians who keep cribbing about incoming talent even from other Indian states. This, then, is the first task: to become a technology powerhouse, India must at a bare minimum be able to retain all her top talent and attract more. Perhaps, for companies in certain sectors or above a certain size, India should mandate spending at least X per cent of revenue on R&D and offer incentives to increase this share. That would revamp everything from recruitment and retention to business processes and industry-academia collaboration, and in the long run prove a far more socioeconomically useful instrument than the CSR regulation.

    It should also not escape anyone that human civilisation, with all its genius and promises of man-machine symbiosis, has managed to put all its eggs in a single basket that sits under the constant threat of Chinese invasion. It is thus in the interest of the entire computing industry to build geographical resiliency, diversity, and redundancy into present-day semiconductor manufacturing capacity. We do not yet have the navy we need, but perhaps, in a diplomatic-naval recognition of Taiwan’s independence from China, the Quad could negotiate arrangements for an uninterrupted semiconductor supply in case of an invasion.

    Since R&D in AI hardware is essential for future breakthroughs in machine intelligence, yet its production is concentrated mostly in just one small island country, it behoves countries like India to look for ways to undercut the existing paradigm of computing hardware (by pivoting R&D towards DNA computing and the like) instead of only pursuing a catch-up strategy. Current developments are unlikely to solve India’s integrated-circuit blues anytime soon. In parallel – and I would emphatically recommend this – India should take a step back from all the madness and double down on research programs in non-silicon-based computing with national urgency. A hybrid approach to computing machinery could also resolve some of the bottlenecks that AI research faces owing to the dependencies and limitations of present-day hardware.

    As our neighbouring adversary Mr Xi says, core technologies cannot be acquired by asking, buying, or begging. In the same spirit, even if it might ruffle some feathers, a discerning re-examination of the present intellectual property regime could prove very useful both for developing such foundational technologies and related infrastructure in India and for carving out an Indian niche for future technology leadership.

    References:

    1. The Other AI Hardware Problem: What TSMC means for AI Compute. Available at https://semiliterate.substack.com/p/the-other-ai-hardware-problem

    2. Leef, S. (2019). Automatic Implementation of Secure Silicon. In ACM Great Lakes Symposium on VLSI (Vol. 3).

    3. AI Taiwan. Available at https://ai.taiwan.gov.tw/

    4. Khan et al. (2021). The Semiconductor Supply Chain: Assessing National Competitiveness. Center for Security and Emerging Technology.

    5. Alam et al. (2020). Globality and Complexity of the Semiconductor Ecosystem. Accenture.

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed to the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn in what is already a brutal conflict.

    The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology – offering to help uncover infiltrators (including Russian military personnel), combat misinformation, identify the dead, and reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

    Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement about the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology – which could, in the near future, be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.


    The promise of facial recognition in law enforcement and on the battlefield is increased precision: lifting the proverbial fog of war with automated targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

    But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in: we worry that, for the sake of efficiency, battlefield decisions with lethal consequences will increasingly be ‘black-boxed’ – taken by a machine whose workings and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.
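    To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how matching typically works in modern facial-recognition systems (Clearview’s actual pipeline is proprietary and not public): a neural network maps each face to an embedding vector, and a ‘match’ is simply a similarity score crossing an operator-chosen threshold. Random vectors stand in for real embeddings here.

    ```python
    # Illustrative sketch of embedding-based face matching (not any
    # vendor's real pipeline): a face becomes a vector, and a
    # life-altering "match" reduces to one similarity score compared
    # against a tunable threshold.
    import numpy as np

    rng = np.random.default_rng(0)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity in [-1, 1]; higher means the faces look more alike."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # A hypothetical watchlist of 1,000 face embeddings (512-D is common).
    watchlist = rng.normal(size=(1000, 512))
    probe = rng.normal(size=512)  # embedding of a face seen by a camera

    scores = np.array([cosine_similarity(probe, w) for w in watchlist])
    best = int(scores.argmax())

    # The threshold is the operator's choice: lower it and false matches
    # rise; raise it and genuine matches are missed. That trade-off is
    # invisible to whoever acts on the output.
    THRESHOLD = 0.30
    if scores[best] >= THRESHOLD:
        print(f"'Match': watchlist entry {best} (score {scores[best]:.2f})")
    else:
        print(f"No match (best score {scores[best]:.2f})")
    ```

    Everything downstream – an arrest, a target designation, a refusal of passage – hinges on where that single threshold is set, which is precisely the opacity we are worried about.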

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment, but a quaint one: prudence and safeguards such as these are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

    Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that collect the sensitive biometric data essential to facial recognition, the data subject knows it is happening, and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data protection agencies in Italy and France.

    If privately owned facial recognition technologies were used to identify Ukrainian citizens within the EU, or in border zones, in order to offer them some form of protective status, a grey area would open up between military and civilian use within the EU itself, since any such system would have to be run on civilian populations inside the EU. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then it would pose the question of how a single company can be entrusted with civil data that it can easily repurpose for military use. That is, in fact, what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media.

    Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already been reported in the US, where members of the New York Police Department reportedly used Clearview’s tool to circumvent the department’s data protection and privacy rules, and installed Clearview’s app on private devices in violation of NYPD policy.

    This is a particular risk with the roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many argue it should be, it will carry into the EU the use of Clearview’s AI as an established practice for military, and potentially civilian, use – each initially conceived without malice or intent of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement, particularly when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so while ensuring the rule of law and the protection of citizens.

    This article was published earlier in openDemocracy, and is republished under Creative Commons Licence

    Feature Image Credit: www.businessinsider.in