Category: Artificial Intelligence & Robotics

  • UARCs: The American Universities that Produce Warfighters

    UARCs: The American Universities that Produce Warfighters

    America’s military-industrial complex (MIC) has grown enormously powerful and is fully integrated with the US Department of Defense, furthering the government’s global influence and control. Many American universities have become research centres for the MIC. Similarly, American companies run research programs in leading universities and educational institutions across the world, for example in a few IITs in India. In the article below, Dr Sylvia J. Martin explores the role of University Affiliated Research Centers (UARCs) in the U.S. military-industrial complex. UARCs are institutions embedded within universities, designed to conduct research for the Department of Defense (DoD) and other military agencies. The article highlights how UARCs blur the lines between academic research and military objectives, raising ethical questions about the use of university resources for war-related activities. These centres focus on key areas such as nanotechnology, immersive simulations, and weapons systems. For example, the University of Southern California’s Institute for Creative Technologies (ICT) was created to develop immersive training simulations for soldiers, drawing from both science and entertainment, while universities like Johns Hopkins and MIT are involved in anti-submarine warfare and soldier mobility technologies. Sylvia Martin critically examines the consequences of these relationships, particularly their impact on academic freedom and the potential prioritization of military needs over civilian research. She flags the resistance faced by some universities, like the University of Hawai’i, where concerns about militarisation, environmental damage, and indigenous rights sparked protests against their UARCs. Because UARCs are substantially funded, they become a major source of influence on their universities. Universities, traditionally seen as centres of open, unbiased inquiry, may become aligned with national security objectives, further entrenching the MIC within academia.

    This article was published earlier in Monthly Review.

    TPF Editorial Team

    UARCs: The American Universities that Produce Warfighters

    Dr Sylvia J Martin

    Students throughout the United States have called for their universities to disclose and divest from defense companies with ties to Israel in its onslaught on Gaza. While scholars and journalists have traced ties between academic institutions and U.S. defense companies, it is important to point out that relations between universities and the U.S. military are not always mediated by the corporate industrial sector.1 American universities and the U.S. military are also linked directly and organizationally, as seen with what the Department of Defense (DoD) calls “University Affiliated Research Centers (UARCs).” UARCs are strategic programs that the DoD has established at fifteen different universities around the country to sponsor research and development in what the Pentagon terms “essential engineering and technology capabilities.”2 Established in 1996 by the Under Secretary of Defense for Research and Engineering, UARCs function as nonprofit research organizations at designated universities, intended to ensure that those capabilities are available on demand to its military agencies. While there is a long history of scientific and engineering collaboration between universities and the U.S. government dating back to the Second World War, UARCs reveal the breadth and depth of today’s military-university complex, illustrating how militarized knowledge production emerges from within the academy and without corporate involvement. UARCs demonstrate one of the less visible yet vital ways in which these students’ institutions help perpetuate the cycle of U.S.-led wars and empire-building.

    The University of Southern California (USC) has been one of the most prominent campuses for student protests against Israel’s campaign in Gaza, with students demanding that their university “fully disclose and divest its finances and endowment from companies and institutions that profit from Israeli apartheid, genocide, and occupation in Palestine, including the US Military and weapons manufacturing.”3 USC also happens to be home to one of the nation’s fifteen UARCs, the Institute for Creative Technologies (ICT), which describes itself as a “trusted advisor to the DoD.”4 ICT is not mentioned in the students’ statement, yet the institute—and UARCs at other universities—are one of the many moving parts of the U.S. war machine that are nestled within higher education institutions, and a manifestation of the Pentagon’s “mission creep” that encompasses the arts as well as the sciences.5

    Institute for Creative Technologies – military.usc.edu

    Significantly, ICT’s remit to develop dual-use technologies (which claim to provide society-wide “solutions”) entails nurturing what the Institute refers to as “warfighters” for the battlefields of the future, and, in doing so, increasing warfighters’ “lethality.”6 Established by the DoD in 1999 to pursue advanced modelling, simulation, and training, ICT’s basic and applied research produces prototypes, technologies, and know-how that have been deployed for the U.S. Army, Navy, and Marine Corps. From artificial intelligence-driven virtual humans deployed to teach military leadership skills to futuristic 3D spatial visualization and terrain capture to prepare these military agencies for their operational environments, ICT specializes in immersive training programs for “mission rehearsal,” as well as tools that contribute to the digital innovations of global warmaking.7 Technologies and programs developed at ICT were used by U.S. troops in the U.S.-led Global War on Terror. One such program is UrbanSim, a virtual training application initiated in 2006 and designed to improve army commanders’ skills for conducting counterinsurgency operations in Iraq and Afghanistan, delivering fictional scenarios through a gaming experience.8 From all of the warfighter preparation that USC’s Institute researches, develops, prototypes, and deploys, ICT boasts of generating over two thousand academic peer-reviewed publications.

    I encountered ICT’s work while conducting anthropological research on the relationship between the U.S. military and the media entertainment industry in Los Angeles.9  The Institute is located not on the university’s main University Park campus but by the coast, in Playa Vista, alongside offices for Google and Hulu. Although ICT is an approximately thirty-minute drive from USC’s main campus, this hub for U.S. warfighter lethality was enabled by an interdisciplinary collaboration with what was then called the School of Cinema-Television and the Annenberg School for Communications, and it remains entrenched within USC’s academic ecosystem, designated as a unit of its Viterbi School of Engineering, which is located on the main campus.10  Given the presence and power of UARCs at U.S. universities, we can reasonably ask: What is the difference between West Point Military Academy and USC, a supposedly civilian university? The answer, it seems, is not a difference in kind, but in degree. Indeed, universities with UARCs appear to be veritable military academies.

    What Are UARCs?

    UARCs are similar to federally funded research centres such as the RAND Corporation; however, UARCs are required to be situated within a university, which can be public or private.11 The existence of UARCs is not classified information, but their goals, projects, and implications may not be fully evident to the student bodies or university communities in which they are embedded, and there are differing levels of transparency among them about their funding. DoD UARCs “receive sole source funds, on average, exceeding $6 million annually,” and may receive other funding in addition to that from their primary military or federal sponsor, which may also differ among the fifteen UARCs.12 In 2021, funding from federal sources for UARCs ranged “from as much as $831 million for the Johns Hopkins University Applied Physics Lab to $5 million for the University of Alaska Geophysical Detection of Nuclear Proliferation.”13 Individual UARCs are generally created after the DoD’s Under Secretary of Defense for Research and Engineering initiates a selection process for the proposed sponsor, and typically are reviewed by their primary sponsor every five years for renewed contracts.14 A few UARCs, such as Johns Hopkins University’s Applied Physics Lab and the University of Texas at Austin’s Applied Research Lab, originated during the Second World War for wartime purposes but were designated as UARCs in 1996, the year the DoD formalized that status.15

    UARCs are supposed to provide their sponsoring agency and, ultimately, the DoD, access to what they deem “core competencies,” such as MIT’s development of nanotechnology systems for the “mobility of the soldier in the battlespace” and the development of anti-submarine warfare and ballistic and guided missile systems at Johns Hopkins University.16  Significantly, UARCs are mandated to maintain a close and enduring relationship with their military or federal sponsor, such as that of ICT with the U.S. Army. These close relationships are intended to facilitate the UARCs’ “in-depth knowledge of the agency’s research needs…access to sensitive information, and the ability to respond quickly to emerging research areas.”17  Such an intimate partnership for institutions of higher learning with these agencies means that the line between academic and military research is (further) blurred. With the interdisciplinarity of researchers and the integration of PhD students (and even undergraduate interns) into UARC operations such as USC’s ICT, the question of whether the needs of the DoD are prioritized over those of an ostensibly civilian institute of higher learning practically becomes moot: the entanglement is naturalized by a national security logic.

    Table 1 UARCs: The American Universities that Produce Warfighters

    Primary Sponsor | University | UARC | Date of Designation (*original year established)
    Army | University of Southern California | Institute for Creative Technologies | 1999
    Army | Georgia Institute of Technology | Georgia Tech Research Institute | 1996 (*1995)
    Army | Massachusetts Institute of Technology | Institute for Soldier Nanotechnologies | 2002
    Army | University of California, Santa Barbara | Institute for Collaborative Biotechnologies | 2003
    Navy | Johns Hopkins University | Applied Physics Laboratory | 1996 (*1942)
    Navy | Pennsylvania State University | Applied Research Laboratory | 1996 (*1945)
    Navy | University of Texas at Austin | Applied Research Laboratories | 1996 (*1945)
    Navy | University of Washington | Applied Physics Laboratory | 1996 (*1943)
    Navy | University of Hawai’i | Applied Research Laboratory | 2004
    Missile Defense Agency | Utah State University | Space Dynamics Laboratory | 1996
    Office of the Under Secretary of Defense for Intelligence and Security | University of Maryland, College Park | Applied Research Laboratory for Intelligence and Security | 2017 (*2003)
    Under Secretary of Defense for Research and Engineering | Stevens Institute of Technology | Systems Engineering Research Center | 2008
    U.S. Strategic Command | University of Nebraska | National Strategic Research Institute | 2012
    Department of the Assistant Secretary of Defense (Threat Reduction and Control) | University of Alaska Fairbanks | Geophysical Detection of Nuclear Proliferation | 2018
    Air Force | Howard University | Research Institute for Tactical Autonomy | 2023
    Sources: Joan Fuller, “Strategic Outreach—University Affiliated Research Centers,” Office of the Under Secretary of Defense (Research and Engineering), June 2021, 4; C. Todd Lopez, “Howard University Will Be Lead Institution for New Research Center,” U.S. Department of Defense News, January 23, 2023.

    A Closer Look

    The UARC at USC differs from other UARCs in that, from its inception, the Institute explicitly targeted the artistic and humanities-driven resources of the university. ICT opened near the Los Angeles International Airport, in Marina del Rey, with a $45 million grant, tasked with developing a range of immersive technologies. According to the DoD, the core competencies that ICT offers include immersion, scenario generation, computer graphics, entertainment theory, and simulation technologies; these competencies were sought as the DoD had decided that it needed to create more visually and narratively compelling and interactive learning environments for the gaming generation.18 USC was selected by the DoD not just because of the university’s work in science and engineering but also for its close connections to the media entertainment industry, which USC fosters from its renowned School of Cinematic Arts (formerly the School of Cinema-Television), thereby providing the military access to a wide range of storytelling talents, from screenwriting to animation. ICT later moved to nearby Playa Vista, part of Silicon Beach, where the military presence also increased; by April 2016, the U.S. Army Research Lab West opened next door to ICT as another collaborative partner, further integrating the university into military work.19 This university-military partnership results in “prototypes that successfully transition into the hands of warfighters”; UARCs such as ICT are thus rendered a crucial link in what graduate student worker Isabel Kain from the Researchers Against War collective calls the “military supply chain.”20

    In his account of the Institute’s origin story, one of ICT’s founders touted USC as “neutral ground” from which the U.S. Army could help innovate military training.21 Yet universities abandon any pretence to neutrality once they are assigned UARCs, as opponents at the University of Hawai’i at Mānoa (UH Mānoa) asserted when a U.S. Navy-sponsored UARC was designated for their campus in 2004. UH Mānoa faculty, students, and community members repeatedly expressed their concerns about the ethics of military research conducted on their campus, including the threat of removing “researchers’ rights to refuse Navy directives.”22 The UARC proposal at UH Mānoa arrived within a context of university community resistance to U.S. imperialism and militarism, which have inflicted structural violence on Hawaiian people, land, and waters, from violent colonization to the 1967 military testing of lethal sarin gas in a forest reserve.23 Hawai’i serves as the base of the military’s U.S. Indo-Pacific Command, where “future wars are in development,” professor Kyle Kajihiro of UH Mānoa emphasizes.24

    Writing in Mānoa Now about the proposed UARC in 2005, Leo Azambuja opined that “it seems like ideological suicide to allow the Navy to settle on campus, especially the American Navy.”25 A key player in the Indo-Pacific Command, the U.S. Navy has long had a contentious relationship with Indigenous Hawaiians, most recently with the 2021 fuel leakage from the Navy’s Red Hill fuel facility, resulting in water contamination levels that the Hawai’i State Department of Health referred to as “a humanitarian and environmental disaster.”26 Court depositions have since revealed that the Navy knew about the fuel leakage into the community’s drinking water but waited over a week to inform the public, even as people became ill, making opposition to its proposed UARC unsurprising, if not requisite.27 The bomb detonations and sonar testing that take place at the biennial international war games the U.S. Navy has hosted in Hawai’i since 1971 have also damaged precious marine life and culturally sacred ecosystems, with the sonar tests causing whales to “swim hundreds of miles, rapidly change their depth (sometimes leading to bleeding from the eyes and ears), and even beach themselves to get away from the sounds of sonar.”28 Within this context, one of the proposed UARC’s core competencies was “understanding of [the] ocean environment.”29

    In a flyer circulated by DMZ Hawaii, UH Mānoa organizers called for universities to serve society and “not be used by the military to further their war aims or to perfect ways of killing or controlling people.”30 Recalling efforts in previous decades on U.S. campuses to thwart the encroachment of military research, protestors raised questions about the UARC’s accountability and transparency regarding weapons production within the UH community. UH Mānoa’s strategic plan during the period the Navy’s UARC was proposed and executed (2002–2010) called for recognition of “our kuleana (responsibility) to honour the Indigenous people and promote social justice for Native Hawaiians” and “restoring and managing the Mānoa stream and ecosystem”—priorities that the actions of the U.S. Navy disregarded.31 The production of knowledge for naval weapons under the auspices of this public, land-grant institution undermines any pretension to neutrality the university may profess.

    Further resistance to the UARC designation was expressed by the UH Mānoa community: from April 28 to May 4, 2005, the SaveUH/StopUARC Coalition staged a six-day campus sit-in protest, and later that year, the UH Mānoa Faculty Senate voted 31–18 in favour of asking the administration to reject the UARC designation.32 According to an official statement released by UH Mānoa on January 23, 2006, testimony from opponents of the UARC at a university community meeting with the UH Regents outnumbered that of supporters, who, reflecting the neoliberal turn of universities, expressed hope that a UARC designation would advance their competitiveness in science, technology, engineering, and mathematics (STEM) and benefit the university’s ranking.33 Yet in 2007, writing in DMZ Hawaii, Kajihiro clarified that while the UH administration claimed that the proposed UARC would not accept any classified research for the first three years, “the base contract assigns ‘secret’ level classification to the entire facility, making the release of any information subject to the Navy’s approval,” raising concerns about academic freedom, despite the fanfare over STEM and rankings.34 However, the campus resistance campaign was unsuccessful, and in September 2007, the UH Regents approved the Navy UARC designation. By 2008, the U.S. Navy-sponsored Applied Research Laboratory UARC at UH Mānoa opened.

    “The Military Normal”

    UH Mānoa’s rationale for resistance raises the question: how could this university—indeed, any university—impose this military force onto its community? Are civilian universities within the United States merely an illusion, a deflection from education in the service of empire? What anthropologist Catherine Lutz called in 2009 the ethos of “the military normal” in U.S. culture toward its counterinsurgency wars in Iraq and Afghanistan—the commonsensical, even prosaic perspective on the inevitability of endless U.S.-led wars disseminated by U.S. institutions, especially mainstream media—helps explain the attitude toward this particular formalized capture of the university by the DoD.35 Defense funding has for decades permeated universities, but UARCs perpetuate the military normal by allowing the Pentagon to insert itself through research centres and institutes in the (seemingly morally neutral) name of innovation, as part of a broader neoliberal framework of universities as “engines” and “hubs,” or “anchor” institutions that offer to “leverage” their various forms of capital toward regional development in ways that often escape sustained scrutiny or critique.36 In some cases, the normalization is eased by the fact that UARCs such as ICT strive to serve civilian as well as military needs with dual-use technologies and tools. Yet with the U.S. creation of the national security state in 1947 and its pursuit of techno-nationalism since the Cold War, UARCs are direct pipelines to the intensification of U.S. empire. Some of the higher-profile virtual military instructional programs developed at ICT at USC, such as its Emergent Leader Immersive Training Environment (ELITE) system, which provides immersive role-playing to train army leaders for various situations in the field, are funnelled to explicitly military-only learning institutions such as the Army Warrant Officer School.37

    The military normal generates a sense of moral neutrality, even moral superiority. The logic of the military normal, the offer of STEM education and training, especially through providing undergraduate internships and graduate training, and of course funding, not only rationalizes the implementation of UARCs, but ennobles it. The fifteenth and most recently created UARC, at Howard University in 2023—the first such designation for one of the historically Black colleges and universities (HBCUs)—boasts STEM inclusion.38 Partnering with the U.S. Air Force, Howard University’s UARC is receiving a five-year, $90 million contract to conduct AI research and develop tactical autonomy technology. Its Research Institute for Tactical Autonomy (RITA) leads a consortium of eight other HBCUs. As with the University of Hawai’i, STEM advantages are touted by the UARC, with RITA’s reach expanding in other ways: it plans to supplement STEM education for K–12 students to “ease their path to a career in the fields of artificial intelligence, cybersecurity, tactical autonomy, and machine learning,” noting that undergraduate and graduate students will also be able to pursue fully funded research opportunities at their UARC. With the corporatization of universities, neoliberal policies prioritize STEM for practical reasons, including the pursuit of university rankings and increases in both corporate and government funding. This fits well with increased linkages to the defence sector, which offers capital, jobs, technology, and gravitas. In a critique of Howard University’s central role for the DoD through its new UARC, Erica Caines at Black Agenda Report invokes the “legacies of Black resistance” at Howard University in a call to reduce “the state’s use of HBCUs.”39 In another response to Howard’s UARC, an editorial in Black Agenda Report draws upon activist Kwame Ture’s (Stokely Carmichael’s) autobiography for an illuminating discussion of his oppositional approach to the required military training and education at Howard University during his time there.40

    With their respectability and resources, universities, through UARCs, provide ideological cover for U.S. war-making and imperialistic actions, offering up student labour at undergraduate and graduate levels in service of that cover. When nearly eight hundred U.S. military bases around the world are cited as evidence of U.S. empire and the DoD requires research facilities to be embedded within places of higher learning, it is reasonable to expect that university communities—ostensibly civilian institutions—ask questions about UARC goals and operations, and how they provide material support and institutional gravitas to these military and federal agencies.41  In the case of USC, ICT’s stated goal of enhancing warfighter lethality runs counter to current USC student efforts to strive for more equitable conditions on campus and within its larger community (for example, calls to end “land grabs,” and “targeted repression and harassment of Black, Brown and Palestinian students and their allies on and off campus”) as well as other reductions in institutional harms.42  The university’s “Minor in Resistance to Genocide”—a program pursued by USC’s discarded valedictorian Asna Tabassum—also serves as mere cover, a façade, alongside USC’s innovations for warfighter lethality.

    Many students and members of U.S. society want to connect the dots, as evident from the nationwide protests and encampments, and a push from within the academy to examine the military supply chain is intensifying. In addition to Researchers Against War members calling out the militarized research that flourishes in U.S. universities, the Hopkins Justice Collective at Johns Hopkins University recently proposed a demilitarization process to its university’s Public Interest Investment Advisory Committee that cited Johns Hopkins’s UARC, Applied Physics Lab, as being the “sole source” of DoD funding for the development and testing of AI-guided drone swarms used against Palestinians in 2021.43  Meanwhile, at UH Mānoa, the struggle continues: in February 2024, the Associated Students’ Undergraduate Senate approved a resolution requesting that the university’s Board of Regents terminate UH’s UARC contract, noting that UH’s own president is the principal investigator for a $75 million High-Performance Computer Center for the U.S. Air Force Research Laboratory that was contracted by the university’s UARC, Applied Research Laboratory.44  Researchers Against War organizing, the Hopkins Justice Collective’s proposal, the undaunted UH Mānoa students, and others help pinpoint the flows of militarized knowledge—knowledge that is developed by UARCs to strengthen warfighters from within U.S. universities, through the DoD, and to different parts of the world.45

    Notes

    1. Jake Alimahomed-Wilson et al., “Boeing University: How the California State University Became Complicit in Palestinian Genocide,” Mondoweiss, May 20, 2024; Brian Osgood, “U.S. University Ties to Weapons Contractors Under Scrutiny Amid War in Gaza,” Al Jazeera, May 13, 2024.
    2. “Collaborate with Us: University Affiliated Research Center,” DevCom Army Research Laboratory, arl.devcom.army.mil.
    3. USC Divest From Death Coalition, “Divest From Death USC News Release,” April 24, 2024.
    4. USC Institute for Creative Technologies, “ICT Overview Video,” YouTube, 2:52, December 12, 2023.
    5. Gordon Adams and Shoon Murray, Mission Creep: The Militarization of U.S. Foreign Policy? (Washington, DC: Georgetown University Press, 2014).
    6. USC Institute for Creative Technologies, “ICT Overview Video”; USC Institute for Creative Technologies, Historical Achievements: 1999–2019 (Los Angeles: University of Southern California, May 2021), ict.usc.edu.
    7. Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza,” +972 Magazine.
    8. “UrbanSim,” USC Institute for Creative Technologies.
    9. Sylvia J. Martin, “Imagineering Empire: How Hollywood and the U.S. National Security State ‘Operationalize Narrative,’” Media, Culture & Society 42, no. 3 (April 2020): 398–413.
    10. Paul Rosenbloom, “Writing the Original UARC Proposal,” USC Institute for Creative Technologies, March 11, 2024.
    11. Susannah V. Howieson, Christopher T. Clavin, and Elaine M. Sedenberg, “Federal Security Laboratory Governance Panels: Observations and Recommendations,” Institute for Defense Analyses—Science and Technology Policy Institute, Alexandria, Virginia, 2013, 4.
    12. OSD Studies and Federally Funded Research and Development Centers Management Office (FFRDC), Engagement Guide: Department of Defense University Affiliated Research Centers (UARCs) (Alexandria, Virginia: OSD Studies and FFRDC Management Office, April 2013), 5.
    13. Christopher V. Pece, “Federal Funding to University Affiliated Research Centers Totaled $1.5 Billion in FY 2021,” National Center for Science and Engineering Statistics, National Science Foundation, 2024, ncses.nsf.gov.
    14. “UARC Customer Funding Guide,” USC Institute for Creative Technologies, March 13, 2024.
    15. “Federally Funded Research and Development Centers (FFRDC) and University Affiliated Research Centers (UARC),” Department of Defense Research and Engineering Enterprise, rt.cto.mil.
    16. OSD Studies and FFRDC Management Office, Engagement Guide.
    17. Congressional Research Service, “Federally Funded Research and Development Centers (FFRDCs): Background and Issues for Congress,” April 3, 2020, 5.
    18. OSD Studies and FFRDC Management Office, Engagement Guide, 18.
    19. “Institute for Creative Technologies (ICT),” USC Military and Veterans Initiatives, military.usc.edu.
    20. USC Institute for Creative Technologies, Historical Achievements: 1999–2019, 2; Linda Dayan, “‘Starve the War Machine’: Workers at UC Santa Cruz Strike in Solidarity with Pro-Palestinian Protesters,” Haaretz, May 21, 2024.
    21. Richard David Lindholm, That’s a 40 Share!: An Insider Reveals the Origins of Many Classic TV Shows and How Television Has Evolved and Really Works (Pennsauken, New Jersey: Book Baby, 2022).
    22. Leo Azambuja, “Faculty Senate Vote Opposing UARC Preserves Freedom,” Mānoa Now, November 30, 2005.
    23. Deployment Health Support Directorate, “Fact Sheet: Deseret Test Center, Red Oak, Phase I,” Office of the Assistant Secretary of the Defense (Health Affairs), health.mil.
    24. Ray Levy Uyeda, “U.S. Military Activity in Hawai’i Harms the Environment and Erodes Native Sovereignty,” Prism Reports, July 26, 2022.
    25. Azambuja, “Faculty Senate Vote Opposing UARC Preserves Freedom.”
    26. Kyle Kajihiro, “The Militarizing of Hawai’i: Occupation, Accommodation, Resistance,” in Asian Settler Colonialism, Jonathan Y. Okamura and Candace Fujikane, eds. (Honolulu: University of Hawai’i Press, 2008), 170–94; “Hearings Officer’s Proposed Decision and Order, Findings of Fact, and Conclusions of Law,” Department of Health, State of Hawaii vs. United States Department of the Navy, no. 21-UST-EA-02 (December 27, 2021).
    27. Christina Jedra, “Red Hill Depositions Reveal More Details About What the Navy Knew About Spill,” Honolulu Civil Beat, May 31, 2023.
    28. “Does Military Sonar Kill Marine Wildlife?,” Scientific American, June 10, 2009.
    29. Joan Fuller, “Strategic Outreach—University Affiliated Research Centers,” Office of the Under Secretary of Defense (Research and Engineering), June 2021, 4.
    30. DMZ Hawaii, “Save Our University, Stop UARC,” dmzhawaii.org.
    31. University of Hawai’i at Mānoa, Strategic Plan 2002–2010: Defining Our Destiny, 8–9.
    32. Craig Gima, “UH to Sign Off on Navy Center,” Star Bulletin, May 13, 2008.
    33. University of Hawai’i at Mānoa, “Advocates and Opponents of the Proposed UARC Contract Present Their Case to the UH Board of Regents,” press release, January 23, 2006.
    34. Kyle Kajihiro, “The Secret and Scandalous Origins of the UARC,” DMZ Hawaii, September 23, 2007.
    35. Catherine Lutz, “The Military Normal,” in The Counter-Counterinsurgency Manual, or Notes on Demilitarizing American Society, The Network of Concerned Anthropologists, ed. (Chicago: Prickly Paradigm Press, 2009).
    36. Anne-Laure Fayard and Martina Mendola, “The 3-Stage Process That Makes Universities Prime Innovators,” Harvard Business Review, April 19, 2024; Paul Garton, “Types of Anchor Institution Initiatives: An Overview of University Urban Development Literature,” Metropolitan Universities 32, no. 2 (2021): 85–105.
    37. Randall Hill, “ICT Origin Story: How We Built the Holodeck,” Institute for Creative Technologies, February 9, 2024.
    38. Brittany Bailer, “Howard University Awarded $90 Million Contract by Air Force, DoD to Establish First-Ever University Affiliated Research Center Led by an HBCU,” The Dig, January 24, 2023, thedig.howard.edu.
    39. Erica Caines, “Black University, White Power: Howard University Covers for U.S. Imperialism,” Black Agenda Report, February 1, 2023.
    40. Editors, “Howard University: Every Black Thing and Its Opposite, Kwame Ture,” The Black Agenda Review (Black Agenda Report), February 1, 2023.
    41. David Vine, Base Nation: How U.S. Military Bases Abroad Harm America and the World (New York: Metropolitan Books, 2015).
    42. USC Divest from Death Coalition, “Divest From Death USC News Release”; “USC Renames VKC, Implements Preliminary Anti-Racism Actions,” Daily Trojan, June 11, 2020.
    43. Hopkins Justice Collective, “PIIAC Proposal,” May 4, 2024.
    44. Bronson Azama to bor.testimony@hawaii.edu, “Testimony for 2/15/24,” February 15, 2024, University of Hawai’i; “UH Awarded Maui High Performance Computer Center Contract Valued up to $75 Million,” UH Communications, May 1, 2020.
    45. Isabel Kain and Becker Sharif, “How UC Researchers Began Saying No to Military Work,” Labor Notes, May 17, 2024.

     

    Feature Image: Deep Space Advanced Radar Capability (DARC) at the Johns Hopkins Applied Physics Laboratory, a UARC facility – www.jhuapl.edu

  • Artificial Intelligence vs The Indian Job Market

    Artificial Intelligence vs The Indian Job Market

    Artificial intelligence (AI) has become a ubiquitous presence in our daily lives, transforming the way we operate in the modern era. From the development of autonomous vehicles to facilitating advanced healthcare research, AI has enabled the creation of groundbreaking solutions that were once thought to be unattainable. As more investment is made in this area and more data becomes available, it is expected that AI will become even more powerful in the coming years.

    AI, often referred to as the pursuit of creating machines capable of exhibiting intelligent behaviour, has a rich history that dates back to the mid-20th century. During this time, pioneers such as Alan Turing laid the conceptual foundations for AI. The journey of AI has been marked by a series of intermittent breakthroughs, periods of disillusionment, and remarkable leaps forward. It has also been a subject of much discussion over the past decade, and this trend is expected to continue in the years to come.

    According to a report by Precedence Research, the global artificial intelligence market was valued at USD 454.12 billion in 2022 and is expected to hit around USD 2,575.16 billion by 2032, progressing with a compound annual growth rate (CAGR) of 19% from 2023 to 2032. The Asia Pacific is expected to be the fastest-growing artificial intelligence market during the forecast period, expanding at the highest CAGR of 20.3% from 2023 to 2032. The rising investments by various organisations towards adopting artificial intelligence are boosting the demand for artificial intelligence technology.[1]
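
    As a quick sanity check on these figures (my own arithmetic, not part of the Precedence Research report), the cited numbers are internally consistent: a roughly 19% CAGR does carry USD 454.12 billion in 2022 to about USD 2,575 billion in 2032. A minimal sketch in Python:

    ```python
    # Sanity check: does a ~19% CAGR connect the 2022 and 2032 market figures?
    start_value = 454.12   # global AI market in 2022, USD billions (cited above)
    end_value = 2575.16    # projected 2032 market, USD billions (cited above)
    years = 10             # 2022 -> 2032

    implied_cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {implied_cagr:.1%}")  # ~19.0%

    compounded = start_value * (1 + 0.19) ** years
    print(f"USD 454.12B compounded at 19% for 10 years: ~USD {compounded:,.0f}B")  # ~2,586B
    ```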

    Figure 1 illustrates a bar graph displaying the upward trajectory of the AI market in recent years, sourced from Precedence Research.

    The Indian government has invested heavily in developing the country’s digital infrastructure. In 2020, the Government of India increased its spending on Digital India to $477 million to boost AI, IoT, big data, cyber security, machine learning, and robotics. The artificial intelligence market is expected to witness significant growth in the BFSI (banking, financial services, and insurance) sector on account of data mining applications, as there is an increase in the adoption of artificial intelligence solutions in data analytics, fraud detection, cybersecurity, and database systems.

    Figure 2 illustrates a pie chart displaying the distribution of the Artificial Intelligence (AI) market share across various regions in 2022, sourced from Precedence Research.

    Types of AI Systems and Impact on Employment

    AI systems can be divided primarily into three types:

    Narrow AI: A form of artificial intelligence built to perform a single dedicated task intelligently. It is the prevailing and most widely accessible type of AI in today’s technological landscape.

    General AI: An intelligence capable of undertaking any intellectual task as efficiently as a human. The aspiration driving the development of General AI is to create a system with human-like cognitive abilities that can think autonomously and adaptably. As of now, however, a General AI system that comprehensively emulates human cognition remains elusive.

    Super AI: A level of machine intelligence at which systems transcend human cognitive capacities, exhibit superior performance across tasks, and possess advanced cognitive properties. It would emerge as an extension beyond the culmination of General AI.

    Artificial intelligence has been incorporated into various aspects of our lives, ranging from virtual assistants on our mobile devices to advancements in customisation, cyber protection, and more. The growth of these systems is swift, and it is only a matter of time before the emergence of general artificial intelligence becomes a reality.

    According to a report by PwC, the global GDP is estimated to be 14% higher in 2030 due to the accelerating development and utilisation of AI, which translates to an additional $15.7 trillion. This growth can be attributed to:

    1. Improvements in productivity resulting from the automation of business processes (including the use of robots and autonomous vehicles).
    2. Productivity gains from businesses integrating AI technologies into their workforce (assisted and augmented intelligence).
    3. Increased consumer demand for AI-enhanced products and services, resulting in personalised and/or higher-quality offerings.
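
    Taken together, the two headline numbers above imply a baseline for the projection; a quick back-of-the-envelope check (my arithmetic, not a figure from the PwC report):

    ```python
    # If AI adds $15.7T by 2030 and that uplift equals 14% of baseline GDP,
    # the implied no-AI baseline for 2030 global GDP is 15.7 / 0.14, or ~$112T.
    uplift_usd_trillion = 15.7
    uplift_share = 0.14
    baseline_2030_gdp = uplift_usd_trillion / uplift_share
    print(f"Implied baseline 2030 global GDP: ~${baseline_2030_gdp:.0f} trillion")
    ```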

    The report suggests that the most significant economic benefits from AI will likely come from increased productivity in the near future. This includes automating mundane tasks, enhancing employees’ capabilities, and allowing them to focus on more stimulating and value-added work. Capital-intensive sectors such as manufacturing and transport are likely to experience the most significant productivity gains from AI, given that many operational processes in these industries are highly susceptible to automation.[2]

    AI will disrupt many sectors and lead to the creation of many more. A compelling aspect to observe is how the Indian Job Market responds to AI and its looming threat to job security in the future.

    The Indian Job Market

    As of 2021, around 487.9 million people were part of the workforce in India out of 950.2 million people aged 15-64, the second largest workforce after China’s. In China, by comparison, 747.9 million of the 986.5 million people aged 15-64 were part of the workforce.

    India’s labour force participation rate (LFPR), at 51.3 per cent, was less than China’s 76 per cent and well below the global average of 65 per cent.[3]
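
    These participation rates follow directly from the population and workforce figures quoted above; a minimal check using the article’s 15-64 figures (note that the World Bank’s headline LFPR uses a slightly different population base):

    ```python
    # LFPR = workforce / working-age population (millions, 2021, from the text)
    india_workforce, india_pop_15_64 = 487.9, 950.2
    china_workforce, china_pop_15_64 = 747.9, 986.5

    print(f"India LFPR: {india_workforce / india_pop_15_64:.1%}")  # ~51.3%
    print(f"China LFPR: {china_workforce / china_pop_15_64:.1%}")  # ~75.8%, cited as ~76%
    ```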

    The low LFPR can be primarily attributed to two reasons:

    Lack of Jobs

    To reach its growth potential, India needs to generate approximately 9 million nonfarm jobs annually until 2030, as per a report by McKinsey & Company. However, analysts suggest that the current rate of job creation falls significantly below this target, with only about 2.9 million nonfarm jobs being added each year from 2013 to 2019.[4]

    During the COVID-19 pandemic, urban unemployment in India surged dramatically, peaking at 20.9% in the April-June 2020 quarter, coinciding with wage decline. Although the unemployment rate has decreased since then, full-time employment opportunities are scarce. Economists highlight a concerning trend where an increasing number of job-seekers, particularly the younger demographic, are turning towards low-paying casual jobs or opting for less stable self-employment options.[5]

    This shift in employment patterns is occurring alongside a broadly optimistic outlook for the Indian economy, which is projected to achieve an impressive growth rate of 6.5% in the fiscal year ending March 2025. Despite this growth forecast, the employment landscape appears to be evolving in ways that push individuals towards less secure and lower-paying work options. This shift raises pertinent concerns about the job market’s quality, stability, and inclusivity, particularly in accommodating the aspirations and needs of India’s burgeoning young workforce.

    Low female labour participation

    In 2021, China boasted an estimated female population of 478.3 million within the 15-64 age bracket, with an active female labour force of approximately 338.6 million. In stark contrast, despite India having a similar demographic size of 458.2 million women in that age group, its female labour force was significantly smaller, numbering only 112.8 million.[6]

    This discrepancy underscores a notable disparity in India’s female labour force participation rate compared to China, despite both countries having sizeable female populations within the working-age bracket.[7]

    Along with unemployment, there was also a crisis of under-employment and the collapse of small businesses, which has worsened since the pandemic.

    AI vs the Indian Job Market

    The presence and implications of AI cast a significant shadow on a country as vast and diverse as India. Amidst the dynamic and often unpredictable labour market, where employment prospects have been uncertain, addressing the impact of AI poses a considerable challenge for employers. Balancing the challenges and opportunities presented by AI while prioritising job security for the workforce is a critical obstacle to overcome.

     The diverse facets of artificial intelligence (AI) and its capacity to transform industries across the board amplify the intricacy of the employment landscape in India. Employers confront the formidable challenge of devising effective strategies to incorporate AI technologies without compromising the livelihoods of their employees.

    As per the findings of the Randstad Work Monitor Survey, a staggering 71% of individuals in India exhibit an inclination towards altering their professional circumstances within the next six months, either by transitioning to a new position within the same organisation or by seeking employment outside it. Furthermore, 23% of the workforce can be classified as passive job seekers, who are neither actively seeking new opportunities nor applying for them but remain open to considering job prospects if a suitable offer arises.

    It also stated that at least half of Indian employees fear losing their jobs to AI, whereas the figure is one in three in developed countries. The growing concern among Indian workers stems from the substantial workforce employed in Business Process Outsourcing (BPO) and Knowledge Process Outsourcing (KPO), which are notably vulnerable to AI automation. Adding to this concern is India’s rapid uptake of AI technology, further accentuating the apprehension among employees.[8]

    India’s role as a global hub for outsourcing and its proficiency in delivering diverse services have amplified the impact of AI adoption. The country has witnessed a swift embrace of AI technologies across various industries, magnifying workers’ concerns about the potential ramifications for their job security.

    Goldman Sachs’ report highlights the burgeoning emergence of generative artificial intelligence (AI) and its potential implications for labour dynamics. The rapid evolution of this technology prompts questions regarding a possible surge in task automation, leading to cost savings in labour and amplified productivity. [9]

    The labour market could confront significant disruptions if generative AI delivers on its pledged capabilities. Analyses of occupational tasks across the US and Europe revealed that approximately two-thirds of current jobs are susceptible to some degree of AI automation. Furthermore, the potential of generative AI to substitute for up to one-fourth of existing work further underscores its transformative potential.

     Expanding these estimates on a global scale suggests that generative AI might expose the equivalent of 300 million full-time jobs to automation, signifying the far-reaching impact this technology could have on global labour markets.

    Recent advancements in artificial intelligence (AI) and machine learning have exerted substantial influence across various professions and industries, particularly impacting job landscapes in sectors such as Indian IT, ITeS, BPO, and BPM. These sectors collectively employ over five million people and are India’s primary source of white-collar jobs. [10]

    In a recent conversation with Business Today, Vardhman Jain, the founder and Vice Chairman of Access Healthcare, a Chennai-based BPO, highlighted the forthcoming impact of AI integration on the workplace. Jain indicated that AI implementation may cause customer service to be the sector most vulnerable to initial disruptions.

    Jain pointed out that a substantial portion of services provided by the Indian BPO industry is focused on customer support, including voice and chat functions, data entry, and back-office services. He expounded upon how AI technologies, such as Natural Language Processing, Machine Learning, and Robotic Process Automation, possess the potential to significantly disrupt and automate these tasks within the industry.

    While the discourse surrounding AI often centres on the potential for job displacement, several industry leaders argue that AI will not supplant human labour, but rather augment worker output and productivity.

    At the 67th Foundation Day celebration of the All-India Management Association (AIMA), N. R. Narayana Murthy, as reported by Business Today, conveyed a noteworthy message by asserting that AI is unlikely to supplant human beings, as humans will not allow it to happen.

    Quoting Murthy’s statement from the report, “I think there is a mistaken belief that artificial intelligence will replace human beings; human beings will not allow artificial intelligence to replace them.” The Infosys founder stressed that AI has functioned as an assistive force rather than an outright replacement, enhancing human lives and making them more comfortable.[11]

    McKinsey Global Institute’s study, “Generative AI and the Future of Work in America,” highlighted AI’s capability to expedite economic automation significantly. The report emphasised that while generative AI wouldn’t immediately eliminate numerous jobs, it would enhance the working methods of STEM, creative, business, and legal professionals.[12]

     However, the report also underscored that the most pronounced impact of automation would likely affect job sectors such as office support, customer service, and food service employment.

    While the looming threats posed by AI are undeniable, its evolution is expected to usher in a wave of innovation, leading to the birth of new industries and many job opportunities. This surge in new industries promises employment prospects and contributes significantly to economic growth by leveraging AI capabilities.

    Changing Employment Landscape

    Having explored different perspectives and conversations on AI, it has become increasingly evident that the employment landscape is poised for significant transformation in the years ahead. This prompts a crucial enquiry: Will there remain a necessity for human jobs, and are our existing systems equipped to ensure equitable distribution of the benefits fostered by these technological developments?

    • Universal Basic Income

    Universal basic income (UBI) is a social welfare proposal in which all citizens of a given population regularly receive a minimum income in the form of an unconditional transfer payment, that is, without a means test or a requirement to work; a payment subject to such conditions is instead known as a guaranteed minimum income.

    Supporters of Universal Basic Income (UBI) now perceive it not only as a solution to poverty, but also as a potential answer to several significant challenges confronting contemporary workers: wage disparities, uncertainties in job stability, and the looming spectre of job losses due to advancements in AI.

    Karl Widerquist, a professor of philosophy at Georgetown University-Qatar and an economist and political theorist, posits that the influence of AI on employment does not necessarily result in permanent unemployment. Instead, he suggests a scenario in which displaced workers shift into lower-income occupations, leading to increased competition and saturation in these sectors.

    According to Widerquist, the initial effects of AI advancements might force white-collar workers into the gig economy or other precarious and low-paying employment. This shift, he fears, could trigger a downward spiral in wages and job security, exacerbating economic inequality.

     He advocates for a Universal Basic Income (UBI) policy as a response to the challenges posed by AI and automation. Widerquist argues that such a policy would address employers’ failure to equitably distribute the benefits of economic growth, fuelled in part by automation, among workers. He sees UBI as a potential solution to counter the widening disparity in wealth distribution resulting from these technological advancements.[13]

    A study conducted by researchers at Utrecht University, Netherlands, from 2017 to 2019 led to the implementation of basic income for unemployed individuals who previously received social assistance. The findings showcase an uptick in labour market engagement. This increase wasn’t solely attributed to the financial support offered by Universal Basic Income (UBI) but also to removing conditions—alongside sanctions for non-compliance—typically imposed on job seekers.[14]

    Specifically, participants exempted from the obligation to actively seek or accept employment demonstrated a higher likelihood of securing permanent contracts, as opposed to the precarious work arrangements highlighted by Widerquist.

     While UBI experiments generally do not demonstrate a significant trend of workers completely exiting the labour market, instances of higher payments have resulted in some individuals reducing their working hours. This nuanced impact showcases the varying effects of UBI on labour participation, highlighting both increased job security for some and a choice for others to adjust their work hours due to enhanced financial stability.

    In exploring the potential for Universal Basic Income (UBI), it becomes evident that while the concept holds promise, its implementation and efficacy are subject to multifaceted considerations. The diverse socioeconomic landscape, coupled with the scale and complexity of India’s population, presents both opportunities and challenges for UBI.

     UBI’s potential to alleviate poverty, enhance social welfare, and address economic disparities in a country as vast and diverse as India is compelling. However, the feasibility of funding such a program, ensuring its equitable distribution, and navigating its impact on existing welfare schemes requires careful deliberation.
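
    To make the funding question concrete, consider a rough, purely illustrative calculation (the transfer amount below is a hypothetical parameter chosen for illustration, not a proposal from this article):

    ```python
    # Illustrative annual cost of a UBI for India (hypothetical parameters).
    population = 1.4e9          # approximate population of India
    monthly_transfer_usd = 25   # hypothetical transfer of ~$25/month per person
    gdp_usd = 3.5e12            # approximate Indian GDP in USD

    annual_cost = population * monthly_transfer_usd * 12
    print(f"Annual cost: ${annual_cost / 1e12:.2f} trillion "
          f"(~{annual_cost / gdp_usd:.0%} of GDP)")  # ~$0.42T, ~12% of GDP
    ```

    Even a modest transfer would absorb a double-digit share of GDP, underscoring why funding feasibility requires the careful deliberation noted above.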

    Possible Tax Solutions

    • Robot Tax

    The essence of a robot tax lies in the notion that companies integrating robots into their operations should bear a tax burden given that these machines replace human labour.

    There are various arguments advocating a robot tax. The first is to safeguard human employment by dissuading firms from substituting robots for humans. Additionally, while companies may prefer automation, imposing a robot tax can generate government revenue to offset the decline in funds from payroll and income taxes. Another crucial argument favouring this tax is rooted in allocative efficiency: robots contribute neither payroll nor income taxes, so taxing robots at a rate similar to human labour aligns with economic efficiency and prevents distortions in resource allocation.

    In various developed economies, such as the United States, the prevailing taxation system is biased toward artificial intelligence (AI) and automation over the human workforce. This inclination, fuelled by tax incentives, may lead to investments in automation solely for tax benefits rather than for any actual increase in profitability. Furthermore, the failure to tax robots can exacerbate income inequality as the share of labour in national income diminishes.

    One possible solution to address this issue is the implementation of a robot tax, which could generate revenue that could be redistributed as Universal Basic Income (UBI) or as support for workers who have lost their jobs due to the adoption of robotic systems and AI and are unable to find new employment opportunities.

    • Digital Tax

    The discourse surrounding digital taxation primarily centres on two key aspects. Firstly, it grapples with the challenge of maintaining tax equity between traditional and digital enterprises. Digital businesses have benefited from favourable tax structures, such as advantageous tax treatment for income derived from intellectual property, accelerated amortisation of intangible assets, and tax incentives for research and development. However, there is a growing concern that these preferences may result in unintended tax advantages for digital businesses, potentially distorting investment trajectories instead of promoting innovation.

    Secondly, the issue arises from digital companies operating in countries with no physical presence yet serving customers through remote sales and service platforms. This situation presents a dilemma regarding traditional corporate income tax regulations. Historically, digital businesses paid corporate taxes solely in countries where they maintained permanent establishments, such as headquarters, factories, or storefronts. Consequently, countries where sales occur or online users reside have no jurisdiction over a firm’s income, leading to taxation challenges.

    Several approaches have been suggested to address the taxation of digital profits. One approach involves expanding existing frameworks: for instance, a country may extend its Value-Added Tax (VAT) or Goods and Services Tax (GST) to encompass digital services, or broaden the tax base to include revenues generated from digital goods and services. Alternatively, a country may implement a separate Digital Services Tax (DST).

    While pinpointing the ultimate solution remains elusive, ongoing experimentation and iterative processes are expected to guide us toward a resolution that aligns with the need for a larger consensus. With each experiment and accumulated knowledge, we move closer to uncovering an approach that best serves the collective requirements.[15]

    Reimagining the Future

    The rise of Artificial Intelligence (AI) stands as a transformative force reshaping the industry and business landscape. As AI continues to revolutionise how we work and interact, staying ahead in this rapidly evolving landscape is not just an option, but a necessity. Embracing AI is not merely about adapting to change; it is also about proactive readiness and strategic positioning. Whether you’re a seasoned entrepreneur or a burgeoning startup, preparing for the AI revolution involves a multifaceted approach encompassing automation, meticulous research, strategic investment, and a keen understanding of how AI can augment and revolutionise your business. PwC’s report lists some crucial steps to prepare one’s business for the future and stay ahead. [16]

Understand AI’s Impact: Start by evaluating the industry’s technological advancements and competitive pressures. Identify the operational challenges AI can address, the disruptive opportunities available now, and those on the horizon.

Prioritise Your Approach: Determine how AI aligns with business goals. Assess your readiness for change: are you an early adopter or a follower? Consider feasibility, data availability, and barriers to innovation. Prioritise automation and decision-augmentation processes based on potential savings and data utilisation.

    Talent, Culture, and Technology: While AI investments might seem high, costs are expected to decrease over time. Embrace a data-driven culture and invest in talent like data scientists and tech specialists. Prepare for a hybrid workforce, combining AI’s capabilities with human skills like creativity and emotional intelligence.

    Establish Governance and Trust: Trust and transparency are paramount. Consider the societal and ethical implications of AI. Build stakeholder trust by ensuring AI transparency and unbiased decision-making. Manage data sources rigorously to prevent biases and integrate AI management with overall technology transformation.

Getting ready for Artificial Intelligence (AI) is not just about new technology; it is about intelligent strategy. Understanding how AI fits one’s goals is crucial; prioritising where it can help, building the right skills, and setting clear rules are essential. As AI becomes more common, it is not about robots taking over but about humans and AI working together. By planning and embracing AI wisely, businesses can stay ahead and create innovative solutions for the future.

    References:

    [1] Precedence Research. “Artificial Intelligence (AI) Market.” October 2023. Accessed November 14, 2023. https://www.precedenceresearch.com/artificial-intelligence-market

[2] PricewaterhouseCoopers (PwC). “Sizing the prize: PwC’s Global Artificial Intelligence Study.” October 2017. Accessed November 14, 2023. https://www.pwc.com/gx/en/issues/data-and-analytics/publications/artificial-intelligence-study.html#:~:text=The%20greatest%20economic%20gains%20from,of%20the%20global%20economic%20impact.

    [3] World Bank. “Labor force, total – India 2021.” Accessed November 12, 2023. https://data.worldbank.org/indicator/SL.TLF.TOTL.IN?locations=IN

    [4] McKinsey & Company. “India’s Turning Point.” August 2020. https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/India/Indias%20turning%20point%20An%20economic%20agenda%20to%20spur%20growth%20and%20jobs/MGI-Indias-turning-point-Executive-summary-August-2020-vFinal.pdf

    [5] Dugal, Ira. “Where are the jobs? India’s world-beating growth falls short.” Reuters, May 31, 2023. Accessed November 14, 2023. https://www.reuters.com/world/india/despite-world-beating-growth-indias-lack-jobs-threatens-its-young-2023-05-30/

    [6] Government of India. Ministry of Labour and Employment. “Labour and Employment Statistics 2022.” July 2022. https://dge.gov.in/dge/sites/default/files/2022-08/Labour_and_Employment_Statistics_2022_2com.pdf

    [7] Deshpande, Ashwini, and Akshi Chawla. “It Will Take Another 27 Years for India to Have a Bigger Labour Force Than China’s.” The Wire, July 27, 2023. https://thewire.in/labour/india-china-population-labour-force

    [8] Randstad. “Workmonitor Pulse Survey.” Q3 2023. https://www.randstad.com/workforce-insights/future-work/ai-threatening-jobs-most-workers-say-technology-an-accelerant-for-career-growth/

    [9] Briggs, Joseph, and Devesh Kodnani. “The Potentially Large Effects of Artificial Intelligence on Economic Growth.” Goldman Sachs, March 26, 2023. https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf

    [10] Chaturvedi, Aakanksha. “‘Might take toll on low-skilled staff’: How AI can cost BPO, IT employees their jobs.” Business Today, April 5, 2023. https://www.businesstoday.in/latest/corporate/story/might-take-toll-on-low-skilled-staff-how-ai-can-cost-bpo-it-employees-their-jobs-376172-2023-04-05

    [11] Sharma, Divyanshi. “Can AI take over human jobs? This is what Infosys founder NR Narayan Murthy thinks.” India Today, February 27, 2023. https://www.indiatoday.in/technology/news/story/can-ai-take-over-human-jobs-this-is-what-infosys-founder-nr-narayan-murthy-thinks-2340299-2023-02-27

    [12] McKinsey Global Institute. “Generative AI and the future of work in America.” July 26, 2023. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america

[13] Kelly, Philippa. “AI is coming for our jobs! Could universal basic income be the solution?” The Guardian, November 16, 2023. https://www.theguardian.com/global-development/2023/nov/16/ai-is-coming-for-our-jobs-could-universal-basic-income-be-the-solution

    [14] Utrecht University. “What works (Weten wat werkt).” March 2020. https://www.uu.nl/en/publication/final-report-what-works-weten-wat-werkt

[15] Merola, Rossana. “Inclusive Growth in the Era of Automation and AI: How Can Taxation Help?” Frontiers in Artificial Intelligence 5 (2022). Accessed November 23, 2023. https://www.frontiersin.org/articles/10.3389/frai.2022.867832

[16] Rao, Anand. “A Strategist’s Guide to Artificial Intelligence.” PwC, May 10, 2017. https://www.strategy-business.com/article/A-Strategists-Guide-to-Artificial-Intelligence

     

• Using Artificial Intelligence to address Corruption: A proposal for Tamil Nadu

    Using Artificial Intelligence to address Corruption: A proposal for Tamil Nadu

Nations must adopt Artificial Intelligence as a mechanism to build the transparency, integrity, and trustworthiness necessary to fight corruption. Without effective public scrutiny, the risk of money being lost to corruption and misappropriation is vast. Dr Chris Kpodar, a global Artificial Intelligence specialist, has advocated the use of artificial intelligence as an anti-corruption tool, redesigning systems that were previously prone to bribery and corruption.

     

    Artificial Intelligence Tools

Artificial Intelligence has become popular due to its growing applications in many fields; recently, for instance, IIT Madras launched a B.Tech Data Science programme in Tanzania. The history of Artificial Intelligence goes back to the 1950s, when computing power was limited and hardware was huge. Today, computing power has increased exponentially alongside the miniaturisation of hardware, allowing algorithms to process far larger datasets. The field of AI has, however, gone through ups and downs in popularity.

Researchers have long worked on Neural Networks, mathematical models inspired by neurons in the brain, which form one of the foundations of state-of-the-art AI.

Artificial intelligence (AI), machine learning, deep learning, and data science are popular terms describing computing fields that teach a machine how to learn. AI is a catch-all term that broadly means computing systems designed to understand and replicate human intelligence. Machine Learning is a subfield of AI in which algorithms are trained on datasets to make predictions or decisions without being explicitly programmed. Deep Learning is a subfield of Machine Learning that uses multiple layers of neural networks to learn from large datasets, loosely mimicking the way neurons in the brain process information. The field of AI resurged in popularity after a neural network architecture called AlexNet achieved impressive results in the ImageNet image recognition challenge in 2012. Since then, neural networks have entered industrial applications, with colossal research funding mobilised.
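To make the neuron metaphor concrete, here is a minimal sketch of a neural network’s forward pass: layers of weighted sums followed by non-linearities. The weights are random, so the output is meaningless until trained, and the layer sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation: lets stacked layers model non-linear functions.
    return np.maximum(0, x)

x = rng.normal(size=4)                         # a 4-feature input
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer of 8 "neurons"
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)  # single output neuron

hidden = relu(W1 @ x + b1)  # each neuron computes a weighted sum, then fires
output = W2 @ hidden + b2
print(output)
```

Training would adjust W1, b1, W2, and b2 to reduce prediction error; deep learning simply stacks many more such layers.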

    Breakthroughs that can aid Policy Implementation

There are many types of neural networks, each designed for a particular application. The recent popularity of applications like ChatGPT is due to a class of neural networks called language models. Language models are probability models that ask: given the previous tokens, what is the most likely next token to generate?
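The toy sketch below illustrates this next-token question with simple bigram counts rather than a neural network; the corpus is invented purely for illustration.

```python
from collections import Counter, defaultdict

corpus = "the land deed was registered the land record was verified".split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token_distribution(prev):
    # "Given the previous token, what is the most likely next token?"
    counts = following[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

print(next_token_distribution("land"))  # {'deed': 0.5, 'record': 0.5}
print(next_token_distribution("was"))   # {'registered': 0.5, 'verified': 0.5}
```

Modern language models replace these counts with a neural network conditioned on a long context, but the question they answer is the same.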

Two significant breakthroughs led towards ChatGPT. The first was the attention mechanism, a machine learning technique originally developed for translating text from one language to another. The second was the transformer, a type of language model built around this technique, which raised state-of-the-art performance across many tasks in artificial intelligence.

The transformer, a powerful neural network architecture, was introduced in 2017 by Google researchers in “Attention Is All You Need”; it is what enables ChatGPT to generate human-like text. Large language models have since taken a big step in the technology landscape. As machine learning applications are deployed rapidly, and as research advances with innumerable breakthroughs, a governance model for these systems is needed. As recently as 2019, GPT-2, a transformer-based model, could not reliably solve basic mathematical problems involving numbers from 0 to 100. Within a few years, advances in the GPT series produced models able to score highly on exams such as the SAT and GRE. Another breakthrough was the ability of machine learning programs to automatically generate code, which has increased developer productivity.
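At the heart of the transformer is scaled dot-product attention: each position in a sequence builds its output as a weighted average of all positions, with weights derived from query-key similarity. Below is a minimal NumPy sketch; the random projection matrices stand in for the learned ones of a real model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted average of the values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                # a toy 5-token sequence
x = rng.normal(size=(seq_len, d_model))
# In a real transformer Q, K and V come from learned projections of x;
# random matrices are used here purely for illustration.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (5, 16): one contextualised vector per token
```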

Moreover, many researchers are working on AGI (Artificial General Intelligence), though nobody knows precisely when such capabilities might be developed, and researchers have not settled on a definition of AGI agreeable to everyone in the AI research community. The rate of advancement and investment in AI research is staggering, which raises ethical concerns and strengthens the case for governance of these large language models. India is an emerging economy where all sectors are growing rapidly: its economy grows nearly 10% yearly, with the services sector making up almost 50% of the whole and providing the government with high tax revenues and high-paying jobs, even though most of the Indian workforce is employed in the industrial and agricultural sectors.

    Using AI to deal with Corruption and enhance Trust

The primary issue in India has been corruption at all levels of government, from the panchayat and district levels through the states to the central machinery. Corruption is attributed mainly to over-regulation, rent-seeking behaviour, lack of accountability, and the need for government permits. The Indian bureaucratic system and government employees are among the least efficient across sectors such as infrastructure, real estate, metals and mining, aerospace and defence, and power and utilities, which are also the sectors most susceptible to corruption. Because of this inefficiency, the productivity of the public sector is low, which weighs on the local Indian economy.

India ranked 85th out of 180 countries on the 2022 Corruption Perceptions Index, with close to 62% of Indians reporting that they have encountered corruption and paid bribes to government officials to get work done. There are many reasons for corruption in India: excessive regulation, a complicated tax system, bureaucratic hurdles, lack of ownership of work, and low public-sector productivity. Corruption is dishonest or fraudulent conduct by those in power, typically involving bribery; bribery is generally defined as the corrupt solicitation, acceptance, or transfer of something of value in exchange for official action. Bribery involves two actors, the giver and the receiver, whereas corruption primarily involves one actor who abuses a position of power for personal gain. Bribery is a singular act, while corruption can be an ongoing abuse of power for one’s own benefit.

Trust is a critical glue in financial transactions. Where trust between individuals is higher, economic transactions are faster and the economy grows, with more businesses moving in, bringing capital, and increasing the production and exchange of goods. When trust is low, businesses hesitate, and the economy stagnates or declines. High-trust societies like Norway have advanced financial systems in which credit and financial instruments are well developed, compared with lower-trust societies such as Kenya and India, where many financial instruments and capital markets for raising finance are unavailable. Public policymakers must therefore seek ways to increase trust in their local economies by forming policies conducive to business transactions.

    The real-estate sector in Tamilnadu: a fit case for the use of AI

Tamil Nadu is India’s second-largest economy and its most industrialised and urbanised state. Real estate is an engine of economic growth and a prime mover of monetary transactions, and it is a prime financial asset for Tamils across many social strata. However, real estate in Tamil Nadu is prone to corruption at many levels. One particularly common method is the forgery of land registration documents, which has eroded trust among investors at all levels in Tamil Nadu.

To address this lack of trust, technology tools can be used to empower the public and create an environment of accountability, resulting in greater confidence. Machine learning offers algorithms to detect such forgeries and prevent land grabbing: tools such as identity analysis, document analysis, and transaction-pattern analysis can all strengthen accountability, and many of these methods can be combined. One advanced approach uses transformer-based models, the foundation of language models such as BERT and generative pre-trained models for text-based applications. A model trained on the original documents could serve as a baseline against which records are routinely checked, with documents encoded and compared for semantic anomalies across document types.
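As a rough illustration of the baseline-comparison idea, the sketch below flags a submitted record whose text diverges from verified originals. TF-IDF similarity is used as a simple stand-in for the transformer embeddings the article mentions, and every document and threshold is an invented example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

verified_originals = [
    "Sale deed for survey number 42, Chennai district, registered 2015.",
    "Sale deed for survey number 17, Madurai district, registered 2018.",
    "Patta transfer for survey number 9, Coimbatore district, registered 2020.",
]
submitted = ["Sale deed for survey number 42, Chennai district, registered "
             "2015, with ownership reassigned retroactively to a new party."]

# Fit on all texts so words unseen in the originals count against similarity.
vectorizer = TfidfVectorizer().fit(verified_originals + submitted)
baseline = vectorizer.transform(verified_originals)
query = vectorizer.transform(submitted)

# Low similarity to every verified original suggests the submitted text
# deviates from known records and deserves human scrutiny.
max_sim = cosine_similarity(query, baseline).max()
print(f"closest match similarity: {max_sim:.2f}")
if max_sim < 0.9:  # illustrative threshold, to be tuned on real data
    print("flag: submitted document diverges from registered originals")
```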

Once a forgery is detected, the case can automatically be referred to civil magistrates or other pertinent authorities. Additionally, public software-repository-style platforms let the public notice any change in a record’s status or activity. Customised public repositories modelled on GitHub might create immense value for Tamil Nadu’s Department of Revenue, improve accountability, increase productivity, and reduce workload; public repositories displaying land-transaction activity would alert the public to forgeries, creating an environment of greater accountability and trust. Computer vision algorithms, such as convolutional neural networks combined with BERT, can also validate signatures, detect document tampering, and check time-frames to flag forgeries. This can be done by training on original documents and then checking any document over which there is reasonable doubt.
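The signature-validation step can be as simple as comparing a scanned signature against a registered reference. The sketch below uses normalized cross-correlation as a crude baseline; the file names and the 0.8 threshold are illustrative assumptions, and a real system would use a trained convolutional network instead.

```python
import numpy as np
from PIL import Image

def load_grayscale(path, size=(128, 64)):
    # Load a signature crop, convert to grayscale, resize for comparison.
    return np.asarray(Image.open(path).convert("L").resize(size), dtype=float)

def ncc(a, b):
    # Normalized cross-correlation between two equally sized images.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

reference = load_grayscale("registered_signature.png")  # hypothetical file
candidate = load_grayscale("submitted_signature.png")   # hypothetical file
score = ncc(reference, candidate)
print(f"similarity: {score:.3f}")
if score < 0.8:  # illustrative threshold
    print("flag for manual review: possible signature mismatch")
```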

Another primary concern in Tamil Nadu’s government has been people in positions of power, or close to financial oversight, who are more exposed to corruption. They can be flagged or monitored using graph neural networks, which map individuals, their connections, and their economic transactions in a network to identify those most at risk of corruption (a simplified sketch of this network screening follows this paragraph). Corruption can also be reduced by removing personal discretion from processes: machine learning can automate tasks and documents in land registration, and digitisation itself might help. Large language models can further be used as classifiers, released to the public to keep the Tamil Nadu government’s spending accountable, so that diversion of government money for personal gain is curtailed. Another central area of corruption is tendering, the bidding process for government contracts in Tamil Nadu, such as public development works or engineering projects. The tender process can be made more public, and machine learning algorithms can check whether norms, contracts, and procedures were followed when bids were awarded. To save wasteful expenditure, algorithms can verify that objective conditions are met, with any deviations flagged and placed in the public domain; given any suspicion, the public can file a PIL in Tamil Nadu’s courts.
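As a simplified stand-in for the graph neural networks proposed above, the sketch below builds a toy transaction network and uses betweenness centrality to shortlist unusually well-connected actors for audit. All names and edges are invented, and high centrality is only a screening signal, never evidence of wrongdoing.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("official_A", "contractor_X"), ("official_A", "contractor_Y"),
    ("official_B", "contractor_Y"), ("official_A", "registrar_1"),
    ("registrar_1", "contractor_X"), ("official_C", "contractor_Z"),
])

# Actors sitting on many transaction paths have unusual brokerage power;
# a GNN would instead learn risk scores from labelled cases and features.
centrality = nx.betweenness_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {score:.2f}")
```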

With more machine learning tools deployed as part of Tamil Nadu’s state machinery, we can conclude that corruption can be significantly reduced by releasing information to the public and creating an environment of greater accountability.

    References:

1. Russell, Stuart J., and Norvig, Peter (2021). Artificial Intelligence: A Modern Approach.

2. Bau, D., Elhussein, M., Ford, J. B., Nwanganga, H., & Sühr, T. (n.d.). Governance of AI models. Managing AI Risks. https://managing-ai-risks.com/

3. U.S. Department of State (2021). 2021 Country Reports on Human Rights Practices: India. https://www.state.gov/reports/2021-country-reports-on-human-rights-practices/india/

4. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT (pp. 4171-4186). https://arxiv.org/abs/1810.04805

5. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8). https://openai.com/blog/better-language-models/

6. Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI blog. https://openai.com/blog/language-unsupervised/

7. Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., … Kaplan, J. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073. https://arxiv.org/pdf/2212.08073.pdf; https://www.anthropic.com/news/constitutional-ai-harmlessness-from-ai-feedback

8. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback (RLHF). arXiv preprint arXiv:2203.02155. https://arxiv.org/abs/2203.02155

    Feature Image: modernghana.com

  • Is Singularity here?

    Is Singularity here?

    One of the most influential figures in the field of AI, Ray Kurzweil, has famously predicted that the singularity will happen by 2045. Kurzweil’s prediction is based on his observation of exponential growth in technological advancements and the concept of “technological singularity” proposed by mathematician Vernor Vinge.

The term Singularity alludes to the moment at which artificial intelligence (AI) becomes indistinguishable from human intelligence. Ray Kurzweil, one of AI’s fathers and top apologists, predicted in 1999 that Singularity was approaching (Kurzweil, 2005). In 2011, he even provided a date for that momentous occasion: 2045 (Grossman, 2011). However, in a book in progress, initially expected in 2022 and then in 2024, he announces the arrival of Singularity for a much closer date: 2029 (Kurzweil, 2024). Last June, though, a report in The New York Times argued that Silicon Valley was confronting the idea that Singularity had already arrived (Streitfeld, 2023). Shortly after that report, in September 2023, OpenAI announced that ChatGPT could now “see, hear and speak”. That implied that generative artificial intelligence, meaning algorithms that can be used to create content, was speeding up.

Is the most decisive moment in the history of humankind, then, materializing before our eyes? It is difficult to tell, as Singularity will not be the big, noticeable event Kurzweil suggests when he gives such precise dates. It will not be a discovery-of-America kind of thing. On the contrary, as Kevin Kelly argues, AI’s very ubiquity allows its advances to stay hidden. Silently, its incorporation into a network of billions of users, its absorption of unlimited amounts of information, and its ability to teach itself will make it grow by leaps and bounds. And suddenly, it will have arrived (Kelly, 2017).

    The grain of wheat and the chessboard

What really matters, though, is the gigantic gap that will begin to open after its arrival. Locked in its biological prison, human intelligence will remain static at the point it has reached, while AI keeps advancing at exponential speed. The human brain has a limited memory capacity and a slow information-processing speed of about 10 Hz (Cordeiro, 2017). AI, for its part, will continue to double its capacity over short periods of time. This is reminiscent of the symbolic tale of the grain of wheat and the chessboard, said to originate in India. According to the story, if we place one grain of wheat on the first square of the chessboard, two on the second, four on the third, and keep doubling until square 64, the total number of grains on the board exceeds 18 quintillion (IntoMath). The same will happen with the advance of AI.
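The chessboard arithmetic is easy to verify directly: doubling from a single grain across 64 squares sums to 2**64 - 1 grains.

```python
# Sum one grain doubled across the 64 squares of the chessboard.
total = sum(2**k for k in range(64))
assert total == 2**64 - 1
print(f"{total:,}")  # 18,446,744,073,709,551,615 (about 18.4 quintillion)
```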

The initial doublings, of course, will not be all that impressive; two to four, or four to eight, will not say much. However, according to Ray Kurzweil, the moment of transcendence would come fifteen years after Singularity itself, when the explosion of non-human intelligence will have become overwhelming (Kurzweil, 2005). And that will be only the very beginning: centuries of progress could materialize in years or even months. At the same time, though, centuries of regression in the relevance of the human race could also occur in years or even months.

    Humans equaling chickens

As Yuval Noah Harari points out, the two great attributes that separate Homo sapiens from other animal species are intelligence and the flow of consciousness. While the first has allowed humans to become the owners of the planet, the second gives meaning to human life, translating into a complex interweaving of memories, experiences, sensations, sensitivities, and aspirations: the vital expressions of a sophisticated mind. According to Harari, though, human intelligence will be utterly negligible compared to the levels AI will reach, while the flow of consciousness will be of capital irrelevance beside algorithms’ ability to penetrate the confines of the universe. In his terms, human beings will be to AI what chickens are to human beings (Harari, 2016).

Periodically, humanity goes through transitional phases of immense historical significance that shake everything in their path. During these, values, beliefs, and certainties are eroded to their foundations and replaced by new ones. All great civilizations have had their own experiences in this regard. In the case of the Western world, there have been three significant periods of this kind in the last six hundred years: the Renaissance of the 15th and 16th centuries, the Enlightenment of the 18th, and Modernism, which began at the end of the 19th century and reached its peak in the 20th.

    Renaissance, Enlightenment and Modernism

The Renaissance is understood as a broad-spectrum movement that led to a new conception of the human being, turning humanity into the measure of all things. At the same time, it expressed a significant leap in scientific matters in which, beyond great advances in several areas, the Earth ceased to be seen as the centre of the universe. The Enlightenment placed reason as the defining element of society, not only in terms of the legitimacy of political power but also as the source of liberal ideals such as freedom, progress, and tolerance. It was, concurrently, the period in which the notion of harmony was projected into all orders, including the understanding of the universe, and in which the scientific method began to rest on verification and evidence. The Enlightenment represented a new milestone in the self-gratifying vision human beings had of themselves.

Modernism, understood as a movement of movements, overturned prevailing paradigms in almost all areas of existence. Among its numerous expressions were abstract art in its multiple variants, an introspective narrative that gave free rein to the stream of consciousness, psychoanalysis, and the theatre of the absurd. In sum, reason and harmony were turned upside down at every step. Following its own dynamic while feeding back into the former, science toppled the pillars of certainty, including the conception of the universe built by Newton during the Enlightenment. The conventional notions of time and space lost all meaning under the theory of relativity while, going even further, quantum physics made the universe a place dominated by randomness. Unlike the previous two periods of significant change, Modernism eroded to the bone the self-gratifying vision human beings had of themselves.

    The end of human centrality

The Renaissance, the Enlightenment, and Modernism unleashed and symbolized new ways of perceiving the human being and the surrounding universe. Each of these movements placed humanity before new levels of consciousness (including the subconscious, in the case of Modernism). In each of them, humans could feel more or less valued, more secure or insecure about their own condition and their position in relation to the universe itself. However, one fundamental element was never altered: humans were always the ones studying themselves and their surroundings. Even while questioning their nature and motives, they reaffirmed their centrality on the planet. As had been the case since the Renaissance, humans remained the measure of all Earthly things.

Singularity, however, is called to destroy that human centrality in a radical, dramatic, and irreversible way. As a result, human beings will not only confront their obsolescence and irrelevance but will embark on the path towards becoming the equals of chickens. Everything previously experienced in the march of human development, including the three groundbreaking periods mentioned above, will pale drastically by comparison.

    The countdown towards the end

We are, thus, within the countdown towards the henhouse grounds, or, worse still, towards the destruction of the human race itself. That is what Stephen Hawking, one of the most outstanding scientists of our time, believed would result from the advent of AI’s dominance. It is also what hundreds of top-level scientists and CEOs of high-tech companies felt when, in May 2023, they signed an open letter warning about the risk to human subsistence posed by an uncontrolled AI; for them, the risk this technology posed to humanity was on a par with those of nuclear war or a devastating pandemic. Furthermore, at a “summit” of bosses of large corporations held at Yale University in mid-June that year, 42 percent indicated that AI could destroy humanity in five to ten years (Egan, 2023).


In the short to medium term, although at the cost of massive and increasing unemployment, AI will spur gigantic advances in multiple fields. Inevitably, though, at some point this superior intelligence will escape human control and pursue its own ends. This may happen if some interested hand frees it from the “jail” imposed by its programmers. The natural culprits of such actions would come from what Harari labels the community of experts, many of whose members believe that if humans can no longer control the overwhelming volumes of information available, the logical solution is to pass the commanding torch to AI (Harari, 2016). The followers of the so-called Transhumanist Party in the United States are a perfect example: they aspire to have a robot as President of that country within the next decade (Cordeiro, 2017). However, AI might also free itself of human constraints without any external help; along the road, its own self-learning process may well allow it to do so. One way or another, when this happens, humanity will be doomed.

As a species, humans do not seem to have much of an instinct for self-preservation. If nuclear war or climate change does not get rid of us, AI will probably take care of it. The apparently imminent arrival of Singularity should therefore be seen with fearful eyes.

    References

Cordeiro, José Luis (2017). “En 2045 asistiremos a la muerte de la muerte” [“In 2045 we will witness the death of death”]. Conversando con Gustavo Núñez, AECOC, November.

    Egan, Matt (2023). “42% of CEOs say AI could destroy humanity in five to ten years”, CNN Business, June 15.

Grossman, Lev (2011). “2045: The Year Man Becomes Immortal”, Time, February 10.

Harari, Yuval Noah (2016). Homo Deus. New York: Harper Collins.

IntoMath. “The Wheat and the Chessboard: Exponents.”

    Kelly, Kevin (2017). The Inevitable. New York: Penguin Books.

    Kurzweil, Ray (2005). The Singularity is Near. New York: Viking Books.

    Kurzweil, Ray (2024). The Singularity is Nearer. New York: Penguin Random House.

Streitfeld, David (2023). “Silicon Valley Confronts the Idea that Singularity is Here”, The New York Times, June 11.

    Feature Image Credit: Technological Singularity https://kardashev.fandom.com

    Text Image: https://ts2.space

  • Why India risks a quantum tech brain drain

    Why India risks a quantum tech brain drain

    Clear career progression would help India’s quantum workforce and avoid a brain drain overseas

    India could lose its best quantum tech talent if the industry doesn’t get its act together.

Quantum technology has the potential to revolutionise our lives through computational speeds that once seemed like science fiction.

    India is one of a few nations with national quantum initiatives and it stands on the threshold of potentially enormous technological and social benefits.

The National Quantum Mission, approved by the national cabinet in April, is a timely government initiative that, if leveraged correctly, could catapult India into a global leader in quantum research and technologies.

    Its main areas of research are quantum computing, secure quantum communications, quantum sensing and metrology and quantum materials.

    The challenge for India is how it ensures it gets the best out of the mission.

The technology can benefit many aspects of society through processing power, accuracy, and speed, and can positively impact health, drug research, finance, and economics.

    Similarly, quantum security can revolutionise security in strategic communication sectors including defence, banking, health records and personal data.

    Quantum sensors can enable better GPS services through atomic clocks and high-precision imaging while quantum materials research can act as an enabler for more quantum technologies.

    But the Indian quantum ecosystem is still academia-centric.

    India’s Department of Science and Technology had set up a pilot programme on Quantum Enabled Science and Technologies — a precursor to the National Quantum Mission.

As a result, India has a large number of young and energetic researchers, working at places such as RRI Bangalore, TIFR, and IIT Delhi, who have put in place an infrastructure for next-generation quantum experiments with capabilities across different quantum technology platforms. These include quantum security through free space and fibre as well as integrated photonics, quantum sensing, and metrology.

    The prospects and impact of quantum technologies will be hugely strategic. Predictions suggest quantum computing will have a profound impact on financial services, logistics, transportation, aerospace and automotive, materials science, energy, agriculture, pharmaceuticals and healthcare, and cybersecurity. All of these areas are strategic on macroeconomic and national security scales.

    Even as it has taken significant policy initiative to kickstart research into quantum technologies, India will need to craft a national strategy with a long-term perspective and nurture and develop its research work force.

    Clear career progression would help India’s quantum workforce. The risk of brain drain, where local talent moves overseas for better opportunities, could be a real possibility if different industries which can benefit from the technology fail to recognise its transformative capabilities and how it can help create jobs and opportunities.

    While there are multiple labs working in different quantum sectors, the career path of students and post-doctoral researchers remains unclear as there are not enough positions in the academic sector.

One problem is that industry and academia are competing with each other for quantum research funding, which is why equal emphasis on quantum technology development in the industrial sector could help.

    While India does have some quantum start-ups, more lab-to-market innovations which would make the technology practically useful could give the field momentum. Currently, the big industrial firms in India are not yet committed to quantum technology.

    The lack of homegrown technologies like optical, optomechanical and electronic components for precision research is another impediment. Most of these are imported, resulting in financial drain and long delays in research.

    The National Quantum Mission could help fix a number of these problems.

    Hurdles could be turned into opportunities if more start-ups and established industries were to manufacture high-end quantum technology enabling products in India.

    Another major deterrent is the lack of coordination. Multiple efforts to develop and research the technology, across government and start-ups, do not seem to have coherence and still lack maturity. People involved in quantum research are hopeful the mission will help address this.

    Like most other countries, India has witnessed plenty of hype about quantum research. While this may help provide a short-term boost to the field, excessive hype can lead to unrealistic expectations.

    Continuing to build a skilled workforce and a clear career progression plan for those involved in research and development of quantum technologies can help secure India’s future in this space.

There is a distinction between magic and miracles, and while believing in the former, one should not start expecting the latter, as that can only lead to disappointment in the long run.

     

    This article was originally published under Creative Commons by 360info™.

     

  • The Geopolitical Consolidation of Artificial Intelligence

    The Geopolitical Consolidation of Artificial Intelligence

    Key Points

• IT hardware and semiconductor manufacturing have become strategically important and critical geopolitical tools of dominant powers. Ukraine-war-related sanctions and Wassenaar Arrangement regulations have been invoked to ban Russia from importing or acquiring electronic components over 25 MHz.
• Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery.
• Taiwan, the USA, South Korea, and the Netherlands dominate global semiconductor manufacturing and its supply chain. Taiwan dominates the global market, with 60% of the global share in 2021; a single Taiwanese company, TSMC (Taiwan Semiconductor Manufacturing Co.), the world’s largest foundry, alone accounted for 54% of total global foundry revenue.
• China controls two-thirds of all silicon production in the world.
• Monopolisation of semiconductor supply by a single geopolitical bloc poses critical challenges for the future of Artificial Intelligence (AI), exacerbating the strategic and innovation bottlenecks for developing countries like India.
• Developing a competitive advantage over the existing leaders would require not just technical breakthroughs but also some radical policy choices and long-term persistence.
• India should double down on research programmes in non-silicon-based computing with a national urgency, instead of pursuing a catch-up strategy.

Russia was recently restricted, under categories 3 to 9 of the Wassenaar Arrangement, from purchasing any electronic components over 25 MHz from Taiwanese companies; that covers pretty much all modern electronics. Yet the tangibles of these sanctions must not deceive us into overlooking the wider impact that hardware access and its control have on AI policies and software-based workflows the world over. As Artificial Intelligence technologies reach a more advanced stage, the capacity to fabricate high-performance computing resources, i.e., semiconductor production, becomes key strategic leverage in international affairs.

Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery. In fact, most of the supply of semiconductors relies on a single country: Taiwan. The Taiwan Semiconductor Manufacturing Company (TSMC) manufactures Google’s Tensor Processing Unit (TPU), Cerebras’s Wafer Scale Engine (WSE), and Nvidia’s A100 processor. The following table provides a more detailed assessment:1

| Hardware Type | AI Accelerator / Product Name | Manufacturing Country |
| --- | --- | --- |
| Application-Specific Integrated Circuits (ASICs) | Huawei Ascend 910 | Taiwan |
|  | Cerebras WSE | Taiwan |
|  | Google TPUs | Taiwan |
|  | Intel Habana | Taiwan |
|  | Tesla FSD | USA |
|  | Qualcomm Cloud AI 100 | Taiwan |
|  | IBM TrueNorth | South Korea |
|  | AWS Inferentia | Taiwan |
|  | AWS Trainium | Taiwan |
|  | Apple A14 Bionic | Taiwan |
| Graphics Processing Units (GPUs) | AMD Radeon | Taiwan |
|  | Nvidia A100 | Taiwan |
| Field-Programmable Gate Arrays (FPGAs) | Intel Agilex | USA |
|  | Xilinx Virtex | Taiwan |
|  | Xilinx Alveo | Taiwan |
|  | AWS EC2 F1 | Taiwan |

As can be seen above, computing hardware is largely divided in such a way that the largest shareholders also happen to form a single geopolitical bloc vis-à-vis China, which further shapes the evolution of territorial contests in the South China Sea. This monopolisation of semiconductor supply by a single geopolitical bloc poses critical challenges for the future of Artificial Intelligence, exacerbating the strategic and innovation bottlenecks for developing countries like India. Since the invention of the transistor in 1947, shortly after her independence, India has found herself in the unenviable position of having zero commercial semiconductor manufacturing capacity after all these years, even as her office-bearers continually promise leadership in the fourth industrial revolution.

    Bottlenecking Global AI Research

There are two aspects to developing these AI accelerators: designing the specifications and fabricating the chips. AI research firms first design chips that optimise hardware performance for executing specific machine learning calculations. Semiconductor firms, operating across a range of specialities and specific aspects of fabrication, then make those chips and increase the performance of computing hardware by adding more and more transistors to pieces of silicon. This combination of specific design choices and advanced fabrication capability, not the amount of data a population generates and localises, forms the bedrock that will decide the future of AI.

However, owing to the very high fixed costs of semiconductor manufacturing, AI research has had to focus on data and algorithms, so innovations in algorithmic efficiency and model scaling must compensate for the lack of equivalent progress in AI hardware. The aggressive consolidation and costs of hardware fabrication mean that firms in AI research are forced to outsource their fabrication requirements. In fact, as per DARPA,2 because of the high costs of getting their designs fabricated, AI hardware startups do not even receive much private capital: merely 3% of all venture funding in AI/ML between 2017 and 2021 went to startups working on AI hardware.

But TSMC’s resources are limited, and not everyone can afford them. To get TSMC’s services, companies globally have to compete with the likes of Google and Nvidia, so prices are pushed higher still by demand-side competition. Consequently, only the best and the biggest work with TSMC, and the rest have to settle for its competitors. This has allowed a single company to turn into a gatekeeper of AI hardware R&D. And as the recent sanctions on Russia demonstrate, it is now effectively the pawn that has turned wazir in a tense geopolitical endgame.

Taiwan’s AI policy also reflects this dominance in ICT and semiconductors, aiming to develop “world-leading AI-on-Device solutions that create a niche market and… (make Taiwan) an important partner in the value chain of global intelligent systems”.3 Strong control over the supply of AI hardware, together with ranking first in the Global Open Data Index, not only gives Taiwan negotiating leverage in geopolitical competition but also allows it to centre its seminal AI policy on hardware and software collaboration. In most countries, by contrast, AI policy and discourse revolve around managing the adoption and effects of AI rather than, as in countries with a hardware advantage, shaping the trajectory of its engineering and conceptual development.

Now, to be fair, R&D is a time-consuming, long-term activity with a high chance of failure. Research focus thus naturally shifts towards low-hanging fruit: projects that can be completed in the short term, before the commissioning bureaucrats are rotated. That is why we cannot have a nationalised AGI research group; nobody will be interested in a 15-20-year enterprise when there are promotions and election cycles to worry about. This applies to all bleeding-edge technology research funding everywhere. Quantum communications will be prioritised over quantum computing, ever-larger datasets over more intelligent algorithms, and silicon-based electronics over research into newer computing substrates and storage, because those choices are friendlier to short-term outcome pressures, and bureaucracies are not exactly known as risk-taking institutions.

    Options for India

While China controls two-thirds of all silicon production in the world and wants to control the whole of Taiwan too (and, with it, TSMC and its 54% share in logic foundries), the wider semiconductor supply chain is also a little too spread out for any one actor’s comfort. The leaders mostly control specialised niches of the supply chain: the United States maintains a total monopoly on Electronic Design Automation (EDA) software solutions, the Netherlands has monopolised Extreme Ultraviolet and Argon Fluoride scanners, and Japan supplies the 300 mm wafers used to manufacture more than 99 percent of chips today.4 The end-to-end delivery of one chip can see it crossing international borders over 70 times.5 Since this is a matured ecosystem, developing a competitive advantage over the existing leaders would require not just proprietary technical breakthroughs but also some radical policy choices and long-term persistence.

Needless to say, the leaders are also able to attract and retain the highest-quality talent from across the world. We, on the other hand, have a situation where regional politicians keep cribbing about incoming talent even from other Indian states. This, therefore, is the first task for India: to become a technology powerhouse, she must, at a bare minimum, retain all her top talent and attract more. Perhaps, for companies in certain sectors or above a certain size, India should mandate spending at least X per cent of revenue on R&D and offer incentives to increase this share. That would revamp everything from recruitment and retention to business processes and industry-academia collaboration, and in the long run prove a far more socioeconomically useful instrument than the CSR regulation.

It should also not escape anyone that human civilisation, with all its genius and promises of man-machine symbiosis, has managed to put all its eggs in a single basket that sits under the constant threat of Chinese invasion. It is thus in the interest of the entire computing industry to build geographical resiliency, diversity, and redundancy into present-day semiconductor manufacturing capacity. We do not yet have the navy we need, but perhaps, in a diplomatic-naval recognition of Taiwan’s independence from China, the Quad could negotiate arrangements for an uninterrupted semiconductor supply in case of an invasion.

Since R&D in AI hardware is essential for future breakthroughs in machine intelligence, yet its production is extremely concentrated, mostly in one small island country, it behoves countries like India to look for ways to undercut the existing paradigm of computing hardware (for example, by pivoting R&D towards DNA computing) instead of only pursuing a catch-up strategy. Current developments are unlikely to solve India’s integrated-circuit blues anytime soon. In parallel, and I would emphatically recommend this, India should take a step back from all the madness and double down on research programmes in non-silicon-based computing with a national urgency. A hybrid approach to computing machinery could also resolve some of the bottlenecks that AI research faces due to the dependencies and limitations of present-day hardware.

    As our neighbouring adversary Mr Xi says, core technologies cannot be acquired by asking, buying, or begging. In the same spirit, even if it might ruffle some feathers, a very discerning reexamination of the present intellectual property regime could also be very useful for the development of such foundational technologies and related infrastructure in India as well as for carving out an Indian niche for future technology leadership.

    References:

    1. The Other AI Hardware Problem: What TSMC means for AI Compute. Available at https://semiliterate.substack.com/p/the-other-ai-hardware-problem

    2. Leef, S. (2019). Automatic Implementation of Secure Silicon. In ACM Great Lakes Symposium on VLSI (Vol. 3)

    3. AI Taiwan. Available at https://ai.taiwan.gov.tw/

    4. Khan et al. (2021). The Semiconductor Supply Chain: Assessing National Competitiveness. Center for Security and Emerging Technology.
    5. Alam et al. (2020). Globality and Complexity of the Semiconductor Ecosystem. Accenture.

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed to the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn to what is already a brutal conflict.

    The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology. It offered to uncover infiltrators – including Russian military personnel – combat misinformation, identify the dead and reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement concerning the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology, which could in the near future be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.


    The promise of facial recognition in law enforcement and on the battlefield is to increase precision, lifting the proverbial fog of war with automated precise targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

    But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in. We worry that for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly ‘blackboxed’ – taken by a machine whose working and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards such as this are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

    Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that are collecting the sensitive biomarker data essential to facial recognition, the data subject knows that it is happening and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data security agencies in Italy and France.

    If privately owned facial recognition technologies are used to identify Ukrainian citizens within the EU, or in border zones, to offer them some form of protective status, a grey area would be established between military and civilian use within the EU itself. Any such facial recognition system would have to be used on civilian populations within the EU. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then would pose the question as to how a single company can be entrusted with civil data which it can easily repurpose for military use. That is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media records.

Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already been reported in the United States, where members of the New York Police Department are reported to have used Clearview’s tool to circumvent the department’s data protection and privacy rules, and to have installed Clearview’s app on private devices in violation of NYPD policy.

    This is a particular risk with relation to the roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many are arguing it should be, it will carry into the EU the use of Clearview’s AI as an established practice for military and potentially civilian use, both initially conceived without malice or intention of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement; this is particularly so when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so ensuring the rule of law and the protection of citizens.

    This article was published earlier in openDemocracy, and is republished under Creative Commons Licence

    Feature Image Credit: www.businessinsider.in

  • Contemporary and Upcoming Issues In the Field of Intellectual Property Rights

    Contemporary and Upcoming Issues In the Field of Intellectual Property Rights

    1.1   Contemporary Issues: IP Awareness and Drug Price Caps

    1.1.1. Introduction

The realm of intellectual property (IP) rights has existed, and been a driving force for novelty and innovation, for centuries; it can be dated back to at least 500 BC, when a Greek state granted innovators of ‘a new refinement in luxury’ one year of protection, ensuring they could monopolize and reap the benefits of their innovations.[i] Even so, the first international convention (known as the ‘Paris Convention’), establishing a union for the protection of ‘industrial property’, came into force much later, in 1883. Since then, the field of IP rights has grown rapidly. It goes without saying that so long as entrepreneurs are incorporating companies, innovators are inventing technology, and artists are creating works of art and literature, the domain of IP will only progress further.

    Although the evolution of the international IP regime has been rapid and the laws have become far more complicated than they initially were, it appears that we have only scratched the surface of the extent and reach of IP rights. It cannot be stressed enough that IP rights are crucial to every company, creator and inventor, since they ensure that rights and interests are protected and confer the right to claim relief against any violation.

    Insofar as the Indian IP regime is concerned, we have seen gradual but crucial development in our laws, which has motivated not only foreign corporations to seek IP protection in India but also start-ups, which now have the liberty to seek protection of their IP at significantly reduced fees (barring copyright and geographical indications). The Indian Intellectual Property Office (IPO) has also taken measures to promote e-filing by reducing the associated costs and improving its e-filing system. Problems arise, however, when start-ups and small enterprises seeking to register their IP are unaware of these common, cost-effective mechanisms.

    Besides, our intellectual property policies (especially patent policies) have long been a subject of criticism on the global stage due to the government's intervention in the enforcement of patent rights. One of the primary concerns of foreign corporations and organizations has been the working of patented inventions in India and the issue of compulsory licensing.

    1.1.2. Lack of Awareness of Intellectual Property Rights

    Launched by the Government of India in 2014, the 'Make in India' project has motivated entrepreneurs to establish businesses with the help of government funding and foreign direct investment (FDI) of up to 100%.[ii] This step has boosted the number of people deploying their entrepreneurial skills to establish (in most cases) successful businesses. Although the Make in India project also emphasises IP rights by attempting to educate entrepreneurs about the importance and benefits of their IP, small businesses have yet to benefit from this aspect of the project. These businesses/start-ups do not realize the importance of their own IP and often misuse or violate another's. This leads big corporations to institute lawsuits alleging infringement (or passing off) against such businesses, and since defending such suits is an expensive and time-consuming process, it becomes an uphill task for entrepreneurs to defend the suits while running their businesses effectively. Entrepreneurs are often misinformed about the basics of IP by professionals who lack expertise in IP law, which leads them to believe that their adoption of an identical or deceptively similar trademark would go unnoticed or would not constitute infringement or passing off. Due to this lack of knowledge and of proper guidance, these entrepreneurs tend to believe that: –

    • Adopting an identical mark (intentionally) in a different class does not constitute infringement or passing off;
    • Adopting a similar mark in the same (or allied and cognate) class does not constitute infringement or passing off;
    • Even if the competing marks are identical or deceptively similar, filing a trademark application with a user claim would give them a defensible stand against the true proprietor’s claim.

    Needless to say, these are some of the common misconceptions that lead to claims of infringement or passing off being raised by the true proprietors of the marks. The possibility of a court imposing damages and/or costs on a defendant cannot be ruled out either. In such a scenario, given their limited funding, start-ups are often forced to reconsider their entire business strategy in line with the pending lawsuit. This can, however, be avoided if entrepreneurs are either well-educated in IP law or take the necessary steps to obtain proper guidance on the risks involved in registering and using their mark from a professional with expertise in the field. Instances of start-ups adopting a mark similar or identical to that of a big corporation/start-up are quite common nowadays, with some of the known cases being instituted by 'Bookmyshow' against 'Bookmyoffer', 'Shaadi.com' seeking relief against use of 'Secondshaadi.com', 'Naukri.com' suing 'naukrie.com', etc.[iii]

    In instances involving the pharmaceutical industry, the issue becomes far more severe, since adopting a similar or identical mark can result not only in infringement of IP but can also be extremely harmful to patients/consumers. Unlike with other consumable items, patients/consumers are at much greater risk if they take the wrong medication, and where corporations adopt a similar or identical mark for a pharmaceutical drug, the consequences can be fatal. In one such famous instance in India, where the defendant was a repeat/hardened infringer, the High Court of Bombay, while imposing exemplary costs of INR 1.5 crores, stated: “Drugs are not sweets. Pharmaceutical companies which provide medicines for the health of the consumers have a special duty of care towards them. These companies have a greater responsibility towards the general public. However, nowadays, the corporate and financial goals of such companies cloud the decision of its executives whose decisions are incentivized by profits, more often than not, at the cost of public health. This case is a perfect example of just that”.

    Another issue entrepreneurs/start-ups tend to face in the realm of IP law concerns the use of copyrighted material without knowledge of, or any intention to infringe upon, someone else's IP rights. For instance, when start-ups launch their online portals, they tend to use images/GIFs or music for their videos which are copyrighted, and use thereof without permission is illegal. On account of their lack of knowledge of IP laws and of the consequences of such misuse, they often violate the rights residing in the copyrighted work and are then subject to either a legal notice from the owner/proprietor of the copyrighted material or a lawsuit before a court of law.

    The United States Chamber of Commerce's Global Innovation Policy Center (GIPC), which promotes innovation and creativity through intellectual property standards, in its 2020 index of countries' performance in the field of IP, places India at a substantially low rank of 40 out of 53,[iv] indicating that the USA considers India a major concern when it comes to the development of, and investment in, IP rights (especially patents). Additionally, India also lags in the number of patent applications filed before the Indian Patent Office, averaging around 9,500 filings per year, compared to 2,69,000 filings in the USA.[v] One of the primary reasons behind this difference may be India's lack of support for the encouragement of IP protection, especially for start-ups.

    1.1.3. Raising Awareness on Intellectual Property Laws for Entrepreneurs

    With almost 50% of IP litigation pertaining to trademark infringement and passing off,[vi] entrepreneurs and small businesses must take the following steps to ensure that their rights and interests in their business are protected: –

    • Entrepreneurs/business owners must entrust lawyers/law firms specializing in IP rights to advise on and prosecute their trademark applications;
    • Understand (or attempt to understand) each step involved in prosecuting and registering a trademark application, and participate in the discussions leading to every step taken in the prosecution of their IP; and,
    • Approach IP lawyers/law firms for an overview of the importance of IP protection, along with freedom-to-use advice, before registering a mark or using it for goods in classes not covered by the registration.

    It is also the duty of IP lawyers/law firms to promote IP protection for entrepreneurs and small businesses by organizing interactive sessions with new and/or domestic clients and by charging competitive fees for prosecuting and enforcing the IP rights of these entrepreneurs and businesses.

    Statistics reflect that the majority of IP infringement cases in India involve a small enterprise that, unaware of the basics of IP rights, uses an IP that is deceptively similar or virtually identical to a registered and/or well-known IP.[vii] Often, businesses expanding their online presence (through a website or a social media page) use copyrighted material without realizing that such use would amount to infringement. Raising awareness of the importance of IP and of the consequences of infringement (and passing off) would help start-ups avoid misusing IP belonging to someone else.

    1.1.4. The Imposition of Price Caps on Pharmaceutical Drugs in India and its Work-around

    One of the primary reasons why the USA considers India's IP regime a major threat lies in India's patent laws, especially vis-à-vis the pharmaceutical industry. Although the US Trade Representative (USTR) stated last year that the USA is attempting to restrict patentability for new pharmaceutical drugs which are “essentially mere discoveries of a new form of a known substance which does not result in enhancement of the known efficacy of that substance ….. machine or apparatus” (language identical to Section 3(d) of the Indian Patents Act, 1970),[viii] it still considers India a threat to its IP regime, especially because of India's patent laws.

    To better understand the problems faced by the Indian pharmaceutical industry, it is worth pointing out that, unlike developed nations, the Indian government (through its Patents Act and policies) keeps strict control over drug pricing, with the intention of making healthcare (specifically medication) accessible across all States and income groups. This can especially be observed in drugs for cancer and diabetes. The Government of India has imposed strict price restrictions on pharmaceutical drugs, thereby diluting IP rights and severely affecting the IP valuation of those drugs.[ix] Although the impact might seem insignificant to consumers, who benefit from these price reductions, making cancer medicines 90% cheaper through price control neither pleases IP holders nor promotes invention. Simply put, once the government slashes drug prices to make them accessible to the majority of patients, it severely squeezes the profit margins of pharmaceutical companies, pushing them to invest in generic drug manufacturing, with its bigger margins and lower costs, rather than in inventing new drugs that might tackle a currently incurable disease (or treat a curable disease more effectively) but would, at the same time, push the corporation into losses. These price cuts could also force pharmaceutical corporations to compromise on drug quality, which might, in the longer run, have a severe impact on healthcare.

    India's investment in its healthcare sector has been a major concern: Indian States spend as little as 1.3% of gross domestic product (GDP) on healthcare, which results in substantially higher out-of-pocket expenses and poor healthcare infrastructure.[x] India's heavy reliance on generic drugs to support less privileged consumers has been raised as a concern by the USTR[xi] and by global pharmaceutical giants, to the extent that investors and pharmaceutical corporations have restricted their investment in the Indian pharmaceutical industry, fearing that their price margins would lead the government either to force price caps on the drugs or to implement compulsory licensing for expensive, life-saving drugs.

    As stated above, this approach of imposing price caps on the Indian and global pharmaceutical industry (vis-à-vis sales of drugs in India) has a major impact on India's patent laws because it dampens innovation; and since innovation is essential to invention in the healthcare sector, pharmaceutical companies tend to focus instead on generic drug production and on profiting from consumers'/patients' out-of-pocket expenses, hospitalization costs, etc.[xii] Moreover, the imposition of price caps has not been shown to significantly improve the accessibility of pharmaceutical drugs.

    Although the imposition of price caps is necessary for a developing nation, it should be practiced only to a limited extent. Instead of capping a pharmaceutical drug's price and dropping it by 90%, price caps should depend on the situation and the need for the drug. For instance, cancer and diabetes medication are in high demand in India[xiii] (and other nations), so the government could impose price caps reducing the cost of those drugs by 50%. For other (less critical) pharmaceutical drugs, the government could either refrain from price caps or impose a cap of at most 20%. This would not only promote investment in innovating patented drugs but would also increase FDI in the Indian pharmaceutical sector, thereby enabling Indian pharmaceutical corporations to invent new and possibly better drugs.
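    To make the tiered proposal concrete, the following is a minimal sketch in Python of the pricing scheme suggested above. It is purely illustrative: the drug names, base prices and tier assignments are hypothetical, and the 50% and 20% figures are simply the proportions proposed in this section, not any official rates.

    ```python
    # Illustrative sketch of the tiered price-cap scheme proposed above.
    # Drug names, prices and tier assignments are hypothetical examples.

    # Maximum permissible price reduction per tier (figures from the proposal above)
    CAP_BY_TIER = {
        "critical": 0.50,      # e.g. cancer and diabetes medication: up to a 50% cut
        "non_critical": 0.20,  # less critical drugs: at most a 20% cut
        "uncapped": 0.00,      # drugs the government refrains from capping
    }

    def capped_price(market_price: float, tier: str) -> float:
        """Return the regulated price after applying the tier's maximum cap."""
        return market_price * (1 - CAP_BY_TIER[tier])

    drugs = [
        ("OncoDrug-X", 10_000.0, "critical"),
        ("PainRelief-Y", 200.0, "non_critical"),
        ("Cosmetic-Z", 500.0, "uncapped"),
    ]
    for name, price, tier in drugs:
        print(f"{name}: INR {price:,.0f} -> INR {capped_price(price, tier):,.0f} ({tier})")
    ```

    Under this sketch, the critical drug falls from INR 10,000 to INR 5,000, while the non-critical drug falls only from INR 200 to INR 160, preserving a larger share of the patentee's margin.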

    At the same time, any interested person always retains the right under Section 84(1)[xiv] of the Indian Patents Act, 1970 to request the issuance of a compulsory license (after the expiry of three years from the date of grant of the patent) over the pharmaceutical drug in question if it does not comply with the general principles set out in Section 83[xv] of the afore-mentioned Act, as in Bayer Corporation v. Union of India.[xvi] In essence, the Indian government must invest more in its healthcare policies to reduce the out-of-pocket expenses incurred by patients/consumers, and relax price caps significantly to promote innovation in patent-intensive fields, especially the pharmaceutical sector.

    1.2. A Global Upcoming Issue: Impact of Use/Commercialization of Artificial Intelligence on Intellectual Property Rights

    1.2.1. Introduction

    “Can machines think?” – Alan Turing, 1950

    A few years after Alan Mathison Turing, the renowned English mathematician, cryptanalyst and computer scientist, played a pivotal role in defeating Hitler's Nazi Germany, he wrote a paper titled 'Computing Machinery and Intelligence' (1950) in which he asked a simple yet intriguing question: "Can machines think?". His paper and subsequent research laid the basis of what we now call 'Artificial Intelligence' (AI), or machine learning/intelligence. Fast forward to today, and the concept of AI has become far more complex than researchers imagined half a century ago. AI – a machine that can think and act in a manner as close as possible to a human mind – remains, as of today, an exciting development in the field of technology.

    From the 'Turing Test' in 1950 to the creation of Sophia, a humanoid robot built by Hanson Robotics in 2016, technology in the field of AI has progressed at a drastic rate, with some of the major developments – Apple's virtual assistant 'Siri' (2011), Microsoft's virtual assistant 'Cortana' (2014), Amazon's home assistant 'Alexa' (2014), Google's Home device (2016), etc. – occurring in the past decade (2010 to 2019) alone. With this progress, it is not far-fetched to assume that we may soon see the commercialization of much smarter versions of currently existing machine-learning devices. AI technology has seen explosive growth in recent times, with the number of patent applications for AI-related technologies exceeding 1,00,000.[xvii]

    Today, AI can be divided into two types of intelligence, namely:

    • Weak AI: This is the more common type of AI, used by major corporations like Google, Apple and Microsoft. Although deployed widely, its abilities are limited to the tasks it has been trained to perform. Such AI can store data and present it to the consumer on enquiry or as needed; however, its algorithms do not permit it to think and act the way a human mind would, and it therefore does not pose a threat within the domain of IP.
    • Strong AI: Unlike weak AI, this form of AI would perform cognitive functions that imitate a human mind far more closely. Even though weak AI can perform basic functions more efficiently than humans, strong AI would not only perform those basic functions but also more complex tasks, such as inventing or creating new IP (a new copyrightable sound or video, a unique design, etc.). A toy illustration of the distinction follows this list.
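    To make the "weak AI" end of this distinction concrete, here is a minimal, purely illustrative Python sketch of a narrow assistant: it can only return answers for topics it was explicitly prepared for and fails gracefully on anything else. The topics and canned answers are invented for this example.

    ```python
    # Toy illustration of "weak" (narrow) AI: the assistant stores data and
    # presents it on enquiry, but cannot generalise beyond its preparation.
    KNOWLEDGE_BASE = {
        "weather": "Today's forecast: 31 degrees C, partly cloudy.",  # hypothetical data
        "time": "It is 10:30 AM.",
        "reminder": "You have a meeting at 4 PM.",
    }

    def weak_assistant(query: str) -> str:
        """Match the query against known topics; decline anything else."""
        for topic, answer in KNOWLEDGE_BASE.items():
            if topic in query.lower():
                return answer
        # A strong AI could reason about a novel request; a weak AI cannot.
        return "Sorry, I have not been trained to handle that request."

    print(weak_assistant("What's the weather like?"))     # answered: within training
    print(weak_assistant("Compose an original symphony"))  # declined: outside training
    ```

    A strong AI, by contrast, would be expected to handle the second request creatively, which is precisely where the questions of authorship and inventorship discussed below arise.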

    To a certain extent, researchers worry about the consequences of AI should its goals end up misaligned with ours. At this stage, however, AI seems more promising than dangerous, especially in healthcare and agriculture, which are critical industries for India.

    Needless to say, corporations are investing substantial resources in developing this technology, which is expected to have revolutionary impacts, including the prediction of epidemics, advanced disaster warnings and damage-prevention methods, and increased productivity across industries. The possibilities and benefits (and, in many cases, risks) of AI are innumerable and, at the current rate of development, may well prove overwhelming. Regardless of its pros and cons, the commercialization of AI is inevitable, which raises a material question: do we have the appropriate laws in place to tackle issues arising out of the commercialization (or use) of AI? The answer, unfortunately, is no.

    1.2.2. The Current Scenario

    Being an emerging digital frontier, AI will clearly have a huge impact on our current laws and practices. For instance, the current world IP regime permits only a 'person' to be the proprietor and/or owner of an IP (see Naruto v. Slater[xviii]), which implies that any IP generated/invented by an AI cannot be the subject matter of registration. However, a recent decision of a Chinese court, in which the tech giant Tencent claimed copyright infringement against a local financial news company over work created by its Dreamwriter robot, might reflect a contrary view: the court held that an article generated by AI is protectable under Chinese copyright law.[xix] Taking the opposite position, the European Patent Office (EPO), in the case of patent applications naming the AI system 'DABUS' as inventor, reached a finding similar to the Naruto case, holding that the inventor designated in an application has to be a human being.[xx] Professor Ryan Abbott, along with his multi-disciplinary team at the University of Surrey, had filed (naming their AI, DABUS, as inventor) the first-ever patent applications without a human inventor,[xxi] indicating that the move towards AI-generated IP filings is underway; however, given that the laws are not yet in place, the applications were, unfortunately, refused.

    Courts around the world at times rely (directly or indirectly) on the old principle of "sweat of the brow", which rewards the effort and hard work the creator invested in producing an IP. The application of this principle becomes rather complicated, however, when IP generated by AI enters the picture. At the same time, the commercialization of AI might also dilute IP rights, given that the possibility of AI being better and quicker at generating IP than human beings cannot be ruled out. AI might eventually be considered a 'smarter' variant of the human inventor, since it would require significantly less time and effort to generate a registrable IP. Beyond the issues already mentioned, several as-yet-unknown issues are likely to arise upon the commercialization of AI (to generate IP), and there is a dire need to identify and resolve them at the earliest.

    The World Intellectual Property Organization (WIPO) has recently taken the initiative of inviting public feedback on the possible impacts of AI on the world IP regime,[xxii] convening sessions to tackle the impending issues that the commercialization or use of AI raises for IP laws. Although the discussion at the previous session was somewhat restricted to patent law and did not address IP laws holistically, the next round of sessions may cover all areas of IP. Needless to say, AI will affect our IP regime all the way from the creation of an IP to its valuation, commercialization and transfer/assignment, which will require a thorough overhaul of our current laws so that they recognize the investment (in time and cost) and labour involved in creating the AI itself, as well as the use/transfer/assignment of any IP that the AI generates.

    1.2.3. India’s Approach towards Artificial Intelligence

    India has seen rapid growth in its information technology (IT) sector, which has in turn contributed to primary sectors like agriculture and healthcare through mechanisms such as online trading systems and integrated crop management systems, among other things. It is safe to say that technology has a big role to play in India's growth. Apart from the agricultural industry, the software industry has played a pivotal role in India's move towards becoming the fastest-growing trillion-dollar economy.[xxiii]

    Being among the most attractive investment jurisdictions, India has recently become a hub for tech-related start-ups. Understanding the importance of technology, Indian entrepreneurs, with government support, have started to invest heavily in the technology field, and with the help of FDI there has been a substantial boom in the sector. Since fields such as agriculture, healthcare and education all depend to some degree on technology, the scope for AI to transform those sectors through the tech sector is clearly perceivable.

    As discussed earlier, India's healthcare sector is in dire need of investment and development. Given the lack of funding and the need to make medication accessible, reliance on AI could drastically reduce the costs incurred in labour, research and development, trials, etc., which would eventually lower the cost of pharmaceutical drugs themselves and thus their final sale price. Reliance on AI (via a developed tech sector) would also reduce the need for State governments to invest heavily in their healthcare programmes: although current investment levels fall short, the additional investment then required would be far smaller. AI support in the development and marketing of pharmaceutical drugs, by reducing overall costs and increasing production and sales, would also attract more FDI into India's healthcare sector, and would eventually make healthcare more accessible in less developed regions of India. Statistics indicate that healthcare is largely concentrated in a limited set of States/cities like Bengaluru, Chennai and Gurugram,[xxiv] while cities like Singrauli continue to suffer.[xxv] With the major impediment of drug pricing out of the way, access to healthcare would become more of a focus and would eventually thrive with AI support.

    Insofar as the agricultural sector is concerned, it plays an essential role in our developing economy. According to a report issued by the India Brand Equity Foundation (IBEF), around 58% of the Indian population relies on the agricultural sector, which contributed an estimated $265.51 billion (approximately INR 18.55 lakh crore) of gross value added to the economy in Financial Year 2019.[xxvi] This implies that the majority of less developed States and cities in India rely heavily on the production and sale of agricultural produce.[xxvii] With an FDI inflow of up to 100% and an increasing reliance on technology, the sector keeps looking for cost-effective methods of increasing crop yields. That said, agriculture still faces major issues such as weather instability and fluctuations, the condition of agricultural labourers, poor farming techniques and inadequate irrigation facilities.[xxviii] Unlike the healthcare sector, the agricultural sector is already witnessing the impact of AI from companies like Microsoft India and Intello Labs, which have introduced mechanisms to maximize crop yield and reduce wastage/infestation. For instance, Microsoft India has introduced an AI-based sowing app that determines, and informs farmers of, the best time to sow their crop based on an analysis of climate data for the specific area and the amount of rainfall and soil moisture the crops have received.[xxix] Farmers can benefit from such AI-based apps without spending anything extra on installing sensors.
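    The sowing-advice idea described above can be pictured with a small, hedged Python sketch. Microsoft India's actual model is not described in this article, so the inputs, thresholds and rules below are invented solely to illustrate how climate, rainfall and soil-moisture data might be turned into a recommendation.

    ```python
    # Hypothetical sketch of an AI-style sowing advisory of the kind described
    # above. All thresholds and field names are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class FieldConditions:
        soil_moisture: float      # fraction of field capacity, 0..1
        rainfall_mm_7d: float     # rainfall received over the last 7 days (mm)
        forecast_rain_mm: float   # rain expected over the next 7 days (mm)

    def sowing_advice(c: FieldConditions) -> str:
        """Recommend whether to sow now, based on simple illustrative heuristics."""
        water_supply = c.rainfall_mm_7d + c.forecast_rain_mm
        if c.soil_moisture >= 0.5 and water_supply >= 30:
            return "Sow now: soil moisture and expected rainfall are adequate."
        if c.forecast_rain_mm >= 20:
            return "Wait for the forecast rain, then sow."
        return "Delay sowing: conditions are too dry."

    print(sowing_advice(FieldConditions(0.62, 18.0, 25.0)))
    ```

    A production system would of course learn such thresholds from historical climate and yield data rather than hard-coding them; the point here is only that the advisory reduces to data the farmer need not collect with their own sensors.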

    Indian (and foreign) tech industries have already played an important role in easing business through reliance on weak AI; if India invests in and conducts thorough research into strong AI, the implications could be countless. However, since research and investment in strong AI are extremely limited in India, its commercialization seems far-fetched for now. A lack of expertise in AI has made it difficult to conduct research that yields results, and colleges and universities often refrain from investing in AI research due to low participation and heavy research costs. Moreover, since the education system in the majority of institutions remains somewhat traditional, graduates (and post-graduates) lack the skill set required to work in this technical field.[xxx]

    In contrast, the Chinese government is already taking steps to become the leader in the AI space by 2030. It has adopted a three-step plan: keeping pace with leading AI applications by 2020, making cutting-edge breakthroughs in the field by 2025, and leading the industry by 2030. In line with this push, a Chinese court has already ruled in favour of AI-generated copyright work in its decision favouring Tencent,[xxxi] a tech company focused on communication and social platforms. Since India (through its tech industry) has started taking steps to develop its AI technology (albeit weak AI for now) and has also entrusted its think-tank NITI (National Institution for Transforming India) Aayog with assisting that development through the National Program on AI,[xxxii] we hope to see India catch up with leaders like China, the USA and Europe.

    1.2.4. Development of Intellectual Property Laws on Artificial Intelligence: An Indian Perspective

    Since WIPO has only recently started discussing the implications of AI for global IP laws, its member states are yet to come out with laws on AI-generated IP. Launching the public consultation process on AI and IP policy, WIPO Director General Francis Gurry said: “Artificial intelligence is set to radically alter the way in which we work and live, with great potential to help us solve common global challenges, but it is also prompting policy questions and challenges.”[xxxiii] On December 13, 2019, WIPO also published a 'Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence' with the intent of inviting feedback on the most pressing issues IP policymakers will face in the near future. One of the most crucial questions on which jurisdictions conflict is whether an AI can be the inventor/owner of an IP: while the EPO held that an AI cannot be the inventor in a patent application, the Chinese court took a seemingly contrary view, holding that an AI can be the originator of copyrightable subject matter. Apart from issues in prosecuting such applications (assuming an AI can be an inventor/originator of an IP), another important set of issues concerns enforcement by or against IP owned by an AI. For instance, if an AI generates copyrightable subject matter that is deceptively similar or identical to copyrighted material, against whom will a suit claiming infringement and damages lie? Further, if damages are awarded against the AI, will the AI be expected to bear them, or the owner of the AI? To answer these complex questions, WIPO has invited inputs from member states on a (non-exhaustive) list of issues published on December 13, 2019.[xxxiv]

    In view of the afore-mentioned developments, India should establish a team of technical and legal (IP) experts to review the current laws and the issues drafted by the WIPO Secretariat, draft possible answers, and suggest the amendments to our current laws needed to accommodate AI-related rights in the best way possible, and then discuss these at a larger stage, i.e., the 25th Session of the WIPO Committee on Development and Intellectual Property (CDIP). Until now, India's participation in WIPO's sessions/meetings has been passive; considering how AI would affect its various sectors, especially agriculture and healthcare (a sector in need of improvement), India must play an active role in developing IP laws that support AI. Given that India is one of the fastest-growing economies and that one of its cities is considered the 'Silicon Valley' of India,[xxxv] the commercialization/use of AI would greatly benefit its economy, substantially reducing labour costs while benefiting many entrepreneurs in the tech industry. AI would also be valuable in government offices, where it could greatly reduce processing delays and errors. For instance, the use of AI in the Indian Intellectual Property Offices would enable machines to review applications for basic defects such as the non-filing of an essential document or improper authentication. If strong AI were adopted by these departments, it could also raise basic objections to applications and, upon their clearance, advertise or register them, saving significant cost and time.
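    As a thought experiment, the kind of rule-based screening suggested above could look like the following Python sketch. The field names, required documents and rules are hypothetical and do not reflect the Indian IPO's actual workflow; the point is only that basic formal defects are mechanically checkable.

    ```python
    # Hypothetical sketch of automated screening of an IP application for
    # basic formal defects. Field names and rules are invented for illustration.
    REQUIRED_DOCUMENTS = {"application_form", "power_of_attorney", "statement_of_use"}

    def check_application(application: dict) -> list:
        """Return a list of formal defects found in the application record."""
        defects = []
        missing = REQUIRED_DOCUMENTS - set(application.get("documents", []))
        if missing:
            defects.append("Missing essential document(s): " + ", ".join(sorted(missing)))
        if not application.get("authenticated"):
            defects.append("Improper or missing authentication/signature.")
        if not application.get("applicant_name", "").strip():
            defects.append("Applicant name not provided.")
        return defects

    app = {
        "applicant_name": "Acme Pvt Ltd",
        "documents": ["application_form"],
        "authenticated": False,
    }
    for defect in check_application(app):
        print("Objection:", defect)
    ```

    A "strong AI" version, as envisaged above, would go further: drafting the objection, assessing the applicant's response, and advertising or registering the application on clearance.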

    It goes without saying that AI is the next big thing in the field of technology and once it is commercialized at a large scale, it is going to have a huge impact on our laws (especially IP laws). Given India’s interests and possible benefits in the field of AI, its involvement in the development of our current IP regime is pivotal.

     

    Notes

    [i] Jeff Williams, The Evolution of Intellectual Property, Law Office of Jeff Williams PLLC; link: https://txpatentattorney.com/blog/the-history-of-intellectual-property (published on November 11, 2015).

    [ii] Foreign Direct Investment, published by Make in India; link: http://www.makeinindia.com/policy/foreign-direct-investment.

    [iii] Top 17 Startup Legal Disputes, published by Wazzeer; link: https://wazzeer.com/blog/top-17-startup-legal-disputes (published on May 02, 2017).

    [iv] GIPC IP Index 2020, published by Global Innovation Policy Center; link: https://www.theglobalipcenter.com/ipindex2020-details/?country=in.

    [v] Darrell M. West, India-U.S. relations: Intellectual property rights, Brookings India; link: https://www.brookings.edu/opinions/india-u-s-relations-intellectual-property-rights (published on June 04, 2016).

    [vi] Thehasin Nazia & Rajarshi Choudhuri, The Problem of IPR Infringement in India’s Burgeoning Startup Ecosystem, IPWatchdog; link: https://www.ipwatchdog.com/2019/11/16/problem-ipr-infringement-indias-burgeoning-startup-ecosystem/id=116019 (published on November 16, 2019).

    [vii] Press Trust of India, Absence of legal awareness root cause of rights’ deprivation, Business Standard, Nagpur; link: https://www.business-standard.com/article/pti-stories/absence-of-legal-awareness-root-cause-of-rights-deprivation-119081800664_1.html (published on August 18, 2019).

    [viii] Kristina M. L. Acri née Lybecker, India’s Patent Law is No Model for the United States: Say No to No Combination Drug Patents Act, IP Watch Dog; link: https://www.ipwatchdog.com/2019/06/26/indias-patent-law-no-model-united-states/id=110727 (published on June 26, 2019).

    [ix] Amir Ullah Khan, India’s drug price fix is hurting healthcare, Live Mint; link: https://www.livemint.com/politics/policy/india-s-drug-price-fix-is-hurting-healthcare-11572334594083.html (published on October 29, 2019).

    [x] Ibid.

    [xi] E Kumar Sharma, Hard bargaining ahead, Business Today; link: https://www.businesstoday.in/magazine/focus/us-to-pressure-india-change-intellectual-property-ipr-regime/story/214440.html (published on February 01, 2015).

    [xii] Amir, supra note 9 at __(page No.)__.

    [xiii] Key diabetes, anti-cancer drugs among 92 in price-ceiling list, published by ET Bureau, The Economic Times; link: https://economictimes.indiatimes.com/industry/healthcare/biotech/pharmaceuticals/key-diabetes-anti-cancer-drugs-among-92-in-price-ceiling-list/articleshow/65433283.cms?from=mdr (published on August 17, 2018).

    [xiv] Section 84(1) of the Patents Act, 1970 :-

    Compulsory licenses –

    (1) At any time after the expiration of three years from the date of the grant of a patent, any person interested may make an application to the Controller for grant of compulsory license on patent on any of the following grounds, namely:-

    (a) that the reasonable requirements of the public with respect to the patented invention have not been satisfied, or

    (b) that the patented invention is not available to the public at a reasonably affordable price, or

    (c) that the patented invention is not worked in the territory of India.

    [xv] Section 83 of the Patents Act, 1970:-

    General principles applicable to working of patented inventions –

    Without prejudice to the other provisions contained in this Act, in exercising the powers conferred by this Chapter, regard shall be had to the following general considerations, namely;-

    (a) that patents are granted to encourage inventions and to secure that the inventions are worked in India on a commercial scale and to the fullest extent that is reasonably practicable without undue delay;

    (b) that they are not granted merely to enable patentees to enjoy a monopoly for the importation of the patented article;

    (c) that the protection and enforcement of patent rights contribute to the promotion of technological innovation and to the transfer and dissemination of technology, to the mutual advantage of producers and users of technological knowledge and in a manner conducive to social and economic welfare, and to a balance of rights and obligations;

    (d) that patents granted do not impede protection of public health and nutrition and should act as instrument to promote public interest specially in sectors of vital importance for socio-economic and technological development of India;

    (e) that patents granted do not in any way prohibit Central Government in taking measures to protect public health;

    (f) that the patent right is not abused by the patentee or person deriving title or interest on patent from the patentee, and the patentee or a person deriving title or interest on patent from the patentee does not resort to practices which unreasonably restrain trade or adversely affect the international transfer of technology; and

    (g) that patents are granted to make the benefit of the patented invention available at reasonably affordable prices to the public.

    [xvi] Special Leave to Appeal (C) No(S). 30145 of 2014.

    [xvii] Ryan N. Phelan, Artificial Intelligence & the Intellectual Property Landscape, Marshall Gerstein & Borun LLP, published by Lexology; link: https://www.lexology.com/library/detail.aspx?g=8c2b5986-95bb-4e8e-9057-e4481bfaa471 (published on September 14, 2019).

    [xviii] No. 16-15469 (9th Cir. 2018).

    [xix] Stefano Zaccaria, AI-written articles are copyright-protected, rules Chinese court, World Intellectual Property Review (WIPR); link: https://www.worldipreview.com/news/ai-written-articles-are-copyright-protected-rules-chinese-court-19102 (published on January 10, 2020).

    [xx] EPO refuses DABUS patent applications designating a machine inventor, European Patent Office; link: https://www.epo.org/news-issues/news/2019/20191220.html (published on December 20, 2019).

    [xxi] Laura Butler, World first patent applications filed for inventions generated solely by artificial intelligence, University of Surrey; link: https://www.surrey.ac.uk/news/world-first-patent-applications-filed-inventions-generated-solely-artificial-intelligence (published on August 01, 2019).

    [xxii] WIPO Begins Public Consultation Process on Artificial Intelligence and Intellectual Property Policy, PR/2019/843, World Intellectual Property Organization (WIPO); link: https://www.wipo.int/pressroom/en/articles/2019/article_0017.html (published on December 13, 2019).

    [xxiii] Caleb Silver, The Top 20 Economies in the World, Investopedia; link: https://www.investopedia.com/insights/worlds-top-economies/#5-india (published on November 19, 2019).

    [xxiv] Akriti Bajaj, Working towards building a healthier India, Invest India; link: https://www.investindia.gov.in/sector/healthcare (published on January 18, 2020).

    [xxv] Leroy Leo, Niti Aayog calls healthcare system a ‘sinking ship,’ urges private participation in Ayushman Bharat, Live Mint; link: https://www.livemint.com/news/india/niti-aayog-calls-healthcare-system-a-sinking-ship-urges-private-participation-in-ayushman-bharat-11574948865389.html (published on November 29, 2019).

    [xxvi] Agriculture in India: Information About Indian Agriculture & Its Importance, Indian Brand Equity Foundation (IBEF); link: https://www.ibef.org/industry/agriculture-india.aspx (published on November 2019).

    [xxvii] Ayushman Baruah, Artificial Intelligence in Indian Agriculture – An Industry and Startup Overview, Emerj; link: https://emerj.com/ai-sector-overviews/artificial-intelligence-in-indian-agriculture-an-industry-and-startup-overview (published on November 22, 2019).

    [xxviii] Vidya Sethy, Top 13 Problems Faced by Indian Agriculture, Your Article Library; link: http://www.yourarticlelibrary.com/agriculture/top-13-problems-faced-by-indian-agriculture/62852.

    [xxix] Ibid.

    [xxx] Neha Dewan, In the race for AI supremacy, has India missed the bus?, The Economic Times; link: https://economictimes.indiatimes.com/small-biz/startups/features/in-the-race-for-ai-supremacy-has-india-missed-the-bus/articleshow/69836362.cms (published on June 19, 2019).

    [xxxi] Rory O'Neill and Stefano Zaccaria, AI-written articles are copyright-protected, rules Chinese court, World Intellectual Property Review (WIPR); link: https://www.worldipreview.com/news/ai-written-articles-are-copyright-protected-rules-chinese-court-19102 (published on January 10, 2020).

    [xxxii] National Strategy On Artificial Intelligence, published by NITI Aayog; link: https://niti.gov.in/national-strategy-artificial-intelligence.

    [xxxiii] WIPO Begins Public Consultation Process on Artificial Intelligence and Intellectual Property Policy, PR/2019/843, World Intellectual Property Organization (WIPO), Geneva; link: https://www.wipo.int/pressroom/en/articles/2019/article_0017.html (published on December 13, 2019).

    [xxxiv] WIPO Secretariat, WIPO Conversation on Intellectual Property (IP) and Artificial Intelligence (AI), Second Session, WIPO/IP/AI/2/GE/20/1, World Intellectual Property Organization (WIPO); link: https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_ge_20/wipo_ip_ai_2_ge_20_1.pdf (published on December 13, 2019).

    [xxxv] Bangalore, published by Wikipedia; link: https://en.wikipedia.org/wiki/Bangalore (last updated on February 07, 2020).

     

    Image Credit: WIPO

  • Artificial Intelligence in the Battle against Coronavirus (COVID-19): A Survey and Future Research Directions

    Artificial Intelligence in the Battle against Coronavirus (COVID-19): A Survey and Future Research Directions

    Artificial intelligence (AI) has been applied widely in our daily lives in a variety of ways, with numerous success stories. AI has also contributed to tackling the coronavirus disease (COVID-19) pandemic currently unfolding around the globe. This paper presents a survey of AI methods being used in various applications in the fight against the deadly COVID-19 outbreak and outlines the crucial roles of AI research in this unprecedented battle. We touch on a number of areas where AI serves as an essential component, from medical image processing, data analytics, text mining and natural language processing, and the Internet of Things, to computational biology and medicine. A summary of COVID-19-related data sources available for research purposes is also presented. Research directions for exploring the potential of AI and enhancing its capabilities and power in this battle are thoroughly discussed. It is envisaged that this study will provide AI researchers and the wider community with an overview of the current status of AI applications and motivate researchers to harness AI's potential in the fight against COVID-19.

    Download Full Research Paper