Category: Science and Technology

  • China and the Sunset of the International Liberal Order

    China and the Sunset of the International Liberal Order

         

    Rise of Multipolar World Order – www.newsvoyagernet.com

    The exorbitant sums that the Soviet Union allocated to its defense budget not only represented a huge burden on its economy, but imposed a tremendous sacrifice on the standard of living of its citizens. Subsidies to the other members of the Soviet bloc had to be added to this bill.

    Such amounts were barely sustainable for a country that, from the first half of the 1960s, had been subject to continuous economic stagnation. The situation was aggravated by the steep decline, from the mid-1980s, in the price of oil, the USSR’s main export. The re-escalation of the Cold War undertaken by Jimmy Carter and Ronald Reagan, particularly the latter, set in motion an American military buildup that Moscow could not match.

    Hoping to avoid the implosion of its system, Moscow triggered a reform process that achieved nothing but accelerating that outcome. Indeed, Mikhail Gorbachev opened the pressure cooker hoping to release, in a controlled manner, the force contained within it. Once released, however, this force swept away everything in its path. First went its European satellites, then Gorbachev’s power base, and, finally, the Soviet Union itself. The Soviet system had reached the point where it could not survive without changes, but neither could it assimilate them. In other words, it had exhausted its capacity for survival.

    Without a shot being fired, Washington had won the Cold War. The exuberant sense of triumph that followed translated into the “end of history” thesis. Having defeated its ideological rival, liberalism had become the final point in the ideological evolution of humanity. The years that followed the Soviet implosion, though, were marred by trauma and conflict. In essence, however, the idea that the world was homogenizing under the liberal credo was correct.

    On the one hand, indeed, the multilateral institutions, systems of alliances, and rules of the game created by the United States shortly after World War II, or in subsequent years, provided a global governance architecture. A rules-based liberal international order established itself across the world. On the other hand, the so-called Washington Consensus became a universally applied recipe for the market economy. This homogenization process was helped by two additional factors. First, the seven largest economies after the U.S. were industrialized democracies firmly supportive of its leadership. Second, globalization in its initial stage acted as a sort of planetary transmission belt that spread the symbols, customs, and values of the leading power.

             The new millennium thus arrived with an all-powerful America, whose liberal imprint was homogenizing the planet. The United States had indeed attained global hegemony, and Fukuyama’s end of history thesis seemed to reflect the emerging reality.

    But things turned out to be more complex than this, and the history of the end of history proved to be a brief one. Within a few years, the global “Pax Americana” began to be challenged by a powerful rival that seemed to have emerged out of the blue: China. How had this happened?

    At the beginning of the 1970s, Beijing and Washington had reached a simple but transformative agreement. Henceforward, the United States would recognize the Chinese Communist regime as China’s legitimate government, while China would no longer seek to constrain America’s leadership in Asia. By extension, this provided China with an economic opening to the West, although it would be only after Deng Xiaoping’s rise to power that the real meaning of the latter became evident.

    In spite of the multiple challenges encountered along the way, both the United States and China made a deliberate effort to remain on the road opened in 1972. Their agreement proved to be not only highly resilient, but able to evolve amid changing circumstances. The year 2008, however, became an inflection point in their relationship. From then onwards, everything began to unravel. Why was that so?

    The answer may be found in a notion familiar to the Chinese mentality, but alien to the Western one – the shi. This concept can be synthesized as an alignment of forces able to shape a new situation. More generally, it encompasses ideas such as momentum, strategic advantage, or the propensity for things to happen. What, then, was the alignment of forces that materialized in that particular year? There were straightforward answers to that question: the U.S.’s financial excesses, which produced the world’s worst financial crisis since 1929; Beijing’s sweeping efficiency in overcoming the risk of contagion from this crisis; China’s capacity to maintain its economic growth, which helped prevent a major global economic downturn; and, concomitantly, the highly successful Beijing Olympic Games of that year, which provided the country with a tremendous self-esteem boost.

    The United States, indeed, had proven not to be the colossus that the Chinese had presumed, while China itself turned out to be much stronger than assumed. This implied that the U.S. was passing its peak as a superpower, and that the curves of the Chinese ascent and the American decline were about to cross. Deng Xiaoping’s advice to future generations of leaders had emphasized the need to keep a low profile while waiting to attain a position of strength. In Chinese eyes, 2008 seemed to show that China was muscular enough to act more boldly. Moreover, with the shi in motion, the momentum had to be exploited.

    Beijing’s post-2008 assertiveness became much bolder after Xi Jinping’s rise to power in 2012–2013. China, in his mind, was ready to contend for global leadership. Within its own region, moreover, China’s centrality and its perception of the U.S. as an alien power had to translate into pushing out America’s presence.

    Challenged by China, Washington reacted forcefully. Chinese perceptions ran counter to the fact that the U.S. had been a major power in East Asia since 1854, a presence that had cost countless American lives in four wars. Moreover, safeguarding freedom of navigation in the South China Sea, a key principle of the rules-based liberal international order, provided a strong sense of staying power. This was reinforced by the fact that America’s global leadership was also at stake, which for reputational reasons required not yielding its presence in the area. The containment of Beijing’s ascendancy thus became a priority for Washington.

    However, accommodating two behemoths that feel entitled to pre-eminence is a daunting task. Especially so when one of them feels under threat of exclusion from the region, and the other feels that its emergence is being constrained. On top of that, both remain prisoners of their history and of their national myths. This makes them incapable of looking at the facts without distorting them through the subjective lenses of their perceived sense of mission and superiority.

    Under those circumstances, war is an ongoing risk. But if war is a possibility, Cold War is already a fact. This implies a multifaceted struggle in which geopolitics, technology, trade, finance, alliances, and warfare capabilities are all involved. And even if important convergent interests between them remain in place, ties are being cut by the day. As a matter of fact, if in the past economic interdependence helped shield against geopolitical dissonance, the opposite is the case today. Indeed, a whole array of zero-sum geopolitical controversies is rapidly curtailing economic links.

    The U.S., particularly during the Biden administration, chose to contain China through a regional architecture of alliances and by linking NATO with Indo-Pacific problems and selective regional allies. The common denominator binding them together is the preservation of the rules-based liberal international order – an order threatened by China’s geostrategic regional expansionism.


    However, China itself is not short of allies. A revisionist axis that aims at ending the rules-based liberal international order has taken shape. It seeks to roll back American power and to create its own spheres of influence. This axis represents a competing center of gravity to which countries dissatisfied with the prevailing international order can turn. Together with China, two additional Asia-Pacific powers, Russia and North Korea, form part of this bloc.

    Trump’s return to the White House might change the prevailing regional configuration of factors. Although more challenging to Beijing from a trade perspective, he could substantially weaken not only the rules-based liberal international order, but also the architecture of alliances that contains China. The former, because the illiberal populism that he represents is at odds with the liberal order; the latter, not only because he could take the U.S. out of NATO, but because his transactional approach to foreign policy, which favors trade and money over geopolitics, could turn alliances upside down.

    The rules-based liberal international order, which became universal over the ashes of the Soviet Union, could now be facing its sunset. This is not only because its main challenger, China, may strengthen its geopolitical position amid the disruption of the rival alliance system, but, more significantly, because the U.S. itself may cease to champion it.

    Feature Image Credit: www.brookings.edu

     

  • UARCs: The American Universities that Produce Warfighters

    UARCs: The American Universities that Produce Warfighters

    America’s military-industrial complex (MIC) has grown enormously powerful and is fully integrated with the US Department of Defense in furthering America’s global influence and control. Many American universities have become research centres for the MIC. Similarly, American companies run research programs in leading universities and educational institutions across the world, for example in a few IITs in India. In the article below, Dr Sylvia J. Martin explores the role of University Affiliated Research Centers (UARCs) in the U.S. military-industrial complex. UARCs are institutions embedded within universities, designed to conduct research for the Department of Defense (DoD) and other military agencies. The article highlights how UARCs blur the lines between academic research and military objectives, raising ethical questions about the use of university resources for war-related activities. These centres focus on key areas such as nanotechnology, immersive simulations, and weapons systems. For example, the University of Southern California’s Institute for Creative Technologies (ICT) was created to develop immersive training simulations for soldiers, drawing from both science and entertainment, while universities like Johns Hopkins and MIT are involved in anti-submarine warfare and soldier mobility technologies. Sylvia Martin critically examines the consequences of these relationships, particularly their impact on academic freedom and the potential prioritization of military needs over civilian research. She flags the resistance faced at some universities, like the University of Hawai’i, where concerns about militarisation, environmental damage, and indigenous rights sparked protests against their UARCs. Because UARCs are substantially funded, they become a source of major influence on the university. Universities, traditionally seen as centres of open, unbiased inquiry, may become aligned with national security objectives, further entrenching the MIC within academia.

    This article was published earlier in Monthly Review.

    TPF Editorial Team

    UARCs: The American Universities that Produce Warfighters

    Dr Sylvia J Martin

    The University of Southern California (USC) has been one of the most prominent campuses for student protests against Israel’s campaign in Gaza, with students demanding that their university “fully disclose and divest its finances and endowment from companies and institutions that profit from Israeli apartheid, genocide, and occupation in Palestine, including the US Military and weapons manufacturing.”

    Students throughout the United States have called for their universities to disclose and divest from defense companies with ties to Israel in its onslaught on Gaza. While scholars and journalists have traced ties between academic institutions and U.S. defense companies, it is important to point out that relations between universities and the U.S. military are not always mediated by the corporate industrial sector.1 American universities and the U.S. military are also linked directly and organizationally, as seen with what the Department of Defense (DoD) calls “University Affiliated Research Centers (UARCs).” UARCs are strategic programs that the DoD has established at fifteen different universities around the country to sponsor research and development in what the Pentagon terms “essential engineering and technology capabilities.”2 Established in 1996 by the Under Secretary of Defense for Research and Engineering, UARCs function as nonprofit research organizations at designated universities, aimed at ensuring that those capabilities are available on demand to the DoD’s military agencies. While there is a long history of scientific and engineering collaboration between universities and the U.S. government dating back to the Second World War, UARCs reveal the breadth and depth of today’s military-university complex, illustrating how militarized knowledge production emerges from within the academy and without corporate involvement. UARCs demonstrate one of the less visible yet vital ways in which these students’ institutions help perpetuate the cycle of U.S.-led wars and empire-building.

    The University of Southern California (USC) has been one of the most prominent campuses for student protests against Israel’s campaign in Gaza, with students demanding that their university “fully disclose and divest its finances and endowment from companies and institutions that profit from Israeli apartheid, genocide, and occupation in Palestine, including the US Military and weapons manufacturing.”3  USC also happens to be home to one of the nation’s fifteen UARCs, the Institute for Creative Technologies (ICT), which describes itself as a “trusted advisor to the DoD.”4  ICT is not mentioned in the students’ statement, yet the institute—and UARCs at other universities—are among the many moving parts of the U.S. war machine nestled within higher education institutions, and a manifestation of the Pentagon’s “mission creep” that encompasses the arts as well as the sciences.5

    Institute for Creative Technologies – military.usc.edu

    Significantly, ICT’s remit to develop dual-use technologies (which claim to provide society-wide “solutions”) entails nurturing what the Institute refers to as “warfighters” for the battlefields of the future and, in doing so, increasing warfighters’ “lethality.”6 Established by the DoD in 1999 to pursue advanced modelling, simulation, and training, ICT’s basic and applied research produces prototypes, technologies, and know-how that have been deployed for the U.S. Army, Navy, and Marine Corps. From artificial intelligence-driven virtual humans deployed to teach military leadership skills to futuristic 3D spatial visualization and terrain capture to prepare these military agencies for their operational environments, ICT specializes in immersive training programs for “mission rehearsal,” as well as tools that contribute to the digital innovations of global warmaking.7  Technologies and programs developed at ICT were used by U.S. troops in the U.S.-led Global War on Terror. One such program is UrbanSim, a virtual training application initiated in 2006 and designed to improve army commanders’ skills for conducting counterinsurgency operations in Iraq and Afghanistan, delivering fictional scenarios through a gaming experience.8  On top of all the warfighter preparation that USC’s institute researches, develops, prototypes, and deploys, ICT boasts of having generated over two thousand peer-reviewed academic publications.

    I encountered ICT’s work while conducting anthropological research on the relationship between the U.S. military and the media entertainment industry in Los Angeles.9  The Institute is located not on the university’s main University Park campus but by the coast, in Playa Vista, alongside offices for Google and Hulu. Although ICT is an approximately thirty-minute drive from USC’s main campus, this hub for U.S. warfighter lethality was enabled by an interdisciplinary collaboration with what was then called the School of Cinema-Television and the Annenberg School for Communications, and it remains entrenched within USC’s academic ecosystem, designated as a unit of its Viterbi School of Engineering, which is located on the main campus.10  Given the presence and power of UARCs at U.S. universities, we can reasonably ask: What is the difference between West Point Military Academy and USC, a supposedly civilian university? The answer, it seems, is not a difference in kind, but in degree. Indeed, universities with UARCs appear to be veritable military academies.

    What Are UARCs?

    UARCs are similar to federally funded research centres such as the RAND Corporation; however, UARCs are required to be situated within a university, which can be public or private.11  The existence of UARCs is not classified information, but their goals, projects, and implications may not be fully evident to the student bodies or university communities in which they are embedded, and there are differing levels of transparency among them about their funding. DoD UARCs “receive sole source funds, on average, exceeding $6 million annually,” and may receive other funding in addition to that from their primary military or federal sponsor, which may also differ among the fifteen UARCs.12  In 2021, funding from federal sources for UARCs ranged “from as much as $831 million for the Johns Hopkins University Applied Physics Lab to $5 million for the University of Alaska Geophysical Detection of Nuclear Proliferation.”13  Individual UARCs are generally created after the DoD’s Under Secretary of Defense for Research and Engineering initiates a selection process for the proposed sponsor, and typically are reviewed by their primary sponsor every five years for renewed contracts.14  A few UARCs, such as Johns Hopkins University’s Applied Physics Lab and the University of Texas at Austin’s Applied Research Lab, originated during the Second World War for wartime purposes but were designated as UARCs in 1996, the year the DoD formalized that status.15

    UARCs are supposed to provide their sponsoring agency and, ultimately, the DoD, access to what they deem “core competencies,” such as MIT’s development of nanotechnology systems for the “mobility of the soldier in the battlespace” and the development of anti-submarine warfare and ballistic and guided missile systems at Johns Hopkins University.16  Significantly, UARCs are mandated to maintain a close and enduring relationship with their military or federal sponsor, such as that of ICT with the U.S. Army. These close relationships are intended to facilitate the UARCs’ “in-depth knowledge of the agency’s research needs…access to sensitive information, and the ability to respond quickly to emerging research areas.”17  Such an intimate partnership for institutions of higher learning with these agencies means that the line between academic and military research is (further) blurred. With the interdisciplinarity of researchers and the integration of PhD students (and even undergraduate interns) into UARC operations such as USC’s ICT, the question of whether the needs of the DoD are prioritized over those of an ostensibly civilian institute of higher learning practically becomes moot: the entanglement is naturalized by a national security logic.

    Table 1 UARCs: The American Universities that Produce Warfighters

    Primary Sponsor | University | UARC | Date of Designation (*original year established)
    Army | University of Southern California | Institute for Creative Technologies | 1999
    Army | Georgia Institute of Technology | Georgia Tech Research Institute | 1996 (*1995)
    Army | Massachusetts Institute of Technology | Institute for Soldier Nanotechnologies | 2002
    Army | University of California, Santa Barbara | Institute for Collaborative Biotechnologies | 2003
    Navy | Johns Hopkins University | Applied Physics Laboratory | 1996 (*1942)
    Navy | Pennsylvania State University | Applied Research Laboratory | 1996 (*1945)
    Navy | University of Texas at Austin | Applied Research Laboratories | 1996 (*1945)
    Navy | University of Washington | Applied Physics Laboratory | 1996 (*1943)
    Navy | University of Hawai’i | Applied Research Laboratory | 2004
    Missile Defense Agency | Utah State University | Space Dynamics Laboratory | 1996
    Office of the Under Secretary of Defense for Intelligence and Security | University of Maryland, College Park | Applied Research Laboratory for Intelligence and Security | 2017 (*2003)
    Under Secretary of Defense for Research and Engineering | Stevens Institute of Technology | Systems Engineering Research Center | 2008
    U.S. Strategic Command | University of Nebraska | National Strategic Research Institute | 2012
    Department of the Assistant Secretary of Defense (Threat Reduction and Control) | University of Alaska Fairbanks | Geophysical Detection of Nuclear Proliferation | 2018
    Air Force | Howard University | Research Institute for Tactical Autonomy | 2023
    Sources: Joan Fuller, “Strategic Outreach—University Affiliated Research Centers,” Office of the Under Secretary of Defense (Research and Engineering), June 2021, 4; C. Todd Lopez, “Howard University Will Be Lead Institution for New Research Center,” U.S. Department of Defense News, January 23, 2023.

    A Closer Look

    The UARC at USC is unique among UARCs in that, from its inception, the Institute explicitly targeted the artistic and humanities-driven resources of the university. ICT opened near the Los Angeles International Airport, in Marina del Rey, with a $45 million grant, tasked with developing a range of immersive technologies. According to the DoD, the core competencies that ICT offers include immersion, scenario generation, computer graphics, entertainment theory, and simulation technologies; these competencies were sought as the DoD decided that it needed to create more visually and narratively compelling and interactive learning environments for the gaming generation.18  USC was selected by the DoD not just because of the university’s work in science and engineering but also because of its close connections to the media entertainment industry, which USC fosters through its renowned School of Cinematic Arts (formerly the School of Cinema-Television), thereby providing the military access to a wide range of storytelling talents, from screenwriting to animation. ICT later moved to nearby Playa Vista, part of Silicon Beach, where the military presence also increased; by April 2016, the U.S. Army Research Lab West had opened next door to ICT as another collaborative partner, further integrating the university into military work.19  This university-military partnership results in “prototypes that successfully transition into the hands of warfighters”; UARCs such as ICT are thus rendered a crucial link in what graduate student worker Isabel Kain of the Researchers Against War collective calls the “military supply chain.”20

    universities abandon any pretence to neutrality once they are assigned UARCs, as opponents at the University of Hawai’i at Mānoa (UH Mānoa) asserted when a U.S. Navy-sponsored UARC was designated for their campus in 2004. UH Mānoa faculty, students, and community members repeatedly expressed their concerns about the ethics of military research conducted on their campus, including the threat of removing “researchers’ rights to refuse Navy directives”

    One of ICT’s founders, in his account of the Institute’s origin story, touted USC as “neutral ground” from which the U.S. Army could help innovate military training.21  Yet, universities abandon any pretence to neutrality once they are assigned UARCs, as opponents at the University of Hawai’i at Mānoa (UH Mānoa) asserted when a U.S. Navy-sponsored UARC was designated for their campus in 2004. UH Mānoa faculty, students, and community members repeatedly expressed their concerns about the ethics of military research conducted on their campus, including the threat of removing “researchers’ rights to refuse Navy directives.”22  The proposed UARC at UH Mānoa arrived within a context of university community resistance to U.S. imperialism and militarism, which have inflicted structural violence on Hawaiian people, land, and waters, from violent colonization to the 1967 military testing of lethal sarin gas in a forest reserve.23 Hawai’i serves as the base of the military’s U.S. Indo-Pacific Command, where “future wars are in development,” professor Kyle Kajihiro of UH Mānoa emphasizes.24

    Writing in Mānoa Now about the proposed UARC in 2005, Leo Azambuja opined that “it seems like ideological suicide to allow the Navy to settle on campus, especially the American Navy.”25 A key player in the Indo-Pacific Command, the U.S. Navy has long had a contentious relationship with Indigenous Hawaiians, most recently with the 2021 fuel leakage from the Navy’s Red Hill fuel facility, resulting in water contamination levels that the Hawai’i State Department of Health referred to as “a humanitarian and environmental disaster.”26  Court depositions have since revealed that the Navy knew about the fuel leakage into the community’s drinking water but waited over a week to inform the public, even as people became ill, making opposition to its proposed UARC unsurprising, if not requisite.27  The detonation of bombs and sonar testing at the biennial international war games that the U.S. Navy has hosted in Hawai’i since 1971 have also damaged precious marine life and culturally sacred ecosystems, with the sonar tests causing whales to “swim hundreds of miles, rapidly change their depth (sometimes leading to bleeding from the eyes and ears), and even beach themselves to get away from the sounds of sonar.”28  Within this context, one of the proposed UARC’s core competencies was “understanding of [the] ocean environment.”29

    In a flyer circulated by DMZ Hawaii, UH Mānoa organizers called for universities to serve society and “not be used by the military to further their war aims or to perfect ways of killing or controlling people.”30  Recalling efforts in previous decades on U.S. campuses to thwart the encroachment of military research, protestors raised questions about the UARC’s accountability and transparency regarding weapons production within the UH community. UH Mānoa’s strategic plan during the period in which the Navy’s UARC was proposed and executed (2002–2010) called for recognition of “our kuleana (responsibility) to honour the Indigenous people and promote social justice for Native Hawaiians” and for “restoring and managing the Mānoa stream and ecosystem”—priorities that the actions of the U.S. Navy disregarded.31  The production of knowledge for naval weapons under the auspices of this public, land-grant institution disrupts any pretension to neutrality the university may profess.

    while the UH administration claimed that the proposed UARC would not accept any classified research for the first three years, “the base contract assigns ‘secret’ level classification to the entire facility, making the release of any information subject to the Navy’s approval,” raising concerns about academic freedom, despite the fanfare over STEM and rankings

    Further resistance to the UARC designation came from the UH Mānoa community: from April 28 to May 4, 2005, the SaveUH/StopUARC Coalition staged a six-day campus sit-in protest, and later that year, the UH Mānoa Faculty Senate voted 31–18 in favour of asking the administration to reject the UARC designation.32  According to an official statement released by UH Mānoa on January 23, 2006, at a university community meeting with the UH Regents that year, testimony from opponents of the UARC outnumbered that from supporters, who, reflecting the neoliberal turn of universities, expressed hope that their competitiveness in science, technology, engineering, and mathematics (STEM) would advance with a UARC designation and benefit the university’s ranking.33  Yet in 2007, writing in DMZ Hawaii, Kajihiro clarified that while the UH administration claimed that the proposed UARC would not accept any classified research for the first three years, “the base contract assigns ‘secret’ level classification to the entire facility, making the release of any information subject to the Navy’s approval,” raising concerns about academic freedom, despite the fanfare over STEM and rankings.34  The campus resistance campaign was nevertheless unsuccessful, and in September 2007, the UH Regents approved the Navy UARC designation. By 2008, the U.S. Navy-sponsored Applied Research Laboratory UARC at UH Mānoa had opened.

    “The Military Normal”

    Yet with the U.S. creation of the national security state in 1947 and its pursuit of techno-nationalism since the Cold War, UARCs are direct pipelines to the intensification of U.S. empire

    UH Mānoa’s rationale for resistance raises the question: how could this university—indeed, any university—impose this military force onto its community? Are civilian universities within the United States merely an illusion, a deflection from education in the service of empire? What anthropologist Catherine Lutz in 2009 called the ethos of “the military normal” in U.S. culture toward its counterinsurgency wars in Iraq and Afghanistan—the commonsensical, even prosaic perspective on the inevitability of endless U.S.-led wars disseminated by U.S. institutions, especially mainstream media—helps explain the attitude toward this particular formalized capture of the university by the DoD.35  Defense funding has for decades permeated universities, but UARCs perpetuate the military normal by allowing the Pentagon to insert itself through research centres and institutes in the (seemingly morally neutral) name of innovation, within a broader neoliberal framework of universities as “engines,” “hubs,” or “anchor” institutions that offer to “leverage” their various forms of capital toward regional development in ways that often escape sustained scrutiny or critique.36  The normalization is achieved in some cases because UARCs such as ICT strive to serve civilian needs as well as military ones with dual-use technologies and tools. Yet with the U.S. creation of the national security state in 1947 and its pursuit of techno-nationalism since the Cold War, UARCs are direct pipelines to the intensification of U.S. empire. Some of the higher-profile virtual military instructional programs developed at ICT at USC, such as its Emergent Leader Immersive Training Environment (ELITE) system, which provides immersive role-playing to train army leaders for various situations in the field, are funnelled to explicitly military-only learning institutions such as the Army Warrant Officer School.37

    The fifteenth and most recently created UARC, at Howard University in 2023—the first such designation for one of the historically Black colleges and universities (HBCUs)—boasts STEM inclusion

    The military normal generates a sense of moral neutrality, even moral superiority. The logic of the military normal, the offer of STEM education and training, especially through undergraduate internships and graduate training, and, of course, funding not only rationalize the implementation of UARCs, but ennoble it. The fifteenth and most recently created UARC, at Howard University in 2023—the first such designation for one of the historically Black colleges and universities (HBCUs)—boasts STEM inclusion.38  Partnering with the U.S. Air Force, Howard University’s UARC is receiving a five-year, $90 million contract to conduct AI research and develop tactical autonomy technology. Its Research Institute for Tactical Autonomy (RITA) leads a consortium of eight other HBCUs. As with the University of Hawai’i, STEM advantages are touted by the UARC, with RITA’s reach expanding in other ways: it plans to supplement STEM education for K–12 students to “ease their path to a career in the fields of artificial intelligence, cybersecurity, tactical autonomy, and machine learning,” noting that undergraduate and graduate students will also be able to pursue fully funded research opportunities at their UARC. With the corporatization of universities, neoliberal policies prioritize STEM for practical reasons, including the pursuit of university rankings and increases in both corporate and government funding. This fits well with increased linkages to the defence sector, which offers capital, jobs, technology, and gravitas. In a critique of Howard University’s central role for the DoD through its new UARC, Erica Caines at Black Agenda Report invokes the “legacies of Black resistance” at Howard University in a call to reduce “the state’s use of HBCUs.”39  In another response to Howard’s UARC, an editorial in Black Agenda Report draws upon activist Kwame Ture’s (Stokely Carmichael’s) autobiography for an illuminating discussion of his oppositional approach to the required military training and education at Howard University during his time there.40

    With their respectability and resources, universities, through UARCs, provide ideological cover for U.S. war-making and imperialistic actions, offering up student labour at undergraduate and graduate levels in service of that cover. When nearly eight hundred U.S. military bases around the world are cited as evidence of U.S. empire and the DoD requires research facilities to be embedded within places of higher learning, it is reasonable to expect that university communities—ostensibly civilian institutions—ask questions about UARC goals and operations, and how they provide material support and institutional gravitas to these military and federal agencies.41  In the case of USC, ICT’s stated goal of enhancing warfighter lethality runs counter to current USC student efforts to strive for more equitable conditions on campus and within its larger community (for example, calls to end “land grabs,” and “targeted repression and harassment of Black, Brown and Palestinian students and their allies on and off campus”) as well as other reductions in institutional harms.42  The university’s “Minor in Resistance to Genocide”—a program pursued by USC’s discarded valedictorian Asna Tabassum—also serves as mere cover, a façade, alongside USC’s innovations for warfighter lethality.

    the Hopkins Justice Collective at Johns Hopkins University recently proposed a demilitarization process to its university’s Public Interest Investment Advisory Committee that cited Johns Hopkins’s UARC, Applied Physics Lab, as being the “sole source” of DoD funding for the development and testing of AI-guided drone swarms used against Palestinians in 2021

    Many students and members of U.S. society want to connect the dots, as evident from the nationwide protests and encampments, and a push from within the academy to examine the military supply chain is intensifying. In addition to Researchers Against War members calling out the militarized research that flourishes in U.S. universities, the Hopkins Justice Collective at Johns Hopkins University recently proposed a demilitarization process to its university’s Public Interest Investment Advisory Committee that cited Johns Hopkins’s UARC, Applied Physics Lab, as being the “sole source” of DoD funding for the development and testing of AI-guided drone swarms used against Palestinians in 2021.43  Meanwhile, at UH Mānoa, the struggle continues: in February 2024, the Associated Students’ Undergraduate Senate approved a resolution requesting that the university’s Board of Regents terminate UH’s UARC contract, noting that UH’s own president is the principal investigator for a $75 million High-Performance Computer Center for the U.S. Air Force Research Laboratory that was contracted by the university’s UARC, Applied Research Laboratory.44  Researchers Against War organizing, the Hopkins Justice Collective’s proposal, the undaunted UH Mānoa students, and others help pinpoint the flows of militarized knowledge—knowledge that is developed by UARCs to strengthen warfighters from within U.S. universities, through the DoD, and to different parts of the world.45

    Notes

    1. Jake Alimahomed-Wilson et al., “Boeing University: How the California State University Became Complicit in Palestinian Genocide,” Mondoweiss, May 20, 2024; Brian Osgood, “U.S. University Ties to Weapons Contractors Under Scrutiny Amid War in Gaza,” Al Jazeera, May 13, 2024.
    2. “Collaborate with Us: University Affiliated Research Center,” DevCom Army Research Laboratory, arl.devcom.army.mil.
    3. USC Divest From Death Coalition, “Divest From Death USC News Release,” April 24, 2024.
    4. USC Institute for Creative Technologies, “ICT Overview Video,” YouTube, 2:52, December 12, 2023.
    5. Gordon Adams and Shoon Murray, Mission Creep: The Militarization of U.S. Foreign Policy? (Washington, DC: Georgetown University Press, 2014).
    6. USC Institute for Creative Technologies, “ICT Overview Video”; USC Institute for Creative Technologies, Historical Achievements: 1999–2019 (Los Angeles: University of Southern California, May 2021), ict.usc.edu.
    7. Yuval Abraham, “‘Lavender’: The AI Machine Directing Israel’s Bombing Spree in Gaza,” +972 Magazine.
    8. “UrbanSim,” USC Institute for Creative Technologies.
    9. Sylvia J. Martin, “Imagineering Empire: How Hollywood and the U.S. National Security State ‘Operationalize Narrative,’” Media, Culture & Society 42, no. 3 (April 2020): 398–413.
    10. Paul Rosenbloom, “Writing the Original UARC Proposal,” USC Institute for Creative Technologies, March 11, 2024.
    11. Susannah V. Howieson, Christopher T. Clavin, and Elaine M. Sedenberg, “Federal Security Laboratory Governance Panels: Observations and Recommendations,” Institute for Defense Analyses—Science and Technology Policy Institute, Alexandria, Virginia, 2013, 4.
    12. OSD Studies and Federally Funded Research and Development Centers Management Office (FFRDC), Engagement Guide: Department of Defense University Affiliated Research Centers (UARCs) (Alexandria, Virginia: OSD Studies and FFRDC Management Office, April 2013), 5.
    13. Christopher V. Pece, “Federal Funding to University Affiliated Research Centers Totaled $1.5 Billion in FY 2021,” National Center for Science and Engineering Statistics, National Science Foundation, 2024, ncses.nsf.gov.
    14. “UARC Customer Funding Guide,” USC Institute for Creative Technologies, March 13, 2024.
    15. “Federally Funded Research and Development Centers (FFRDC) and University Affiliated Research Centers (UARC),” Department of Defense Research and Engineering Enterprise, rt.cto.mil.
    16. OSD Studies and FFRDC Management Office, Engagement Guide.
    17. Congressional Research Service, “Federally Funded Research and Development Centers (FFRDCs): Background and Issues for Congress,” April 3, 2020, 5.
    18. OSD Studies and FFRDC Management Office, Engagement Guide, 18.
    19. “Institute for Creative Technologies (ICT),” USC Military and Veterans Initiatives, military.usc.edu.
    20. USC Institute for Creative Technologies, Historical Achievements: 1999–2019, 2; Linda Dayan, “‘Starve the War Machine’: Workers at UC Santa Cruz Strike in Solidarity with Pro-Palestinian Protesters,” Haaretz, May 21, 2024.
    21. Richard David Lindholm, That’s a 40 Share!: An Insider Reveals the Origins of Many Classic TV Shows and How Television Has Evolved and Really Works (Pennsauken, New Jersey: Book Baby, 2022).
    22. Leo Azambuja, “Faculty Senate Vote Opposing UARC Preserves Freedom,” Mānoa Now, November 30, 2005.
    23. Deployment Health Support Directorate, “Fact Sheet: Deseret Test Center, Red Oak, Phase I,” Office of the Assistant Secretary of Defense (Health Affairs), health.mil.
    24. Ray Levy Uyeda, “U.S. Military Activity in Hawai’i Harms the Environment and Erodes Native Sovereignty,” Prism Reports, July 26, 2022.
    25. Azambuja, “Faculty Senate Vote Opposing UARC Preserves Freedom.”
    26. Kyle Kajihiro, “The Militarizing of Hawai’i: Occupation, Accommodation, Resistance,” in Asian Settler Colonialism, Jonathon Y. Okamura and Candace Fujikane, eds. (Honolulu: University of Hawai’i Press, 2008), 170–94; “Hearings Officer’s Proposed Decision and Order, Findings of Fact, and Conclusions of Law,” Department of Health, State of Hawaii vs. United States Department of the Navy, no. 21-UST-EA-02 (December 27, 2021).
    27. Christina Jedra, “Red Hill Depositions Reveal More Details About What the Navy Knew About Spill,” Honolulu Civil Beat, May 31, 2023.
    28. “Does Military Sonar Kill Marine Wildlife?,” Scientific American, June 10, 2009.
    29. Joan Fuller, “Strategic Outreach—University Affiliated Research Centers,” Office of the Under Secretary of Defense (Research and Engineering), June 2021, 4.
    30. DMZ Hawaii, “Save Our University, Stop UARC,” dmzhawaii.org.
    31. University of Hawai’i at Mānoa, Strategic Plan 2002–2010: Defining Our Destiny, 8–9.
    32. Craig Gima, “UH to Sign Off on Navy Center,” Star Bulletin, May 13, 2008.
    33. University of Hawai’i at Mānoa, “Advocates and Opponents of the Proposed UARC Contract Present Their Case to the UH Board of Regents,” press release, January 23, 2006.
    34. Kyle Kajihiro, “The Secret and Scandalous Origins of the UARC,” DMZ Hawaii, September 23, 2007.
    35. Catherine Lutz, “The Military Normal,” in The Counter-Counterinsurgency Manual, or Notes on Demilitarizing American Society, The Network of Concerned Anthropologists, ed. (Chicago: Prickly Paradigm Press, 2009).
    36. Anne-Laure Fayard and Martina Mendola, “The 3-Stage Process That Makes Universities Prime Innovators,” Harvard Business Review, April 19, 2024; Paul Garton, “Types of Anchor Institution Initiatives: An Overview of University Urban Development Literature,” Metropolitan Universities 32, no. 2 (2021): 85–105.
    37. Randall Hill, “ICT Origin Story: How We Built the Holodeck,” Institute for Creative Technologies, February 9, 2024.
    38. Brittany Bailer, “Howard University Awarded $90 Million Contract by Air Force, DoD to Establish First-Ever University Affiliated Research Center Led by an HBCU,” The Dig, January 24, 2023, thedig.howard.edu.
    39. Erica Caines, “Black University, White Power: Howard University Covers for U.S. Imperialism,” Black Agenda Report, February 1, 2023.
    40. Editors, “Howard University: Every Black Thing and Its Opposite, Kwame Ture,” The Black Agenda Review (Black Agenda Report), February 1, 2023.
    41. David Vine, Base Nation: How U.S. Military Bases Abroad Harm America and the World (New York: Metropolitan Books, 2015).
    42. USC Divest from Death Coalition, “Divest From Death USC News Release”; “USC Renames VKC, Implements Preliminary Anti-Racism Actions,” Daily Trojan, June 11, 2020.
    43. Hopkins Justice Collective, “PIIAC Proposal,” May 4, 2024.
    44. Bronson Azama to bor.testimony@hawaii.edu, “Testimony for 2/15/24,” February 15, 2024, University of Hawai’i; “UH Awarded Maui High Performance Computer Center Contract Valued up to $75 Million,” UH Communications, May 1, 2020.
    45. Isabel Kain and Becker Sharif, “How UC Researchers Began Saying No to Military Work,” Labor Notes, May 17, 2024.

     

    Feature Image: Deep Space Advanced Radar Capability (DARC) at the Johns Hopkins Applied Physics Laboratory, a UARC facility – www.jhuapl.edu

  • Social automation and APT attributions in National Cybersecurity

    Social automation and APT attributions in National Cybersecurity

    Advanced Persistent Threats (APTs) are a prime concern in the formulation and implementation of national cybersecurity policies. These threats often involve complex social engineering tactics, which are undergoing a quantitative and qualitative revolution with burgeoning AI capabilities. However, the attribution of APT activities can be mired in technical and political considerations. We analysed 142 APT groups’ attributions, along with their use of social interaction vectors, to ascertain the nature of the risk environment and the operational threat landscape of AI and social automation. We find that close to 80% of APT activities can be attributed to merely 20% of key nation-state threat actors competing with each other. We further discuss the implications of this political threat distribution for national cybersecurity environments.

    Keywords: cybersecurity, AI policy, advanced persistent threats, automation, social engineering
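
    To make the 80/20 concentration figure above concrete, the short sketch below shows one way such a share can be computed from per-group attribution counts. This is a minimal illustration using randomly generated, hypothetical counts; it does not reproduce the paper’s dataset or method.

        # Minimal sketch (illustrative only): what share of total APT activity
        # falls to the most active 20% of groups, given per-group counts.
        # The counts here are hypothetical stand-ins, not the paper's data.
        import random

        random.seed(7)
        NUM_GROUPS = 142  # the number of APT groups analysed in the paper

        # Draw heavy-tailed activity counts so that a few groups dominate.
        counts = sorted(
            (int(random.paretovariate(1.2) * 10) + 1 for _ in range(NUM_GROUPS)),
            reverse=True,
        )

        top_k = round(0.2 * NUM_GROUPS)  # the 28 most active groups
        share = sum(counts[:top_k]) / sum(counts)
        print(f"Top 20% of groups account for {share:.0%} of recorded activity")

    With heavy-tailed counts like these, the most active fifth of the groups typically accounts for the bulk of the total, which is the shape of distribution the abstract describes.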


    Read full paper here

  • The Asymmetric Indo-US Technology Agreement Points to India’s Weak R&D Culture

    The Asymmetric Indo-US Technology Agreement Points to India’s Weak R&D Culture

    Prime Minister Narendra Modi’s state visit to the USA resulted in four significant agreements, and the visit has been hailed as a very important gain for India and the Indo-US strategic partnership. The focus has been on the defence industrial and technology partnership. The media and many strategic experts see the agreements as major breakthroughs for technology transfers to India, reflecting a very superficial analysis and a lack of understanding of what technology transfer really entails. Professor Arun Kumar sees these agreements as a sign of India’s technological weakness and of the USA’s smart manoeuvring to position India as a long-term defence and technology client. The visit has yielded major business gains for the USA’s military-industrial complex and Silicon Valley. After the euphoria of the visit, Arun Kumar says it is time for India to carefully evaluate the relevant technology and strategic policy angles.

     

    The Indo-US joint statement issued a few days back says that the two governments will “facilitate greater technology sharing, co-development and co-production opportunities between the US and the Indian industry, government and academic institutions.” This has been hailed as the creation of a new technology bridge that will reshape relations between the two countries.

    General Electric (GE) is offering to give 80% of the technology required for the F414 jet engine, which will be co-produced with Hindustan Aeronautics Limited (HAL). In 2012, when the negotiations had started, GE had offered India 58%. India needs this engine for the Light Combat Aircraft Mark 2 (LCA Mk2) jets.

    The Indian Air Force has been using the LCA Mk1A but is not particularly happy with it and has asked for improvements. Kaveri, the indigenous engine for the LCA, under development since 1986, has not been successful: the engine has yet to power a first flight.

    So, India has been using the F404 engine, now 40 years old, in the LCA Mk1. The F414 is itself of 30-year-old vintage. GE is said to be offering 12 key technologies required in modern jet engine manufacturing which India has not been able to master over the last 40 years. The US has moved on to more powerful fighter jet engines with newer technologies, like the Pratt & Whitney F135 and the GE XA100.

    India will acquire the highly rated MQ-9B high-altitude long-endurance unmanned aerial vehicles. Micron Technology will set up a semiconductor assembly and test facility in Gujarat by 2024, where it is hoped that chips will eventually be manufactured. The investment deal of $2.75 billion is sweetened by the Union government covering 50% of the cost and Gujarat contributing another 20% – some 70% of the outlay in all. India is also being allowed into the US-led critical mineral club.

    There will be cooperation in space exploration and India will join the US-led Artemis Accords. ISRO and NASA will collaborate and an Indian astronaut will be sent to the International Space Station. INDUS-X will be launched for joint innovation in defence technologies. Global universities will be enabled to set up campuses in each other’s countries, whatever it may imply for atmanirbharta.

    What does it amount to?

    The list is impressive. But is it not one-sided, with India getting technologies it has not been able to develop by itself?

    Though the latest technology is not being given by the US, what is offered is superior to what India currently has. So, it is not just optics. But the real test will be how much India’s technological capability gets upgraded.

    Discussing the New Economic Policies launched in 1991, the diplomat got riled at my complaining that the US was offering us potato chips and fizz drinks but not high technology, and shouted, “Technology is a house we have built and we will never let you enter it.”

    What is being offered is a far cry from what one senior US diplomat had told me at a dinner in 1992. Discussing the New Economic Policies launched in 1991, the diplomat got riled at my complaining that the US was offering us potato chips and fizz drinks but not high technology, and shouted, “Technology is a house we have built and we will never let you enter it.”

    Everyone present there was stunned, but that was the reality.

    The issue is, does making a product in India mean a transfer of technology to Indians? Will it enable India to develop the next level of technology?

    India has assembled and produced MiG-21 jets since the 1960s and Su-30MKI jets since the 1990s. But most critical parts of the Su-30 come from Russia. India set up the Mishra Dhatu Nigam in 1973 to produce the critical alloys needed and production started in 1982, but self-sufficiency in critical alloys has not been achieved.

    So, production using borrowed technology does not mean absorption and development of the technology. Technology development requires ‘know-how’ and ‘know-why’.

    When an item is produced, we can see how it is produced and then copy that. But we also need to know how it is being done and, importantly, why it is being done in a certain way. Advanced technology owners do not share this knowledge with others.

    Technology is a moving frontier

    There are three levels of technology at any given point in time – high, intermediate, and low.

    The high technology of yesterday becomes the intermediate technology of today and the low technology of tomorrow. So, if India now produces what the advanced countries produced in the 1950s, it produces the low-technology products of today (say, coal and bicycles).

    If India produces what was produced in the advanced countries in the 1980s (say, cars and colour TVs), it produces the intermediate-technology products of today. This is not to say that no high technology is used in low- and intermediate-technology production.

    The high technologies of today are aerospace, nanotechnology, AI, microchips, and so on. India lags behind in these technologies – in producing passenger aircraft, sending people into space, making microchips, quantum computing, and so on.

    The advanced countries do not part with these technologies. The World Trade Organisation, with its provisions for TRIPS and TRIMS (Trade-Related Aspects of Intellectual Property Rights and Trade-Related Investment Measures), consolidated the hold of advanced countries on intermediate and low technologies that can be acquired by paying royalties. But high technology is closely held and not shared.

    Advancements in technology

    So, how can nations that lag behind in technology catch up with advanced nations? The Nobel laureate Kenneth Arrow pointed to ‘learning by doing’ – the idea that in the process of production, one learns.

    So, the use of a product does not automatically lead to the capacity to produce it, unless the technology is absorbed and developed. That requires R&D.

    Schumpeter suggested that technology moves through stages of invention, innovation and adaptation. So, the use of a product does not automatically lead to the capacity to produce it, unless the technology is absorbed and developed. That requires R&D.

    Flying the latest Airbus A321neo does not mean we can produce it. Hundreds of MiG-21s and Su-30s have been produced in India, but we have not been able to produce fighter jet engines, and India’s Kaveri engine is not yet successful. We routinely use laptops and mobile phones, and they are also assembled in India, but that does not mean we can produce microchips or hard disks.

    Enormous resources are required to do R&D for advanced technologies and to produce them at an industrial scale. It requires a whole environment, which is often missing in developing countries and certainly in India.

    Production at an experimental level can take place. In 1973, I produced epitaxial thin films for my graduate work, but producing them at an industrial scale is a different ballgame. Experts have been brought in from the US, but that has not helped, since high technology is now largely a collective endeavour.

    For more complex technologies, say, aerospace or complex software, there is ‘learning by using’. When an aircraft crashes or malware infects software, it is the producer who learns from the failure, not the user. Again, the R&D environment is important.

    In brief, using a product does not mean we can produce it. Further, producing some items does not mean that we can develop them further. Both require R&D capabilities, which thrive in a culture of research. That is why developing countries suffer from the ‘disadvantage of a late start’.

    A need for a focus on research and development

    R&D culture thrives when innovation is encouraged. Government policies are crucial since they determine whether the free flow of ideas is enabled or not. Also of crucial importance is whether thought leaders or sycophants are appointed to lead institutions, whether criticism is welcomed or suppressed, and whether the government changes its policies often under pressure from vested interests.

    Unstable policies increase the risk of doing research, thereby undermining it and dissuading the industry. The result is the repeated import of technology.

    The software policy of 1987, by opening the sector up to international firms, undermined whatever little research was being carried out then and turned most companies in the field into importers of foreign products, and later into manpower suppliers. Some of these companies became highly profitable, but have they produced any world-class software that is used in daily life?

    Expenditure on R&D is an indication of the priority accorded to it. India spends a lowly 0.75% of its GDP on R&D. Neither the government nor the private sector prioritises it. Businesses find it easier to manipulate policies using cronyism. Those who are close to the rulers do not need to innovate, while others know that they will lose out. So, neither focuses on R&D.

    Innovation also depends on the availability of associated technologies – it creates an environment. An example is Silicon Valley, which has been at the forefront of innovation. It has also happened around universities where a lot of research capabilities have developed and synergy between business and academia becomes possible.

    This requires both parties to be attuned to research. In India, around some of the best-known universities, like Delhi University, Allahabad University and Jawaharlal Nehru University, coaching institutions have mushroomed rather than innovative businesses. None of these institutions is producing any great research, nor do businesses require research when they can import technology.

    A feudal setup

    Technology is an idea. In India, most authority figures don’t like being questioned. For instance, bright students asking questions are seen as troublemakers in most schools. The emphasis is largely on completing coursework for examinations. Learning is by rote, with most students unable to absorb the material taught.

    So, most examinations have standard questions requiring reproduction of what is taught in class rather than application of what is learned. My students at JNU pleaded against open-book exams. Our physics class of 1967 had toppers from various higher secondary boards; we had chosen physics over IIT. We rebelled against such teaching and initiated reform, but ultimately most of us left physics, a huge loss to the subject.

    Advances in knowledge require critiquing its existing state – that is, by challenging the orthodoxy and status quo. So, the creative independent thinkers who generate socially relevant knowledge also challenge the authorities at their institutions and get characterised as troublemakers. The authorities largely curb autonomy within the institution and that curtails innovativeness.

    In brief, dissent – which is the essence of knowledge generation – is treated as a malaise to be eliminated. These are the manifestations of a feudal and hierarchical society, which limits the advancement of ideas. Another crucial aspect of generating ideas is learning to accept failure. The Michelson–Morley experiment established that there is no aether only after hundreds of painstaking runs that had "failed" to detect it.

    Conclusion

    The willingness of the US to provide India with some technology without expecting reciprocity is gratifying. Such magnanimity has not been shown earlier and it is obviously for political (strategic) reasons. The asymmetry underlines our inability to develop technology on our own. The US is not giving India cutting-edge technologies that could make us a Vishwaguru.

    India needs to address its weakness in R&D. Co-producing a jet engine, flying drones or packaging and testing chips will not, any more than in the past, take us to the next level of technology, and we will remain dependent on imports later on.

    This can be corrected only through a fundamental change in our R&D culture that would enable technology absorption and development. That would require granting autonomy to academia and getting out of the feudal mindset that presently undermines scientific temper and hobbles our system of education.


    This article was published earlier in thewire.in

    Feature Image Credit: thestatesman.com


  • The Rivers Linking Scheme: Will it Work or End up a Disaster?

    The Rivers Linking Scheme: Will it Work or End up a Disaster?

    I keep hearing that Modiji is going to unveil the often-spoken-of and then shelved Rivers Link-Up Scheme as his grand vision to enrich the farmers and unite India. In a country where almost two-thirds of the agricultural acreage is rainfed, water is wealth. Telangana has shown the way: once India's driest region, it has in just eight years been transformed into another granary of India. Three years ago he had promised to double farmers' incomes by 2022 and has clearly failed. He now needs a big stunt. With elections due in 2024, he doesn't even have to show any delivery. A promise will do for now.

    This is also a Sangh Parivar favourite and I am quite sure the nation will once again set out to undertake history’s greatest civil engineering project by seeking to link all our major rivers. It will irretrievably change India. If it works, it will bring water to almost every parched inch of land and just about every parched throat in the land.

    On the other hand, if it doesn't work, Indian civilization as it exists even now might be headed the way of the Indus Valley or Mesopotamian civilizations, destroyed by a vengeful nature, for interfering with nature is a two-edged sword. If the Aswan High Dam turned the ravaging Nile into a saviour, the constant diversion of the rivers feeding the Aral Sea has turned it into a fast-receding and highly polluted inland sea, ranking among the world's greatest ecological disasters. Even in the USA, though the dams across the mighty Colorado have turned it into a ditch by the time it enters Mexico, California is still starved of water.

    I am not competent to comment on these matters and I will leave this debate for the technically competent and our perennial ecological Pooh-Bahs. But the lack of this very debate is cause for concern. It is true that the idea of linking up our rivers has been afloat for a long time. Sir Arthur Cotton was the first to propose it in the 1800s. The late KL Rao, considered by many to be an outstanding irrigation engineer and a former Union Minister for Irrigation, revived this proposal in the late 60s by suggesting the linking of the Ganges and Cauvery rivers. It was followed in 1977 by the more elaborate and gargantuan concept of garland canals linking the major rivers, thought up by a former airline pilot, Captain Dinshaw Dastur. Morarji Desai was an enthusiastic supporter of this plan.

    The return of Indira Gandhi in 1980 sent the idea back into dormancy, where it lay all these years, till President APJ Abdul Kalam revived it in his Independence Day eve address to the nation in 2002. It is well known that Presidents of India only read out what the Prime Ministers give them, and hence the ownership of Captain Dastur's original idea clearly vested with Atal Behari Vajpayee.

    That India has an acute water problem is widely known. Over sixty per cent of our cropped area is still rain-fed, abjectly dependent on the vagaries of the monsoon. The high incidence of poverty in certain regions largely coincides with the absence of assured irrigation, clearly suggesting that water for irrigation is integral to the elimination of poverty. In 1950-51, when Jawaharlal Nehru embarked on the great expansion of irrigation by building the "temples of modern India", laying great dams across our rivers at places like Bhakra Nangal, Damodar Valley and Nagarjunasagar, only 17.4%, or 21 million hectares, of the cropped area of 133 million hectares was irrigated. That figure rose to almost 35% by the late 80s, much of it a consequence of the government's huge investment in irrigation, amounting to almost Rs 50,000 crore.

    Ironically enough, this also coincided with the period when water and land revenue rates began to decline steeply, to touch today's near-zero levels. As in the case of power, it seems that once the activity ceased to be profitable to the State, investment too tapered off.

    The scheme is humongous. It will link the Brahmaputra and Ganges with the Mahanadi, Godavari and Krishna, which in turn will connect to the Pennar and Cauvery. On the other side of the country, it will connect the Ganges and Yamuna with the Narmada, traversing in part the supposed route of the mythical Saraswathi. This last link has many political and mystical benefits too.

    There are many smaller links as well, such as joining the Ken and Betwa rivers in MP, the Kosi with the Gandak in UP, and the Parbati, Kalisindh and Chambal rivers in Rajasthan. The project, when completed, will consist of 30 links, with 36 dams and 10,800 km of canals diverting 174,000 million cubic metres of water. Just look at the bucks that will go into this big bang. It was estimated to cost Rs 560,000 crore in 2002 and to entail spending almost 2% of our GNP for the next ten years. Now it will cost twice that or more, but our GDP is now three times larger, which might make it more affordable, and hence more tempting to attempt.

    The order to get going with the project was the output of a Supreme Court bench made up of then Chief Justice BN Kirpal and Justices KG Balakrishnan and Arijit Pasayat, which was hearing a PIL filed by the Dravida Peravai, an obscure Tamil activist group. The learned Supreme Court sought the assistance of a Senior Advocate, Mr Ranjit Kumar, and acknowledging his advice recorded: "The learned Amicus Curiae has drawn our attention to Entry 56 List of the 7th Schedule to the Constitution of India and contends that the interlinking of the inter-State rivers can be done by the Parliament and he further contends that even some of the States are now concerned with the phenomena of drought in one part of the country, while there is flood in other parts and disputes arising amongst the egalitarian States relating to sharing of water. He submits that not only these disputes would come to an end but also the pollution levels in the rivers will be drastically decreased, once there is sufficient water in different rivers because of their interlinking."

    The only problem with this formulation is that neither the learned Amicus Curiae nor the learned Supreme Court is quite so learned as to come to such sweeping conclusions.


    Feature Image Credit: Hindustan Times


    This article was published earlier in deccanchronicle.com

  • The “loss and damage” agenda at COP27

    The “loss and damage” agenda at COP27

    The dialogues on climate change action have failed to produce effective measures. At the heart of the problem is the refusal of the developed countries to accept that they were the beneficiaries of the industrial revolution, colonialism and imperialism, and have contributed the most to the problems humanity now faces on account of climate change. Hence, the assertion by two-thirds of the world that developed nations should bear the costs of implementing corrective measures is valid and logical.

    The 27th Conference of Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC) was hosted by the Government of the Arab Republic of Egypt from 6 November to 18 November (extended to 20 November). The conference came at a time when the world had witnessed massive heatwaves, flooding in Pakistan, wildfires across Spain and California, and droughts in East Africa. Its mission was to take collective action to combat climate change under the Paris Agreement and the Convention. After a decade of climate talks, the question is: are countries ready to take collective action against climate change?

    Developed Nations’ Responsibility and Accountability

    Financial compensation remains a huge point of contention between developed and developing countries. Developing countries, or the Global South, face the adverse effects of climate change and demand compensation for the historical damage caused by the colonialism and resource extraction that aided the development of the Global North, which includes the countries of the EU and the United States. Developed countries bear the most responsibility for the emissions driving global temperature rise: between 1751 and 2017, the United States, the EU and the UK were responsible for 47% of cumulative carbon dioxide emissions, compared to just 6% from the entire African and South American continents. At COP15 in Copenhagen in 2009, Global North nations agreed to pledge $100 billion (€101 billion) annually by 2020 to help developing countries adapt to the impacts of climate change, for example by providing farmers with drought-resistant crops or paying for better flood defences. But according to the Organisation for Economic Co-operation and Development (OECD), which tracks the funding, in 2020 wealthy countries delivered just over $83 billion.

    Compensation for loss and damage has been a focal point at climate summits since 1991. In terms of institutional developments, the COP19 conference in 2013 established the Warsaw International Mechanism for Loss and Damage, which is supposed to enhance global understanding of climate risk, promote transnational dialogue and cooperation, and strengthen "action and support". At COP25, the Santiago Network on Loss and Damage (SNLD) was set up to provide research and technical assistance on loss and damage from human-induced climate change. That meeting did not settle the network's working process, so the matter was taken up at COP26, where no substantial changes were made. And although the G77 countries proposed a Glasgow facility to finance solutions for loss and damage at COP26, developed countries such as the US and the EU bloc did not go beyond agreeing to a three-year dialogue.

    The US's stance on financing vulnerable countries to find solutions against climate change is constantly shifting. The trend indicates that the US wants to focus on curbing global warming rather than dwell on losses and damages that have already occurred. The Global North is reluctant to acknowledge even the definition of loss and damage, as an acknowledgement would make it liable for 30 years' worth of climate change impact. Developed countries constantly focus on holding dialogues rather than coming up with solutions for climate risk mitigation. In a statement prior to COP27, U.S. climate envoy John Kerry expressed concern that the shifting focus on loss and damage "could delay our ability to do the most important thing of all, which is [to] achieve mitigation sufficient to reduce the level of adaptation."

    The USA Leads in Evasive Tactics

    The Bonn climate conference of June 2022, which set the stage for the COP27 agenda, ended in disagreement as the US and EU refused to accept funding for loss and damage as an agenda item. At the conclusion of COP27, however, countries did succeed in agreeing to establish a fund for loss and damage. Governments also agreed to establish a "transitional committee" to make recommendations on how to operationalise both the new funding arrangements and the fund at COP28 next year. The first meeting of the transitional committee is expected to take place before the end of March 2023.

    Parties also agreed on the institutional arrangements to operationalize the Santiago Network for Loss and Damage, to mobilise technical assistance to developing countries that are particularly vulnerable to the adverse effects of climate change. Governments agreed to move forward on the Global Goal on Adaptation, which will conclude at COP28 and inform the first Global Stocktake, improving resilience amongst the most vulnerable. New pledges, totalling more than USD 230 million, were made to the Adaptation Fund at COP27. These pledges will help many more vulnerable communities adapt to climate change through concrete adaptation solutions.

    Despite a groundbreaking agreement, the most common question asked by the public is “are the climate summits any good?”

    The question arises due to the absence of effective leadership to monitor or censure nations over the destruction of the environment. The summits have created a sense of accountability for all nations, irrespective of their degree of vulnerability. While vulnerable states bear a higher cost due to climate change, all states collectively pledge to reduce carbon emissions and achieve net-zero emissions by 2050. A monitoring mechanism is absent, but non-governmental organisations (NGOs) and civil society actively advocate for climate change mitigation measures and criticise both state and non-state actors for their lack of initiative. Incidentally, COP27 partnered with Coca-Cola for sponsorship, and many activists slammed the move, as Coca-Cola was among the top five polluters in 2022, producing around 120 billion throwaway plastic bottles a year.

    Apart from that, many other funding networks and initiatives have been introduced to support vulnerable countries against climate change. Under Germany's G7 presidency, the G7, along with the Vulnerable Twenty (V20) group of countries, launched the Global Shield against Climate Risks during COP27. The Shield gathers activities in the field of climate risk finance and preparedness under one roof. Under it, protection solutions will be devised that can be implemented swiftly when climate-related damages occur. At COP27, Chancellor Olaf Scholz announced Germany's contribution of 170 million euros to the Shield. Of this, 84 million euros are earmarked for the Shield's central financing structure, and the rest for complementary instruments of climate risk financing, to be implemented as concrete safeguarding measures over the next few years.

    On 20 September, Denmark became the first developed country in the world to provide financial compensation to developing countries for ‘loss and damage’ caused by climate change. The country pledged approximately EUR 13 million (100 million Danish krone) to civil society organisations based in developing nations working on climate change-related loss and damage. Germany and Denmark are so far the only financial supporters of the initiative launched at COP27.

    What can India do?

    India has launched Mission LiFE, an initiative to bring about lifestyle changes that reduce the burden on the environment. During the event, the MoEFCC-UNDP compendium "Prayaas Se Prabhaav Tak – From Mindless Consumption to Mindful Utilisation" was launched. It focuses on reduced consumption, the circular economy, Vasudhaiva Kutumbakam, and sustainable resource management. India has also joined the Mangrove Alliance for Climate (MAC), determined to protect mangroves and, by expanding forest cover, create a carbon sink of 3 billion tonnes of CO2.

    India has maintained a stance where it has neither advocated for nor against financial compensation for loss and damage. However, it has always called on developed countries to provide finance for developing technology or sharing technical know-how to reduce climate risk. Such an approach can help other countries to push for financial aid to develop technology instead of using their own resources.

    Further, India holds a unique position among developing countries as an emerging economy. With its diplomatic prowess under the Modi government, India can play an ideal negotiating role with developed countries. India is also focused on phasing out the use of all fossil fuels, not just coal, a consistent position that adds to the country's credentials. With the weaponisation of energy by Russia since the onset of the Ukraine war, India's call for action has garnered intensive support from both developed and developing nations. With the support of the Global South, India can assume a leadership role in establishing south-south cooperation on climate risk mitigation and the shift to renewable energy such as solar power.

    Conclusion

    Climate funds are important for designing and implementing solutions, as developing and vulnerable countries find it difficult to divert resources from developmental activities. The question largely remains whether the COP27 countries will adhere to the agreement concluded at the summit. There is no conclusive indication of when the fund will be set up, or of the liability if countries fail to contribute to it. Eventually, it comes down to state and non-state actors alike to effectively reduce fossil fuel consumption and wastage, as many countries still look to exploiting African gas reserves to meet their energy requirements. Ambitious goals with no actual results are a trend that is expected to continue till the next summit, and with such a trend the world has a long way to go to limit warming to 1.5 degrees Celsius above pre-industrial levels.

    Feature Image Credit: www.cnbc.com

    Article Image: aljazeera.com 

  • Ghosts in the Machine: The Past, Present, and Future of India’s Cyber Security

    Ghosts in the Machine: The Past, Present, and Future of India’s Cyber Security

    Download the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/10/Research-Paper-TPF-1.pdf

    Introduction

    When the National Cybersecurity Policy was released in 2013, the response from experts was rather underwhelming [1], [2]. A reaction to a string of unpalatable incidents, from Snowden’s revelations [3] and massive compromise of India’s civilian and military infrastructure [4] to the growing international pressure on Indian IT companies to fix their frequent data breaches [5], the 2013 policy was a macro example of weak structures finding refuge in a haphazard post-incident response. The next iteration of the policy is in formulation under the National Cybersecurity Coordinator. However, before we embark upon solving our cyber-physical domain’s future threat environment, it is perhaps wise to look back upon the perilous path that has brought us here.  

    Early History of Electronic Communications in India

    The institutional "cybersecurity thinking" of post-independence Indian government structures can be traced to 1839, when the East India Company ran its first experimental telegraph line near Kolkata, the then capital of the British Raj. By 1851, under Governor-General Lord Dalhousie, the British had deployed their first operational telegraph line, and by 1854 a trans-India network was taking shape and the first Telegraph Act had been passed. Similar to the 2008 amendment to the IT Act, which allowed the government to intercept, monitor and decrypt any information on any computer, the 1860 amendment to the Telegraph Act empowered the British to take over any leased telegraph lines and access any of the telegraphs transmitted. After all, the new wired communication technology of the day had become an unforeseen flashpoint during the 1857 rebellion.

    Historians note that the telegraph operators working for the British quickly became targets of intrigues and lethal violence during the mutiny [6], somewhat akin to today’s Sysadmins being a top social engineering priority for cyber threat actors [7]. One of the sepoy mutineers of 1857, while on his way to the hangman’s halter, famously cried out at a telegraph line calling it the cursed string that had strangled the Indians [8]. On the other side of affairs, after having successfully suppressed the mutiny, Robert Montgomery famously remarked that the telegraph had just saved India [9]. Within the telegraph system, the problems of information security popped up fairly quickly after its introduction in India. Scholars note that commercial intelligence was frequently peddled in underground Indian markets by government telegraph clerks [10], in what can perhaps be described as one of the first “data breaches” that bureaucrats in India had to deal with. 

    The British had formulated different rules for telecommunications in India and in England. While they did not have a total monopoly and access rights over all transmissions in Britain, in India, for the purpose of maintaining political control, they did [11]. Post-independence, under the socialist fervour of Nehruvian politics, the government further nationalised all foreign telecommunications companies and continued the British policy of total control over telecommunications under its own civil service structure, which too came pre-packaged from the British.

    The Computer and “The System”

    Major reforms are often preceded by major failures. The government imported its first computer in 1955 but showed no interest in any policy regarding these new machines. That changed only in 1963 when, under pressure to reform after a humiliating military defeat and the loss of significant territory to China, the government instituted a Committee on Electronics under Homi Jehangir Bhabha to assess the strategic utility that computers might provide to the military [12].

    In 1965, as part of punitive sanctions for the war with Pakistan, the US cut off India's supply of all electronics, including computers. This forced the government to set up the Electronics Committee of India, which worked alongside the Electronics Corporation of India (ECIL), mandated to build indigenous design and electronics manufacturing capabilities. But their approach was considered highly restrictive and discretionary, which, instead of facilitating, further constrained the development of computers, related electronics, and correspondingly useful policies in India [13]. Moreover, no one was even writing commercial software in India, while at the same time the demand for export-quality software was rising. The situation was such that ECIL had to publish full-page advertisements for the development of export-quality software [12]. Consequently, in the early 1970s, Mumbai-based Tata Consultancy Services became the first company to export software from India. As the 1970s progressed and India moved into the 1980s, it gradually became clearer to more and more people in the government that their socialist policies were not working [14].

    In 1984, the same year the word "cyberspace" appeared in a sci-fi novel called Neuromancer, a policy shift towards computing and communications technologies was seen in the newly formed government under Rajiv Gandhi [12]. The new computer policy, shaped largely by N. Seshagiri, then Director General of the National Informatics Centre, significantly simplified procedures for private actors and was released within twenty days of the prime minister taking the oath. Owing to this liberalisation, the software industry in India took off, and in 1988, 38 leading software companies came together to establish the National Association of Software and Service Companies (NASSCOM) with the intent of shaping the government's cyber policy agenda. As we are mostly concerned with cybersecurity, it should be noted that in 1990 it was NASSCOM that carried out probably the first IT-security-related public awareness campaign in India, calling for reduced software piracy and increased lawful use of IT [5].

    Unfortunately, India's 1990s were marred by coalition governments and a lack of coherent policy focus. In 1998, when Atal Bihari Vajpayee became Prime Minister, cyber policy took its most defining turn with the development of the National IT Policy. The IT Act, released in 2000 and amended in 2008, became the first document explicitly dealing with cybercrime. Interestingly, the spokesman and a key member of the task force behind the national IT policy was Dewang Mehta, the then president of NASSCOM. Nevertheless, while computer network operations had become routine in international affairs [15], there was still no cyber policy framework or doctrine to deal with the risks from sophisticated (and state-backed) APT actors residing outside the jurisdiction of Indian authorities. There still is not.

    In 2008, NASSCOM established the Data Security Council of India (DSCI), which along with its parent body took it upon itself to run cybersecurity awareness campaigns for law enforcement and other public sector organisations in India. However, the "awareness campaign" model of cybersecurity strategy does not really work against APT actors, as became apparent when researchers at the University of Toronto discovered a massive infiltration of India's civilian and military computers by APT actors [4]. In 2013, the Snowden revelations about unrestrained US spying on India also ruffled domestic feathers over the lack of any defensive measures or policies [3]. Coupled with these surprise(?) and unpalatable revelations, there was the increasing and recurring international pressure on Indian IT to put an end to the rising cases of data theft, in which sensitive data of overseas customers was regularly found in online underground markets [16].

    Therefore, with the government facing growing domestic and international pressure to revamp its approach to cybersecurity, MeitY released India's first National Cybersecurity Policy in 2013 [17]. The Ministry of Home Affairs (MHA) also released detailed guidelines "in the wake of persistent threats" [18]. However, the government admitted to not having the required expertise in the matter, and the preparation of the MHA document was thus outsourced to DSCI. Notwithstanding that, the MHA document was largely an extension of the Manual on Departmental Security Instructions released in 1994, which had addressed the security of paper-based information. Consequently, the MHA document was less a national policy and more a set of instructions to departments about sanitising their computer networks and resources, including a section instructing personnel on social media usage.

    The 2013 National Cybersecurity Policy proposed certain goals and "5-year objectives" toward building national resilience in cyberspace. At the end of a long list of aims, the 2013 policy suggested adopting a "prioritised approach" for implementation, which was to be operationalised by a detailed guide and plan of action at the national, sectoral, state, ministry, department, and enterprise levels. However, as of this writing, the promised implementation details, or any teeth, are still missing from the National Cybersecurity Policy. As continued APT activities [19] show, the measures towards creating situation awareness have also not permeated beyond the technical/collection layer.

    In 2014, the National Cyber Coordination Centre (NCCC) was established, with the primary aim of building situational awareness of cyber threats in India. Given the underwhelming response to the 2013 policy [1], [2], the National Cybersecurity Policy was slated to be updated in 2020, but as of this writing the update is still being formulated by the National Cybersecurity Coordinator, who heads the NCCC. The present policy gap makes this an opportune moment to discuss certain fundamental issues with cyber situation awareness and the future of cyber defences in the context of trends in APT activities.

    Much Catching Up to Do

    Recently, the Government of India's Kavach (an employee authentication app for anyone using a "gov.in" or "nic.in" email ID) was besieged by APT36 [20]. APT36 is a Pakistan-affiliated actor and what one might call a tier-3 APT, i.e., what they lack in technical sophistication they try to make up for with passion and perseverance. What makes the incident interesting is that the malicious activity went on for over a year before a third-party threat observer flagged it. Post-pandemic, APT activities have not just increased but also shown an inclination towards integrating online disinformation into malware capabilities [21]. APT actors (and bots), who have become increasingly good at hiding in plain sight on social networks, now have a variety of AI techniques to integrate into their command and control: we have seen the use of GANs to mimic the traffic of popular social media sites to hide command-and-control traffic [22], an IoT botnet with a machine-learning component that the attacker could switch on or off depending upon people's responses in online social networks [21], and malware that can "autonomously" locate its command-and-control node over public communication platforms without any hard-coded information about the attacker [23].

    This is an offence-persistent environment. In this "space", there always exists an information asymmetry: the defender generally knows less about the attacker than the attacker knows about the defender. Wargaming results have shown that unlike conventional conflicts, where an attack induces the fear of death and destruction, a cyber-attack generally induces anxiety [24]; consequently, people dealing with cyber attacks act to offset those anxieties, not their primal fears. Thus, in response to cyber-attacks, their policies reflect risk aversion, not courage, physical or moral. It need not be so if policymakers recognise this and integrate it into their decision-making heuristics. Unfortunately, the National Cybersecurity Policy released in 2013 is a fairly risk-averse placeholder document. Among many others, the key issues are:

    • The policy makes zero references to automation and AI capabilities. This would have been understandable in other domains, like poultry perhaps, but is not even comprehensible in present-day cybersecurity.   
    • The policy makes zero references to hardware attacks. Consequently, developing any capability for assessing insecurity at hardware/firmware levels, which is a difficult job, is also overlooked at the national level itself. 
    • There are several organisations within the state, civilian and military, that have stakes and roles of varying degrees in a robust National Cybersecurity Policy. However, the policy makes zero attempts at recognising and addressing these specific roles and responsibilities, or any areas of overlap therein.
    • The policy does not approach cyber activity as an overarching operational construct that permeates all domains, but rather as activity in a specific domain called “cyberspace”. Consequently, it lacks the doctrinal thinking that would integrate cyber capabilities with the use of force. A good example of this is outer space, where cyber capabilities are emerging as a potent destabiliser [25] and cybersecurity constitutes the operational foundation of space security, again completely missing from the National Cybersecurity Policy.   
    • The policy is also light on subjects critical to cybersecurity implementation, such as the approach towards internet governance, platform regulation, national encryption regime, and the governance of underlying technologies. 

    A Note on the Human Dimension of Cybersecurity

    There exist two very broad types of malicious behaviour online: one that is rapid and superficial, and another that is deep and persistent. The present approaches to building situation awareness in cyberspace are geared towards the former, leading to spatiotemporally "localised and prioritised" assessments [26] that suit immediate law-and-order situations, not stealthy year-long campaigns. Thus, while situation awareness is itself a psychological construct dealing with decision-making, in cybersecurity operations it has overwhelmingly turned into software-based visualisation of incoming situational data. This is a growing gap that must also be addressed by the National Cybersecurity Policy.

    In technology-mediated environments, people have to share the actual situation awareness with the technology artifacts [27]. Complete dependence on technology for cyber situation awareness has proven problematic: in the case of Stuxnet, the operators at the targeted plant saw on their computer screens that the centrifuges were running normally, and simply believed that to be true. The 2016 US election interference became clear at the institutional level only after several months of active social messaging and doxing operations had already been underway [28], and the story of Telebots' attack on Ukrainian electricity grids is even more telling: a power plant employee whose computer was being remotely manipulated sat filming the activity, asking a colleague whether it could be their own organisation's IT staff "doing their thing" [29].

    This lack of emphasis on human factors has been a key gap in cybersecurity, one which APTs never fail to exploit. Such actors rely on considerable social engineering in the initial access phases, a process which is also being automated faster than policymakers can play catch-up [30]. The use of computational tools and techniques to automate and optimise the social interactions of a software agent presents itself as a significant force multiplier for cyber threat actors. It is therefore paramount to develop precise policy guidelines that implement the specific institutional structures, processes, and technological affordances required to mitigate the risks of malicious social automation, both to the unsuspecting population and to government institutions.

    Concluding Remarks

    There is a running joke that India's strategic planning is overseen by accountants; reading through the National Cybersecurity Policy 2013, that does not seem surprising. We have had a troubled policy history when it comes to electronics and communications and are still in the process of shedding our colonial burden. A poorly framed National Cybersecurity Policy will only take us away from self-reliance in cyberspace and towards an alliance with the principal offenders themselves. Admittedly, an information-abundant organisation like the NCCC must make choices about where and what to concentrate its attentional resources upon. However, the present National Cybersecurity Policy appears neither to be a component of any broader national security strategy nor effective or comprehensive enough for practical implementation in responding to the emerging threat environment.

    References

    [1] N. Alawadhi, “Cyber security policy must be practical: Experts,” The Economic Times, Oct. 22, 2014. Accessed: Sep. 14, 2022. [Online]. Available: https://economictimes.indiatimes.com/tech/internet/cyber-security-policy-must-be-practical-experts/articleshow/44904596.cms

    [2] A. Saksena, “India Scrambles on Cyber Security,” The Diplomat, Jun. 18, 2014. https://thediplomat.com/2014/06/india-scrambles-on-cyber-security/ (accessed Sep. 18, 2022).

    [3] C. R. Mohan, “Snowden Effect,” Carnegie India, 2013. https://carnegieindia.org/2013/06/19/snowden-effect-pub-52148 (accessed Sep. 18, 2022).

    [4] R. Dharmakumar and S. Prasad, “Hackers’ Haven,” Forbes India, Sep. 19, 2011. https://www.forbesindia.com/printcontent/28462 (accessed Sep. 18, 2022).

    [5] D. Karthik and R. S. Upadhyayula, “NASSCOM: Is it time to retrospect and reinvent,” Indian Inst. Manag. Ahmedabad, 2014.

    [6] H. C. Fanshawe, Delhi past and present. J. Murray, 1902.

    [7] C. Simms, “Is Social Engineering the Easy Way in?,” Itnow, vol. 58, no. 2, pp. 24–25, 2016.

    [8] J. Lienhard, “No. 1380: Indian telegraph,” Engines Our Ingen., 1998.

    [9] A. Vatsa, “When telegraph saved the empire – Indian Express,” Nov. 18, 2012. http://archive.indianexpress.com/news/when-telegraph-saved-the-empire/1032618/0 (accessed Sep. 17, 2022).

    [10] L. Hoskins, British Routes to India. Routledge, 2020.

    [11] D. R. Headrick, The invisible weapon: Telecommunications and international politics, 1851-1945. Oxford University Press on Demand, 1991.

    [12] B. Parthasarathy, “Globalizing information technology: The domestic policy context for India’s software production and exports,” Iterations Interdiscip. J. Softw. Hist., vol. 3, pp. 1–38, 2004.

    [13] I. J. Ahluwalia, “Industrial Growth in India: Stagnation Since the Mid-Sixties,” J. Asian Stud., vol. 48, pp. 413–414, 1989.

    [14] R. Subramanian, “Historical Consciousness of Cyber Security in India,” IEEE Ann. Hist. Comput., vol. 42, no. 4, pp. 71–93, 2020.

    [15] C. Wiener, “Penetrate, Exploit, Disrupt, Destroy: The Rise of Computer Network Operations as a Major Military Innovation,” PhD Thesis, 2016.

    [16] N. Kshetri, “Cybersecurity in India: Regulations, governance, institutional capacity and market mechanisms,” Asian Res. Policy, vol. 8, no. 1, pp. 64–76, 2017.

    [17] MeitY, “National Cybersecurity Policy.” Government of India, 2013.

    [18] MHA, “NATIONAL INFORMATION SECURITY POLICY AND GUIDELINES.” Government of India, 2014.

    [19] S. Patil, “Cyber Attacks, Pakistan emerges as China’s proxy against India,” Obs. Res. Found., 2022.

    [20] A. Malhotra, V. Svajcer, and J. Thattil, “Operation ‘Armor Piercer:’ Targeted attacks in the Indian subcontinent using commercial RATs,” Sep. 23, 2021. http://blog.talosintelligence.com/2021/09/operation-armor-piercer.html (accessed Sep. 02, 2022).

    [21] NISOS, “Fronton: A Botnet for Creation, Command, and Control of Coordinated Inauthentic Behavior.” May 2022.

    [22] M. Rigaki, “Arming Malware with GANs,” presented at the Stratosphere IPS, Apr. 2018. Accessed: Oct. 19, 2021. [Online]. Available: https://www.stratosphereips.org/publications/2018/5/5/arming-malware-with-gans

    [23] Z. Wang et al., “DeepC2: AI-Powered Covert Command and Control on OSNs,” in Information and Communications Security, vol. 13407, C. Alcaraz, L. Chen, S. Li, and P. Samarati, Eds. Cham: Springer International Publishing, 2022, pp. 394–414. doi: 10.1007/978-3-031-15777-6_22.

    [24] J. Schneider, “Cyber and crisis escalation: insights from wargaming,” 2017.

    [25] J. Pavur, “Securing new space: on satellite cyber-security,” PhD Thesis, University of Oxford, 2021.

    [26] U. Franke and J. Brynielsson, “Cyber situational awareness – A systematic review of the literature,” Comput. Secur., vol. 46, pp. 18–31, Oct. 2014, doi: 10.1016/j.cose.2014.06.008.

    [27] N. A. Stanton, P. M. Salmon, G. H. Walker, E. Salas, and P. A. Hancock, “State-of-science: situation awareness in individuals, teams and systems,” Ergonomics, vol. 60, no. 4, pp. 449–466, Apr. 2017, doi: 10.1080/00140139.2017.1278796.

    [28] “Open Hearing On The Intelligence Community’s Assessment on Russian Activities and Intentions in the 2016 U.S. Elections.” Jan. 10, 2017. Accessed: Dec. 22, 2021. [Online]. Available: https://www.intelligence.senate.gov/hearings/open-hearing-intelligence-communitys-assessment-russian-activities-and-intentions-2016-us#

    [29] R. Lipovsky, “Tactics, Techniques, and Procedures of the World’s Most Dangerous Attackers,” presented at the Microsoft BlueHat 2020, 2020. [Online]. Available: https://www.youtube.com/watch?v=9LAFV6XDctY

    [30] D. Ariu, E. Frumento, and G. Fumera, “Social engineering 2.0: A foundational work,” in Proceedings of the Computing Frontiers Conference, 2017, pp. 319–325.


  • How blockchain can help dismantle corruption in government services

    How blockchain can help dismantle corruption in government services

    As India celebrated its 76th Independence Day with great fanfare and jubilation, it is time to introspect on the most serious threat to India's growth and emergence as a world power. This threat is corruption, which is internal and societal. Over the 75 years of modern India's journey, corruption has become endemic in Indian society. Infused by the political culture, corruption has seeped into every aspect of governance, be it the executive, legislature, or judiciary. This is so because the average citizen has come to accept bribing as a routine and inevitable part of daily life. Hence, if India is to eliminate the scourge of corruption, it needs a massive transformation of its society. This can come only through the sustained practice of transparency, ruthless accountability, efficiency, and deterrent punishment.

    Corruption is commonly perceived as being about monetary benefits, but it extends much further: misuse of power, coercion, disinformation, lack of transparency, non-performance, inefficiency and delay tactics, and the lack of accountability and responsibility. There is a misconception that digitisation will overcome corruption. Unless timelines, tamper-proof records, and transparency are ensured, the corrupt will find ways around it. This is clearly seen in revenue and tax systems, licensing systems, land registration systems, and so on. Even though these departments have digitised their processes well, middlemen linking the citizen and the department have proliferated. This can only be eliminated by the right policies: enforcing strict timelines, responding to citizens' complaints, enforcing accountability and transparency on officials, and creating clarity for the public in the use of such systems.

    The adoption of blockchain technologies could go a long way toward eliminating corruption in India. Widespread corruption has been India's greatest threat, and it has never been more urgent to address this problem through innovative technologies like blockchain.

    TPF republishes this article on "Blockchain and Governance" from the World Economic Forum under the Creative Commons licence 4.0.

    TPF Editorial Team

    Key Points

    • Blockchain could increase the fairness and efficiency of government systems while reducing opportunities for corruption;
    • Blockchain could improve the transparency and disclosure of procurement processes, investment in which can be lost to corruption;
    • The emerging technology can also enhance the property and land registry systems, streamlining lengthy processes and protecting people’s rights.

    Governments regularly have to make trade-offs between efficiency and fairness in their services. Unfortunately, choosing one over the other often increases the likelihood of corruption. In efficient systems, the public is largely content to operate within the bounds of the system; inefficient systems cause large numbers of individuals to seek less-than-legal workarounds. Similarly, fair systems engender trust, pride and a sense of community, while unfair systems encourage individuals to seek out illegal alternatives without remorse.

    Occasionally, new technologies come along that offer the opportunity to increase both efficiency and fairness. Blockchain is one such opportunity and it has a variety of use-cases for government applications. Here are two in more detail:

    Blockchain and procurement

    Public procurement is the process of governments acquiring goods, services and works. It represents a significant portion of governmental budgets, accounting for 29% of general government expenditure, totalling €4.2 trillion across OECD countries in 2013. With so much money at stake, it is unsurprising that the OECD estimates that 10-30% of the investment in publicly funded construction projects may be lost to corruption.

    Public procurement is vulnerable to corruption for a number of reasons. Parties in the procurement process, both on the public and private sides, are induced into corrupt acts by the size of potential financial gains, the close interaction between public officials and businesses, and how easy it is to hide corrupt actions. Blockchain has the potential to protect against these weaknesses at almost every stage of the procurement process.

    In the planning stage, public officials create evaluation criteria by which bidding companies will be judged. In the bidding evaluation stage, public officials assign scores to companies using the evaluation criteria as their rubric. Without transparency, there are many opportunities for compromised public officials to rig the outcome of the evaluation process. Evaluation criteria could be retroactively changed or company bids altered, for example. Blockchain can guarantee any change is public, the original information is retained and there is a record of who made the change.
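
    To make the tamper-evidence property concrete, here is a minimal sketch in plain Python (not any particular blockchain platform; the record fields and officer names are hypothetical). Each entry commits to the hash of the previous entry, so a retroactive edit to evaluation criteria or scores breaks every later hash:

    ```python
    import hashlib
    import json
    import time

    def record_hash(record: dict) -> str:
        # Deterministic hash of a record (sorted keys give stable JSON).
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    class AuditChain:
        """Append-only log in which every entry commits to its predecessor."""
        def __init__(self):
            self.entries = []

        def append(self, author: str, payload: dict) -> dict:
            prev = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {
                "author": author,          # who made the change
                "timestamp": time.time(),  # when it was made
                "payload": payload,        # e.g. evaluation criteria or bid scores
                "prev": prev,              # hash of the previous entry
            }
            entry["hash"] = record_hash(entry)
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute every hash; any retroactive edit breaks the chain.
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "hash"}
                if e["prev"] != prev or record_hash(body) != e["hash"]:
                    return False
                prev = e["hash"]
            return True

    chain = AuditChain()
    chain.append("officer_1", {"criteria": {"price": 0.6, "quality": 0.4}})
    chain.append("officer_2", {"criteria": {"price": 0.5, "quality": 0.5}})
    assert chain.verify()
    chain.entries[0]["payload"]["criteria"]["price"] = 0.9  # retroactive tampering...
    assert not chain.verify()                               # ...is detected
    ```

    A real deployment would replicate such a chain across independent nodes so that no single official could rewrite it wholesale, but the detection logic is essentially the one sketched here.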

    Blockchain can also encourage a wider coalition of stakeholders to participate in and monitor procurement cycles. Too often, the most active stakeholders in any given procurement process are the public officials and businesses directly involved, a potential problem when more than half of all foreign bribery cases are estimated to occur in pursuit of public procurement contracts. Watchdog organizations, end-users, the media and citizens are discouraged from participating because procurement information is not readily available, untrustworthy, modified and/or delayed. Blockchain can provide an easily accessible, tamper-proof and real-time window into ongoing procurement processes.

    Projects integrating blockchain into procurement, such as this pilot programme in Colombia, conclude that “blockchain-based e-procurement systems provide unique benefits related to procedural transparency, permanent record-keeping and honest disclosure.” The Colombia project noted several drawbacks, such as scalability and vendor anonymity, but newer proposals like this one to overhaul India’s public procurement system are taking steps to overcome those and other shortcomings.

    Blockchain and registries

    Land title registries track the ownership of land and property for a given region. Registration titling systems have had important consequences for the economy, leading to "better access to formal credit, higher land values, higher investment in land, and higher income." Yet they are far from perfect. They are inefficient: closing a property sale can take months and typically consumes 2-5% of the purchase price of a home. Registration systems can act as bottlenecks for land transactions; there are complaints going back to 2015 of England's Land Registry having six-month transaction delays, and similar complaints persisted in 2020.

    The inefficiencies in land titling systems are a major source of corruption. The Organized Crime and Corruption Reporting Project's 2019 report on land registry corruption in Bangladesh found that obtaining a licence as a deed writer incurs a bribe to the highest-level administrators. Land registry corruption is not restricted to developing regions: in regions with longer histories of legal stability, it simply becomes more complex. The anti-corruption NGO Global Witness estimated in 2019 that £100 billion worth of property in England and Wales was secretly owned by anonymous companies registered in tax havens.

    A good first step to fighting corruption is cutting down on inefficiencies, and blockchain can streamline much of the process. Take, for example, the number of steps required in the UK for one person to sell a property to another, and compare this with a blockchain-based registry system.
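
    As a rough, hypothetical illustration of that contrast (plain Python standing in for a distributed ledger; the parcel IDs and names are invented), a sale on a ledger-based registry reduces to a single validated state transition rather than a chain of separately reconciled paper steps:

    ```python
    class TitleLedger:
        """Toy registry: current owners plus an append-only transfer history."""
        def __init__(self):
            self.owner = {}      # parcel_id -> current owner of record
            self.transfers = []  # append-only record of every transfer

        def register(self, parcel_id: str, owner: str) -> None:
            self.owner[parcel_id] = owner

        def transfer(self, parcel_id: str, seller: str, buyer: str) -> bool:
            # The sale is rejected unless the seller is the owner of record.
            if self.owner.get(parcel_id) != seller:
                return False
            self.owner[parcel_id] = buyer
            self.transfers.append((parcel_id, seller, buyer))
            return True

    ledger = TitleLedger()
    ledger.register("PLOT-0042", "alice")
    assert ledger.transfer("PLOT-0042", "alice", "bob")        # valid sale
    assert not ledger.transfer("PLOT-0042", "alice", "carol")  # alice no longer owns it
    ```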

    Some countries are already seeing positive results. In 2018, Georgia registered more than 1.5 million land titles through its blockchain-based system.

    An urban land registry project underway in Africa uses blockchain to address the problems of digitizing urban land registries. In many densely populated impoverished urban areas, no pre-existing land registry or paper trail exists. Relying on the meagre data available often causes legal disputes. Courts quickly become overwhelmed and digitization efforts stall.

    Blockchain is now being added to the project. To confirm property rights, the new system seeks out and consults community elders. Through a blockchain-based application, those elders receive the authority to confirm the validity of land registry claims. The elders can check directly with residents if they consent to the land assessment. By delegating cryptographically guaranteed authority to respected community members, the quality of the data is improved and the number of land dispute cases handled by the judiciary should decrease. Finally, the remaining cases should resolve faster since the elders’ cryptographic confirmations are admissible as evidence for land dispute resolution.
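
    A minimal sketch of what such cryptographically guaranteed authority could look like, using Ed25519 signatures from the third-party Python cryptography package; the enrolment step, field names and parcel ID below are our assumptions for illustration, not details of the actual project:

    ```python
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key pair issued to a registered community elder (hypothetical enrolment step).
    elder_key = Ed25519PrivateKey.generate()
    elder_pub = elder_key.public_key()

    # The land claim the elder is asked to confirm.
    claim = json.dumps(
        {"parcel_id": "PLOT-0042", "claimant": "A. Resident", "consent": True},
        sort_keys=True,
    ).encode()

    signature = elder_key.sign(claim)  # the elder's attestation

    try:
        elder_pub.verify(signature, claim)  # raises if claim or signature was altered
        print("attestation valid")
    except InvalidSignature:
        print("attestation rejected")
    ```

    Because anyone holding the elder's public key can run this verification, such attestations can be checked by a court without trusting the registry operator, which is what makes them plausible as evidence.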

    The final challenge: Adoption

    The government blockchain-based projects referenced in this article represent just a few of a growing number of pilot or in-production applications of blockchain. This shows that governments are serious about fixing inefficient and unfair services. The potential gains from blockchain are substantial, yet as a new technology, there are many challenges in designing and implementing blockchain-based applications. For large institutions such as governments to deploy blockchain-based applications in a timely fashion and reap the benefits, education and tools are imperative.

  • On Metaverse & Geospatial Digital Twinning: Techno-Strategic Opportunities for India

    On Metaverse & Geospatial Digital Twinning: Techno-Strategic Opportunities for India

    Download the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

    Abstract:

    With the advent of satellite imagery and smartphone sensors, cartographic expertise has reached everyone's pocket, and we are witnessing a software-isation of maps that will underlie a symbiotic relationship between our physical spaces and virtual environments. This extended reality comes with enormous economic, military, and technological potential. While a range of technical, social and ethical issues remain to be worked out, "time and tide wait for no one" is a maxim well applied to the Metaverse and its development. This article briefly introduces the technological landscape, then moves on to a discussion of Geospatial Digital Twinning and its techno-strategic utility and implications. We suggest that India should, continuing the existing dichotomy of Open Series and Defence Series Maps, initiate Geospatial Digital Twins of specific areas of interest as a pilot for the development, testing, and integration of national metaverse standards and rules. Further, a working group, in collaboration with a body like NASSCOM, should be formed to develop the architecture and norms that advance Indian economic and strategic interests through the Metaverse and other extended reality solutions.

    Introduction

    Cartographers argue that maps are value-laden images, which do not just represent a geographical reality but also become an essential tool for political discourse and military planning. Not surprisingly, early scholars termed cartography a science of princes. In fact, the history of maps is deeply intertwined with the emergence of the Westphalian nation-state itself, with states being the primary sponsors of any cartographic activity in and around their territories[1]. Earlier, the outcome of such activities even constituted secret knowledge; for example, it was the British Military Intelligence HQ in Shimla which ran and coordinated many of the cartographic activities for the British in the subcontinent[2]. Thus, given our post-independence love for Victorian institutions, even Google Maps remained an illegal service in India until 2021[3].

    One of the key stressors that brought this long-awaited change in policy was the increased availability of relatively low-cost but high-resolution satellite imagery in open online markets. But remote sensing is only one of the developments reshaping modern mapmaking. A host of varied but converging technologies – particularly Artificial Intelligence, advanced sensors, Virtual and Augmented Reality, and increasing bandwidth for data transmission – are enabling a new kind of map. This new kind of map will not just be a model of reality but a live and immersive simulation of reality. We can call it a Geospatial Digital Twin (GDT), and it will be a 4D artefact: given its predictive component and temporal data assimilation, a user could also explore the hologram/VR through time and evaluate possible what-if scenarios.
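
    As a toy illustration of that 4D idea, the sketch below (plain Python; the data model and the naive carry-forward "forecast" are our assumptions, not a description of any real GDT stack) stores time-stamped states per location and answers queries for past instants or, as a placeholder for a predictive model, for future ones:

    ```python
    from bisect import bisect_right
    from dataclasses import dataclass, field

    @dataclass
    class GeospatialDigitalTwin:
        # location -> time-sorted list of (timestamp, observed state)
        history: dict = field(default_factory=dict)

        def assimilate(self, location: str, t: float, state: dict) -> None:
            # Temporal data assimilation: fold a new observation into the series.
            self.history.setdefault(location, []).append((t, state))
            self.history[location].sort(key=lambda obs: obs[0])

        def state_at(self, location: str, t: float) -> dict:
            # Past queries return the last observation at or before t; queries
            # beyond the record carry the latest state forward (a stand-in for
            # a genuine predictive component).
            series = self.history.get(location, [])
            if not series:
                return {}
            i = bisect_right([ts for ts, _ in series], t)
            return series[max(i - 1, 0)][1]

    twin = GeospatialDigitalTwin()
    twin.assimilate("bridge_07", t=0.0, state={"traffic": 120, "flood_risk": 0.1})
    twin.assimilate("bridge_07", t=10.0, state={"traffic": 80, "flood_risk": 0.4})
    print(twin.state_at("bridge_07", t=5.0))   # state as observed at t=0.0
    print(twin.state_at("bridge_07", t=50.0))  # latest state carried forward
    ```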

    Read the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

  • Valuing Folk Crop Varieties for Agroecology and Food Security

    Valuing Folk Crop Varieties for Agroecology and Food Security

    India's Ministry of Environment, Forest and Climate Change (MoEFCC) has recently, through an office memorandum, excluded the new generation of genetically modified (GM) plants – also known as genome-edited (GE) plants – from the ambit of India's biosafety rules. The use of GM plant seeds like Monsanto's Bt Cotton gave promising results initially, but over the longer run it has resulted in many problems, contributing to a large number of marginal farmer suicides. Based on this bitter experience, the Government of India had put in place very stringent biosafety rules. However, with new biotech breakthroughs like genome editing, there is huge pressure from corporate giants like Monsanto and Bayer to open up agricultural markets in major countries like India and across the global south. There is a fear that profit-driven American biotech companies may destroy indigenous biodiversity, which could result in food insecurity in the long run.

    India adopted the 'Green Revolution' in a big way to increase its food production, which led to the widespread use of high-yield variety seeds and monocultural farming. Half a century later, there is a need to review the after-effects of the Green Revolution, as the country is plagued by overuse of fertilisers and pesticides, water scarcity, increasing salinity, and loss of nutrition in farmlands due to the loss of traditional crop diversity. India was home to a vast gene pool of 110,000 varieties of native rice before the Green Revolution, of which fewer than 600 survive today. The use of GM crops would lead to further destruction of India's food diversity. Genome editing, a newer technology, should be examined carefully from a policy perspective; the European Union treats GM and GE as one and therefore applies a single stringent policy to both.

    Dr Debal Deb has done pioneering work in saving many indigenous rice varieties and campaigns against industrial agriculture. His is a larger and vital perspective on agricultural ecology. The Peninsula Foundation revisits his 2009 article to drive home the importance of preserving and enhancing India's biodiversity and agricultural ecology as pressures from capitalist biotech predators loom large for commercial interests.

    – TPF Editorial Team

On May 25, 2009, Cyclone Aila hit the deltaic islands of the Sunderban of West Bengal. The estuarine water surged and destroyed the villages. Farmers’ homes were engulfed by the swollen rivers, their property vanished with the waves, and their means of livelihood disappeared, as the farm fields, suddenly turned salty, stood empty. In addition, most of the ponds and bore wells became salinized.

    Since Aila’s devastation, there has been a frantic search for the salt-tolerant rice seeds created by the ancestors of the current Sunderban farmers. With agricultural modernization, these heirloom crop varieties had slipped through the farmers’ hands.

But now, after decades of complacency, farmers and agriculture experts alike have been jolted into realizing that on the saline Sunderban soil, modern high-yield varieties are no match for the “primitive,” traditional rice varieties. But the seeds of those diverse salt-tolerant varieties are no longer available; just one or two varieties still survive on the marginal farms of a few poor farmers, who now count themselves the luckiest. The government rice gene banks have documents to show that all these varieties are preserved, but they cannot dole out any viable seeds to farmers in need. That is the tragedy of the centralized ex situ gene banks, which eventually serve as morgues for seeds killed by decades of disuse.

    The only rice seed bank in eastern India that conserves salt-tolerant rice varieties in situ is Vrihi, which has distributed four varieties of salt-tolerant rice in small quantities to a dozen farmers in Sunderban. The success of these folk rice varieties on salinized farms demonstrates how folk crop genetic diversity can ensure local food security. These folk rice varieties also promote sustainable agriculture by obviating the need for all external inputs of agrochemicals.

    Folk Rice Varieties, the Best Bet

Not only the salinization of soil in coastal farmlands but also this year’s late arrival of the monsoon has caused seedlings of modern rice varieties to wither on all unirrigated farms, spelling doom for marginal farmers’ food security throughout the subcontinent. Despite all the brouhaha about the much-hyped Green Revolution, South Asia’s crop production still depends heavily on the monsoon rains, and too much, too late, too early, or too scanty rain causes widespread failure of modern crop varieties. Around 60 percent of India’s agriculture is unirrigated and totally dependent on rain.

    In 2002, the monsoon failure in July resulted in a seasonal rainfall deficit of 19 percent and caused a profound loss of agricultural production with a drop of over 3 percent in India’s GDP (Challinor et al. 2006). This year’s shortfall of the monsoon rain is likely to cause production to fall 10 to 15 million tons short of the 100 million tons of total production forecast for India at the beginning of the season (Chameides 2009). This projected shortfall also represents about 3 percent of the expected global rice harvest of 430 million tons.
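As a quick back-of-the-envelope check of those figures (our arithmetic, not the cited sources’):

$$\frac{10 \text{ to } 15\ \text{Mt}}{430\ \text{Mt}} \approx 2.3\% \text{ to } 3.5\%$$

which is consistent with the “about 3 percent” of the expected global harvest, just as the same 10 to 15 Mt shortfall amounts to 10 to 15 percent of India’s 100 Mt forecast.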

    In the face of such climatic vagaries, modern agricultural science strives to incorporate genes for adaptation — genes that were carefully selected by many generations of indigenous farmer-breeders centuries ago. Thousands of locally-adapted rice varieties (also called “landraces”) were created by farmer selection to withstand fluctuations in rainfall and temperature and to resist various pests and pathogens. Most of these varieties, however, have been replaced by a few modern varieties, to the detriment of food security.

Until the advent of the Green Revolution in the 1960s, India was believed to have been home to about 110,000 rice varieties (Richharia and Govindasamy 1990), most of which have since gone extinct from farm fields. Perhaps a few thousand varieties are still surviving on marginal farms, where no modern cultivar can grow. In the eastern state of West Bengal, about 5,600 rice varieties were once cultivated, of which 3,500 were shipped to the International Rice Research Institute (IRRI) in the Philippines between 1975 and 1983 (Deb 2005). After an extensive search over the past fourteen years for extant rice varieties in West Bengal and a few neighboring states, I was able to rescue only 610 rice landraces from marginal farms. All the others – about 5,000 – have disappeared from farm fields. The 610 extant rice varieties are grown every year on my conservation farm, Basudha. Every year, these seeds are distributed free of charge to willing farmers from the Vrihi seed bank.

Vrihi (meaning “rice seed” in Sanskrit) is the largest non-governmental seed repository of traditional rice varieties in eastern India. These varieties can withstand a much wider range of fluctuations in temperature, soil nutrient levels, and water stress than any of the modern rice varieties. This year’s monsoon delay has not seriously affected the survivorship and performance of the 610 rice varieties on the experimental farm, nor did the overabundant rainfall of a few years earlier.

    Circumstances of Loss

    If traditional landraces are so useful, how could the farmers afford to lose them? The dynamics are complex but understandable. When government agencies and seed companies began promoting “miracle seeds,” many farmers were lured and abandoned their heirloom varieties. Farmers saw the initial superior yields of the high input–responsive varieties under optimal conditions and copied their “successful” neighbors. Soon, an increasing number of farmers adopted the modern, “Green Revolution” (GR) seeds, and farmers not participating in the GR were dubbed backward, anti-modern, and imprudent. Seed companies, state agriculture departments, the World Bank, universities, and national and international development NGOs (non-governmental organizations) urged farmers to abandon their traditional seeds and farming practices–both the hardware and software of agriculture. After a few years of disuse, traditional seed stocks became unviable and were thereby lost. Thus, when farmers began to experience failure of the modern varieties in marginal environmental conditions, they had no other seeds to fall back on. Their only option was, and still is, to progressively increase water and agrochemical inputs to the land. In the process, the escalating cost of modern agriculture eventually bound the farmers in an ever-tightening snare of debt. After about a century of agronomists’ faith in technology to ensure food security, farming has become a risky enterprise, with ever greater debt for farmers. Over 150,000 farmers are reported to have committed suicide between 1995 and 2004 in India (Government of India 2007), and the number grew by an annual average of 10,000 until 2007 (Posani 2009).

The government gave ample subsidies for irrigation and fertilizers to convert marginal farms into more productive ones, and rice production was boosted in the first decade of GR seed use. Soon after, however, yield curves began to decline. After 40 years of the GR, the productivity of rice is declining at an alarming rate (Pingali 1994). IRRI’s own study revealed yield decreases after cultivation of the “miracle rice variety” IR8 over a 10-year period (Flinn et al. 1982). Today, just to keep the land productive, rice farmers in South Asia apply over 11 times more synthetic nitrogen fertilizer and 12.8 times more phosphate fertilizer per hectare than they did in the late 1960s (FAI 2008). Cereal yields have plummeted back to pre-GR levels, yet few farmers recall that they previously obtained more rice per unit of input than they are getting now. Most farmers have forgotten the average yields of the traditional varieties and tend to believe that all traditional varieties were low-yielding, assuming that the modern “high-yielding” varieties must yield more because they are so named.

    In contrast, demonstration of the agronomic performance of the 610 traditional rice varieties on Basudha farm over the past 14 years has convinced farmers that many traditional varieties can out-yield any modern cultivar. Moreover, the savings in terms of water and agrochemical inputs and the records of yield stability against the vagaries of the monsoon have convinced them of the economic advantages of ecological agriculture over chemical agriculture. Gradually, an increasing number of farmers have been receiving traditional seeds from the Vrihi seed bank and exchanging them with other farmers. As of this year, more than 680 farmers have received seeds from Vrihi and are cultivating them on their farms. None of them have reverted to chemical farming or to GR varieties.

    Extraordinary Heirlooms

    Every year, farmer-researchers meticulously document the morphological and agronomic characteristics of each of the rice varieties being conserved on our research farm, Basudha. With the help of simple equipment–graph paper, rulers, measuring tape, and a bamboo microscope (Basu 2007)–the researchers document 30 descriptors of rice, including leaf length and width; plant height at maturity; leaf and internode color; flag leaf angle; color and size of awns; color, shape and size of rice seeds and decorticated grains; panicle density; seed weight; dates of flowering and maturity; presence or absence of aroma; and diverse cultural uses.
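As a purely hypothetical illustration of what one such descriptor record might look like once digitized – the field names and values below are invented for illustration, not Basudha’s actual schema or data – each variety’s entry could be kept as a simple structured record:

```python
# Hypothetical descriptor record for one rice variety; field names
# and values are invented, not Basudha's actual schema or data.
record = {
    "variety": "Kalo nunia",
    "leaf_length_cm": 38.5,
    "leaf_width_cm": 1.2,
    "plant_height_at_maturity_cm": 145.0,
    "flag_leaf_angle_deg": 25.0,
    "awn_colour_and_size": "straw, short",
    "grain_colour_shape_size": "dark husk, short bold, 8.1 mm",
    "panicle_density": "compact",
    "seed_weight_g_per_1000": 21.4,
    "date_of_flowering": "2009-10-05",
    "date_of_maturity": "2009-11-20",
    "aroma_present": True,
    "cultural_uses": ["puffed rice", "ceremonial sweets"],
}
```

Thirty such fields per variety across 610 varieties is a modest dataset by computational standards, but an enormous one to compile by hand with graph paper, rulers, and a bamboo microscope.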

    Vrihi’s seed bank collection includes numerous unique landraces, such as those with novel pigmentation patterns and wing-like appendages on the rice hull. Perhaps the most remarkable are Jugal, the double-grain rice, and Sateen, the triple-grain rice. These characteristics have been published and copyrighted (Deb 2005) under Vrihi’s name to protect the intellectual property rights of indigenous farmers.

    A few rice varieties have unique therapeutic properties. Kabiraj-sal is believed to provide sufficient nutrition to people who cannot digest a typical protein diet. Our studies suggest that this rice contains a high amount of labile starch, a fraction of which yields important amino acids (the building blocks of proteins). The pink starch of Kelas and Bhut moori is an essential nutrient for tribal women during and after pregnancy, because the tribal people believe it heals their anemia. Preliminary studies indicate a high content of iron and folic acid in the grains of these rice varieties. Local food cultures hold Dudh-sar and Parmai-sal in high esteem because they are “good for children’s brains.” While rigorous experimental studies are required to verify such folk beliefs, the prevalent institutional mindset is to discard folk knowledge as superstitious, even before testing it– until, that is, the same properties are patented by a multinational corporation.

    Traditional farmers grow some rice varieties for their specific adaptations to the local environmental and soil conditions. Thus, Rangi, Kaya, Kelas, and Noichi are grown on rainfed dryland farms, where no irrigation facility exists. Late or scanty rainfall does not affect the yield stability of these varieties. In flood-prone districts, remarkable culm elongation is seen in Sada Jabra, Lakshmi-dighal, Banya-sal, Jal kamini, and Kumrogorh varieties, which tend to grow taller with the level of water inundating the field. The deepest water that Lakshmi-dighal can tolerate was recorded to be six meters. Getu, Matla, and Talmugur can withstand up to 30 ppt (parts per thousand) of salinity, while Harma nona is moderately saline tolerant. No modern rice variety can survive in these marginal environmental conditions. Traditional crop varieties are often recorded to have out-yielded modern varieties in marginal environmental conditions (Cleveland et al. 2000).

    Farmer-selected crop varieties are not only adapted to local soil and climatic conditions but are also fine-tuned to diverse local ecological conditions and cultural preferences. Numerous local rice landraces show marked resistance to insect pests and pathogens. Kalo nunia, Kartik-sal, and Tulsi manjari are blast-resistant. Bishnubhog and Rani kajal are known to be resistant to bacterial blight (Singh 1989). Gour-Nitai, Jashua, and Shatia seem to resist caseworm (Nymphula depunctalis) attack; stem borer (Tryporyza spp.) attack on Khudi khasa, Loha gorah, Malabati, Sada Dhepa, and Sindur mukhi varieties is seldom observed.

    Farmers’ agronomic practices, adapting to the complexity of the farm food web interactions, have also resulted in selection of certain rice varieties with distinctive characteristics, such as long awn and erect flag leaf. Peasant farmers in dry lateritic areas of West Bengal and Jharkhand show a preference for long and strong awns, which deter grazing from cattle and goats (Deb 2005). Landraces with long and erect flag leaves are preferred in many areas, because they ensure protection of grains from birds.

    Different rice varieties are grown for their distinctive aroma, color, and tastes. Some of these varieties are preferred for making crisped rice, some for puffed rice, and others for fragrant rice sweets to be prepared for special ceremonies. Blind to this diversity of local food cultures and farm ecological complexity, the agronomic modernization agenda has entailed drastic truncation of crop genetic diversity as well as homogenization of food cultures on all continents.

    Sustainable Agriculture and Crop Genetic Diversity

    Crop genetic diversity, which our ancestors enormously expanded over millennia (Doebley 2006), is our best bet for sustainable food production against stochastic changes in local climate, soil chemistry, and biotic influences. Reintroducing the traditional varietal mixtures in rice farms is a key to sustainable agriculture. A wide genetic base provides “built-in insurance” (Harlan 1992) against crop pests, pathogens, and climatic vagaries.

    Traditional crop landraces are an important component of sustainable agriculture because their long-term yield stability is superior to most modern varieties. An ample body of evidence exists to indicate that whenever there is a shortage of irrigation water or of fertilizers–due to drought, social problems, or a disruption of the supply network– “modern crops typically show a reduction in yield that is greater and covers wider areas, compared with folk varieties” (Cleveland et al. 1994). Under optimal farming conditions, some folk varieties may have lower mean yields than high-yield varieties but exhibit considerably higher mean yields in the marginal environments to which they are specifically adapted.
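The trade-off can be made concrete with a toy calculation. In the Python sketch below, the yield figures are invented purely to illustrate the statistical point – comparable means over a run of good and bad seasons, but very different variances:

```python
# Hypothetical yields (t/ha), invented purely to illustrate the
# stability argument above; these are not measurements from Basudha.
from statistics import mean, pstdev

# Five seasons: two good, one normal, two stressed (drought/flood).
modern_hyv    = [6.0, 5.8, 4.0, 1.2, 0.8]  # high input-responsive variety
folk_landrace = [3.8, 3.9, 3.6, 3.2, 3.0]  # locally adapted landrace

for name, yields in [("modern HYV", modern_hyv),
                     ("folk landrace", folk_landrace)]:
    print(f"{name}: mean={mean(yields):.2f} t/ha, sd={pstdev(yields):.2f}")
# modern HYV:    mean=3.56 t/ha, sd=2.21
# folk landrace: mean=3.50 t/ha, sd=0.35
```

The HYV wins handsomely in good seasons, but the landrace’s far lower variance is what “yield stability” means for a marginal farmer who cannot survive a near-zero harvest year.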

    All these differences are amply demonstrated on Basudha farm in a remote corner of West Bengal, India. This farm is the only farm in South Asia where over 600 rice landraces are grown every year for producing seeds. These rice varieties are grown with no agrochemicals and scant irrigation. On the same farm, over 20 other crops, including oil seeds, vegetables, and pulses, are also grown each year. To a modern, “scientifically trained” farmer as well as a professional agronomist, it’s unbelievable that over the past eight years, none of the 610 varieties at Basudha needed any pesticides–including bio-pesticides–to control rice pests and pathogens. The benefit of using varietal mixtures to control diseases and pests has been amply documented in the scientific literature (Winterer et al. 1994; Wolfe 2000; Leung et al. 2003). The secret lies in folk ecological wisdom: biological diversity enhances ecosystem persistence and resilience. Modern ecological research (Folke et al. 2004; Tilman et al. 2006; Allesina and Pascual 2008) supports this wisdom.

If the hardware of sustainable agriculture is crop diversity, the software consists of biodiversity-enhancing farming techniques. The farming technique is the “program” of cultivation and can successfully “run” only on the appropriate hardware of crop genetic and species diversity. In the absence of the appropriate hardware, however, the software of ecological agriculture cannot give good results, simply because the techniques evolved on an empirical base of on-farm biodiversity. Multiple cropping, the use of varietal mixtures, the creation of diverse habitat patches, and the fostering of populations of the natural enemies of pests are the most certain means of enhancing agroecosystem complexity. More species and genetic diversity mean greater complexity, which in turn creates greater resilience – that is, the system’s ability to return to its original species composition and structure following environmental perturbations such as pest and disease outbreaks or drought.

    Ecological Functions of On-Farm Biodiversity

Food security and sustainability at the production level are a consequence of the agroecosystem’s resilience, which can only be maintained by deploying diversity at both the species and the crop genetic levels. Varietal mixtures are a proven method of reducing diseases and pests. Growing companion crops like pigeonpea, chickpea, rozelle, yams, Ipomoea fistulosa, and hedge bushes provides alternative hosts for many herbivorous insects, thereby reducing pest pressure on rice. Companion crops also provide important nutrients for the soil, while the leaves of associates like pigeonpea (Cajanus cajan) can suppress the growth of certain grasses like Cyperus rotundus.

    Pest insects and mollusks can be effectively controlled, even eliminated, by inviting carnivorous birds and reptiles (unless they have been eliminated from the area by pesticides and industrial toxins). Erecting bamboo “T’s” or placing dead tree branches on the farm encourages a range of carnivorous birds, including the drongo, bee eaters, owls, and nightjars, to perch on them. Leaving small empty patches or puddles of water on the land creates diverse ecosystems and thus enhances biodiversity. The hoopoe, the cattle egret, the myna, and the crow pheasant love to browse for insects in these open spaces.

    Measures to retain soil moisture to prevent nutrients from leaching out are also of crucial importance. The moisturizing effect of mulching triggers certain key genes that synergistically operate to delay crop senescence and reduce disease susceptibility (Kumar et al. 2004). The combined use of green mulch and cover crops nurtures key soil ecosystem components–microbes, earthworms, ants, ground beetles, millipedes, centipedes, pseudoscorpions, glow worms, and thrips — which all contribute to soil nutrient cycling.

Agricultural sustainability consists of long-term productivity, not short-term increases in yield. Ecological agriculture, which seeks to understand and apply ecological principles to farm ecosystems, is the future of modern agriculture. To correct the mistakes committed in the course of industrial agriculture over the past 50 years, it is imperative that the empirical agricultural knowledge of past centuries and the gigantic achievements of ancient farmer-scientists be examined and employed to reestablish connections to the components of the agroecosystem. The problems of agricultural production that arise from the disintegration of agroecosystem complexity can only be solved by restoring this complexity, not by simplifying it further with technological fixes.


    References

    Allesina S and Pascual M (2008). Network structure, predator-prey modules, and stability in large food webs. Theoretical Ecology 1(1):55-64.

    Basu, P (2007). Microscopes made from bamboo bring biology into focus. Nature Medicine 13(10): 1128. http://www.nature.com/nm/journal/v13/n10/pdf/nm1007-1128a.pdf.

    Challinor A, Slingo J, Turner A and Wheeler T (2006). Indian Monsoon: Contribution to the Stern Review. University of Reading. www.hm-treasury.gov.uk/d/Challinor_et_al.pdf.

    Chameides B (2009). Monsoon fails, India suffers. The Green Grok. Nicholas School of the Environment at Duke University. www.nicholas.duke.edu/thegreengrok/monsoon_india.

    Cleveland DA, Soleri D and Smith SE (1994). Do folk crop varieties have a role in sustainable agriculture? BioScience 44(11): 740–751.

    Cleveland DA, Soleri D and Smith SE (2000). A biological framework for understanding farmers’ plant breeding. Economic Botany 54(3): 377–394.

Deb D (2005). Seeds of Tradition, Seeds of Future: Folk Rice Varieties from East India. Research Foundation for Science Technology & Ecology. New Delhi.

    Doebley J (2006). Unfallen grains: how ancient farmers turned weeds into crops. Science 312(5778): 1318–1319.

FAI (2008). Fertiliser Statistics, Year 2007-2008. Fertiliser Association of India. New Delhi. http://www.faidelhi.org/

Flinn JC, De Datta SK and Labadan E (1982). An analysis of long-term rice yields in a wetland soil. Field Crops Research 7(3): 201–216.

    Folke C, Carpenter S, Walker B, Scheffer M, Elmqvist T, Gunderson L and Holling CS (2004). Regime shifts, resilience and biodiversity in ecosystem management. Annual Review of Ecology, Evolution and Systematics 35: 557–581.

    Government of India (2007). Report of the Expert Group on Agricultural Indebtedness. Ministry of Agriculture. New Delhi. http://www.igidr.ac.in/pdf/publication/PP-059.pdf

Harlan JR (1992). Crops and Man (2nd edition), p. 148. American Society of Agronomy, Inc. and Crop Science Society of America, Inc., Madison, WI.

Kumar V, Mills DJ, Anderson JD and Mattoo AK (2004). An alternative agriculture system is defined by a distinct expression profile of select gene transcripts and proteins. PNAS 101(29): 10535–10540.

    Leung H, Zhu Y, Revilla-Molina I, Fan JX, Chen H, Pangga I, Vera Cruz C and Mew TW (2003). Using genetic diversity to achieve sustainable rice disease management. Plant Disease 87(10): 1156–1169.

Pingali PL (1994). Technological prospects for reversing the declining trend in Asia’s rice productivity. In: Agricultural Technology: Policy Issues for the International Community (Anderson JR, ed), pp. 384–401. CAB International.

    Posani B (2009). Crisis in the Countryside: Farmer suicides and the political economy of agrarian distress in India. DSI Working Paper No. 09-95. Development Studies Institute, London School of Economics and Political Science. London. http://www.lse.ac.uk/collections/DESTIN/pdf/WP95.pdf

    Richharia RH and Govindasamy S (1990). Rices of India. Academy of Development Science. Karjat.

Note: The only reliable data are given in Richharia and Govindasamy (1990), who estimated that about 200,000 varieties existed in India until the advent of the Green Revolution. Assuming many of these folk varieties were synonymous, an estimated 110,000 distinct varieties were in cultivation. Such astounding figures gain credibility from the fact that Dr. Richharia collected 22,000 folk varieties (currently in the custody of Raipur University) from Chhattisgarh alone – one of the 28 States of India. The IRRI gene bank preserves 86,330 accessions from India [FAO (2003). Genetic diversity in rice. In: Sustainable Rice Production for Food Security. International Rice Commission/FAO. Rome. http://www.fao.org/docrep/006/y4751e/y4751e0b.htm#TopOfPage]

    Singh RN (1989). Reaction of indigenous rice germplasm to bacterial blight. National Academy of Science Letters 12: 231-232.

    Tilman D, Reich PB and Knops JMH (2006). Biodiversity and ecosystem stability in a decade-long grassland experiment. Nature 441: 629-632.

    Winterer J, Klepetka B, Banks J and Kareiva P (1994). Strategies for minimizing the vulnerability of rice to pest epidemics. In: Rice Pest Science and Management. (Teng PS, Heong KL and Moody K, eds.), pp. 53–70. International Rice Research Institute, Manila.

    Wolfe MS (2000). Crop strength through diversity. Nature 406: 681–682.

    This article was published earlier in Independent Science News and is republished under the Creative Commons Attribution 3.0 License.

    Feature Image Credit: www.thebetterindia.com