Category: Science, Technology & Security

  • The Asymmetric Indo-US Technology Agreement Points to India’s Weak R&D Culture

    The Asymmetric Indo-US Technology Agreement Points to India’s Weak R&D Culture

    Prime Minister Narendra Modi’s state visit to the USA resulted in four significant agreements, and the visit has been hailed as a very important gain for India and the Indo-US strategic partnership. The focus has been on the defence industrial and technology partnership. The media and many strategic experts see the agreements as major breakthroughs for technology transfers to India, reflecting a superficial analysis and a lack of understanding of what technology transfer really entails. Professor Arun Kumar sees these agreements as a sign of India’s technological weakness and the USA’s smart manoeuvring to secure India as a long-term defence and technology client. The visit has yielded major business gains for the USA’s military-industrial complex and Silicon Valley. Once the euphoria of the visit has passed, Arun Kumar says, it is time for India to carefully evaluate the relevant technology and strategic policy angles.

     

    The Indo-US joint statement issued a few days back says that the two governments will “facilitate greater technology sharing, co-development and co-production opportunities between the US and the Indian industry, government and academic institutions.” This has been hailed as the creation of a new technology bridge that will reshape relations between the two countries.

    General Electric (GE) is offering to give 80% of the technology required for the F414 jet engine, which will be co-produced with Hindustan Aeronautics Limited (HAL). In 2012, when the negotiations had started, GE had offered India 58%. India needs this engine for the Light Combat Aircraft Mark 2 (LCA Mk2) jets.

    The Indian Air Force has been using the LCA Mk1A but is not particularly happy with it and has asked for improvements. Kaveri, the indigenous engine for the LCA under development since 1986, has not been successful: its development has yet to reach the first-flight stage.

    So, India has been using the F404 engine in the LCA Mk1, a 40-year-old design. The F414 is itself of 30-year-old vintage. GE is said to be offering 12 key technologies required in modern jet engine manufacturing, which India has not been able to master over the last 40 years. The US has moved on to more powerful fighter jet engines with newer technologies, like the Pratt & Whitney F135 and the GE XA100.

    India is being allowed into the US-led critical minerals club. It will acquire the highly rated MQ-9B high-altitude long-endurance unmanned aerial vehicles. Micron Technology will set up a semiconductor assembly and test facility in Gujarat by 2024, where it is hoped that chips will eventually be manufactured. The $2.75 billion investment deal is sweetened with the Union government giving 50% and Gujarat contributing 20%.
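    As a back-of-the-envelope illustration, the subsidy split reported above can be worked out from the quoted figures alone (a sketch; the deal's actual incentive structure and disbursement terms may differ):

    ```python
    # Rough split of the reported Micron deal, assuming only the quoted
    # figures: a $2.75 billion project, with the Union government covering
    # 50% of the cost and Gujarat a further 20%.
    project_cost = 2.75e9                 # total investment, USD

    union_share = 0.50 * project_cost     # Union government incentive
    state_share = 0.20 * project_cost     # Gujarat's contribution
    micron_share = project_cost - union_share - state_share  # residual ~30%

    print(f"Union government: ${union_share / 1e9:.3f} bn")
    print(f"Gujarat:          ${state_share / 1e9:.3f} bn")
    print(f"Micron:           ${micron_share / 1e9:.3f} bn")
    ```

    On these numbers, public money covers about 70% of the project cost, which is the asymmetry the author is pointing to.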

    There will be cooperation in space exploration and India will join the US-led Artemis Accords. ISRO and NASA will collaborate and an Indian astronaut will be sent to the International Space Station. INDUS-X will be launched for joint innovation in defence technologies. Global universities will be enabled to set up campuses in each other’s countries, whatever it may imply for atmanirbharta.

    What does it amount to?

    The list is impressive. But is it not one-sided, with India getting technologies it has not been able to develop by itself?

    Though the latest technology is not being given by the US, what is offered is superior to what India currently has. So, it is not just optics. But the real test will be how much India’s technological capability will get upgraded.


    What is being offered is a far cry from what one senior US diplomat had told me at a dinner in 1992. Discussing the New Economic Policies launched in 1991, the diplomat got riled at my complaining that the US was offering us potato chips and fizz drinks but not high technology, and shouted, “Technology is a house we have built and we will never let you enter it.”

    Everyone present there was stunned, but that was the reality.

    The issue is, does making a product in India mean a transfer of technology to Indians? Will it enable India to develop the next level of technology?

    India has assembled and produced MiG-21 jets since the 1960s and Su-30MKI jets since the 1990s. But most critical parts of the Su-30 come from Russia. India set up the Mishra Dhatu Nigam in 1973 to produce the critical alloys needed and production started in 1982, but self-sufficiency in critical alloys has not been achieved.

    So, production using borrowed technology does not mean absorption and development of the technology. Technology development requires ‘know-how’ and ‘know-why’.

    When an item is produced, we can see how it is produced and then copy that. But we also need to know how it is being done and importantly, why something is being done in a certain way. Advanced technology owners don’t share this knowledge with others.

    Technology is a moving frontier

    There are three levels of technology at any given point in time – high, intermediate, and low.

    The high technology of yesterday becomes the intermediate technology of today and the low technology of tomorrow. So, if India now produces what the advanced countries produced in the 1950s, it produces the low-technology products of today (say, coal and bicycles).

    If India produces what was produced in the advanced countries in the 1980s (say, cars and colour TV), it produces the intermediate technology products of today. It is not to say that some high technology is not used in low and intermediate-technology production.

    The high technologies of today are aerospace, nanotechnology, AI, microchips and so on. India is lagging behind in these technologies, like in producing passenger aircraft, sending people into space, making microchips, quantum computing, and so on.

    The advanced countries do not part with these technologies. The World Trade Organisation, with its provisions for TRIPS and TRIMS (Trade-Related Aspects of Intellectual Property Rights and Trade-Related Investment Measures), consolidated the hold of advanced countries on intermediate and low technologies that can be acquired by paying royalties. But high technology is closely held and not shared.

    Advancements in technology

    So, how can nations that lag behind in technology catch up with advanced nations? The Nobel laureate Kenneth Arrow pointed to ‘learning by doing’ – the idea that in the process of production, one learns.


    Schumpeter suggested that technology moves through stages of invention, innovation and adaptation. So, the use of a product does not automatically lead to the capacity to produce it, unless the technology is absorbed and developed. That requires R&D.

    Flying the latest Airbus A321neo does not mean we can produce it. Hundreds of MiG-21s and Su-30s have been produced in India, but we have not been able to produce fighter jet engines, and India’s Kaveri engine is not yet successful. We routinely use laptops and mobile phones, which are also assembled in India, but that does not mean we can produce microchips or hard disks.

    Enormous resources are required to do R&D for advanced technologies and to produce them at an industrial scale. It requires a whole environment which is often missing in developing countries and certainly in India.


    Production at an experimental level can take place. In 1973, I produced epitaxial thin films for my graduate work. But producing them at an industrial scale is a different ballgame. Experts have been brought in from the US, but that has not helped, since high technology is now largely a collective endeavour.

    For more complex technologies, say, aerospace or complex software, there is ‘learning by using’. When an aircraft crashes or malware infects software, it is the producer who learns from the failure, not the user. Again, the R&D environment is important.

    In brief, using a product does not mean we can produce it. Further, producing some items does not mean that we can develop them further. Both require R&D capabilities, which thrive in a culture of research. That is why developing countries suffer from the ‘disadvantage of a late start’.

    A need for a focus on research and development

    R&D culture thrives when innovation is encouraged. Government policies are crucial since they determine whether the free flow of ideas is enabled or not. Also of crucial importance is whether thought leaders or sycophants are appointed to lead institutions, whether criticism is welcomed or suppressed, and whether the government changes its policies often under pressure from vested interests.

    Unstable policies increase the risk of doing research, thereby undermining it and dissuading the industry. The result is the repeated import of technology.

    The software policy of 1987, by opening the sector up to international firms, undermined whatever little research was being carried out then and turned most companies in the field into importers of foreign products, and later into manpower suppliers. Some of these companies became highly profitable, but have they produced any world-class software that is used in daily life?

    Expenditure on R&D is an indication of the priority accorded to it. India spends a lowly 0.75% of its GDP on R&D. Neither the government nor the private sector prioritises it. Businesses find it easier to manipulate policies through cronyism. Those who are close to the rulers do not need to innovate, while others know that they will lose out. So, neither focuses on R&D.

    Innovation also depends on the availability of associated technologies – it creates an environment. An example is Silicon Valley, which has been at the forefront of innovation. It has also happened around universities where a lot of research capabilities have developed and synergy between business and academia becomes possible.

    This requires both parties to be attuned to research. In India, around some of the best-known universities like Delhi University, Allahabad University and Jawaharlal Nehru University, coaching institutions have mushroomed and not innovative businesses. None of these institutions are producing any great research, nor do businesses require research if they can import technology.

    A feudal setup

    Technology is an idea. In India, most authority figures don’t like being questioned. For instance, bright students asking questions are seen as troublemakers in most schools. The emphasis is largely on completing coursework for examinations. Learning is by rote, with most students unable to absorb the material taught.

    So, most examinations have standard questions requiring reproduction of what is taught in class, rather than application of what is learned. My students at JNU pleaded against open-book exams. Our physics class of 1967 had toppers from various higher secondary boards; we had chosen physics over the IITs. We rebelled against such teaching and initiated reform, but ultimately most of us left physics – a huge loss to the subject.

    Advances in knowledge require critiquing its existing state – that is, by challenging the orthodoxy and status quo. So, the creative independent thinkers who generate socially relevant knowledge also challenge the authorities at their institutions and get characterised as troublemakers. The authorities largely curb autonomy within the institution and that curtails innovativeness.

    In brief, dissent – which is the essence of knowledge generation – is treated as a malaise to be eliminated. These are the manifestations of a feudal and hierarchical society which limits the advancement of ideas. Another crucial aspect of generating ideas is learning to accept failure. The Michelson–Morley experiment established that there is no aether only through hundreds of runs that failed to detect it.

    Conclusion

    The willingness of the US to provide India with some technology without expecting reciprocity is gratifying. Such magnanimity has not been shown earlier and it is obviously for political (strategic) reasons. The asymmetry underlines our inability to develop technology on our own. The US is not giving India cutting-edge technologies that could make us a Vishwaguru.

    India needs to address its weakness in R&D. As in the past, co-producing a jet engine, flying drones or packaging and testing chips will not get us to the next level of technology, and we will remain dependent on imports later on.

    This can be corrected only through a fundamental change in our R&D culture that would enable technology absorption and development. That would require granting autonomy to academia and getting out of the feudal mindset that presently undermines scientific temper and hobbles our system of education.

     

    This article was published earlier in thewire.in

    Feature Image Credit: thestatesman.com

     

  • The “loss and damage” agenda at COP27

    The “loss and damage” agenda at COP27

    The dialogues on climate change action have failed to produce effective measures. At the heart of the problem is the refusal of the developed countries to accept that they were the beneficiaries of the industrial revolution, colonialism, and imperialism, and have contributed the most to the problems humanity now faces on account of climate change. Hence, the assertion by two-thirds of the world that developed nations should bear the costs of implementing corrective measures is valid and logical.

    The 27th Conference of Parties (COP) of the United Nations Framework Convention on Climate Change (UNFCCC) was hosted by the Government of the Arab Republic of Egypt from 06 November to 18 November (extended to 20 November). The conference came at a time when the world had witnessed massive heatwaves, flooding in Pakistan, wildfires across Spain and California, and droughts in East Africa. Its mission is to take collective action to combat climate change under the Paris Agreement and the convention. After a decade of climate talks, the question is: are countries ready to take collective action against climate change?

    Developed Nations’ Responsibility and Accountability

    Financial compensation remains a huge point of contestation between developed and developing countries. Developing countries, or the Global South, face the adverse effects of climate change and demand compensation for the historical damage caused by colonialism and resource extraction that aided the development of the Global North, which includes the countries of the EU and the United States. Developed countries bear the most responsibility for the emissions leading to global temperature rise — between 1751 and 2017, the United States, the EU and the UK were responsible for 47% of cumulative carbon dioxide emissions, compared to just 6% from the entire African and South American continents. At COP15 in Copenhagen in 2009, Global North nations agreed to pledge $100 billion (€101 billion) annually by 2020 to help developing countries adapt to the impacts of climate change, for example, by providing farmers with drought-resistant crops or paying for better flood defences. But according to the Organization for Economic Cooperation and Development (OECD), which tracks funding, in 2020 wealthy countries pledged just over $83 billion.


    Such compensation for loss and damage has been a focal point at climate summits since 1991. In terms of institutional developments, the COP19 conference in 2013 established the Warsaw International Mechanism for Loss and Damage, which is supposed to enhance global understanding of climate risk, promote transnational dialogue and cooperation, and strengthen “action and support”. At COP25, the Santiago Network on Loss and Damage (SNLD) was set up to provide research and technical assistance on loss and damage from human-induced climate change. That meeting did not discuss the network’s working process, so the matter was taken up at COP26, where no substantial changes were made. And although the G77 countries proposed a Glasgow facility to finance solutions for loss and damage at COP26, developed countries such as the US and the EU bloc did not go beyond agreeing to a three-year dialogue.


    The US’s stance on financing vulnerable countries to find solutions against climate change is constantly shifting. The trend indicates that the US wants to focus on curbing global warming rather than dwell on past losses and damages that have already occurred. The Global North is reluctant to acknowledge even the definition of loss and damage, as an acknowledgement would make it liable for 30 years’ worth of climate change impact. Developed countries constantly focus on holding dialogues rather than coming up with solutions for climate risk mitigation. In a statement prior to COP27, US climate envoy John Kerry expressed concern about how the shifting focus on loss and damage “could delay our ability to do the most important thing of all, which is [to] achieve mitigation sufficient to reduce the level of adaptation.”

    The USA’s Evasive Tactics

    The Bonn summit held in June 2022, which set the stage for the COP27 agenda, ended in disagreement as the US and EU refused to accept funding for loss and damage as an agenda item. At the conclusion of COP27, however, countries did agree to establish a fund for loss and damage. Governments also agreed to establish a ‘transitional committee’ to make recommendations on how to operationalize both the new funding arrangements and the fund at COP28 next year. The first meeting of the transitional committee is expected to take place before the end of March 2023.

    Parties also agreed on the institutional arrangements to operationalize the Santiago Network for Loss and Damage, to mobilise technical assistance to developing countries that are particularly vulnerable to the adverse effects of climate change. Governments agreed to move forward on the Global Goal on Adaptation, which will conclude at COP28 and inform the first Global Stocktake, improving resilience amongst the most vulnerable. New pledges, totalling more than USD 230 million, were made to the Adaptation Fund at COP27. These pledges will help many more vulnerable communities adapt to climate change through concrete adaptation solutions.

    Despite a groundbreaking agreement, the most common question asked by the public is “are the climate summits any good?”

    The question arises from the absence of effective leadership to monitor or censure nations for the destruction of the environment. The summits have created a sense of accountability for all nations, irrespective of their degree of vulnerability. While vulnerable states bear a higher cost of climate change, all states collectively pledge to reduce carbon emissions and achieve net-zero emissions by 2050. While a monitoring mechanism is absent, non-governmental organisations (NGOs) and civil society actively advocate climate change mitigation measures and criticise both state and non-state actors for their lack of initiative. Incidentally, COP27 partnered with Coca-Cola for sponsorship, and many activists slammed the move, as Coca-Cola is one of the top five polluters of 2022, producing around 120 billion throwaway plastic bottles a year.

    Apart from that, many other funding networks and initiatives have been introduced to support vulnerable countries against climate change. Under Germany’s G7 presidency, the G7, along with the Vulnerable Twenty (V20) group of countries, launched the Global Shield against Climate Risks during COP27. The Shield gathers activities in the field of climate risk finance and preparedness under one roof. Under it, protection solutions will be devised that can be implemented swiftly when climate-related damage occurs. At COP27, Chancellor Olaf Scholz announced Germany’s contribution of 170 million euros to the Shield. Of this, 84 million euros are earmarked for the Shield’s central financing structure, and the rest for complementary instruments of climate risk financing, which will be implemented as concrete safeguarding measures over the next few years.

    On 20 September, Denmark became the first developed country in the world to provide financial compensation to developing countries for ‘loss and damage’ caused by climate change. The country pledged approximately EUR 13 million (100 million Danish krone) to civil society organisations based in developing nations working on climate change-related loss and damage. Germany and Denmark are so far the only financial supporters of the initiative launched at COP27.

    What can India do?

    India has launched Mission LiFE, an initiative to bring about lifestyle changes that reduce the burden on the environment. During the event, the MoEFCC–UNDP compendium ‘Prayaas Se Prabhaav Tak – From Mindless Consumption to Mindful Utilisation’ was launched. It focuses on reduced consumption, the circular economy, Vasudhaiva Kutumbakam, and sustainable resource management. India has also signed the Mangrove Alliance for Climate (MAC), determined to protect mangroves and create a carbon sink of 3 billion tonnes of CO2 by expanding forest cover.

    India has maintained a stance where it has neither advocated for nor against financial compensation for loss and damage. However, it has always called on developed countries to provide finance for developing technology or sharing technical know-how to reduce climate risk. Such an approach can help other countries to push for financial aid to develop technology instead of using their own resources.

    Further, India holds a unique position among developing countries as an emerging economy. With its diplomatic prowess under the Modi government, India can play an ideal role in negotiating with developed countries. India is also focused on phasing out the use of all fossil fuels, not just coal, another consistent move that adds to the country’s credentials. With the weaponization of energy by Russia since the onset of the Ukraine war, India’s call for action has garnered intensive support from both developed and developing nations. With the support of the Global South, India can assume a leadership role in establishing south-south cooperation on climate risk mitigation and the shift to renewable energy such as solar power.

    Conclusion

    Climate funds are important for designing and implementing solutions, as developing and vulnerable countries find it difficult to divert resources from developmental activities. The question largely remains whether the COP27 countries will adhere to the agreement concluded at the summit. There is no conclusive evidence on when the fund will be set up, or on the liability if countries fail to contribute to it. Eventually, it comes down to the actors themselves, both state and non-state, to effectively reduce fossil fuel consumption and wastage, as many countries still focus on exploiting African gas reserves to meet their energy requirements. Ambitious goals with no actual results are a trend that is expected to continue till the next summit, and at this rate the world has a long way to go to limit warming to 1.5 degrees Celsius above pre-industrial levels.

    Feature Image Credit: www.cnbc.com

    Article Image: aljazeera.com 

  • The Great Chips War

    The Great Chips War

    The supply chain disruptions for semiconductor chips, and the increasing US sanctions on China’s and Russia’s access to high-tech chips, have signalled the critical relevance of control over this technology and process for national security. Chip design and manufacture involve heavy capital investment and access to special machinery monopolised by very few American-controlled or American-influenced companies in Europe and East Asia. India, having missed the boat earlier due to poor investment decisions, has recognised chip manufacturing as a critical strategic industry and is investing efforts to establish significant capabilities. This could take years, as challenges remain. – TPF Editorial Team

    Following the US Commerce Department’s announcement of severe new restrictions on sales of advanced semiconductors and other US high-tech goods to China, the Sino-American rivalry has entered an important new phase. Even under the best circumstances, China will have a difficult time adapting to its new reality.

    In addition to dealing with the fallout from open warfare in eastern Europe, the world is witnessing the start of a full-scale economic war between the United States and China over technology. This conflict will be highly consequential, and it is escalating rapidly. Earlier this month, the US Commerce Department introduced severe new restrictions on the sale of advanced semiconductors and other US high-tech goods to China. While Russia has used missiles to try to cripple Ukraine’s energy and heating infrastructure, the US is now using export restrictions to curtail China’s military, intelligence, and security services.


    Moreover, in late August, US President Joe Biden signed the CHIPS Act, which includes subsidies and other measures to bolster America’s domestic semiconductor industry. Semiconductors are, and will remain, at the heart of the twenty-first-century economy. Without microchips, our smartphones would be dumb phones, our cars wouldn’t move, our communications networks wouldn’t function, any form of automation would be unthinkable, and the new era of artificial intelligence that we are entering would remain the stuff of sci-fi novels. Controlling the design, fabrication, and value chains that produce these increasingly important components of our lives is thus of the utmost importance. The new chip war is a war for control of the future.

    The semiconductor value chain is hyper-globalized, but the US and its closest allies control all the key nodes. Chip design is heavily concentrated in America; production would not be possible without advanced equipment from Europe; and fabrication of the most advanced chips – including those that are critical for AI – is located exclusively in East Asia. The most important player by far is Taiwan, but South Korea is also in the picture.

    In its own pursuit of technological supremacy, China has become increasingly reliant on these chips, and its government has been at pains to boost domestic production and achieve “self-sufficiency.” In recent years, China has invested massively to build up its own semiconductor design and manufacturing capabilities. But while there has been some progress, it remains years behind the US; and, crucially, the most advanced chips are still beyond China’s reach.

    It has now been two years since the US banned all sales of advanced chips to the Chinese telecom giant Huawei, which was China’s global technology flagship at the time. The results have been dramatic. After losing 80% of its global market share for smartphones, Huawei was left with no choice but to sell off its smartphone unit, Honor, and reorient its corporate mission. With its latest move, the US is now aiming to do to all of China what it did to Huawei.

    This dramatic escalation of the technology war is bound to have equally dramatic economic and political consequences, some of which will be evident immediately, and some of which will take some time to materialize. China most likely has stocked up on chips and is already working to create sophisticated new networks to circumvent the sanctions. (After Huawei spun it off in late 2020, Honor quickly staged a comeback, selling phones that use chips from the US multinational Qualcomm.)

    Still, the new sanctions are so broad that, over time, they will almost certainly strike a heavy blow not only to China’s high-tech sector but also to many other parts of its economy. A European company that exports to China now must be doubly sure that its products contain no US-connected chips. And, owing to the global nature of the value chain, many chips from Taiwan or South Korea also will be off-limits.


    The official aim of the US policy is to keep advanced chips out of the Chinese military’s hands. But the real effect will be to curtail China’s development in the sectors that will be critical to national power in the decades ahead. China will certainly respond with even stronger efforts to develop its own capabilities. But even under the best circumstances, and despite all the resources it will throw at the problem, any additional efforts will take time to bear fruit, especially now that US restrictions are depriving China of the inputs that it needs to achieve self-sufficiency.

    The new chips war eliminates any remaining doubt that we are witnessing a broader Sino-American decoupling. That development will have far-reaching implications – only some of them foreseeable – for the rest of the global economy.

    Ukraine is already repairing and restarting the power stations that have been hit by Russian missile barrages since the invasion began in February. But it will be much more difficult for China to overcome the loss of key technologies. As frightening as Russia’s twentieth-century-style war is, the real sources of power in the twenty-first century do not lie in territorial conquest. The most powerful countries will be those that master the economic, technological, and diplomatic domains.

    This article was published earlier in Project Syndicate.

    Images Credit: Globaltimes.cn

  • Ghosts in the Machine: The Past, Present, and Future of India’s Cyber Security

    Ghosts in the Machine: The Past, Present, and Future of India’s Cyber Security

    Download the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/10/Research-Paper-TPF-1.pdf

    Introduction

    When the National Cybersecurity Policy was released in 2013, the response from experts was rather underwhelming [1], [2]. A reaction to a string of unpalatable incidents, from Snowden’s revelations [3] and the massive compromise of India’s civilian and military infrastructure [4] to growing international pressure on Indian IT companies to fix their frequent data breaches [5], the 2013 policy was a macro example of weak structures finding refuge in a haphazard post-incident response. The next iteration of the policy is being formulated under the National Cybersecurity Coordinator. However, before we embark upon solving our cyber-physical domain’s future threat environment, it is perhaps wise to look back upon the perilous path that has brought us here.

    Early History of Electronic Communications in India

    The institutional “cybersecurity thinking” of post-independence Indian government structures can be traced to 1839, when the East India Company’s Governor-General of India, Lord Dalhousie, asked for a telegraph system to be built in Kolkata, then the capital of the British Raj. By 1851, the British had deployed the first trans-India telegraph line, and by 1854, the first Telegraph Act had been passed. Similar to the 2008 amendment to the IT Act, which allowed the government to intercept, monitor and decrypt any information on any computer, the 1860 amendment to the Telegraph Act granted the British authorities the power to take over any leased telegraph lines and access any of the telegraphs transmitted. After all, the new wired communication technology of the day had become an unforeseen flashpoint during the 1857 rebellion.

    Post-independence, under the socialist fervour of Nehruvian politics, the government further nationalised all foreign telecommunications companies and continued the British policy of total control over telecommunications under its own civil service structure, which too came pre-packaged from the British.

    Historians note that the telegraph operators working for the British quickly became targets of intrigues and lethal violence during the mutiny [6], somewhat akin to today’s Sysadmins being a top social engineering priority for cyber threat actors [7]. One of the sepoy mutineers of 1857, while on his way to the hangman’s halter, famously cried out at a telegraph line calling it the cursed string that had strangled the Indians [8]. On the other side of affairs, after having successfully suppressed the mutiny, Robert Montgomery famously remarked that the telegraph had just saved India [9]. Within the telegraph system, the problems of information security popped up fairly quickly after its introduction in India. Scholars note that commercial intelligence was frequently peddled in underground Indian markets by government telegraph clerks [10], in what can perhaps be described as one of the first “data breaches” that bureaucrats in India had to deal with. 

    The British had formulated different rules for telecommunications in India and in England. While they did not have total monopoly and access rights over all transmissions in Britain, in India they did, for the purpose of maintaining political control [11]. Post-independence, under the socialist fervour of Nehruvian politics, the government further nationalised all foreign telecommunications companies and continued the British policy of total control over telecommunications under its own civil service structure, which too came pre-packaged from the British.

    The Computer and “The System”

    Major reforms are often preceded by major failures. The government imported its first computer in 1955 but did not show any interest in policy regarding these new machines. That changed only in 1963, when the government, under pressure to reform after a shameful military defeat and the loss of significant territory to China, instituted a Committee on Electronics under Homi Jehangir Bhabha to assess the strategic utility that computers might provide to the military [12].

    In 1965, as punitive sanctions for the war with Pakistan, the US cut off India’s supply of all electronics, including computers. This forced the government to set up the Electronics Committee of India, which worked alongside the Electronics Corporation of India (ECIL), mandated to build indigenous design and electronics manufacturing capabilities. But their approach was considered highly restrictive and discretionary, which, instead of facilitating, further constrained the development of computers, related electronics, and correspondingly useful policies in India [13]. Moreover, no one was even writing commercial software in India, while at the same time the demand for export-quality software was rising. The situation was such that ECIL had to publish full-page advertisements for the development of export-quality software [12]. Consequently, in the early 1970s, Mumbai-based Tata Consultancy Services became the first company to export software from India. As the 1970s progressed and India moved into the 1980s, it gradually became clearer to more and more people in the government that their socialist policies were not working [14].

    In 1984, the same year the word ‘cyberspace’ appeared in the sci-fi novel Neuromancer, a policy shift towards computing and communications technologies was seen in the newly formed government under Rajiv Gandhi [12]. The new computer policy, shaped largely by N. Seshagiri, then Director General of the National Informatics Centre, significantly simplified procedures for private actors and was released within twenty days of the prime minister taking the oath. Owing to this liberalisation, the software industry in India took off, and in 1988, 38 leading software companies came together to establish the National Association of Software and Service Companies (NASSCOM) with the intent to shape the government’s cyber policy agenda. As we are mostly concerned with cybersecurity, it should be noted that in 1990 it was NASSCOM that carried out probably the first IT security-related public awareness campaign in India, which called for reducing software piracy and increasing the lawful use of IT [5].

    Unfortunately, India’s 1990s were marred by coalition governments and a lack of coherent policy focus. In 1998, when Atal Bihari Vajpayee became Prime Minister, cyber policy took its most defining turn with the development of the National IT Policy. The IT Act, enacted in 2000 and amended in 2008, became the first document explicitly dealing with cybercrime. Interestingly, the spokesman and a key member of the task force behind the national IT policy was Dewang Mehta, the then president of NASSCOM. Nevertheless, while computer network operations had become regular in international affairs [15], there was still no cyber policy framework or doctrine to deal with the risks from sophisticated (and state-backed) APT actors residing outside the jurisdiction of Indian authorities. There still is not.

    In 2008, NASSCOM established the Data Security Council of India (DSCI), which along with its parent body took it upon itself to run cybersecurity awareness campaigns for law enforcement and other public sector organisations in India. However, the “awareness campaign”-centric model of cybersecurity strategy does not really work against APT actors, as became apparent when researchers at the University of Toronto discovered a massive infiltration of India’s civilian and military computers by APT actors [4]. In 2013, the Snowden revelations about unrestrained US spying on India also ruffled domestic feathers over the lack of any defensive measures or policies [3]. Coupled with these surprise(?) and unpalatable revelations, there was also increasing and recurring international pressure on Indian IT to put an end to the rising cases of data theft, where sensitive data of overseas customers was regularly found in online underground markets [16].

    Therefore, with the government facing growing domestic and international pressure to revamp its approach towards cybersecurity, MeitY released India’s first National Cybersecurity Policy in 2013 [17]. The Ministry of Home Affairs (MHA) also released detailed guidelines “in the wake of persistent threats” [18]. However, the government admitted to not having the required expertise in the matter, and the preparation of the MHA document was thus outsourced to DSCI. Notwithstanding that, the MHA document was largely an extension of the Manual on Departmental Security Instructions released in 1994, which had addressed the security of paper-based information. Consequently, the MHA document was less a national policy and more a set of instructions to departments about sanitising their computer networks and resources, including a section of instructions to personnel on social media usage.

    The 2013 National Cybersecurity Policy proposed certain goals and “5-year objectives” toward building national resilience in cyberspace. At the end of a long list of aims, the policy suggested adopting a “prioritised approach” to implementation, which would be operationalised in the future by a detailed guide and plan of action at the national, sectoral, state, ministry, department, and enterprise levels. However, as of this writing, the promised implementation details, or any teeth, are still missing from the National Cybersecurity Policy. As continued APT activities [19] show, the measures towards creating situation awareness have also not permeated beyond the technical/collection layer.

    In 2014, the National Cyber Coordination Centre (NCCC) was established, with the primary aim of building situational awareness of cyber threats in India. Given the underwhelming response to the 2013 policy [1], [2], the National Cybersecurity Policy was slated to be updated in 2020, but as of this writing, the update is still being formulated by the National Cybersecurity Coordinator, who heads the NCCC. The present policy gap makes this an opportune moment to discuss certain fundamental issues with cyber situation awareness and the future of cyber defences in the context of trends in APT activities.

    Much to Catch Up

    Recently, the Government of India’s Kavach (an employee authentication app for anyone using a ‘gov.in’ or ‘nic.in’ email ID) was besieged by APT36 [20]. APT36 is a Pakistan-affiliated actor and what one might call a tier-3 APT, i.e., what it lacks in technical sophistication it tries to make up for with passion and perseverance. What makes the incident interesting is that the malicious activity went on for over a year before a third-party threat observer flagged it. Post-pandemic, APT activities have not just increased but have also shown an inclination towards integrating online disinformation into their malware capabilities [21]. APT actors (and bots), who have increasingly gotten better at hiding in plain sight on social networks, now have a variety of AI techniques to integrate into their command and control – we have seen the use of GANs to mimic the traffic of popular social media sites to hide command-and-control traffic [22], an IoT botnet with a machine-learning component that the attacker could switch on or off depending upon people’s responses in online social networks [21], and malware that can “autonomously” locate its command-and-control node over public communication platforms without any hard-coded information about the attacker [23].

    Post-pandemic, APT activities have not just increased but have also shown an inclination towards integrating online disinformation into their malware capabilities.

    This is an offence-persistent environment. In this “space”, there always exists an information asymmetry: the defender generally knows less about the attacker than vice versa. Wargaming results have shown that unlike conventional conflicts, where an attack induces the fear of death and destruction, a cyber-attack generally induces anxiety [24]; consequently, people dealing with cyber-attacks act to offset those anxieties, not their primal fears. Thus, in response to cyber-attacks, their policies reflect risk aversion, not courage, physical or moral. It need not be so if policymakers recognise this and integrate it into their decision-making heuristics. Unfortunately, the National Cybersecurity Policy released in 2013 stands out as a fairly risk-averse placeholder document. Among many others, the key issues are:

    • The policy makes zero references to automation and AI capabilities. This would have been understandable in other domains, like poultry perhaps, but is incomprehensible in present-day cybersecurity.
    • The policy makes zero references to hardware attacks. Consequently, developing any capability for assessing insecurity at hardware/firmware levels, which is a difficult job, is also overlooked at the national level itself. 
    • There are several organisations within the state, civilian and military, that have stakes and roles of varying degrees in a robust National Cybersecurity Policy. However, the policy makes zero attempts at recognising and addressing these specific roles and responsibilities, or any areas of overlap therein.
    • The policy does not approach cyber activity as an overarching operational construct that permeates all domains, but rather as activity in a specific domain called “cyberspace”. Consequently, it lacks the doctrinal thinking that would integrate cyber capabilities with the use of force. A good example of this is outer space, where cyber capabilities are emerging as a potent destabiliser [25] and cybersecurity constitutes the operational foundation of space security, again completely missing from the National Cybersecurity Policy.   
    • The policy is also light on subjects critical to cybersecurity implementation, such as the approach towards internet governance, platform regulation, national encryption regime, and the governance of underlying technologies. 

    A Note on the Human Dimension of Cybersecurity

    There exist two very broad types of malicious behaviour online: one that is rapid and superficial, and another that is deep and persistent. The present approaches to building situation awareness in cyberspace are geared towards the former, leading to spatiotemporally “localised and prioritised” assessments [26] that address immediate law-and-order situations rather than stealthy year-long campaigns. Thus, while situation awareness is itself a psychological construct dealing with decision-making, in cybersecurity operations it has overwhelmingly turned into software-based visualisation of incoming situational data. This is a growing gap that must also be addressed by the National Cybersecurity Policy.

    The use of computational tools and techniques to automate and optimise the social interactions of a software agent presents itself as a significant force multiplier for cyber threat actors.

    In technology-mediated environments, people have to share the actual situation awareness with technology artifacts [27]. Complete dependence on technology for cyber situation awareness has proven problematic: in the case of Stuxnet, the operators at the targeted plant saw on their computer screens that the centrifuges were running normally, and simply believed that to be true. The 2016 US election interference only became clear at the institutional level after several months of active social messaging and doxing operations had already been underway [28], and the story of Telebots’ attack on Ukrainian electricity grids is even more telling – a power plant employee whose computer was being remotely manipulated sat making a video of the activity, asking his colleague if it could be their own organisation’s IT staff “doing their thing” [29].

    This lack of emphasis on human factors has been a key gap in cybersecurity, which APTs never fail to exploit. Further, such actors rely upon considerable social engineering in the initial access phases, a process which is also getting automated faster than policymakers can catch up with [30]. The use of computational tools and techniques to automate and optimise the social interactions of a software agent presents itself as a significant force multiplier for cyber threat actors. Therefore, it is paramount to develop precise policy guidelines that implement the specific institutional structures, processes, and technological affordances required to mitigate the risks of malicious social automation to the unsuspecting population, as well as to government institutions.

    Concluding Remarks

    There is a running joke that India’s strategic planning is overseen by accountants, and reading through the National Cybersecurity Policy 2013, that does not seem surprising. We have had a troubling policy history when it comes to electronics and communications and are still in the process of shedding our colonial burden. A poorly framed National Cybersecurity Policy will only take us away from self-reliance in cyberspace and towards an alliance with the principal offenders themselves. Admittedly, an information-abundant organisation like the NCCC has to make choices about where and on what to concentrate its attentional resources; however, the present National Cybersecurity Policy appears to be neither a component of any broader national security strategy nor effective and comprehensive enough for practical implementation in responding to the emerging threat environment.

    References

    [1] N. Alawadhi, “Cyber security policy must be practical: Experts,” The Economic Times, Oct. 22, 2014. Accessed: Sep. 14, 2022. [Online]. Available: https://economictimes.indiatimes.com/tech/internet/cyber-security-policy-must-be-practical-experts/articleshow/44904596.cms

    [2] A. Saksena, “India Scrambles on Cyber Security,” The Diplomat, Jun. 18, 2014. https://thediplomat.com/2014/06/india-scrambles-on-cyber-security/ (accessed Sep. 18, 2022).

    [3] C. R. Mohan, “Snowden Effect,” Carnegie India, 2013. https://carnegieindia.org/2013/06/19/snowden-effect-pub-52148 (accessed Sep. 18, 2022).

    [4] R. Dharmakumar and S. Prasad, “Hackers’ Haven,” Forbes India, Sep. 19, 2011. https://www.forbesindia.com/printcontent/28462 (accessed Sep. 18, 2022).

    [5] D. Karthik and R. S. Upadhyayula, “NASSCOM: Is it time to retrospect and reinvent,” Indian Inst. Manag. Ahmedabad, 2014.

    [6] H. C. Fanshawe, Delhi past and present. J. Murray, 1902.

    [7] C. Simms, “Is Social Engineering the Easy Way in?,” Itnow, vol. 58, no. 2, pp. 24–25, 2016.

    [8] J. Lienhard, “No. 1380: Indian telegraph,” Engines Our Ingen., 1998.

    [9] A. Vatsa, “When telegraph saved the empire – Indian Express,” Nov. 18, 2012. http://archive.indianexpress.com/news/when-telegraph-saved-the-empire/1032618/0 (accessed Sep. 17, 2022).

    [10] L. Hoskins, British Routes to India. Routledge, 2020.

    [11] D. R. Headrick, The invisible weapon: Telecommunications and international politics, 1851-1945. Oxford University Press on Demand, 1991.

    [12] B. Parthasarathy, “Globalizing information technology: The domestic policy context for India’s software production and exports,” Iterations Interdiscip. J. Softw. Hist., vol. 3, pp. 1–38, 2004.

    [13] I. J. Ahluwalia, “Industrial Growth in India: Stagnation Since the Mid-Sixties,” J. Asian Stud., vol. 48, pp. 413–414, 1989.

    [14] R. Subramanian, “Historical Consciousness of Cyber Security in India,” IEEE Ann. Hist. Comput., vol. 42, no. 4, pp. 71–93, 2020.

    [15] C. Wiener, “Penetrate, Exploit, Disrupt, Destroy: The Rise of Computer Network Operations as a Major Military Innovation,” PhD Thesis, 2016.

    [16] N. Kshetri, “Cybersecurity in India: Regulations, governance, institutional capacity and market mechanisms,” Asian Res. Policy, vol. 8, no. 1, pp. 64–76, 2017.

    [17] MeitY, “National Cybersecurity Policy.” Government of India, 2013.

    [18] MHA, “National Information Security Policy and Guidelines.” Government of India, 2014.

    [19] S. Patil, “Cyber Attacks, Pakistan emerges as China’s proxy against India,” Obs. Res. Found., 2022.

    [20] A. Malhotra, V. Svajcer, and J. Thattil, “Operation ‘Armor Piercer:’ Targeted attacks in the Indian subcontinent using commercial RATs,” Sep. 23, 2021. http://blog.talosintelligence.com/2021/09/operation-armor-piercer.html (accessed Sep. 02, 2022).

    [21] NISOS, “Fronton: A Botnet for Creation, Command, and Control of Coordinated Inauthentic Behavior.” May 2022.

    [22] M. Rigaki, “Arming Malware with GANs,” presented at the Stratosphere IPS, Apr. 2018. Accessed: Oct. 19, 2021. [Online]. Available: https://www.stratosphereips.org/publications/2018/5/5/arming-malware-with-gans

    [23] Z. Wang et al., “DeepC2: AI-Powered Covert Command and Control on OSNs,” in Information and Communications Security, vol. 13407, C. Alcaraz, L. Chen, S. Li, and P. Samarati, Eds. Cham: Springer International Publishing, 2022, pp. 394–414. doi: 10.1007/978-3-031-15777-6_22.

    [24] J. Schneider, “Cyber and crisis escalation: insights from wargaming,” 2017.

    [25] J. Pavur, “Securing new space: on satellite cyber-security,” PhD Thesis, University of Oxford, 2021.

    [26] U. Franke and J. Brynielsson, “Cyber situational awareness – A systematic review of the literature,” Comput. Secur., vol. 46, pp. 18–31, Oct. 2014, doi: 10.1016/j.cose.2014.06.008.

    [27] N. A. Stanton, P. M. Salmon, G. H. Walker, E. Salas, and P. A. Hancock, “State-of-science: situation awareness in individuals, teams and systems,” Ergonomics, vol. 60, no. 4, pp. 449–466, Apr. 2017, doi: 10.1080/00140139.2017.1278796.

    [28] “Open Hearing On The Intelligence Community’s Assessment on Russian Activities and Intentions in the 2016 U.S. Elections.” Jan. 10, 2017. Accessed: Dec. 22, 2021. [Online]. Available: https://www.intelligence.senate.gov/hearings/open-hearing-intelligence-communitys-assessment-russian-activities-and-intentions-2016-us#

    [29] R. Lipovsky, “Tactics, Techniques, and Procedures of the World’s Most Dangerous Attackers,” presented at the Microsoft BlueHat 2020, 2020. [Online]. Available: https://www.youtube.com/watch?v=9LAFV6XDctY

    [30] D. Ariu, E. Frumento, and G. Fumera, “Social engineering 2.0: A foundational work,” in Proceedings of the Computing Frontiers Conference, 2017, pp. 319–325.

    [powerkit_button size="lg" style="info" block="true" url="https://admin.thepeninsula.org.in/wp-content/uploads/2022/10/Research-Paper-TPF-1.pdf" target="_blank" nofollow="false"]
    Download
    [/powerkit_button]

  • The Bridge on River Chenab

    The Bridge on River Chenab

    “The only way to discover the limits of the possible is to go beyond them into the impossible”

    -Arthur C. Clarke

    Introduction

    On 13 Aug 2022, the bridge on the River Chenab in the Reasi district of J&K was finally completed. It was a case of the impossible becoming possible. It happened because of the very high degree of self-belief of those who planned it and the sincerity of the thousands who worked hard on it over the last 18 years. Indeed, it was the best gift Indian Railways in general, and Indian engineers in particular, could give to India on the country’s 76th Independence Day. It is also highly symbolic that the bridge is located in J&K; in a way, it appears to be a giant step towards the integration of J&K with the rest of the country.

    The bridge over the River Chenab is part of the Jammu-Udhampur-Baramulla railway line, which is under construction. While the Jammu-Udhampur, Udhampur-Katra and Banihal-Baramulla sections are already complete and open to traffic, the Katra-Banihal section is still unfinished, and the degree of difficulty in this section is enormous. Besides this bridge on the Chenab (more about it a little later), the bridge on the Anji Khad (which is under construction) and a total of 35 tunnels and 37 bridges make this 111 km section in mountainous terrain extremely challenging and an engineering marvel in the making.

    Progress of the Project – It is a 356 km railway project, starting at Jammu and going up to Baramulla. It was started in 1983 with the objective of connecting Jammu Tawi to Udhampur. Construction of the route faced natural challenges, including major earthquake zones, extreme temperatures and inhospitable terrain. Finally, in 2005, the 53 km long Jammu–Udhampur section opened after 21 years, with 20 tunnels and 158 bridges; the cost of this section had escalated to ₹515 crore from the original estimate of ₹50 crore. In 1994, the Railways had accepted the necessity of extending the track to Baramulla; however, at that point it was thought that the project would have two disconnected arms, one from Jammu to Udhampur and the second from Qazigund to Baramulla. In 2002, the GoI declared this a national project, meaning that thereafter the entire funding would come from the central budget. At that time, the necessity of connecting the two disconnected arms was also accepted, and the estimated cost of the project was assessed at ₹6,000 crore. In 2008, the 66 km section between Anantnag and Manzhama (outside Srinagar) was opened for traffic. In 2009, this service was extended to Baramulla, and during the same year, the line from Anantnag was extended to Qazigund.

    Around the same time, an extension of the track from Baramulla to Kupwara was proposed, and its survey was completed in 2009. In 2009 itself, work on the section between Katra and Qazigund resumed after a review based on geotechnical studies. In 2011, the 11.215 km long Banihal–Qazigund tunnel across the Pir Panjal Range was completed, paving the way for a trial run from Banihal to Qazigund in Dec 2012. In 2014, the train route from Udhampur to Katra was also operationalised, leaving Katra-Banihal as the only missing link in this nationally vital rail line. Finally, in 2018, the GoI approved the extension of the railway line to Kupwara.

    Degree of Difficulty in the Katra-Banihal Section – This is a 111 km long stretch, 97.34 km of which will run through tunnels. There are 20 major bridges (including the bridge across the Chenab river and the bridge on the Anji Khad) and 10 minor bridges on this stretch.

    Bridge Across Chenab

    Location: The Chenab Rail Bridge is a steel and concrete arch bridge between Bakkal and Kauri in the Reasi district of J&K, India. It is the highest railway bridge in the world. After many hiccups, excavation for the foundation of the bridge finally commenced in 2012, with the contract held by Afcons Infrastructure Limited. The alignment crosses a deep gorge of the Chenab River, which necessitated the construction of a long-span railway bridge with a viaduct for the approaches on either side.

    Details: It is a 785 meter long single-arch bridge, with a main arch of 467 meters. The total span of the bridge is 1315 meters, including a viaduct of 650 meters on the northern side. The deck height is 359 meters above the river bed and 322 meters above the water surface, 35 meters more than the height of the Eiffel Tower. The project also entailed the construction of 203 km of access roads. The deck is 13.5 meters wide and carries two rail tracks. The total cost of the bridge is ₹1,486 crore.


    Design: A steel arch was chosen because constructing a pillar in the gorge was difficult and the load had to be distributed. Chords have been provided to cater for the swaying load. The steel structures of the bridge were manufactured in workshops built in the mountains; the workshops had to be moved to the building site because there is no proper road network in the challenging terrain, and the longest parts that could be delivered to the site were 12 meters in length. Therefore, four workshops, along with paint shops, were established on both sides of the valley. All steel materials, except for the smallest rolled profiles, were delivered to the mountains as steel boards. The insufficient infrastructure of the area caused additional problems: there was no electricity, and the water of the river was not suitable for manufacturing concrete, so all electricity had to be produced at the site and water was brought from further away in the mountains. The job was also challenging because the track has a curvature in the approach bridge; in this section, the construction-stage bearings were designed in such a way that it was possible to launch the steel deck in the curved portion as well. The bridge consists of about 25,000 tonnes of steel structures, the main portion of which was used for the arch section. It is a unique design, and as such none of the Indian codes fully catered for the design validation; it was therefore decided to follow the BS Code. The design also caters for wind load effects as per wind tunnel tests and can withstand a wind pressure of 1500 Pa. It is a blast-resistant design. The decking has been checked for fatigue as per the BS Code. Most importantly, the design caters for redundancy within the structure, for a lower level of operation during mishaps, and against collapse in the extreme case of one-pier failure.
The area has high seismicity, and the bridge was designed to withstand earthquakes of magnitude 8 on the Richter Scale. The bridge is designed for a rail speed of 100 kmph, which means it can withstand very high-intensity vibrations. The designed life of the bridge is 120 years, and to take care of assessed steel fatigue, the fatigue design selected is BS 5400 Part-10. The bridge will be able to withstand a temperature of minus 20°C and a wind speed of 266 kmph.

    Team: The viaduct and foundation have been designed by M/s WSP (Finland), the arch by M/s Leonhardt, Andrä and Partners (Germany), and the foundation protection by IISc Bangalore. The executing agency is M/s Konkan Railway Corporation Limited.

    Status of Katra-Banihal project

    Although the construction of the Chenab Bridge is a major milestone in the progress of the project, many more landmarks remain to be crossed before its completion. Foremost among them is the Anji Khad bridge, which is expected to be ready only by Dec 2022. The rail section is expected to finally be operational by the middle of 2023.

    Conclusion

    The Jammu-Udhampur-Katra-Banihal-Srinagar-Baramulla rail project is a vital national project with a major bearing on national security and nation-building. It is a matter of pride that Indian engineers have achieved what at one point had appeared impossible. It will help in the integration of J&K with the rest of the country and will help strategically in many ways. The completion of the project will also give confidence to expeditiously complete other projects of national importance, like the railway line to Leh and the railway line to Tenga in the North-East.

    End Note:

    1. “Conceptual Design of the Chenab Bridge in India” by Pekka Pulkkinen (WSP Finland), S. Hopf and A. Jutila. Available on ResearchGate: https://www.researchgate.net/publication/257725212_Conceptual_Design_of_the_Chenab_Bridge_in_India.

    2. An internet upload: https://byjus.com/current-affairs/chenab-bridge/

    3. A Report by OT Staff, “Once the bridge is completed, it will provide all-weather connectivity between Kashmir and the rest of India” reported on 07 Apr 2021 and uploaded on https://www.outlookindia.com/outlooktraveller/travelnews/story/71397/all-about-the-chenab-bridge

    4. An internet upload: https://en.wikipedia.org/wiki/Jammu–Baramulla_line

    5. An internet upload: https://en.wikipedia.org/wiki/Chenab_Rail_Bridge

    6. An internet upload: https://www.pib.gov.in/PressReleasePage.aspx?PRID=1709652

    7. Zee Media Bureau, “Indian Railways: Delhi-Kashmir, Katra-Banihal train route to open soon, project nears completion” dated 08 Aug 2022 and uploaded on https://zeenews.india.com/railways/indian-railways-delhi-kashmir-katra-banihal-train-route-to-open-soon-project-nears-completion-2494827.html

    Image 1 Credits: Arun Ganesh

    Image 2 Credits: Indian Railways

    Image 3 Credits: Indian Express

    Image 4 Credits: Indian Railways

    Feature Image Credits: The Indian Express

  • On Metaverse & Geospatial Digital Twinning: Techno-Strategic Opportunities for India

    On Metaverse & Geospatial Digital Twinning: Techno-Strategic Opportunities for India

    Download: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

    Abstract:

    With the advent of satellite imagery and smartphone sensors, cartographic expertise has reached everyone’s pocket, and we are witnessing a software-isation of maps that will underlie a symbiotic relationship between our physical spaces and virtual environments. This extended reality comes with enormous economic, military, and technological potential. While a range of technical, social and ethical issues remains to be worked out, “time and tide wait for no one” is an adage that applies well to the Metaverse and its development. This article briefly introduces the technological landscape and then moves on to a discussion of Geospatial Digital Twinning and its techno-strategic utility and implications. We suggest that India should, continuing the existing dichotomy of Open Series and Defence Series Maps, initiate Geospatial Digital Twins of specific areas of interest as a pilot for the development, testing, and integration of national metaverse standards and rules. Further, a working group, in collaboration with a body like NASSCOM, needs to be formed to develop the architecture and norms that facilitate Indian economic and strategic interests through the Metaverse and other extended reality solutions.

    Introduction

    Cartographers argue that maps are value-laden images, which do not just represent a geographical reality but also become an essential tool for political discourse and military planning. Not surprisingly then, early scholars had termed cartography as a science of the princes. In fact, the history of maps is deeply intertwined with the emergence of the Westphalian nation-state itself, with the states being the primary sponsors of any cartographic activity in and around their territories[1].
    Earlier, the outcome of such activities even constituted secret knowledge; for example, it was the British Military Intelligence HQ in Shimla that ran and coordinated many of the cartographic activities for the British in the subcontinent[2]. Thus, given our post-independence love for Victorian institutions, even Google Maps had remained an illegal service in India until 2021[3].

    One of the key stressors that brought this long-awaited change in policy was the increased availability of relatively low-cost but high-resolution satellite imagery in open online markets. But remote sensing is only one of the developments impacting modern mapmaking. A host of varied but converging technologies – particularly Artificial Intelligence, advanced sensors, Virtual and Augmented Reality, and increasing bandwidth for data transmission – are enabling a new kind of map. This new kind of map will not just be a model of reality, but rather a live and immersive simulation of reality. We can call it a Geospatial Digital Twin (GDT) – and it will be a 4D artefact, i.e. given its predictive component and temporal data assimilation, a user could also explore the hologram/VR through time and evaluate possible what-if scenarios.
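    The 4D idea can be made concrete with a toy sketch: a twin keeps timestamped observations per spatial cell and answers "what was this cell like at time t?" queries. All class and method names below are illustrative assumptions, not from any real GDT platform; a production twin would fuse live sensor feeds, 3D geometry and predictive models.

```python
class GeoDigitalTwin:
    """Toy 4D geospatial twin: (x, y) grid cells, each holding a
    time-ordered series of observed states. Illustrative only --
    this sketch shows just the temporal lookup component."""

    def __init__(self):
        self.cells = {}  # (x, y) -> list of (timestamp, state), kept sorted

    def observe(self, cell, t, state):
        """Record an observation for a cell at time t."""
        series = self.cells.setdefault(cell, [])
        series.append((t, state))
        series.sort(key=lambda pair: pair[0])  # keep time order

    def state_at(self, cell, t):
        """Most recent observed state at or before time t, else None."""
        latest = None
        for ts, state in self.cells.get(cell, []):
            if ts <= t:
                latest = state
            else:
                break
        return latest


twin = GeoDigitalTwin()
twin.observe((12, 7), t=100, state="dry")
twin.observe((12, 7), t=200, state="flooded")
print(twin.state_at((12, 7), 150))  # -> dry
```

    Replaying the state at successive values of t is exactly the "explore through time" capability described above; a predictive model would simply extend the series past the last observation.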

    Read the Full Paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

  • The Geopolitical Consolidation of Artificial Intelligence

    The Geopolitical Consolidation of Artificial Intelligence

    Key Points

    • IT hardware and semiconductor manufacturing have become strategically important and critical geopolitical tools of dominant powers. Ukraine war-related sanctions and Wassenaar Arrangement regulations have been invoked to ban Russia from importing or acquiring electronic components over 25 MHz.
    • Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery.
    • Taiwan, the USA, South Korea, and the Netherlands dominate global semiconductor manufacturing and its supply chain. Taiwan dominates the global market, with 60% of the global share in 2021. A single Taiwanese company – TSMC (Taiwan Semiconductor Manufacturing Company), the world’s largest foundry – alone accounted for 54% of total global revenue.
    • China controls two-thirds of all silicon production in the world.
    • Monopolisation of semiconductor supply by a singular geopolitical bloc poses critical challenges for the future of Artificial Intelligence (AI), exacerbating the strategic and innovation bottlenecks for developing countries like India.
    • Developing a competitive advantage over existing leaders would require not just technical breakthroughs but also some radical policy choices and long-term persistence.
    • India should double down on research programs on non-silicon-based computing with a national urgency instead of pursuing a catch-up strategy.

    Russia was recently restricted, under category 3 to category 9 of the Wassenaar Arrangement, from purchasing any electronic components over 25 MHz from Taiwanese companies. That covers pretty much all modern electronics. Yet, the tangibles of these sanctions must not deceive us into overlooking the wider impact that hardware access and its control have on AI policies and software-based workflows the world over. As Artificial Intelligence technologies reach a more advanced stage, the capacity to fabricate high-performance computing resources – i.e., semiconductor production – becomes key strategic leverage in international affairs.

    Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery. In fact, most of the supply of semiconductors relies on a single country – Taiwan. The Taiwan Semiconductor Manufacturing Company (TSMC) manufactures Google’s Tensor Processing Unit (TPU), Cerebras’s Wafer Scale Engine (WSE), as well as Nvidia’s A100 processor. The following table provides a more detailed assessment1:

    Hardware Type                                    | AI Accelerator/Product Name | Manufacturing Country
    -------------------------------------------------|-----------------------------|----------------------
    Application-Specific Integrated Circuits (ASICs) | Huawei Ascend 910           | Taiwan
                                                     | Cerebras WSE                | Taiwan
                                                     | Google TPUs                 | Taiwan
                                                     | Intel Habana                | Taiwan
                                                     | Tesla FSD                   | USA
                                                     | Qualcomm Cloud AI 100       | Taiwan
                                                     | IBM TrueNorth               | South Korea
                                                     | AWS Inferentia              | Taiwan
                                                     | AWS Trainium                | Taiwan
                                                     | Apple A14 Bionic            | Taiwan
    Graphic Processing Units (GPUs)                  | AMD Radeon                  | Taiwan
                                                     | Nvidia A100                 | Taiwan
    Field-Programmable Gate Arrays (FPGAs)           | Intel Agilex                | USA
                                                     | Xilinx Virtex               | Taiwan
                                                     | Xilinx Alveo                | Taiwan
                                                     | AWS EC2 F1                  | Taiwan
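    The degree of concentration in the list above can be tallied with a quick sketch (data transcribed from the table; counts are per listed accelerator, not weighted by production volume):

```python
from collections import Counter

# (accelerator, manufacturing country), transcribed from the table above
accelerators = [
    ("Huawei Ascend 910", "Taiwan"), ("Cerebras WSE", "Taiwan"),
    ("Google TPUs", "Taiwan"), ("Intel Habana", "Taiwan"),
    ("Tesla FSD", "USA"), ("Qualcomm Cloud AI 100", "Taiwan"),
    ("IBM TrueNorth", "South Korea"), ("AWS Inferentia", "Taiwan"),
    ("AWS Trainium", "Taiwan"), ("Apple A14 Bionic", "Taiwan"),
    ("AMD Radeon", "Taiwan"), ("Nvidia A100", "Taiwan"),
    ("Intel Agilex", "USA"), ("Xilinx Virtex", "Taiwan"),
    ("Xilinx Alveo", "Taiwan"), ("AWS EC2 F1", "Taiwan"),
]

# Count accelerators per manufacturing country
counts = Counter(country for _, country in accelerators)
total = len(accelerators)
for country, n in counts.most_common():
    print(f"{country}: {n}/{total} ({100 * n / total:.0f}%)")
```

    Thirteen of the sixteen accelerators listed are fabricated in Taiwan, which is the concentration the following paragraphs discuss.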

    As can be seen above, the cake of computing hardware is divided such that the largest pie-holders also happen to form a single geopolitical bloc vis-à-vis China. This further shapes the evolution of territorial contests in the South China Sea. The monopolisation of semiconductor supply by a single geopolitical bloc poses critical challenges for the future of Artificial Intelligence, especially exacerbating the strategic and innovation bottlenecks for developing countries like India. In all the years since the invention of the transistor in 1947, and since her own independence, India has built zero commercial semiconductor manufacturing capacity, even as her office-bearers continually promise leadership in the fourth industrial revolution.

    Bottlenecking Global AI Research

    There are two aspects of developing these AI accelerators – designing the specifications and their fabrication. AI research firms first design chips which optimise hardware performance to execute specific machine learning calculations. Then, semiconductor firms, operating in a range of specialities and specific aspects of fabrication, make those chips and increase the performance of computing hardware by adding more and more transistors to pieces of silicon. This combination of specific design choices and advanced hardware fabrication capability forms the bedrock that will decide the future of AI, not the amount of data a population is generating and localising.

    However, owing to the very high fixed costs of semiconductor manufacturing, AI research has had to focus on data and algorithms. Innovations in AI’s algorithmic efficiency and model scaling therefore have to compensate for the lack of equivalent progress in AI hardware. The aggressive consolidation and costs of hardware fabrication mean that firms in AI research are forced to outsource their hardware fabrication requirements. In fact, as per DARPA2, because of the high costs of getting their designs fabricated, AI hardware startups do not even receive much private capital: merely 3% of all venture funding in AI/ML between 2017–21 went to startups working on AI hardware.

    But TSMC’s capacity is limited and not everyone can afford it. To get TSMC’s services, companies globally have to compete with the likes of Google and Nvidia, so demand-side competition drives prices even higher. Consequently, only the best and the biggest work with TSMC, and the rest have to settle for its competitors. This has turned a single company into a gatekeeper of AI hardware R&D. And as the recent sanctions on Russia demonstrate, it is now effectively the pawn that has turned wazir in a tense geopolitical endgame.

    Taiwan’s AI policy also reflects this dominance in ICT and semiconductors – aiming to develop “world-leading AI-on-Device solutions that create a niche market and… (make Taiwan) an important partner in the value chain of global intelligent systems”.3 Strong control over the supply of AI hardware, combined with being ranked #1 in the Global Open Data Index, not only gives Taiwan negotiating leverage in geopolitical competition, it also lets it focus its AI policy on hardware and software collaboration. In most countries without a hardware advantage, by contrast, AI policy and discourse revolve around managing the adoption and effects of AI, not around shaping the trajectory of its engineering and conceptual development.

    Now, to be fair, R&D is a time-consuming, long-term activity with a high chance of failure. Research focus thus naturally shifts towards low-hanging fruit: projects that can be achieved in the short term, before the commissioning bureaucrats are rotated. That is why we cannot have a nationalised AGI research group – nobody will commit to a 15–20-year enterprise when there are promotions and election cycles to worry about. This applies to funding for all bleeding-edge technology research everywhere: quantum communications will be prioritised over quantum computing, building larger and larger datasets over more intelligent algorithms, and silicon-based electronics over research into newer computing substrates and storage – because those choices are friendlier to short-term outcome pressures, and bureaucracies are not exactly known to be risk-taking institutions.

    Options for India

    While China controls two-thirds of all silicon production in the world and wants to control the whole of Taiwan too (and TSMC, along with its 54% share in logic foundries), the wider semiconductor supply chain is a little too spread out for any one actor’s comfort. The leaders mostly control a specialised niche of the supply chain: for example, the US maintains a near-total monopoly on Electronic Design Automation (EDA) software, the Netherlands has monopolised Extreme Ultraviolet and Argon Fluoride scanners, and Japan supplies the 300 mm wafers used to manufacture more than 99 per cent of chips today.4 The end-to-end delivery of one chip can see it cross international borders over 70 times.5 Since this is a mature ecosystem, developing a competitive advantage over existing leaders would require not just proprietary technical breakthroughs but also some radical policy choices and long-term persistence.

    It is also needless to say that the leaders are able to attract and retain the highest-quality talent from across the world. We, on the other hand, have regional politicians cribbing about incoming talent even from other Indian states. This, therefore, is the first task for India: to become a technology powerhouse, she must, at a bare minimum, retain all her top talent and attract more. Perhaps, for companies in certain sectors or above a certain size, India should mandate spending at least X per cent of revenue on R&D and offer incentives to increase this share – it would revamp everything from recruitment and retention to business processes and industry-academia collaboration, and in the long run prove a far more socioeconomically useful instrument than the CSR regulation.

    It should also not escape anyone that human civilisation, with all its genius and promises of man-machine symbiosis, has managed to put all its eggs in a single basket that is under constant threat of Chinese invasion. It is thus in the interest of the entire computing industry to build geographical resiliency, diversity and redundancy into present-day semiconductor manufacturing capacity. We do not yet have the navy we need, but perhaps, in a diplomatic-naval recognition of Taiwan’s independence from China, the Quad could negotiate arrangements for an uninterrupted semiconductor supply in case of an invasion.

    Since R&D in AI hardware is essential for future breakthroughs in machine intelligence, but its production is extremely concentrated, mostly in just one small island country, it behoves countries like India to look for ways to undercut the existing paradigm of computing hardware (e.g. by pivoting R&D towards DNA computing) instead of only pursuing a catch-up strategy. The current developments are unlikely to solve India’s blues in integrated circuits anytime soon. India could, in parallel – and I would emphatically recommend that she should – take a step back from all the madness and double down on research programs on non-silicon-based computing with a national urgency. A hybrid approach toward computing machinery could also resolve some of the bottlenecks that AI research faces due to the dependencies and limitations of present-day hardware.

    As our neighbouring adversary Mr Xi says, core technologies cannot be acquired by asking, buying, or begging. In the same spirit, even if it might ruffle some feathers, a very discerning reexamination of the present intellectual property regime could also be very useful for the development of such foundational technologies and related infrastructure in India as well as for carving out an Indian niche for future technology leadership.

    References:

    1. The Other AI Hardware Problem: What TSMC means for AI Compute. Available at https://semiliterate.substack.com/p/the-other-ai-hardware-problem

    2. Leef, S. (2019). Automatic Implementation of Secure Silicon. In ACM Great Lakes Symposium on VLSI (Vol. 3)

    3. AI Taiwan. Available at https://ai.taiwan.gov.tw/

    4. Khan et al. (2021). The Semiconductor Supply Chain: Assessing National Competitiveness. Center for Security and Emerging Technology.

    5. Alam et al. (2020). Globality and Complexity of the Semiconductor Ecosystem. Accenture.

  • Technology, Politics and China’s Quest for Energy Dominance

    Technology, Politics and China’s Quest for Energy Dominance

    Download: https://admin.thepeninsula.org.in/wp-content/uploads/2022/05/Technology-Politics-and-Chinas-Quest-for-Renewable-Energy-Dominance-3.pdf

    Abstract:

    This paper empirically investigates the role of technology in international politics through a case study of China’s development of renewable energy infrastructure (solar PV and wind energy) and its impact on international politics. It looks at how technology helps shape a state’s identity, using renewable energy technology as an explanatory variable. The paper employs Grygiel’s model of geopolitics to analyse the case study – geopolitics, because much of China’s development in the renewable sector has been a function of its geography and abundance of natural resources.

    Introduction:

    China has experienced decades of near double-digit economic growth and, since the 2000s, has witnessed a growing population and rapid industrialization that have correspondingly driven demand for energy. Its expeditious implementation of economic reforms has elevated it to the status of a global power capable of challenging the US-established status quo. Stability is increasingly viewed as a function of China’s behaviour vis-à-vis its strategic rivals, primarily the US, and to a lesser extent Japan, India, Russia and the littoral states of Southeast Asia. But more importantly, it is China’s near-fanatic fervour to rise as a technologically superior state, much as the US emerged after the World Wars, that has generated interest. The modernization of its military, the near-meteoric rise of installed capacity for renewable sources of energy and its technological revolution underscore the importance and role technological advancement plays in a state’s development. Technology and international politics have a near-symbiotic relationship, and the former has the potential to fundamentally alter the way states exercise their sovereignty in pursuit of their national interests.

    Read the Full Paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/05/Technology-Politics-and-Chinas-Quest-for-Renewable-Energy-Dominance-3.pdf

  • Recent advances in the use of ZFN-mediated gene editing for human gene therapy

    Recent advances in the use of ZFN-mediated gene editing for human gene therapy

    Targeted genome editing with programmable nucleases has revolutionized biomedical research. The ability to make site-specific modifications to the human genome has invoked a paradigm shift in gene therapy. Using gene-editing technologies, the sequence of the human genome can now be precisely engineered to achieve a therapeutic effect. Zinc finger nucleases (ZFNs) were the first programmable nucleases designed to target and cleave custom sites. This article summarizes the advances in the use of ZFN-mediated gene editing for human gene therapy and discusses the challenges associated with translating this gene-editing technology into clinical use.

    Zinc finger nucleases: first of the programmable nucleases

    In the late seventies, scientists observed that when DNA is transfected into yeast cells, it integrates at homologous sites by homologous recombination (HR). In stark contrast, when DNA was transfected into mammalian cells, it was found to integrate randomly at non-homologous sites by non-homologous end joining (NHEJ). HR events were so rare that it required laborious positive and negative selection techniques to detect them in mammalian cells [1]. Later work performed by Maria Jasin’s lab using I-SceI endonuclease (a meganuclease) and a homologous DNA fragment with sequences flanking the cleavage site, revealed that a targeted chromosomal double-strand break (DSB) at homologous sites can stimulate gene targeting by several orders of magnitude in mammalian cells that are refractory to spontaneous HR [2]. However, for this experiment to be successful, the recognition site for I-SceI endonuclease had to be incorporated at the desired chromosomal locus of the mammalian genome by classical HR techniques. Thus, the generation of a unique, site-specific genomic DSB had remained the rate limiting step in using homology-directed repair (HDR) for robust and precise genome modifications of human cells, that is, until the creation of zinc finger nucleases (ZFNs) – the first of the programmable nucleases that could be designed to target and cleave custom sites [3,4].

    Because HR events are very rare in human cells, classical gene therapy – use of genes to achieve a therapeutic effect – had focused on the random integration of normal genes into the human genome to reverse the adverse effects of disease-causing mutations. The development of programmable nucleases – ZFNs, TALENs and CRISPR-Cas9 – to deliver a targeted DSB at a pre-determined chromosomal locus to induce genome editing, has revolutionized the biological and biomedical sciences. The ability to make site-specific modifications to the human genome has invoked a paradigm shift in gene therapy. Using gene-editing technologies, the sequence in the human genome can now be precisely engineered to achieve a therapeutic effect. Several strategies are available for therapeutic gene editing which include: 1) knocking-out genes by NHEJ; 2) targeted addition of therapeutic genes to a safe harbour locus of the human genome for in vivo protein replacement therapy (IVPRT); and 3) correction of disease-causing mutations in genes.

    The first truly targetable reagents were the ZFNs, which showed that arbitrary DNA sequences in the human genome could be cleaved by protein engineering, ushering in the era of human genome editing [4]. We reported the creation of ZFNs by fusing modular zinc finger proteins (ZFPs) to the non-specific cleavage domain of the FokI restriction enzyme in 1996 [3]. ZFPs are comprised of ZF motifs, each of which is composed of approximately 30 amino acid residues containing two invariant pairs of cysteines and histidines that bind a zinc atom. ZF motifs are highly prevalent in eukaryotes. The Cys2His2 ZF fold is a unique ββα structure that is stabilized by a zinc ion [5]. Each ZF usually recognizes a 3–4-bp sequence and binds to DNA by inserting its α-helix into the major groove of the double helix. Three to six such ZFs are linked together in tandem to generate a ZFP that binds a 9–18-bp target site within the genome. Because the recognition specificities can be manipulated experimentally, ZFNs offered a general means of delivering a unique, site-specific DSB to the human genome. Furthermore, studies on the mechanism of cleavage by 3-finger ZFNs established that the cleavage domains must dimerize to effect an efficient DSB and that their preferred substrates were paired binding sites (inverted repeats) [6]. This realization immediately doubled the target sequence recognition of 3-finger ZFNs from 9 to 18 bp, which is long enough to specify a unique genomic address within cells. Moreover, two ZFNs with different sequence specificities could cut at heterologous binding sites (other than inverted repeats) when appropriately positioned and oriented within a genome.
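    The claim that an 18-bp site specifies a "unique genomic address" follows from simple expected-occurrence arithmetic; a back-of-the-envelope check (assuming, for illustration, a random genome of uniform base composition, roughly 3.2 Gb like the human genome):

```python
def expected_sites(site_len_bp, genome_bp=3.2e9):
    """Expected chance occurrences of a specific DNA sequence of length
    site_len_bp in a random genome with uniform base frequencies:
    genome length divided by 4^length."""
    return genome_bp / 4 ** site_len_bp

# A single 3-finger ZFP reads 9 bp: thousands of chance matches expected.
print(f"9 bp:  {expected_sites(9):,.0f}")
# A dimerizing ZFN pair reads 18 bp: expected occurrences fall below 1.
print(f"18 bp: {expected_sites(18):.4f}")
```

    This is why the dimerization requirement, by doubling the recognized sequence from 9 to 18 bp, takes the expected number of spurious sites from thousands to well under one.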

    ZFNs paved the way for human genome editing

    In collaboration with Dana Carroll’s lab, we then showed in 2001 that a ZFN-induced DSB stimulates HR in frog oocytes [7]. These groundbreaking experiments on ZFNs established the potential for inducing targeted recombination in a variety of organisms that are refractory to spontaneous HR, and ushered in the era of site-specific genome engineering, also commonly known as genome editing. A number of studies using ZFNs for genome editing in different organisms and cells soon followed [4,8–10]. The modularity of DNA recognition by ZFs made it possible to design ZFNs for a multitude of genomic targets for various biological and biomedical applications [4]. Thus, the ZFN platform laid the foundation for genome editing and helped to define the parameters and approaches for nuclease-based genome engineering.

    Despite the remarkable successes of ZFNs, the modularity of ZF recognition did not readily translate into a simple code enabling easy assembly of highly specific ZFPs from ZF modules. ZFNs with high sequence specificity proved difficult to generate for routine use by scientists at large. This is because ZF motifs do not always act as completely independent modules in their DNA sequence recognition; more often than not they are influenced by their neighbours. ZF motifs that recognize each of the 64 possible DNA triplets with high specificity never materialized. Simple modular assembly of ZFs did not always yield highly specific ZFPs, and hence ZFNs. DNA recognition by ZF motifs thus turned out to be more complex than originally perceived. With this realization came the understanding that ZFPs have to be selected in a context-dependent manner, requiring several cycles of laborious selection and further optimization. This is not to say it cannot be done, only that it takes substantial cost and time-consuming effort, as evidenced by the successful ZFN-induced genome-editing applications underway to treat a variety of human diseases. For example, ZFN-induced mutagenesis of the HIV co-receptor CCR5, as a form of gene therapy, has the potential to provide a functional cure for HIV/AIDS.

    Successor technologies – TALENs and CRISPR/Cas9 – have made the delivery of a site-specific DSB to the mammalian genome much easier and simpler. Custom nuclease design was facilitated further by the discovery of TAL effector proteins from plant pathogens, in which two amino acids (repeat variable di-residues, also known as RVDs) within a TAL module recognize a single base pair, independent of the neighbouring modules [11,12]. In a similar fashion to ZFNs, TAL effector modules were fused to the FokI cleavage domain to form TAL effector nucleases, known as TALENs [13]. The development of TALENs simplified our ability to make custom nucleases by straightforward modular design for the purposes of genome editing. However, the discovery of CRISPR/Cas9 – an RNA-guided nuclease in bacterial adaptive immunity – has made it even easier and cheaper, given that no protein engineering is required [14–17]. A constant single nuclease (Cas9) is used for cleavage, together with an RNA that directs the target-site specificity based on Watson-Crick base pairing. The CRISPR/Cas9 system has democratized the use of genome editing by making it readily accessible and affordable for small labs around the world.

    ZFN specificity & safety

    The efficacy of ZFNs to a large extent depends on the specificity of the ZFPs that are fused to the FokI nuclease domain. The higher the specificity of the ZFPs, the lower the ZFN’s off-target cleavage, and hence toxicity. The early ZFNs designed for genomic targets displayed significant off-target activity and toxicity due to promiscuous binding and cleavage, particularly when encoded in plasmids and expressed in high levels in human cells. One way to increase the specificity of the ZFNs is to increase the number of ZF motifs within each ZFN of the pair. This helps to improve specificity, but it is not always sufficient. Many different mechanisms could account for the off-target activity. They include ZFNs binding to single or unintended target sites as well as to homodimer sites (the inverted repeat sites for each of the ZFN pair). Binding of a ZFN monomer to single or unintended target sites could be followed by dimerization of the cleavage domain to another monomer in solution. Therefore, one approach to reduce ZFNs toxicity is to re-design the dimer interface of the cleavage domains to weaken the interaction and generate a heterodimer variant pair that will actively cleave only at heterodimer binding sites and not at the homodimer or single or unintended binding sites. We had previously shown that the activity of the ZFNs could be abolished by mutating the amino acid residues that form the salt bridges at the FokI dimer interface [6]. Two groups achieved a reduction in ZFN’s off-target cleavage activity and toxicity by introducing amino acid substitutions at the dimer interface of the cleavage domain that inhibited homodimer formation, but promoted the obligate heterodimer formation and cleavage [18,19]. We showed further improvements to the obligate heterodimer ZFN pairs by combining the amino acid substitutions reported by the two groups [20].

    Another approach to reducing ZFN toxicity is to use ZF nickases that cleave at only one predetermined DNA strand of a targeted site. ZFN nickases are produced by inactivating the catalytic domain of one monomer within the ZFN pair [4]. ZFN nickases induce greatly reduced levels of mutagenic NHEJ, since nicks are not efficient substrates for NHEJ. However, this comes at a cost, in terms of lowered efficiency of cleavage. A standard approach that has been widely used to increase the sequence specificity of ZFPs (and the DNA binding proteins in general) is to abolish non-specific protein contacts to the DNA backbone by amino acid substitutions. Again, this comes at the price of ZFPs’ lowered binding affinity for their targets, resulting in lower efficiency of on-target cleavage.

    Methods for ZFN delivery into cells

    The first experiments to show that ZFNs were able to cleave a chromatin substrate and stimulate HR in intact cells were performed by microinjection of ZFNs (proteins) and synthetic substrates into Xenopus oocytes [7]. Plasmid-encoded ZFNs and donors have also been co-transfected into human cells by using electroporation, nucleofection or commercially available chemical reagents. This potentially has two drawbacks: 1) the plasmids continue to express the ZFNs that accumulate at high levels in cells, promoting promiscuous DNA binding and off-target cleavage; and 2) there is also the possibility that the plasmid could integrate into the genome of the cells. To circumvent these problems, one could transfect mRNAs coding for the ZFNs along with donor DNA into cells. Adeno-associated virus (AAV) and lentivirus (LV) are the common vehicles used for the delivery of ZFNs and the donor into human cells.

    First-in-human study

    ZFN-mediated CCR5 disruption was the first-in-human application of genome editing, aimed at blocking HIV entry into cells [21]. Most HIV strains use the CCR5 co-receptor to enter cells. The CCR5∆32 allele contains a 32-bp deletion that results in a truncated protein, which is not expressed on the cell surface. The allele confers protection against HIV-1 infection without any adverse health effects in homozygotes. Heterozygotes show reduced levels of CCR5; their disease progression to AIDS is delayed by 1 to 2 years. The potential benefit of CCR5-targeted gene therapy was highlighted in the only reported case of an HIV cure. The so-called “Berlin patient” received allogeneic bone marrow transplants from a CCR5∆32 donor during treatment for acute myeloid leukaemia and has ever since remained HIV-1 free without antiretroviral therapy (ART). This report gave impetus to gene therapy efforts to create CCR5-negative autologous T cells or hematopoietic stem/progenitor cells (HSPCs) in HIV-infected patients. The expectation was that the edited cells would provide the same anti-HIV effects as in the Berlin patient, but without the risks associated with allogeneic transplantation. CCR5 knockout via NHEJ was used in this strategy, since gene modification efficiency by HDR is relatively low. ZFN-induced genome editing of CCR5 is the most clinically advanced platform, with several ongoing clinical trials in T cells and HSPCs [22].

    A Phase I clinical trial (#NCT00842634) of CCR5 knockout to treat HIV was conducted by Carl June’s lab in collaboration with scientists at Sangamo Biosciences (California). The goal was to assess the safety of modifying autologous CD4+ T cells in HIV-1-infected individuals [21]. Twelve patients on ART were infused with autologous CD4+ T cells in which the CCR5 gene had been inactivated by ZFN treatment. The study reported: 1) a significant increase in CD4+ T cells post-infusion; and 2) long-term persistence of CCR5-modified CD4+ T cells in peripheral blood and mucosal tissue. The therapeutic effects of the ZFN treatment were monitored in five patients during a 12-week interruption of ART. The study established that the CCR5-modified CD4+ T cells declined more slowly than unmodified cells, indicating a protective effect of CCR5 disruption [22]. One patient showed both delayed viral rebound and a peak viral load lower than the patient’s historical levels. This patient was later identified as heterozygous for CCR5∆32, suggesting that the beneficial effects of the ZFN treatment were magnified in this patient, probably due to increased levels of bi-allelic modification [22]. Thus, heterozygous individuals may have a greater potential for a functional HIV cure. The obvious next step is to apply the ZFN treatment to earlier precursors or stem cells. Editing HSPCs instead of CD4+ T cells has the potential to provide a long-lasting source of modified cells. The success of this strategy has been established in preclinical studies [23], and a recent clinical trial (#NCT02500849) has been initiated using this approach. Programs to disrupt CCR5 in T cells and HSPCs using the other nuclease platforms, including TALENs, CRISPR/Cas9 and megaTALs (a meganuclease fused to TAL effector modules), are also underway; these are at the pre-clinical stage.

    ZFN preclinical trials aimed at treating human monogenic diseases

    Sangamo Biosciences, Inc. has leveraged its proprietary database of proven ZFNs (which includes an extensive library of functional ZF modules and 2-finger units for the assembly of highly specific ZFNs) and its ZFN patent portfolio to enter into research collaborations with academic scientists, applying ZFN-mediated gene editing strategies to a number of human diseases. Many of these programs are at the preclinical stage.

    An interesting gene editing approach is gene replacement therapy. ZFN-mediated gene editing has shown promise for in vivo correction of the hFIX gene in hepatocytes of haemophilia B mice. Katherine High’s lab, in collaboration with Sangamo scientists, is developing a general strategy for liver-directed protein replacement therapies using ZFN-mediated site-specific integration of therapeutic transgenes within the albumin gene locus [24]. Using in vivo AAV delivery, they have achieved long-term expression of hFVIII and hFIX at therapeutic levels in mouse models of haemophilia A and B. Because albumin is so highly expressed, modifying less than 1% of liver cells can produce therapeutic levels of the relevant proteins, essentially correcting the disorders. Several pre-clinical studies are now underway to develop liver-directed protein replacement therapies for lysosomal storage disorders, including Hurler, Hunter, Gaucher, Fabry and others. We have previously shown that the CCR5 gene could serve as a safe harbour locus for protein replacement therapies [25]. We reported that targeted addition of the large CFTR transcription unit at the CCR5 chromosomal locus of human induced pluripotent stem cells (hiPSCs) achieves efficient CFTR expression. Thus, therapeutic genes could be expressed from the CCR5 chromosomal locus for autologous cell-based transgene-correction therapy to treat various recessive monogenic human disorders. Other safe harbour loci in the human genome, such as AAVS1, are also available for gene replacement therapy.

    Many labs around the world are also working to develop gene-editing strategies to treat several other diseases such as sickle cell anaemia, SCID, cancer (CAR T cells for immunotherapy) and many others, which are not discussed here. A list of clinical and pre-clinical studies using genome editing technologies for gene and cell therapy of various diseases is outlined elsewhere [26].

    Challenges facing ZFN-based gene editing before routine translation to the clinic

    Several challenges remain to be addressed before we see the routine translation of ZFN-based gene editing to the clinic. They include: 1) potentially harmful perturbations of the human genome due to off-target DSBs, which may be genotoxic or oncogenic; 2) gene editing efficiencies that may not be sufficient for certain diseases, particularly where gene-edited cells have no survival advantage; 3) safe and efficient delivery of ZFNs into target cells and tissues when using the in vivo approach; and 4) the treatment costs, if and when ZFN-based gene editing is translated to the clinic for routine use.

    First, these gene-editing tools need further refinement before they can be used safely and effectively in the clinic. The off-target effects of gene editing technologies are discussed in detail elsewhere [4]. The efficacy of ZFNs is largely governed by the specificity of the ZFPs that are fused to the FokI cleavage domain: the higher the specificity of the ZFPs, the lower the off-target cleavage, and hence the toxicity, of the ZFNs. As seen in the CCR5 clinical trial, some highly evolved ZFNs are very specific. In the clinic, engineered highly specific ZFNs will be used repeatedly to treat many different individuals [4]. Therefore, the design and construction of highly evolved ZFNs for a particular disease target will likely be a small part of the overall effort.
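    The geometry behind this specificity argument can be made concrete. A ZFN pair cleaves only where one half-site and the reverse complement of the other flank a short spacer, so in-silico off-target screens typically enumerate near-matches to this composite pattern across a genome. The following sketch, using hypothetical 9-bp half-sites (three fingers × 3 bp; these are not the real CCR5 ZFN target sequences) and a toy sequence, shows the idea of such an enumeration with relaxed per-half-site matching:

    ```python
    COMP = str.maketrans("ACGT", "TGCA")

    def revcomp(s):
        """Reverse complement of a DNA string."""
        return s.translate(COMP)[::-1]

    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b))

    def find_pair_sites(genome, left, right, spacers=(5, 6, 7), max_mm=1):
        """Enumerate candidate ZFN cleavage sites: LEFT half-site, a 5-7 bp
        spacer, then the reverse complement of RIGHT on the same strand,
        allowing up to max_mm mismatches per half-site (a crude stand-in
        for relaxed ZFP binding). Returns (position, spacer_length) pairs."""
        hits = []
        rc_right = revcomp(right)
        for i in range(len(genome) - len(left) + 1):
            if mismatches(genome[i:i + len(left)], left) > max_mm:
                continue
            for sp in spacers:
                j = i + len(left) + sp
                cand = genome[j:j + len(right)]
                if len(cand) == len(right) and mismatches(cand, rc_right) <= max_mm:
                    hits.append((i, sp))
        return hits

    # Hypothetical half-sites and a toy "genome" containing one valid site.
    left = "GATGAGGAT"
    right = "AAACTGCAA"
    genome = "TT" + left + "AGCTC" + revcomp(right) + "GGTT"
    print(find_pair_sites(genome, left, right))  # [(2, 5)]
    ```

    Lengthening the composite recognition site (e.g. 4-finger instead of 3-finger ZFPs) shrinks the number of chance near-matches a genome-scale scan of this kind would return, which is the combinatorial basis of the specificity claim above.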

    Second, further improvements in gene editing efficiency are needed for successful therapeutic genome editing. Gene editing of HSPCs may not yield a sufficient number of edited cells for autologous transplantation, owing to the difficulties associated with their ex vivo culture and expansion. An alternative approach is to modify patient-specific iPSCs, which could then be differentiated into HSPCs. Since clonal selection, expansion and differentiation of gene-edited iPSCs are performed ex vivo, this may enable very high editing efficiencies, particularly when coupled with HDR-mediated insertion of a selection cassette. It would also allow complete genome-wide analysis of gene-edited cells for off-target effects. The patient-specific ex vivo approach has the potential to become a viable clinical alternative to modifying autologous HSPCs [25, 27]. For autosomal recessive disorders, in which both copies of the gene are mutated, correcting a single allele in a sufficient number of cells may be enough to confer a therapeutic effect. For autosomal dominant disorders, however, in which a single mutated copy of the gene causes disease, bi-allelic modification in a sufficient number of cells will be essential to achieve a therapeutic effect. Therefore, methods need to be developed to increase the levels of bi-allelic modification in human cells.

    Third, another potential issue pertains to the safe and efficient delivery of ZFNs into the appropriate target cells and tissues [4]. ZFNs are much smaller than TALENs or Cas9 and can therefore be readily delivered using AAV or LV constructs. The method of ZFN delivery may also vary with the human cell type. For example, Ad5/F35-mediated delivery of ZFNs was very efficient in CD4+ T cells but less efficient in HSPCs [23]. Nontoxic mRNA electroporation has been efficient for introducing ZFNs into HSPCs, and this approach has been adopted in a recent clinical trial (#NCT02500849). Recently, Kohn’s lab compared the efficiency, specificity and mutational signatures of ZFNs, TALENs and CRISPR/Cas9 during the reactivation of fetal haemoglobin expression by BCL11A knockout in human CD34+ progenitor cells [28]. ZFNs showed more allelic disruption at the BCL11A locus than the TALENs or CRISPR/Cas9, consistent with the increased levels of fetal haemoglobin in erythroid cells generated in vitro from gene-edited CD34+ cells. Genome-wide analysis revealed highly specific BCL11A cleavage by the ZFNs, whereas the TALENs and CRISPR/Cas9 evaluated showed off-target cleavage activity. This study highlights the high variability in cleavage efficiency at different loci and in different cell types across the technology platforms. Therefore, there is a critical need to investigate ways to further optimize the delivery of these nucleases into human cells.

    Fourth, if and when therapeutic gene editing is translated to the clinic for routine use, a major challenge will be the treatment costs associated with these technologies. In an age of $1000-per-pill drugs and $100,000–$300,000 per year treatment costs for certain chronic disease conditions, it is critical to simplify these 21st century cures if they are to become accessible and affordable for the average citizen and the poor populations of the third world. Many labs are working towards simultaneous gene correction and generation of patient-specific iPSCs to simplify treatment [4]. CRISPR/Cas9 may be best suited for this strategy [29].

    Finally, since all of these gene-editing platforms have been shown to cleave at off-target sites with mutagenic consequences, a word of caution is warranted: a careful, systematic and thorough investigation of off-target effects at the genome-wide scale, for each and every reagent that will be used to treat human diseases, is absolutely essential to ensure patient safety. For these reasons, therapeutic gene editing by these technology platforms will ultimately depend on risk-versus-benefit analysis and informed consent.

    Financial & competing interests disclosure

    Dr Chandrasegaran is the inventor of the ZFN technology. Johns Hopkins University (JHU) licensed the technology exclusively to Sangamo Biosciences, Inc. (concomitant to its formation in 1995) to develop ZFNs for various biological and biomedical applications. As part of the JHU licensing agreement, Dr Chandrasegaran served on the Sangamo scientific advisory board from 1995 to 2000 and received royalties and stock as per JHU guidelines. The JHU ZFN patents expired in 2012 and became part of the public domain. No writing assistance was utilized in the production of this manuscript.

    References

    1. Mansour SL, Thomas KR, Capecchi MR. Disruption of the proto-oncogene int-2 in mouse embryo-derived stem cells: a general strategy for targeting mutations to non-selectable genes. Nature 1988; 336: 348–52.

    2. Rouet P, Smith F, Jasin M. Expression of a site-specific endonuclease stimulates homologous recombination in mammalian cells. Proc. Natl Acad. Sci. USA 1994; 91: 6064–8.

    3. Kim Y-G, Cha J, Chandrasegaran S. Hybrid restriction enzymes: Zinc finger fusions to FokI cleavage domain. Proc. Natl Acad. Sci. USA 1996; 93: 1156–60.

    4. Chandrasegaran S, Carroll D. Origins of programmable nucleases for genome engineering. J. Mol. Biol. 2016; 428: 963–89.

    5. Pavletich NP, Pabo CO. Zinc finger-DNA recognition: crystal structure of a Zif268-DNA complex at 2.1 Å. Science 1991; 252: 809–17.

    6. Smith JJ, Bibikova M, Whitby F, Reddy AR, Chandrasegaran S, Carroll D. Requirements for double-strand cleavage by chimeric restriction enzymes with zinc finger DNA-Recognition domain. Nucleic Acids Res. 2000; 28: 3361–9.

    7. Bibikova M, Carroll D, Segal DJ et al. Stimulation of homologous recombination through targeted cleavage by a chimeric nuclease. Mol. Cell. Biol. 2001; 21: 289–97.

    8. Bibikova M, Golic M, Golic KG, Carroll D. Targeted chromosomal cleavage and mutagenesis in Drosophila using zinc-finger nucleases. Genetics 2002; 161: 1169–75.

    9. Bibikova M, Beumer K, Trautman JK, Carroll D. Enhancing gene targeting using designed zinc finger nucleases. Science 2003; 300: 764.

    10. Urnov FD, Miller JC, Lee YL et al. Highly efficient endogenous human gene correction using designed zinc-finger nucleases. Nature 2005; 435: 646–51.

    11. Moscou MJ, Bogdanove AJ. A simple cipher governs DNA recognition by TAL effectors. Science 2009; 326: 1501.

    12. Boch J, Scholze H, Schornack S. Breaking the code of DNA binding specificity of TAL-type III effectors. Science 2009; 326: 1509–12.

    13. Christian M, Cermark T, Doyle EL et al. Targeting DNA double-strand breaks with TAL effector nucleases. Genetics 2010; 186: 757–61.

    14. Gasiunas G, Barrangou R, Horvath P, Siksnys V. Cas9-crRNA ribonucleoprotein complex mediates specific DNA cleavage for adaptive immunity in bacteria. Proc. Natl Acad. Sci. USA 2012; 109: E2579–86.

    15. Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E. A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science 2012; 337: 816–21.

    16. Mali P, Yang L, Esvelt KM et al. RNA-guided human genome engineering via Cas9. Science 2013; 339: 823–6.

    17. Cong L, Ran FA, Cox D et al. Multiplex genome engineering using CRISPR/Cas systems. Science 2013; 339: 819–23.

    18. Miller JC, Holmes MC, Wang J et al. An improved zinc-finger nuclease architecture for highly specific genome editing. Nat. Biotechnol. 2007; 25: 778–85.

    19. Szczepek M, Brondani V, Buchel J et al. Structure-based redesign of the dimerization interface reduces the toxicity of zinc-finger nucleases. Nat. Biotechnol. 2007; 25: 786–93.

    20. Ramalingam S, Kandavelou K, Rajenderan R, Chandrasegaran S. Creating designed zinc finger nucleases with minimal cytotoxicity. J. Mol. Biol. 2011; 405: 630–41.

    21. Tebas P, Stein D, Tang WW et al. Gene editing of CCR5 in autologous CD4 T cells of persons infected with HIV. N. Engl. J. Med. 2014; 370: 901–10.

    22. Wang CX, Cannon PM. The clinical applications of genome editing in HIV. Blood 2016; 127: 2546–52.

    23. DiGiusto DL, Cannon PM, Holmes MC et al. Preclinical development and qualification of ZFN-mediated CCR5 disruption in human hematopoietic stem/progenitor cells. Mol. Ther. Methods Clin. Dev. 2016; 3: 16067.

    24. Sharma R, Anguela XM, Doyon Y et al. In vivo editing of the albumin locus as a platform for protein replacement therapy. Blood 2015; 126: 1777–84.

    25. Ramalingam S, London V, Kandavelou K et al. Generation and genetic engineering of human induced pluripotent stem cells using designed zinc finger nucleases. Stem Cells Dev. 2013; 22: 595–610.

    26. Maeder ML, Gersbach CA. Genome editing technologies for gene and cell therapy. Mol. Ther. 2016; 24: 430–46.

    27. Ramalingam S, Annaluru N, Kandavelou K, Chandrasegaran S. TALEN-mediated generation and genetic correction of disease-specific hiPSCs. Curr. Gene Ther. 2014; 14: 461–72.

    28. Bjurström CF, Mojadidi M, Phillips J, Kuo C et al. Reactivating fetal hemoglobin expression in human adult erythroblasts through BCL11A knockdown using targeted nucleases. Mol. Ther. – Nucleic Acids 2016; 5: e351.

    29. Howden SE, Maufort JP, Duffin BM et al. Simultaneous Reprogramming and Gene Correction of Patient Fibroblasts. Stem Cell Rep. 2015; 5: 1109–18.

    This article was published earlier in 2017 in CELL & GENE THERAPY INSIGHTS. It is republished under the Creative Commons Licence.

    Feature Image Credit: www.nationalhogfarmer.com

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed to the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn to what is already a brutal conflict.

    The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology. It offered to uncover infiltrators – including Russian military personnel – combat misinformation, identify the dead and reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

    Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement concerning the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology, which could in the near future be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.


    The promise of facial recognition in law enforcement and on the battlefield is to increase precision, lifting the proverbial fog of war with automated precise targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

    But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in. We worry that for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly ‘blackboxed’ – taken by a machine whose workings and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards such as this are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

    Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that collect the sensitive biometric data essential to facial recognition, the data subject knows that it is happening, and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data protection authorities in Italy and France.

    If privately owned facial recognition technologies are used to identify Ukrainian citizens within the EU, or in border zones, to offer them some form of protective status, a grey area would be established between military and civilian use within the EU itself. Any such facial recognition system would have to be used on civilian populations within the EU. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then would pose the question as to how a single company can be entrusted with civil data which it can easily repurpose for military use. That is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media records.

    Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already happened in the US, where members of the New York Police Department reportedly used Clearview’s tool to circumvent data protection and privacy rules within the department, and installed Clearview’s app on private devices in violation of NYPD policy.

    This is a particular risk with relation to the roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many are arguing it should be, it will carry into the EU the use of Clearview’s AI as an established practice for military and potentially civilian use, both initially conceived without malice or intention of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement; this is particularly so when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so ensuring the rule of law and the protection of citizens.

    This article was published earlier in openDemocracy, and is republished under Creative Commons Licence

    Feature Image Credit: www.businessinsider.in