Category: Transformational Paradigms

  • Most of the world’s ocean is unprotected: This is why that needs to change

    • More than three billion people rely on the ocean for their livelihoods, most of them in developing countries.
    • Only 7% of the world’s ocean, a vital resource for fighting climate change, is under any protection, and just 3% is highly protected.
    • The ‘Blue Leaders’ campaign urges countries to join international treaties that would protect the ocean and all the benefits it provides to humanity.

    The ocean is a vital life support system for the planet, and we are running out of time to preserve the marine biodiversity that it is home to and upon which we all depend.

    Having played a key role thus far in the mitigation of climate change, our blue ally is quickly running out of steam. With rising water temperatures and sea levels, acidification, pollution, unsustainable exploitation of marine resources, depletion of fish stocks, the near disappearance of coral reefs, and the destruction of fragile ecosystems, the ocean is being disproportionately impacted by human activities.

    Now, more than ever, we must consider the possible implications of its demise.

    The ocean plays an indispensable role in providing and regulating resources that are vital to sustaining life on Earth, from rainwater and drinking water to our food, our weather, and the oxygen we breathe.

    Securing our ocean’s future

    Recognizing the key role that the ocean plays for people all over the world, the United Nations has adopted a sustainable development goal focused on conserving the ocean, with targets for action on an array of problems. While some progress has been made, more is yet needed to secure our ocean’s future.

    Scientists have called for securing at least 30% of marine waters as fully or highly protected sanctuaries, free from damaging human activities like bottom trawl fishing and seabed mining. By doing so, we can give the ocean a fighting chance in the face of climate change.

    Today, just 7% of the world’s ocean is under protection, and only 3% is highly protected. Moreover, there is no legal mechanism in place to establish fully protected marine areas in the high seas and deep seabed areas, our shared international waters that constitute nearly two thirds of the global ocean.

    Marine coastlines are home to 2.4 billion people — approximately 40% of the world’s population. More than three billion people rely on the ocean for their livelihoods, most of them in developing countries. Degradation of coastal and marine ecosystems threatens the physical, economic, and food security of communities around the world.

    Continuing along our current path towards ocean destruction will impact human lives and livelihoods.

    The role of the ocean and coastal and marine ecosystems in climate change mitigation is often overlooked. Protecting and restoring ocean habitats such as seagrass beds, salt marshes, and mangroves, and their associated food webs, can sequester carbon dioxide from the atmosphere at rates up to five times greater than tropical forests.

    Choosing not to prioritize the protection of our ocean is depriving us of the tools we desperately need to achieve our climate mitigation goals.

    Commitments are needed

    With multiple high-level ocean negotiations planned in 2022, this year is filled with opportunity for the preservation of our ocean. Our only hope for a better future lies in the adoption of unprecedented, bold ocean conservation commitments.

    The science is clear: to maximize the health and resilience of the global ocean, at least 30% of it must be protected through a network of “highly” and “fully” protected Marine Protected Areas (MPAs) by 2030.

    To achieve this goal, a new treaty for the conservation and management of marine life in the high seas must be concluded to ensure that human activities are managed to prevent significant adverse impacts, with robust oversight mechanisms and provisions to establish fully protected MPAs in the high seas.

    Governments that have joined the “Blue Leaders” campaign call on all countries to rally behind these commitments at the upcoming meeting of the Conference of the Parties to the Convention on Biological Diversity (CoP15), expected to take place in Kunming, China in August 2022.

    Another key moment is the UN Ocean Conference, which is scheduled to be held in Lisbon, Portugal, from 27 June to 1 July. Each of these meetings offers an opportunity for countries to come together, join the Blue Leaders, and take the action that our ocean desperately needs.

    The ocean knows no boundaries: it unites us all as a physical link between coastal countries, communities, and individuals, and as the source of our food, water, and air. We all face similar challenges and similar opportunities. Let us be bold for the ocean together.

    This article was published earlier in weforum.org and is republished under the Creative Commons 4.0 International Public License.
  • How blockchain can help dismantle corruption in government services

    As India celebrated its 76th Independence Day with great fanfare and jubilation, it is time to introspect on the most serious threat to India’s growth and emergence as a world power. This threat is corruption, which is internal and societal. Over the 75 years of modern India’s journey, corruption has become endemic in Indian society. Infused by the political culture, it has seeped into every aspect of governance, be it the executive, the legislature, or the judiciary, because the average citizen has come to accept bribing as a routine and inevitable part of daily life. Hence, if India is to eliminate the scourge of corruption, it needs a massive transformation of its society, which can come only through the sustained practice of transparency, ruthless accountability, efficiency, and deterrent punishment.

    Corruption is commonly perceived in terms of monetary benefit, but it is much more: misuse of power, coercion, disinformation, lack of transparency, non-performance, inefficiency and delay tactics, and the lack of accountability and responsibility. There is a misconception that digitisation alone will overcome corruption. Unless timelines, tamper-proof records, and transparency are ensured, the corrupt will find ways around it. This is clearly seen in revenue tax systems, licensing systems, land registration systems, etc. Even though these departments have digitised their processes well, middlemen linking the client and the department have proliferated. This can be eliminated only by the right policies: ones that enforce strict timelines, respond to citizens’ complaints, enforce accountability and transparency on officials, and create clarity for the public in the usage of such systems.

    The adoption of blockchain technologies could go a long way toward eliminating corruption in India. Widespread corruption has been India’s greatest threat, and it has never been more urgent to address this problem through innovative technologies like blockchain.

    TPF republishes this article on ‘Blockchain and Governance’ from the World Economic Forum under the Creative Commons 4.0 licence.

    TPF Editorial Team

    Key Points

    • Blockchain could increase the fairness and efficiency of government systems while reducing opportunities for corruption;
    • Blockchain could improve the transparency and disclosure of procurement processes, investment in which can be lost to corruption;
    • The emerging technology can also enhance the property and land registry systems, streamlining lengthy processes and protecting people’s rights.

    Governments regularly have to make trade-offs between efficiency and fairness in their services. Unfortunately, choosing one over the other often increases the likelihood of corruption. In efficient systems, the public is largely content to operate within the bounds of the system, while inefficient systems cause large numbers of individuals to seek less-than-legal workarounds. Similarly, fair systems engender trust, pride and a sense of community, while unfair systems encourage individuals to seek out illegal alternatives without remorse.

    Occasionally, new technologies come along that offer the opportunity to increase both efficiency and fairness. Blockchain is one such opportunity and it has a variety of use-cases for government applications. Here are two in more detail:

    Blockchain and procurement

    Public procurement is the process by which governments acquire goods, services and works. It represents a significant portion of government budgets, accounting for 29% of general government expenditure, totalling €4.2 trillion, in OECD countries in 2013. With so much money at stake, it is unsurprising that the OECD estimates that 10-30% of the investment in publicly funded construction projects may be lost to corruption.

    Public procurement is vulnerable to corruption for a number of reasons. Parties in the procurement process, both on the public and private sides, are induced into corrupt acts by the size of potential financial gains, the close interaction between public officials and businesses, and how easy it is to hide corrupt actions. Blockchain has the potential to protect against these weaknesses at almost every stage of the procurement process.

    In the planning stage, public officials create the evaluation criteria by which bidding companies will be judged. In the bid evaluation stage, public officials assign scores to companies using those criteria as their rubric. Without transparency, there are many opportunities for compromised public officials to rig the outcome of the evaluation process: evaluation criteria could be retroactively changed, or company bids altered, for example. Blockchain can guarantee that any change is public, that the original information is retained, and that there is a record of who made the change, as the sketch below illustrates.
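
    To make the tamper-evidence idea concrete, here is a minimal Python sketch of a hash-linked, append-only audit log. It illustrates the general mechanism rather than the design of any specific government system, and all names in it are hypothetical.

    ```python
    # Minimal sketch of a hash-linked audit log: "changes" are new entries,
    # never edits, and every entry commits to the hash of the previous one.
    import hashlib
    import json
    import time


    def _digest(record: dict) -> str:
        # Canonical JSON so the same record always hashes identically.
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


    class AuditLog:
        def __init__(self):
            self.entries = []  # append-only list of chained entries

        def append(self, author: str, payload: dict) -> dict:
            prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
            entry = {
                "author": author,          # who made the change
                "timestamp": time.time(),  # when it was made
                "payload": payload,        # e.g. evaluation criteria or a bid score
                "prev_hash": prev_hash,    # links this entry to the previous one
            }
            entry["hash"] = _digest(entry)
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute every hash; any retroactive edit breaks the chain.
            prev = "0" * 64
            for e in self.entries:
                body = {k: v for k, v in e.items() if k != "hash"}
                if e["prev_hash"] != prev or _digest(body) != e["hash"]:
                    return False
                prev = e["hash"]
            return True


    log = AuditLog()
    log.append("official_A", {"criteria": ["price", "delivery time"]})
    log.append("official_A", {"criteria": ["price", "delivery time", "warranty"]})
    assert log.verify()                                 # intact chain
    log.entries[0]["payload"]["criteria"] = ["price"]   # a retroactive edit...
    assert not log.verify()                             # ...is immediately detectable
    ```

    A real procurement chain would add digital signatures and distributed consensus on top of this linking, but the detection property is the same: altering any past record invalidates every later hash.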

    Blockchain can also encourage a wider coalition of stakeholders to participate in and monitor procurement cycles. Too often, the most active stakeholders in any given procurement process are the public officials and the businesses directly involved – a potential problem when more than half of all foreign bribery cases likely occur to obtain public procurement contracts. Watchdog organizations, end-users, the media and citizens are discouraged from participating because procurement information is not readily available, untrustworthy, modified and/or delayed. Blockchain can provide an easily accessible, tamper-proof and real-time window into ongoing procurement processes.

    Projects integrating blockchain into procurement, such as this pilot programme in Colombia, conclude that “blockchain-based e-procurement systems provide unique benefits related to procedural transparency, permanent record-keeping and honest disclosure.” The Colombia project noted several drawbacks, such as scalability and vendor anonymity, but newer proposals like this one to overhaul India’s public procurement system are taking steps to overcome those and other shortcomings.

    Blockchain and registries

    Land title registries track the ownership of land and property for a given region. Registration titling systems have had important consequences for the economy, leading to “better access to formal credit, higher land values, higher investment in land, and higher income.” Yet they are far from perfect. They are inefficient: closing a property sale, for example, can take months and typically consumes 2-5% of the purchase price of a home. Registration systems can act as bottlenecks for land transactions; there are complaints going back to 2015 of England’s Land Registry having six-month transaction delays, and similar complaints persisted in 2020.

    The inefficiencies in land titling systems are a major source of corruption. The Organized Crime and Corruption Reporting Project’s 2019 report on land registry corruption in Bangladesh found that even obtaining a licence as a deed writer incurs a bribe to the highest-level administrators. Land registry corruption is not restricted to developing regions: in regions with longer histories of legal stability, it simply becomes more complex. The anti-corruption NGO Global Witness estimated in 2019 that £100 billion worth of property in England and Wales was secretly owned by anonymous companies registered in tax havens.

    A good first step in fighting corruption is to cut down on inefficiencies, and blockchain can streamline much of the process. Take, for example, the number of steps required in the UK for one person to sell a property to another, and compare this with a blockchain-based registry system.

    Some countries are already experiencing positive results. In 2018, Georgia registered more than 1.5 million land titles through their blockchain-based system.

    An urban land registry project underway in Africa uses blockchain to address the problems of digitizing urban land registries. In many densely populated impoverished urban areas, no pre-existing land registry or paper trail exists. Relying on the meagre data available often causes legal disputes. Courts quickly become overwhelmed and digitization efforts stall.

    Blockchain is now being added to the project. To confirm property rights, the new system seeks out and consults community elders. Through a blockchain-based application, those elders receive the authority to confirm the validity of land registry claims, and they can check directly with residents whether they consent to the land assessment. By delegating cryptographically guaranteed authority to respected community members, the quality of the data is improved and the number of land dispute cases handled by the judiciary should decrease. Finally, the remaining cases should resolve faster, since the elders’ cryptographic confirmations are admissible as evidence in land dispute resolution. A sketch of this kind of signed attestation follows.
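
    The “cryptographically guaranteed authority” mentioned above boils down to digital signatures. The sketch below is a simplified assumption about how such a system might work, not a description of the actual project; it uses Ed25519 signatures from the widely used Python `cryptography` package, and the claim format and names are hypothetical.

    ```python
    # Sketch of cryptographically delegated attestation for land claims.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The registry issues a keypair to a recognised community elder.
    elder_key = Ed25519PrivateKey.generate()
    elder_pub = elder_key.public_key()

    # The elder reviews a land claim and signs it to confirm its validity.
    # (Parcel and claimant identifiers here are made up for illustration.)
    claim = b"parcel=WARD7-0042;claimant=resident_17;consent=yes"
    attestation = elder_key.sign(claim)

    # Anyone holding the elder's public key (e.g. a court) can verify the
    # attestation; verify() raises InvalidSignature if the claim or the
    # signature was altered after signing.
    try:
        elder_pub.verify(attestation, claim)
        print("attestation valid: admissible as evidence")
    except InvalidSignature:
        print("attestation invalid or tampered")
    ```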

    The final challenge: Adoption

    The government blockchain-based projects referenced in this article represent just a few of a growing number of pilot or in-production applications of blockchain, showing that governments are serious about fixing inefficient and unfair services. The potential gains from blockchain are substantial, yet, as with any new technology, there are many challenges in designing and implementing blockchain-based applications. For large institutions such as governments to deploy blockchain-based applications in a timely fashion and reap the benefits, education and tools are imperative.

  • The Bridge on River Chenab

    “The only way to discover the limits of the possible is to go beyond them into the impossible”

    -Arthur C. Clarke

    Introduction

    On 13 Aug 2022, the bridge on the River Chenab in the Reasi district of J&K was finally completed. It was a case of the impossible becoming possible, made real by the high degree of self-belief of those who planned it and the sincerity of the thousands who worked hard on it over the last 18 years. Indeed, it was the best gift that Indian Railways in general, and Indian engineers in particular, could give to India on her 76th Independence Day. It is also highly symbolic that the bridge is located in J&K, and it appears a giant step towards the integration of J&K with the rest of the country.

    The bridge over the River Chenab is part of the Jammu-Udhampur-Baramulla railway line, which is under construction. While the Jammu-Udhampur, Udhampur-Katra and Banihal-Baramulla sections are already complete and open to traffic, the Katra-Banihal section is not, and the degree of difficulty in this section is enormous. Besides this bridge on the Chenab (more about it a little later), the bridge on the Anji Khad (still under construction) and a total of 35 tunnels and 37 bridges make this 111 km section in mountainous terrain extremely challenging, and an engineering marvel in the making.

    Progress of the Project – It is a 356 km railway project, starting at Jammu and going up to Baramulla. It was started in 1983 with the objective of connecting Jammu Tawi to Udhampur. Construction of the route faced natural challenges including major earthquake zones, extreme temperatures and inhospitable terrain. Finally, in 2005, the 53 km long Jammu–Udhampur section opened after 21 years, with 20 tunnels and 158 bridges; the cost of this section had escalated from the original estimate of ₹50 crore to ₹515 crore.

    In 1994, the railway accepted the necessity of extending the track to Baramulla. At that point it was thought that the project would have two disconnected arms: one from Jammu to Udhampur and the second from Qazigund to Baramulla. In 2002, the GoI declared this a national project, meaning that the entire funding would thereafter come from the central budget; the necessity of connecting the two disconnected arms was also accepted at that time. The estimated cost of the project was then assessed at ₹6,000 crore.

    In 2008, the 66 km section between Anantnag and Manzhama (outside Srinagar) was opened for traffic. In 2009, this service was extended to Baramulla, and during the same year the line from Anantnag was extended to Qazigund.

    Around the same time, an extension of the track from Baramulla to Kupwara was proposed, and its survey was completed in 2009. In 2009 itself, work on the section between Katra and Qazigund resumed after a review based on geotechnical studies. In 2011, the 11.215 km long Banihal-Qazigund tunnel across the Pir Panjal Range was completed, paving the way for a trial run from Banihal to Qazigund in Dec 2012. In 2014, the train route from Udhampur to Katra was also operationalised. Now the only missing link in this nationally vital rail line was Katra-Banihal. Finally, in 2018, the GoI approved the extension of the railway line to Kupwara.

    Degree of Difficulty in the Katra-Banihal Section – This is a 111 km long stretch, of which 97.34 km will run through tunnels. There are 20 major bridges (including the bridge across the Chenab river and the bridge on the Anji Khad) and 10 minor bridges on this stretch.

    Bridge Across Chenab

    Location: The Chenab Rail Bridge is a steel and concrete arch bridge between Bakkal and Kauri in the Reasi district of J&K, India. It needs to be noted that it is the highest railway bridge in the world. After many hiccups, excavation for the foundation of the bridge finally commenced in 2012; the contract was held by Afcons Infrastructure Limited. The alignment crosses a deep gorge of the Chenab River, which necessitated the construction of a long-span railway bridge with viaduct approaches on either side.

    Details: It is a 785-meter-long single-arch bridge, with a main arch of 467 meters. The total span of the bridge is 1,315 meters, including a 650-meter viaduct on the northern side. The deck is 359 meters above the river bed and 322 meters above the water surface, which is 35 meters more than the height of the Eiffel Tower. The project also entails the construction of 203 km of access roads. The deck is 13.5 meters wide and will carry two rail tracks. The total cost of the bridge is ₹1,486 crore.

    Design: A steel arch was chosen because constructing a pillar was difficult and the load had to be distributed; chords have been provided to cater for the swaying load. The steel structures of the bridge were manufactured in workshops built in the mountains: the workshops had to be moved to the building site because there is no proper road network in the challenging terrain, and the longest parts that could be delivered to the site were 12 meters in length. Therefore, four workshops, along with paint shops, were established on both sides of the valley. All steel materials, except for the smallest rolled profiles, were delivered to the mountains as steel boards. The insufficient infrastructure of the area caused additional problems: there was no electricity, and the water of the river was not suitable for making concrete, so all electricity had to be produced at the site and water was brought from further away in the mountains. The job was also challenging because the track has a curvature in the approach bridge; in this section, the construction-stage bearings were designed so that the steel deck could be launched in the curved portion as well. The bridge consists of about 25,000 tonnes of steel structures, the main portion of which was used for the arch section.

    It is a unique design, and none of the Indian codes fully catered for its validation; it was therefore decided to follow the BS code. The design caters for wind load effects as per wind tunnel tests and can take a wind pressure of 1,500 Pa. It is a blast-resistant design. The decking has been checked for fatigue as per the BS code. Most importantly, the design caters for redundancy within the structure: for a lower level of operation during mishaps, and against collapse in the extreme case of a one-pier failure. The area has high seismicity, and the bridge is designed to withstand earthquakes of severity 8 on the Richter scale. The bridge is designed for a rail speed of 100 kmph, which means it can withstand very high-intensity vibrations. The designed life of the bridge is 120 years, and to take care of assessed steel fatigue, the fatigue design selected is BS 5400 Part 10. The bridge will be able to withstand temperatures of minus 20°C and wind speeds of 266 kmph.

    Team: The viaduct and foundation were designed by M/s WSP (Finland), the arch by M/s Leonhart, Andra and Partners (Germany), and the foundation protection by IISc Bangalore. The executing agency is M/s Konkan Railway Corporation Limited.

    Status of Katra-Banihal project

    Although the construction of the Chenab Bridge is a major milestone in the progress of the project, many more landmarks remain to be crossed before its completion. Foremost among them is the Anji Khad bridge, which is expected to be ready only by Dec 2022. This rail section is expected to finally be operational by the middle of 2023.

    Conclusion

    The Jammu-Udhampur-Katra-Banihal-Srinagar-Baramulla rail project is a vital national project with a major bearing on national security and nation building. It is a matter of pride that Indian engineers have achieved what at one point appeared impossible. It will help integrate J&K with the rest of the country and will help strategically in many ways. The completion of the project will also give confidence to expeditiously complete other projects of national importance, such as the railway lines to Leh and to Tenga in the North-East.


  • On Metaverse & Geospatial Digital Twinning: Techno-Strategic Opportunities for India

    Download the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

    Abstract:

    With the advent of satellite imagery and smartphone sensors, cartographic expertise has reached everyone’s pocket, and we are witnessing a software-isation of maps that will underlie a symbiotic relationship between our physical spaces and virtual environments. This extended reality comes with enormous economic, military, and technological potential. While a range of technical, social and ethical issues remain to be worked out, ‘time and tide wait for no one’ is a metaphor well applied to the Metaverse and its development. This article briefly introduces the technological landscape, and then moves to a discussion of Geospatial Digital Twinning and its techno-strategic utility and implications. We suggest that India should, continuing the existing dichotomy of Open Series and Defence Series Maps, initiate Geospatial Digital Twins of specific areas of interest as a pilot for the development, testing, and integration of national metaverse standards and rules. Further, a working group, in collaboration with a body like NASSCOM, needs to be formed to develop the architecture and norms that facilitate Indian economic and strategic interests through the Metaverse and other extended reality solutions.

    Introduction

    Cartographers argue that maps are value-laden images, which do not just represent a geographical reality but also become an essential tool for political discourse and military planning. Not surprisingly, early scholars termed cartography a science of the princes. In fact, the history of maps is deeply intertwined with the emergence of the Westphalian nation-state itself, with states being the primary sponsors of any cartographic activity in and around their territories[1].
    Earlier, the outcome of such activities even constituted secret knowledge; for example, it was the British Military Intelligence HQ in Shimla that ran and coordinated many of the cartographic activities for the British in the subcontinent[2]. Thus, given our post-independence love for Victorian institutions, even Google Maps remained an illegal service in India until 2021[3].

    One of the key stressors that brought about this long-awaited change in policy was the increased availability of relatively low-cost but high-resolution satellite imagery in open online markets. But remote sensing is only one of the developments impacting modern mapmaking. A host of varied but converging technologies – particularly Artificial Intelligence, advanced sensors, Virtual and Augmented Reality, and increasing bandwidth for data transmission – are enabling a new kind of map. This new kind of map will not just be a model of reality, but a live and immersive simulation of reality. We can call it a Geospatial Digital Twin (GDT), and it will be a 4D artefact: given its predictive component and temporal data assimilation, a user could explore the hologram/VR through time and evaluate possible what-if scenarios.
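
    As a rough illustration of what makes a GDT “4D”, the following Python sketch (purely hypothetical, not drawn from any real GDT platform) stores timestamped states for a location so that past states can be replayed and a plug-in model can answer what-if queries about the future.

    ```python
    # Toy 4D twin cell: spatial coordinates plus a time-indexed state history.
    from bisect import bisect_right
    from dataclasses import dataclass, field


    @dataclass
    class GeoTwinCell:
        lat: float
        lon: float
        history: list = field(default_factory=list)  # (timestamp, state) pairs

        def observe(self, t: float, state: dict):
            self.history.append((t, state))
            self.history.sort(key=lambda pair: pair[0])  # keep time-ordered

        def state_at(self, t: float) -> dict:
            # Replay: latest observed state at or before time t.
            times = [ts for ts, _ in self.history]
            i = bisect_right(times, t)
            if i == 0:
                raise ValueError("no data before requested time")
            return self.history[i - 1][1]

        def what_if(self, t_future: float, model) -> dict:
            # Predictive component: delegate to any forecast model
            # (e.g. a traffic or flood simulation) seeded with the latest state.
            latest = max(ts for ts, _ in self.history)
            return model(self.state_at(latest), t_future)


    cell = GeoTwinCell(28.61, 77.21)
    cell.observe(t=0.0, state={"traffic": "light"})
    cell.observe(t=3600.0, state={"traffic": "heavy"})
    print(cell.state_at(1800.0))                 # -> {'traffic': 'light'}
    print(cell.what_if(7200.0, lambda s, t: s))  # trivial "persistence" forecast
    ```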

    Read the full paper: https://admin.thepeninsula.org.in/wp-content/uploads/2022/07/TPF_Working-Paper_MetaGDT-1.pdf

  • The Geopolitical Consolidation of Artificial Intelligence

    Key Points

    • IT hardware and semiconductor manufacturing have become strategically important and critical geopolitical tools of dominant powers. Ukraine war-related sanctions and Wassenaar Arrangement regulations have been invoked to ban Russia from importing or acquiring electronic components over 25 MHz.
    • Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery.
    • Taiwan, the USA, South Korea, and the Netherlands dominate global semiconductor manufacturing and supply chains. Taiwan dominates the global market, with a 60% share in 2021; a single Taiwanese company, TSMC (Taiwan Semiconductor Manufacturing Co), the world’s largest foundry, alone accounted for 54% of total global revenue.
    • China controls two-thirds of all silicon production in the world.
    • Monopolisation of semiconductor supply by a singular geopolitical bloc poses critical challenges for the future of Artificial Intelligence (AI), exacerbating the strategic and innovation bottlenecks for developing countries like India.
    • Developing a competitive advantage over existing leaders would require not just technical breakthroughs but also some radical policy choices and long-term persistence.
    • India should double down on research programs on non-silicon-based computing with a national urgency instead of pursuing a catch-up strategy.

    Russia was recently restricted, under categories 3 to 9 of the Wassenaar Arrangement, from purchasing any electronic components over 25 MHz from Taiwanese companies. That covers pretty much all modern electronics. Yet the tangibles of these sanctions must not deceive us into overlooking the wider impact that hardware access and its control have on AI policies and software-based workflows the world over. As Artificial Intelligence technologies reach a more advanced stage, the capacity to fabricate high-performance computing resources, i.e. semiconductor production, becomes key strategic leverage in international affairs.

    Semiconductors present a key chokepoint to constrain or catalyse the development of AI-specific computing machinery. In fact, most of the supply of semiconductors relies on a single country – Taiwan. The Taiwan Semiconductor Manufacturing Company (TSMC) manufactures Google’s Tensor Processing Unit (TPU), Cerebras’s Wafer Scale Engine (WSE), as well as Nvidia’s A100 processor. The following table provides a more detailed assessment[1]:

    | Hardware Type | AI Accelerator/Product Name | Manufacturing Country |
    | --- | --- | --- |
    | Application-Specific Integrated Circuits (ASICs) | Huawei Ascend 910 | Taiwan |
    | | Cerebras WSE | Taiwan |
    | | Google TPUs | Taiwan |
    | | Intel Habana | Taiwan |
    | | Tesla FSD | USA |
    | | Qualcomm Cloud AI 100 | Taiwan |
    | | IBM TrueNorth | South Korea |
    | | AWS Inferentia | Taiwan |
    | | AWS Trainium | Taiwan |
    | | Apple A14 Bionic | Taiwan |
    | Graphic Processing Units (GPUs) | AMD Radeon | Taiwan |
    | | Nvidia A100 | Taiwan |
    | Field-Programmable Gate Arrays (FPGAs) | Intel Agilex | USA |
    | | Xilinx Virtex | Taiwan |
    | | Xilinx Alveo | Taiwan |
    | | AWS EC2 F1 | Taiwan |

    As can be seen above, the cake of computing hardware is largely divided in such a way that the largest pie-holders happen to form a singular geopolitical bloc vis-a-vis China. This further shapes the evolution of territorial contests in the South China Sea. This monopolisation of semiconductor supply by a singular geopolitical bloc poses critical challenges for the future of Artificial Intelligence, especially exacerbating the strategic and innovation bottlenecks for developing countries like India. Since the invention of the transistor in 1947, the year of her independence, India has found herself in the unenviable position of having zero commercial semiconductor manufacturing capacity after all these years, even as her office-bearers continually promise leadership in the fourth industrial revolution.

    Bottlenecking Global AI Research

    There are two aspects of developing these AI accelerators – designing the specifications and their fabrication. AI research firms first design chips which optimise hardware performance to execute specific machine learning calculations. Then, semiconductor firms, operating in a range of specialities and specific aspects of fabrication, make those chips and increase the performance of computing hardware by adding more and more transistors to pieces of silicon. This combination of specific design choices and advanced hardware fabrication capability forms the bedrock that will decide the future of AI, not the amount of data a population is generating and localising.

    However, owing to the very high fixed costs of semiconductor manufacturing, AI research has had to focus on data and algorithms; innovations in AI’s algorithmic efficiency and model scaling have to compensate for the lack of equivalent progress in AI hardware. The aggressive consolidation and costs of hardware fabrication mean that firms in AI research are forced to outsource their fabrication requirements. In fact, as per DARPA[2], because of the high costs of getting their designs fabricated, AI hardware startups do not even receive much private capital: merely 3% of all venture funding in AI/ML between 2017 and 2021 went to startups working on AI hardware.

    But TSMC’s resources are limited and not everyone can afford them. To get TSMC’s services, companies globally have to compete with the likes of Google and Nvidia, so prices are driven higher still by demand-side competition. Consequently, only the best and the biggest work with TSMC, and the rest have to settle for its competitors. This has allowed a single company to turn into a gatekeeper in AI hardware R&D. And as the recent sanctions on Russia demonstrate, it is now effectively the pawn that has turned wazir in a tense geopolitical endgame.

    Taiwan’s AI policy also reflects this dominance in ICT and semiconductors, aiming to develop “world-leading AI-on-Device solutions that create a niche market and… (make Taiwan) an important partner in the value chain of global intelligent systems”[3]. Strong control over the supply of AI hardware, along with being #1 in the Global Open Data Index, not only gives Taiwan negotiating leverage in geopolitical competition, but also allows it to focus on hardware and software collaboration grounded in a seminal AI policy. In most countries, by contrast, AI policy and discourse revolve around managing the adoption and effects of AI, not around shaping the trajectory of its engineering and conceptual development as countries with a hardware advantage can.

    Now, to be fair, R&D is a time-consuming, long-term activity with a high chance of failure. Research focus thus naturally shifts towards low-hanging fruit: projects that can be achieved in the short term, before the commissioning bureaucrats are rotated. That is why we cannot have a nationalised AGI research group; nobody will be interested in a 15-20 year enterprise when there are promotions and election cycles to worry about. This applies to all bleeding-edge technology research funding everywhere: quantum communications will be prioritised over quantum computing, building larger and larger datasets over more intelligent algorithms, and silicon-based electronics over research into newer computing substrates and storage, because those things are friendlier to short-term outcome pressures, and bureaucracies are not exactly known to be risk-taking institutions.

    Options for India

    While China controls two-thirds of all silicon production in the world and wants to control the whole of Taiwan too (and TSMC, along with its 54% share in logic foundries), the wider semiconductor supply chain is a little too spread out for any one actor’s comfort. The leaders mostly control a specialised niche of the supply chain: for example, the US maintains a total monopoly on Electronic Design Automation (EDA) software solutions, the Netherlands has monopolised Extreme Ultraviolet and Argon Fluoride scanners, and Japan has been dishing out the 300 mm wafers used to manufacture more than 99 percent of today’s chips[4]. The end-to-end delivery of one chip could have it crossing international borders over 70 times[5]. Since this is a matured ecosystem, developing a competitive advantage over the existing leaders would require not just proprietary technical breakthroughs but also some radical policy choices and long-term persistence.

    It is also needless to say that the leaders are able to attract and retain the highest-quality talent from across the world. We, on the other hand, have a situation where regional politicians continue cribbing about incoming talent even from other Indian states. This, therefore, is the first task for India: to become a technology powerhouse, she has to, at a bare minimum, retain all her top talent and attract more. Perhaps, for companies in certain sectors or above a certain size, India should make it mandatory to spend at least X per cent of revenue on R&D and offer incentives to increase this share. That would revamp everything from recruitment and retention to business processes and industry-academia collaboration, and in the long run prove a far more socioeconomically useful instrument than the CSR regulation.

    It should also not escape anyone that human civilisation, with all its genius and promises of man-machine symbiosis, has managed to put all its eggs in a single basket that sits under the constant threat of Chinese invasion. It is thus in the interest of the entire computing industry to build geographical resiliency, diversity and redundancy into present-day semiconductor manufacturing capacity. We do not yet have the navy we need, but perhaps, in a diplomatic-naval recognition of Taiwan’s independence from China, the Quad could negotiate arrangements for an uninterrupted semiconductor supply in case of an invasion.

    Since R&D in AI hardware is essential for future breakthroughs in machine intelligence, but its production is extremely concentrated, mostly in one small island country, it behoves countries like India to look for ways to undercut the existing paradigm of computing hardware (e.g. by pivoting R&D towards DNA computing) instead of only pursuing a catch-up strategy. The current developments are unlikely to solve India’s integrated-circuit blues anytime soon. India could, in parallel, and I would emphatically recommend that she should, take a step back from all the madness and double down on research programs on non-silicon-based computing with a national urgency. A hybrid approach to computing machinery could also resolve some of the bottlenecks that AI research faces due to the dependencies and limitations of present-day hardware.

    As our neighbouring adversary Mr Xi says, core technologies cannot be acquired by asking, buying, or begging. In the same spirit, even if it might ruffle some feathers, a very discerning reexamination of the present intellectual property regime could also be very useful for the development of such foundational technologies and related infrastructure in India as well as for carving out an Indian niche for future technology leadership.

    References:

    1. The Other AI Hardware Problem: What TSMC means for AI Compute. Available at https://semiliterate.substack.com/p/the-other-ai-hardware-problem

    2. Leef, S. (2019). Automatic Implementation of Secure Silicon. In ACM Great Lakes Symposium on VLSI (Vol. 3)

    3. AI Taiwan. Available at https://ai.taiwan.gov.tw/

    4. Khan et al. (2021). The Semiconductor Supply Chain: Assessing National Competitiveness. Center for Security and Emerging Technology.

    5. Alam et al. (2020). Globality and Complexity of the Semiconductor Ecosystem. Accenture.

  • Recent advances in the use of ZFN-mediated gene editing for human gene therapy

    Targeted genome editing with programmable nucleases has revolutionized biomedical research. The ability to make site-specific modifications to the human genome has invoked a paradigm shift in gene therapy. Using gene-editing technologies, the sequence of the human genome can now be precisely engineered to achieve a therapeutic effect. Zinc finger nucleases (ZFNs) were the first programmable nucleases designed to target and cleave custom sites. This article summarizes the advances in the use of ZFN-mediated gene editing for human gene therapy and discusses the challenges associated with translating this gene-editing technology into clinical use.

    Zinc finger nucleases: first of the programmable nucleases

    In the late seventies, scientists observed that when DNA is transfected into yeast cells, it integrates at homologous sites by homologous recombination (HR). In stark contrast, when DNA was transfected into mammalian cells, it was found to integrate randomly at non-homologous sites by non-homologous end joining (NHEJ). HR events were so rare that laborious positive and negative selection techniques were required to detect them in mammalian cells [1]. Later work performed in Maria Jasin’s lab, using the I-SceI endonuclease (a meganuclease) and a homologous DNA fragment with sequences flanking the cleavage site, revealed that a targeted chromosomal double-strand break (DSB) at a homologous site can stimulate gene targeting by several orders of magnitude in mammalian cells that are refractory to spontaneous HR [2]. However, for this experiment to succeed, the recognition site for the I-SceI endonuclease had to be incorporated at the desired chromosomal locus of the mammalian genome by classical HR techniques. Thus, the generation of a unique, site-specific genomic DSB remained the rate-limiting step in using homology-directed repair (HDR) for robust and precise genome modification of human cells; that is, until the creation of zinc finger nucleases (ZFNs), the first of the programmable nucleases that could be designed to target and cleave custom sites [3,4].

    Because HR events are very rare in human cells, classical gene therapy – the use of genes to achieve a therapeutic effect – had focused on the random integration of normal genes into the human genome to reverse the adverse effects of disease-causing mutations. The development of programmable nucleases – ZFNs, TALENs and CRISPR-Cas9 – to deliver a targeted DSB at a pre-determined chromosomal locus has revolutionized the biological and biomedical sciences. The ability to make site-specific modifications to the human genome has invoked a paradigm shift in gene therapy. Using gene-editing technologies, the sequence of the human genome can now be precisely engineered to achieve a therapeutic effect. Several strategies are available for therapeutic gene editing, which include: 1) knocking out genes by NHEJ; 2) targeted addition of therapeutic genes to a safe harbour locus of the human genome for in vivo protein replacement therapy (IVPRT); and 3) correction of disease-causing mutations in genes.

    The first truly targetable reagents were the ZFNs, which showed that arbitrary DNA sequences in the human genome could be cleaved through protein engineering, ushering in the era of human genome editing [4]. We reported the creation of ZFNs, by fusing modular zinc finger proteins (ZFPs) to the non-specific cleavage domain of the FokI restriction enzyme, in 1996 [3]. ZFPs are composed of ZF motifs, each of which consists of approximately 30 amino acid residues containing two invariant pairs of cysteines and histidines that bind a zinc atom. ZF motifs are highly prevalent in eukaryotes. The Cys2His2 ZF fold is a unique ββα structure that is stabilized by a zinc ion [5]. Each ZF usually recognizes a 3–4-bp sequence and binds to DNA by inserting its α-helix into the major groove of the double helix. Three to six such ZFs are linked together in tandem to generate a ZFP that binds a 9–18-bp target site within the genome. Because the recognition specificities can be manipulated experimentally, ZFNs offered a general means of delivering a unique, site-specific DSB to the human genome. Furthermore, studies on the mechanism of cleavage by 3-finger ZFNs established that the cleavage domains must dimerize to effect an efficient DSB and that their preferred substrates are paired binding sites (inverted repeats) [6]. This realization immediately doubled the target sequence recognition of 3-finger ZFNs from 9 to 18 bp, which is long enough to specify a unique genomic address within cells. Moreover, two ZFNs with different sequence specificities can cut at heterologous binding sites (other than inverted repeats) when they are appropriately positioned and oriented within a genome.
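
    The arithmetic in this paragraph can be made concrete with a short sketch. The following Python snippet (an illustration with made-up sequences and an illustrative spacer range, not a production design tool) searches a sequence for a ZFN composite site: a 9-bp half-site and its inverted repeat flanking a short spacer, giving 18 bp of total recognition.

    ```python
    # Sketch of the paired-site arithmetic: a 3-finger ZFP reads a 9-bp
    # half-site (3 bp per finger), and cleavage requires a second half-site
    # in inverted orientation across a spacer.
    COMP = str.maketrans("ACGT", "TGCA")


    def revcomp(seq: str) -> str:
        return seq.translate(COMP)[::-1]


    def find_zfn_sites(genome: str, half_site: str, spacer=(5, 7)):
        """Yield (position, spacer length) where half_site and its inverted
        repeat flank a spacer of the allowed length."""
        n = len(half_site)
        left = revcomp(half_site)  # bottom-strand half-site, as seen on the top strand
        for i in range(len(genome) - 2 * n - spacer[0] + 1):
            if genome[i:i + n] != left:
                continue
            for gap in range(spacer[0], spacer[1] + 1):
                j = i + n + gap
                if genome[j:j + n] == half_site:
                    yield i, gap


    # A 9-bp half-site yields 18 bp of recognition across the paired site,
    # enough to be statistically unique in a ~3x10^9 bp genome
    # (4^18 ≈ 6.9x10^10 possible 18-mers > 3x10^9).
    genome = "TTCCGATCAGCAAAAAGCTGATCGGTT"
    print(list(find_zfn_sites(genome, "GCTGATCGG")))  # -> [(2, 5)]
    ```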

    ZFNs paved the way for human genome editing

    In collaboration with Dana Carroll’s lab, we then showed in 2001 that a ZFN-induced DSB stimulates HR in frog oocytes [7]. These groundbreaking experiments on ZFNs established the potential for inducing targeted recombination in a variety of organisms that are refractory to spontaneous HR, and ushered in the era of site-specific genome engineering, also commonly known as genome editing. A number of studies using ZFNs for genome editing in different organisms and cells soon followed [4,8–10]. The modularity of DNA recognition by ZFs made it possible to design ZFNs for a multitude of genomic targets for various biological and biomedical applications [4]. Thus, the ZFN platform laid the foundation for genome editing and helped to define the parameters and approaches for nuclease-based genome engineering.

    Despite the remarkable successes of ZFNs, the modularity of ZF recognition did not readily translate into a simple code that enabled easy assembly of highly specific ZFPs from ZF modules. ZFNs with high sequence specificity proved difficult to generate for routine use by scientists at large. This is because ZF motifs do not always act as completely independent modules in their DNA sequence recognition; more often than not, they are influenced by their neighbours. ZF motifs that recognize each of the 64 possible DNA triplets with high specificity never materialized, and simple modular assembly of ZFs did not always yield highly specific ZFPs, and hence ZFNs. Thus, DNA recognition by ZF motifs turned out to be more complex than originally perceived. With this realization came the understanding that ZFPs have to be selected in a context-dependent manner, requiring several cycles of laborious selection techniques and further optimization. This is not to say that it cannot be done, just that it requires substantial cost and time-consuming effort, as evidenced by the successful ZFN-induced genome editing applications to treat a variety of human diseases that are underway. For example, ZFN-induced mutagenesis of the HIV co-receptor CCR5 as a form of gene therapy has the potential to provide a functional cure for HIV/AIDS.

    Successor technologies – TALENs and CRISPR/Cas9 – have made the delivery of a site-specific DSB to the mammalian genome much easier and simpler. Custom nuclease design was facilitated further by the discovery of TAL effector proteins from plant pathogens, in which two amino acids (repeat variable di-residues, also known as RVDs) within a TAL module recognize a single base pair, independent of the neighbouring modules [11,12]. In a similar fashion to ZFNs, TAL effector modules were fused to the FokI cleavage domain to form TAL effector nucleases, known as TALENs [13]. The development of TALENs simplified our ability to make custom nucleases by straightforward modular design for the purposes of genome editing. However, the discovery of CRISPR/Cas9 – an RNA-guided nuclease in bacterial adaptive immunity – has made it even easier and cheaper, given that no protein engineering is required [14–17]. A constant single nuclease (Cas9) is used for cleavage, together with an RNA that directs target-site specificity based on Watson-Crick base pairing. The CRISPR/Cas9 system has democratized genome editing by making it readily accessible and affordable for small labs around the world.
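
    For contrast with the protein-engineered recognition of ZFNs, the sketch below (illustrative sequences; SpCas9’s NGG PAM requirement is the only assumed biology) shows how simple target-finding becomes when specificity comes from Watson-Crick pairing: one scans for a 20-nt match to the guide’s spacer followed by an NGG PAM.

    ```python
    # Sketch: locate candidate SpCas9 target sites on one strand.
    import re


    def find_cas9_sites(genome: str, spacer: str):
        """Yield start positions where the 20-nt spacer sequence is
        immediately followed by an NGG PAM (SpCas9 convention)."""
        for m in re.finditer(re.escape(spacer) + "(?=[ACGT]GG)", genome):
            yield m.start()


    genome = "TTGACCTGAAGCTGATCGGACTATGGTT"
    spacer = "ACCTGAAGCTGATCGGACTA"  # 20 nt, chosen to match the toy genome
    print(list(find_cas9_sites(genome, spacer)))  # -> [3]
    ```

    Retargeting here means changing a string, not re-engineering a protein, which is precisely why CRISPR/Cas9 lowered the barrier to entry so dramatically (a real design pipeline would, of course, also scan the opposite strand and score off-targets).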

    ZFN specificity & safety

    The efficacy of ZFNs depends to a large extent on the specificity of the ZFPs that are fused to the FokI nuclease domain. The higher the specificity of the ZFPs, the lower the ZFN’s off-target cleavage, and hence its toxicity. The early ZFNs designed for genomic targets displayed significant off-target activity and toxicity due to promiscuous binding and cleavage, particularly when encoded in plasmids and expressed at high levels in human cells. One way to increase the specificity of ZFNs is to increase the number of ZF motifs within each ZFN of the pair. This helps to improve specificity, but it is not always sufficient.

    Many different mechanisms could account for the off-target activity. They include ZFNs binding to single or unintended target sites, as well as to homodimer sites (the inverted repeat sites for each ZFN of the pair). Binding of a ZFN monomer to a single or unintended target site could be followed by dimerization of its cleavage domain with another monomer in solution. Therefore, one approach to reducing ZFN toxicity is to re-design the dimer interface of the cleavage domains to weaken the interaction and generate a heterodimer variant pair that will actively cleave only at heterodimer binding sites, and not at homodimer or single or unintended binding sites. We had previously shown that the activity of ZFNs could be abolished by mutating the amino acid residues that form the salt bridges at the FokI dimer interface [6]. Two groups achieved a reduction in ZFN off-target cleavage activity and toxicity by introducing amino acid substitutions at the dimer interface of the cleavage domain that inhibited homodimer formation but promoted obligate heterodimer formation and cleavage [18,19]. We showed further improvements to the obligate heterodimer ZFN pairs by combining the amino acid substitutions reported by the two groups [20].

    Another approach to reducing ZFN toxicity is to use ZFN nickases, which cleave only one predetermined DNA strand of a targeted site. ZFN nickases are produced by inactivating the catalytic domain of one monomer within the ZFN pair [4]. They induce greatly reduced levels of mutagenic NHEJ, since nicks are not efficient substrates for NHEJ. However, this comes at the cost of lower cleavage efficiency. A standard approach that has been widely used to increase the sequence specificity of ZFPs (and DNA-binding proteins in general) is to abolish non-specific protein contacts with the DNA backbone by amino acid substitutions. Again, this comes at the price of lower binding affinity of the ZFPs for their targets, resulting in lower on-target cleavage efficiency.

    Methods for ZFN delivery into cells

    The first experiments to show that ZFNs were able to cleave a chromatin substrate and stimulate HR in intact cells were performed by microinjection of ZFNs (proteins) and synthetic substrates into Xenopus oocytes [7]. Plasmid-encoded ZFNs and donors have also been co-transfected into human cells by using electroporation, nucleofection or commercially available chemical reagents. This potentially has two drawbacks: 1) the plasmids continue to express the ZFNs that accumulate at high levels in cells, promoting promiscuous DNA binding and off-target cleavage; and 2) there is also the possibility that the plasmid could integrate into the genome of the cells. To circumvent these problems, one could transfect mRNAs coding for the ZFNs along with donor DNA into cells. Adeno-associated virus (AAV) and lentivirus (LV) are the common vehicles used for the delivery of ZFNs and the donor into human cells.

    First-in-human study

    ZFN-mediated CCR5 disruption was the first-in-human application of genome editing, aimed at blocking HIV entry into cells [21]. Most HIV strains use the CCR5 co-receptor to enter cells. The CCR5∆32 allele contains a 32-bp deletion that results in a truncated protein, which is not expressed on the cell surface. The allele confers protection against HIV-1 infection without any adverse health effects in homozygotes. Heterozygotes show reduced levels of CCR5, and their disease progression to AIDS is delayed by 1 to 2 years. The potential benefit of CCR5-targeted gene therapy was highlighted by the only reported case of an HIV cure: the so-called “Berlin patient” received allogeneic bone marrow transplants from a CCR5∆32 donor during treatment for acute myeloid leukaemia and has ever since remained HIV-1 free without antiretroviral therapy (ART). This report gave impetus to gene therapy efforts to create CCR5-negative autologous T cells or hematopoietic stem/progenitor cells (HSPCs) in HIV-infected patients. The expectation was that the edited cells would provide the same anti-HIV effects as in the Berlin patient, but without the risks associated with allogeneic transplantation. CCR5 knockout via NHEJ was used in this strategy, since gene modification efficiency by HDR is relatively low. ZFN-induced genome editing of CCR5 is the most clinically advanced platform, with several ongoing clinical trials in T cells and HSPCs [22].

    The Phase I clinical trial (#NCT00842634) of knocking out the CCR5 receptor to treat HIV was conducted by Carl June’s lab in collaboration with scientists at Sangamo Biosciences (California). The goal was to assess the safety of modifying autologous CD4+ T cells in HIV-1-infected individuals [21]. Twelve patients on ART were infused with autologous CD4+ T cells in which the CCR5 gene had been inactivated by ZFN treatment. The study reported: 1) a significant increase in CD4+ T cells post-infusion; and 2) long-term persistence of CCR5-modified CD4+ T cells in peripheral blood and mucosal tissue. The therapeutic effects of the ZFN treatment in five patients were monitored during a 12-week interruption of ART. The study established that the rate of decline of the CCR5-modified CD4+ T cells was slower than that of the unmodified cells, indicating a protective effect of CCR5 disruption [22]. One patient showed both delayed viral rebound and a peak viral count that was lower than the patient’s historical levels. This patient was later identified as being heterozygous for CCR5∆32, which suggested that the beneficial effects of the ZFN treatment were magnified in this patient, probably due to increased levels of bi-allelic modification [22]. Thus, heterozygous individuals may have a greater potential for a functional HIV cure. The obvious next step is to apply the ZFN treatment to earlier precursors or stem cells. Editing HSPCs instead of CD4+ T cells has the potential to provide a long-lasting source of modified cells. The success of this strategy has been established in preclinical studies [23], and a recent clinical trial (#NCT02500849) has been initiated using this approach. Programs to disrupt CCR5 in T cells and HSPCs using the other nuclease platforms, including TALENs, CRISPR/Cas9 and megaTALs (a meganuclease fused to TAL effector modules), are also underway; these are at the pre-clinical stage.

    ZFN preclinical trials aimed at treating human monogenic diseases

    Sangamo Biosciences, Inc. has leveraged its proprietary database of proven ZFNs (that includes an extensive library of functional ZF modules and 2-finger units for the assembly of highly specific ZFNs) and its ZFN patent portfolio to enter into research collaborations with academic scientists for the application of ZFN-mediated gene editing strategies to treat a number of human diseases. Many of these programs are at the preclinical stage.

    An interesting gene-editing approach is gene replacement therapy. ZFN-mediated gene editing has shown promise for in vivo correction of the hFIX gene in hepatocytes of haemophilia B mice. Katherine High’s lab, in collaboration with Sangamo scientists, is developing a general strategy for liver-directed protein replacement therapies using ZFN-mediated site-specific integration of therapeutic transgenes within the albumin gene locus [24]. Using in vivo AAV delivery, they have achieved long-term expression of hFVIII and hFIX at therapeutic levels in mouse models of haemophilia A and B. Because albumin is very highly expressed, modifying less than 1% of liver cells can produce therapeutic levels of the relevant proteins, essentially correcting the disorders. Several pre-clinical studies are now underway to develop liver-directed protein replacement therapies for lysosomal storage disorders including Hurler, Hunter, Gaucher, Fabry and many others. We have previously shown that the CCR5 gene could serve as a safe harbour locus for protein replacement therapies [25]. We reported that targeted addition of the large CFTR transcription unit at the CCR5 chromosomal locus of human induced pluripotent stem cells (hiPSCs) achieves efficient CFTR expression. Thus, therapeutic genes could be expressed from the CCR5 chromosomal locus for autologous cell-based transgene-correction therapy to treat various recessive monogenic human disorders. Other safe harbour loci in the human genome, such as AAVS1, are also available for gene replacement therapy.

    Many labs around the world are also working to develop gene-editing strategies to treat several other diseases such as sickle cell anaemia, SCID, cancer (CAR T cells for immunotherapy) and many others, which are not discussed here. A list of clinical and pre-clinical studies using genome editing technologies for gene and cell therapy of various diseases is outlined elsewhere [26].

    Challenges facing ZFN-based gene editing before routine translation to the clinic

    Several challenges remain to be addressed before we see the routine translation of ZFN-based gene editing to the clinic. They include: 1) potentially harmful perturbations of the human genome due to off-target DSBs, which may be genotoxic or oncogenic; 2) current gene editing efficiencies, which may not be sufficient for certain diseases, particularly where gene-edited cells have no survival advantage; 3) safe and efficient delivery of ZFNs into target cells and tissues when using the in vivo approach; and 4) treatment costs, if and when ZFN-based gene editing is translated to the clinic for routine use.

    First, these gene-editing tools need further refinement before they can be safely and effectively used in the clinic. The off-target effects of gene editing technologies are discussed in detail elsewhere [4]. The efficacy of ZFNs is largely governed by the specificity of the ZFPs that are fused to the FokI cleavage domain: the higher the specificity of the ZFPs, the lower the ZFNs’ off-target cleavage and hence toxicity. As seen in the CCR5 clinical trial, some highly evolved ZFNs are very specific. In the clinic, engineered highly specific ZFNs will be used repeatedly to treat many different individuals [4]. Therefore, the design and construction of highly evolved ZFNs for a particular disease target will likely be a small part of the overall effort.

    Second, further improvements to gene editing efficiencies are needed for successful therapeutic genome editing. HSPC gene editing may not yield a sufficient number of edited cells for autologous transplantation, owing to the difficulties associated with ex vivo culture and expansion. An alternative approach is to gene-edit patient-specific iPSCs, which could then be differentiated into HSPCs. Since clonal selection, expansion and differentiation of gene-edited iPSCs are performed ex vivo, this may enable very high editing efficiencies, particularly when coupled with HDR-mediated insertion of a selection cassette. It would also allow complete genome-wide analysis of gene-edited cells for off-target effects. The patient-specific ex vivo approach has the potential to become a viable clinical alternative to modifying autologous HSPCs [25, 27]. In the case of autosomal recessive disorders, where both copies of the gene must be mutated for disease to manifest, correction of a single allele in a sufficient number of cells may be enough to confer a therapeutic effect. However, in the case of autosomal dominant disorders, where a single mutated copy of the gene causes disease, bi-allelic modification in a sufficient number of cells will be essential to achieve a therapeutic effect. Therefore, methods need to be developed to increase the levels of bi-allelic modification in human cells.

    Third, another potential issue pertains to the safe and efficient delivery of ZFNs into the appropriate target cells and tissues [4]. ZFNs are much smaller than TALENs or Cas9 and can therefore be readily delivered using AAV or LV constructs. The method of ZFN delivery may also vary with the human cell type: for example, Ad5/F35-mediated delivery of ZFNs was very efficient in CD4+ T cells but less efficient in HSPCs [23]. Nontoxic mRNA electroporation has been efficient for introducing ZFNs into HSPCs, and this approach has been adopted in a recent clinical trial (#NCT02500849). Recently, Kohn’s lab compared the efficiency, specificity and mutational signatures of ZFNs, TALENs and CRISPR/Cas9 during the reactivation of fetal haemoglobin expression by BCL11A knock-out in human CD34+ progenitor cells [28]. ZFNs showed more allelic disruption at the BCL11A locus than the TALENs or CRISPR/Cas9 did, consistent with the increased levels of fetal haemoglobin in erythroid cells generated in vitro from the gene-edited CD34+ cells. Genome-wide analysis revealed highly specific BCL11A cleavage by the ZFNs, whereas the TALENs and CRISPR/Cas9 reagents evaluated showed off-target cleavage activity. This study highlights the high variability in cleavage efficiencies at different loci and in different cell types across the different technology platforms. Therefore, there is a critical need to investigate ways to further optimize the delivery of these nucleases into human cells.

    Fourth, if and when therapeutic gene editing is translated into the clinic for routine use, a major challenge will be the treatment costs associated with these technologies. In the age of $1000-per-pill drugs and $100,000–$300,000-per-year treatment costs for certain chronic disease conditions, it is critical to simplify these 21st-century cures if they are to become accessible and affordable for the average citizen and for poor populations in the developing world. Many labs are working towards simultaneous gene correction and generation of patient-specific iPSCs to simplify treatment [4]. CRISPR/Cas9 may be best suited for this strategy [29].

    Finally, since all these gene-editing platforms have been shown to cleave at off-target sites with mutagenic consequences, a word of caution is warranted: a careful, systematic and thorough investigation of off-target effects at the genome-wide scale, for each and every reagent that will be used to treat human diseases, is absolutely essential to ensure patient safety. For these reasons, therapeutic gene editing with these technology platforms will ultimately depend on risk-versus-benefit analysis and informed consent.

    Financial & competing interests disclosure

    Dr Chandrasegaran is the inventor of the ZFN technology. Johns Hopkins University (JHU) licensed the technology exclusively to Sangamo Biosciences, Inc. (concomitant with its formation in 1995) to develop ZFNs for various biological and biomedical applications. As part of the JHU licensing agreement, Dr Chandrasegaran served on the Sangamo scientific advisory board from 1995 to 2000 and received royalties and stock as per JHU guidelines. The JHU ZFN patents expired in 2012 and became part of the public domain. No writing assistance was utilized in the production of this manuscript.

    References

    1. Mansour SL, Thomas KR, Capecchi MR. Disruption of the proto-oncogene int-2 in mouse embryo-derived stem cells: a general strategy for targeting mutations to non-selectable genes. Nature 1988; 336: 348–52.

    2. Rouet P, Smih F, Jasin M. Expression of a site-specific endonuclease stimulates homologous recombination in mammalian cells. Proc. Natl Acad. Sci. USA 1994; 91: 6064–8.

    3. Kim Y-G, Cha J, Chandrasegaran S. Hybrid restriction enzymes: zinc finger fusions to FokI cleavage domain. Proc. Natl Acad. Sci. USA 1996; 93: 1156–60.

    4. Chandrasegaran S, Carroll D. Origins of programmable nucleases for genome engineering. J. Mol. Biol. 2016; 428: 963–89.

    5. Pavletich NP, Pabo CO. Zinc finger-DNA recognition: crystal structure of a Zif268-DNA complex at 2.1 Å. Science 1991; 252: 809–17.

    6. Smith JJ, Bibikova M, Whitby F, Reddy AR, Chandrasegaran S, Carroll D. Requirements for double-strand cleavage by chimeric restriction enzymes with zinc finger DNA-recognition domains. Nucleic Acids Res. 2000; 28: 3361–9.

    7. Bibikova M, Carroll D, Segal DJ et al. Stimulation of homologous recombination through targeted cleavage by a chimeric nuclease. Mol. Cell. Biol. 2001; 21: 289–97.

    8. Bibikova M, Golic M, Golic KG, Carroll D. Targeted chromosomal cleavage and mutagenesis in Drosophila using zinc-finger nucleases. Genetics 2002; 161: 1169–75.

    9. Bibikova M, Beumer K, Trautman JK, Carroll D. Enhancing gene targeting using designed zinc finger nucleases. Science 2003; 300: 764.

    10. Urnov FD, Miller JC, Lee YL et al. Highly efficient endogenous human gene correction using designed zinc-finger nucleases. Nature 2005; 435: 646–51.

    11. Moscou MJ, Bogdanove AJ. A simple cipher governs DNA recognition by TAL effectors. Science 2009; 326: 1501.

    12. Boch J, Scholze H, Schornack S et al. Breaking the code of DNA binding specificity of TAL-type III effectors. Science 2009; 326: 1509–12.

    13. Christian M, Cermak T, Doyle EL et al. Targeting DNA double-strand breaks with TAL effector nucleases. Genetics 2010; 186: 757–61.

    14. Gasiunas G, Barrangou R, Horvath P, Siksnys V. Cas9-crRNA ribonucleoprotein complex mediates specific DNA cleavage for adaptive immunity in bacteria. Proc. Natl Acad. Sci. USA 2012; 109: E2579–86.

    15. Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E. A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science 2012; 337: 816–21.

    16. Mali P, Yang L, Esvelt KM et al. RNA-guided human genome engineering via Cas9. Science 2013; 339: 823–6.

    17. Cong L, Ran FA, Cox D et al. Multiplex genome engineering using CRISPR/Cas systems. Science 2013; 339: 819–23.

    18. Miller JC, Holmes MC, Wang J et al. An improved zinc-finger nuclease architecture for highly specific genome editing. Nat. Biotechnol. 2007; 25: 778–85.

    19. Szczepek M, Brondani V, Buchel J et al. Structure-based redesign of the dimerization interface reduces the toxicity of zinc-finger nucleases. Nat. Biotechnol. 2007; 25: 786–93.

    20. Ramalingam S, Kandavelou K, Rajenderan R, Chandrasegaran S. Creating designed zinc finger nucleases with minimal cytotoxicity. J. Mol. Biol. 2011; 405: 630–41.

    21. Tebas P, Stein D, Tang WW et al. Gene editing of CCR5 in autologous CD4 T cells of persons infected with HIV. N. Engl. J. Med. 2014; 370: 901–10.

    22. Wang CX, Cannon PM. The clinical applications of genome editing in HIV. Blood 2016; 127: 2546–52.

    23. DiGiusto DL, Cannon PM, Holmes MC et al. Preclinical development and qualification of ZFN-mediated CCR5 disruption in human hematopoietic stem/progenitor cells. Mol. Ther. Methods Clin. Dev. 2016; 3: 16067.

    24. Sharma R, Anguela XM, Doyon Y et al. In vivo editing of the albumin locus as a platform for protein replacement therapy. Blood 2015; 126: 1777–84.

    25. Ramalingam S, London V, Kandavelou K et al. Generation and genetic engineering of human induced pluripotent stem cells using designed zinc finger nucleases. Stem Cells Dev. 2013; 22: 595–610.

    26. Maeder ML, Gersbach CA. Genome editing technologies for gene and cell therapy. Mol. Ther. 2016; 24: 430–46.

    27. Ramalingam S, Annaluru N, Kandavelou K, Chandrasegaran S. TALEN-mediated generation and genetic correction of disease-specific hiPSCs. Curr. Gene Ther. 2014; 14: 461–72.

    28. Bjurström CF, Mojadidi M, Phillips J et al. Reactivating fetal hemoglobin expression in human adult erythroblasts through BCL11A knockdown using targeted nucleases. Mol. Ther. Nucleic Acids 2016; 5: e351.

    29. Howden SE, Maufort JP, Duffin BM et al. Simultaneous reprogramming and gene correction of patient fibroblasts. Stem Cell Rep. 2015; 5: 1109–18.

    This article was published earlier in 2017 in CELL & GENE THERAPY INSIGHTS. It is republished under the Creative Commons Licence.

    Feature Image Credit: www.nationalhogfarmer.com

  • Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?

    Clearview AI is offering its controversial tech to Ukraine for identifying enemy soldiers – while autonomous killing machines are on the rise

    Technology that can recognise the faces of enemy fighters is the latest thing to be deployed in the war theatre of Ukraine. This military use of artificial intelligence has all the markings of a further dystopian turn in what is already a brutal conflict.

    The US company Clearview AI has offered the Ukrainian government free use of its controversial facial recognition technology. It offered to help uncover infiltrators – including Russian military personnel – as well as to combat misinformation, identify the dead and reunite refugees with their families.

    To date, media reports and statements from Ukrainian government officials have claimed that the use of Clearview’s tools has been limited to identifying dead Russian soldiers in order to inform their families as a courtesy. The Ukrainian military is also reportedly using Clearview to identify its own casualties.

    This contribution to the Ukrainian war effort should also afford the company a baptism of fire for its most important product. Battlefield deployment will offer the company the ultimate stress test and yield valuable data, instantly turning Clearview AI into a defence contractor – potentially a major one – and the tool into military technology.

    If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. This is not a remote possibility. Last year, the UN reported that an autonomous drone had killed people in Libya in 2020, and there are unconfirmed reports of autonomous weapons already being used in the Ukrainian theatre.

    Our concern is that the hope that Ukraine will emerge victorious from what is a murderous war of aggression may cloud vision and judgement about the dangerous precedent set by the battlefield testing and refinement of facial-recognition technology, which could in the near future be integrated into autonomous killing machines.

    To be clear, this use is outside the remit of Clearview’s current support for the Ukrainian military; and to our knowledge Clearview has never expressed any intention for its technology to be used in such a manner. Nonetheless, we think there is real reason for concern when it comes to military and civilian use of privately owned facial-recognition technologies.


    The promise of facial recognition in law enforcement and on the battlefield is to increase precision, lifting the proverbial fog of war with automated precise targeting, improving the efficiency of lethal force while sparing the lives of the ‘innocent’.

    But these systems bring their own problems. Misrecognition is an obvious one, and it remains a serious concern, including when identifying dead or wounded soldiers. Just as serious, though, is that lifting one fog makes another roll in. We worry that, for the sake of efficiency, battlefield decisions with lethal consequences are likely to be increasingly ‘blackboxed’ – taken by a machine whose workings and decisions are opaque even to its operator. If autonomous weapons systems incorporated privately owned technologies and databases, these decisions would inevitably be made, in part, by proprietary algorithms owned by the company.

    Clearview rightly insists that its tool should complement and not replace human decision-making. The company’s CEO also said in a statement shared with openDemocracy that everyone who has access to its technology “is trained on how to use it safely and responsibly”. A good sentiment but a quaint one. Prudence and safeguards such as this are bound to be quickly abandoned in the heat of battle.

    Clearview’s systems are already used by police and private security operations – they are common in US police departments, for instance. Criticism of such use has largely focused on bias and possible misidentification of targets, as well as over-reliance on the algorithm to make identifications – but the risk also runs the other way.

    The more precise the tool actually is, the more likely it will be incorporated into autonomous weapons systems that can be turned not only on invading armies but also on political opponents, members of specific ethnic groups, and so on. If anything, improving the reliability of the technology makes it all the more sinister and dangerous. This doesn’t just apply to privately owned technology, but also to efforts by states such as China to develop facial recognition tools for security use.

    Outside combat, too, the use of facial recognition AI in the Ukrainian war carries significant risks. When facial recognition is used in the EU for border control and migration purposes – and it is, widely – it is public authorities that collect the sensitive biometric data essential to facial recognition, the data subject knows that it is happening, and EU law strictly regulates the process. Clearview, by contrast, has already repeatedly fallen foul of the EU’s GDPR (General Data Protection Regulation) and has been heavily sanctioned by data protection agencies in Italy and France.

    If privately owned facial recognition technologies are used to identify Ukrainian citizens within the EU, or in border zones, in order to offer them some form of protective status, a grey area would be established between military and civilian use within the EU itself. Any such facial recognition system would have to be used on civilian populations within the EU. A company like Clearview could promise to keep its civil and military databases separate, but this would need further regulation – and even then it would raise the question of how a single company can be entrusted with civil data that it can easily repurpose for military use. That is in fact what Clearview is already offering the Ukrainian government: it is building its military frontline recognition operation on civil data harvested from Russian social media records.

    Then there is the question of state power. Once out of the box, facial recognition may prove simply too tempting for European security agencies to put back. This has already happened in the US, where members of the New York Police Department are reported to have used Clearview’s tool to circumvent data protection and privacy rules within the department, and to have installed Clearview’s app on private devices in violation of NYPD policy.

    This is a particular risk in relation to the roll-out and testing in Ukraine. If Ukrainian accession to the European Union is fast-tracked, as many are arguing it should be, it will carry into the EU the use of Clearview’s AI as an established practice for military and potentially civilian purposes – both initially conceived without malice or intention of misuse, but setting what we think is a worrying precedent.

    The Russian invasion of Ukraine is extraordinary in its magnitude and brutality. But throwing caution to the wind is not a legitimate doctrine for the laws of war or the rules of engagement; this is particularly so when it comes to potent new technology. The defence of Ukraine may well involve tools and methods that, if normalised, will ultimately undermine the peace and security of European citizens at home and on future fronts. EU politicians should be wary of this. The EU must use whatever tools are at its disposal to bring an end to the conflict in Ukraine and to Russian aggression, but it must do so ensuring the rule of law and the protection of citizens.

    This article was published earlier in openDemocracy and is republished under the Creative Commons Licence.

    Feature Image Credit: www.businessinsider.in

  • Cryptos and CBDC: Is the RBI on the Right Track?

    Cryptos and CBDC: Is the RBI on the Right Track?

    “The history of money is entering a new chapter”. The RBI needs to heed this caution and not be defensive.

    That cryptocurrency will be discouraged by the government was the message from the Finance Minister (FM) during the budget discussion in Parliament: there will be heavy taxation and no relief in capital gains for past losses. But India has to contend with the growing use of cryptos in these uncertain times. Russian kleptocrats are reportedly using cryptos to evade sanctions, while Ukraine, which has been a centre for crypto trading due to its lax rules, is now using them to raise funds.

    President Joe Biden recently signed an executive order requiring government agencies to assess the use of digital currencies and cryptos, given their growing importance. The Indian authorities have also been trying, since October 2021, to bring in legislation to deal with the issue. Would the US clarifying its position also help India decide on cryptos?

    The Supreme Court (SC) has asked the government to clarify its position on the legality of cryptos. The FM, in the Budget 2022-23, proposed taxing capital gains and crypto transactions but did not declare them illegal. The RBI Governor was more expansive in February, when he highlighted three things. First, “Private cryptocurrencies are a big threat to our financial and macroeconomic stability”. Second, investors are “investing at their own risk”. And finally, “these cryptocurrencies have no underlying (asset)… not even a tulip”. Subsequently, an RBI Deputy Governor called cryptos worse than a Ponzi scheme and suggested that they not be “legitimized”. It is only recently that the RBI has announced that it will float a Central Bank Digital Currency (CBDC).

    Difficult to Declare Cryptos Illegal

    By calling cryptos ‘cryptocurrency’, the Governor has unintentionally identified them as a currency. His statements indicate the RBI’s worry about its place in the economy’s financial system as cryptos proliferate and become more widely used. This threat emerges from the decentralized character of cryptos, based on blockchain technology, which central banks cannot regulate and which enables enterprising private entities (as Satoshi Nakamoto did in initiating Bitcoin in 2009) to float cryptos that can function as assets and money.

    The total valuation of cryptos recently was upward of $2 trillion – more than the value of gold held globally. Undoubtedly, this impacts the financial systems and sovereignty of nations. So, the RBI rather than be defensive needs to think through how to deal with cryptos.

    Cryptos, which operate via the internet, can be banned only if all nations come together. Even then, tax havens may allow cryptos to function in defiance of a global agreement; they have, after all, been facilitating capital flight and illegality in spite of pressure from powerful nations.

    The genie is out of the bottle.

    Cryptos as Currency


    Will a CBDC help tackle the emerging problem? It will not, since a CBDC can only be a fiat currency and not a crypto, whereas cryptos can function as money. This difference needs to be understood.

    A currency is a token used in market transactions. Historically, not only paper money but also cows and copper coins have been used as tokens, the latter because they are useful in themselves. Paper currency, by contrast, is useless until the government declares it to be a fiat currency; everyone then, by consensus, accepts it at the value printed on it.

    So, paper currency, with little use value, derives its value from state backing rather than from any underlying commodity. Cryptos are a string of numbers in a computer programme and are even more worthless – and they lack state backing. So how do they become acceptable as tokens for exchange?

    Their acceptability to the rich enables them to act as money. Paintings with little use value command high valuations because the collectivity of the rich agrees on them. Cryptos are like that.

    Bitcoin, the most prominent crypto, has been designed to become expensive. Its total number is capped at 21 million, and it progressively requires more and more computing power and energy to produce (a process called ‘mining’, as for gold). As the cost of producing Bitcoin has risen, its price has increased. This has led to speculative investment, which drives the price higher, attracting more people to join. So, since 2009, in spite of wildly fluctuating prices, Bitcoin has yielded high returns, making speculation successful.

    Unlike the Tulip Mania

    The statement that cryptos have no underlying asset, ‘not even a tulip’, refers to the tulip mania of the 1630s, when tulip prices rose dramatically before collapsing. But tulips could not be used as tokens, while cryptos can be, via the internet. Also, the supply of tulips could expand rapidly as prices went up, whereas the number of Bitcoins is capped.

    So cryptos acquire value and become an asset that can be transacted via the net, which enables them to function as money. True, transactions using Bitcoin are difficult due to its underlying protocol, but other, simpler cryptos are available.

    The differing degrees of difficulty underlying cryptos arise from the problem of ‘double spending’. Fiat currency, whether in physical or electronic form, has the property that once it is spent, it cannot be spent again (except fraudulently), because it is no longer with the spender. But a piece of data on a computer can be copied and used again and again.
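
    To make the problem concrete, here is a minimal, hypothetical sketch – plain Python, with all names invented for illustration and not drawn from any real crypto’s code – of why a digital token needs a shared ledger: the token itself is just data that can be copied, so what stops a double spend is a record of what has already been spent.

        # Toy illustration of double spending: a digital coin is just data,
        # so the same coin can be presented twice. A shared ledger that
        # records spent coin IDs is what rejects the second attempt.
        # All names here are hypothetical, for illustration only.

        class ToyLedger:
            def __init__(self):
                self.spent = set()  # IDs of coins already spent

            def transfer(self, coin_id, sender, receiver):
                # Accept a payment only if this coin has not been spent before.
                if coin_id in self.spent:
                    print(f"REJECTED: coin {coin_id} already spent by {sender}")
                    return False
                self.spent.add(coin_id)
                print(f"OK: {sender} paid {receiver} with coin {coin_id}")
                return True

        ledger = ToyLedger()
        ledger.transfer("coin-42", "alice", "bob")    # OK: first spend
        ledger.transfer("coin-42", "alice", "carol")  # REJECTED: double spend

    In a fiat system, the physical note (or the banking system) plays the role of this ledger; in a crypto, the entire network has to agree on its contents, which is what the protocols described below provide.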

    Blockchain and encryption solved this problem through protocols such as ‘proof of work’ and ‘proof of stake’, which enable the use of cryptos for transactions. The former protocol is computationally demanding; the latter is simpler but more prone to hacking and fraud. Today, thousands of different kinds of cryptos exist – Bitcoin-like cryptos, altcoins and stablecoins. Some of them may be fraudulent, and people have lost money.
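
    A rough sense of how ‘proof of work’ makes that network agreement expensive to fake can be had from a few lines of code. The sketch below is a deliberately simplified, hypothetical illustration in Python using the standard hashlib library; the block contents and the hex-prefix difficulty rule are illustrative simplifications, not Bitcoin’s actual parameters. It searches for a nonce that makes the block’s hash start with a given number of zeros – each extra zero multiplies the expected work, which is how mining can be made progressively harder, as described above.

        import hashlib

        # Find a nonce such that sha256(block_data + nonce) starts with
        # `difficulty` zero characters. Illustrative only: real Bitcoin
        # hashes a binary block header twice and compares against a
        # numeric target, but the principle is the same.
        def mine(block_data, difficulty):
            target = "0" * difficulty
            nonce = 0
            while True:
                digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
                if digest.startswith(target):
                    return nonce, digest
                nonce += 1  # on average, ~16**difficulty attempts are needed

        nonce, digest = mine("alice pays bob 1 coin", difficulty=4)
        print(nonce, digest)  # anyone can verify the result with a single hash

    The asymmetry is the point: finding the nonce takes many hashes, but checking it takes one, so work that is expensive to produce is cheap to verify. ‘Proof of stake’ replaces this computation with economic collateral, which is why it is simpler but, as noted, more exposed to other attacks.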

    CBDC, Unlike Cryptos


    Blockchain enables decentralization: everyone on the crypto platform has a say. But central banks would not want that; they want a fiat currency to be exclusively issued and controlled by them. The protocols mentioned above, however, theoretically enable everyone to ‘mine’ and create currency. So, for a CBDC to remain under central control, solve the ‘double spending’ problem and still be a crypto (not just a digital version of the currency) seems impossible.

    A centralized CBDC will require the RBI to validate each transaction – something it does not do at present. Once a currency note is issued, the RBI does not keep track of its use in transactions. Keeping track will be horrendously complex, which could make a crypto-like CBDC unusable unless new secure protocols are designed. No wonder, according to the IMF Managing Director, “… around 100 countries are exploring CBDCs at one level or another. Some researching, some testing, and a few already distributing CBDC to the public. … the IMF is deeply involved in it ..”
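
    To see why per-transaction validation is such a departure, consider a minimal sketch of a centralized CBDC ledger – entirely hypothetical Python, implying no actual RBI or IMF design. Unlike a printed note, which circulates without the issuer’s further involvement, every digital transfer here must pass through the issuer’s books.

        # Toy centralized CBDC ledger: the central bank sees, checks and
        # records every transfer - something it never does for cash.
        # Hypothetical illustration only; no real CBDC design is implied.
        class CentralBankLedger:
            def __init__(self):
                self.balances = {}  # account -> balance in digital currency

            def issue(self, account, amount):
                # Create new digital currency, as printing notes does today.
                self.balances[account] = self.balances.get(account, 0) + amount

            def transfer(self, sender, receiver, amount):
                # Every payment, however small, is validated centrally.
                if self.balances.get(sender, 0) < amount:
                    return False  # insufficient funds: transaction rejected
                self.balances[sender] -= amount
                self.balances[receiver] = self.balances.get(receiver, 0) + amount
                return True

        rbi = CentralBankLedger()
        rbi.issue("alice", 100)
        print(rbi.transfer("alice", "bob", 30))   # True: validated and recorded
        print(rbi.transfer("alice", "carol", 90)) # False: would overdraw

    Scaling such checks to every retail payment in an economy – with privacy, resilience and offline use – is the ‘horrendously complex’ part referred to above.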

    Conclusion

    Issuing a CBDC will not only be complicated; it also cannot, at present, substitute for cryptos, which will eventually be used as money. This will impact the functioning of central banks and commercial banks. Further, it is now too late to ban cryptos unless there is global coordination, which seems unlikely; the rich, who benefit from cryptos, will oppose banning them. Can the US work out a solution? The IMF MD has said, “The history of money is entering a new chapter”. The RBI needs to heed this caution and not be defensive.

    A slightly shortened version of this article was published earlier in The Hindu.

    Feature Image Credit: doralfamilyjournal.com

  • Mining the Moon

    Mining the Moon

    In view of our upcoming event on ‘Scramble for the Skies: The Great Power Competition to control the Resources of Outer Space’, TPF is happy to republish this old but excellent article under the Creative Commons License 4.0. Establishing outer-space colonies and ‘mining the Moon’ is a distinct possibility in the near future, although mining at commercial scale may take decades. Space resources, in terms of materials to be mined, will become a major focus in the coming decades.

    This article by Paul K Byrne was published originally in The Conversation.

    If you were transported to the Moon this very instant, you would surely and rapidly die. That’s because there’s no atmosphere, and the surface temperature varies from a roasting 130 degrees Celsius (266 F) to a bone-chilling minus 170 C (minus 274 F). If the lack of air or the horrific heat or cold doesn’t kill you, then micrometeorite bombardment or solar radiation will. By all accounts, the Moon is not a hospitable place to be.

    Yet if human beings are to explore the Moon and, potentially, live there one day, we’ll need to learn how to deal with these challenging environmental conditions. We’ll need habitats, air, food and energy, as well as fuel to power rockets back to Earth and possibly other destinations. That means we’ll need resources to meet these requirements. We can either bring them with us from Earth – an expensive proposition – or we’ll need to take advantage of resources on the Moon itself. And that’s where the idea of “in-situ resource utilization,” or ISRU, comes in.

    Underpinning efforts to use lunar materials is the desire to establish either temporary or even permanent human settlements on the Moon – and there are numerous benefits to doing so. For example, lunar bases or colonies could provide invaluable training and preparation for missions to farther-flung destinations, including Mars. Developing and utilizing lunar resources will likely lead to a vast number of innovative and exotic technologies that could be useful on Earth, as has been the case with the International Space Station.

    As a planetary geologist, I’m fascinated by how other worlds came to be, and what lessons we can learn about the formation and evolution of our own planet. And because one day I hope to actually visit the Moon in person, I’m particularly interested in how we can use the resources there to make human exploration of the solar system as economical as possible.

    A rendering of a possible lunar habitat. Credit: Eos.org

    In-situ resource utilization

    ISRU sounds like science fiction, and for the moment it largely is. This concept involves identifying, extracting and processing material from the lunar surface and interior and converting it into something useful: oxygen for breathing, electricity, construction materials and even rocket fuel.

    Many countries have expressed a renewed desire to go back to the Moon. NASA has a multitude of plans to do so, China landed a rover on the lunar farside in January and has an active rover there right now, and numerous other countries have their sights set on lunar missions. As these plans take shape, the necessity of using materials already present on the Moon becomes more pressing.

    Anticipation of lunar living is driving engineering and experimental work to determine how to efficiently use lunar materials to support human exploration. For example, the European Space Agency is planning to land a spacecraft at the lunar South Pole in 2022 to drill beneath the surface in search of water ice and other chemicals. This craft will feature a research instrument designed to obtain water from the lunar soil or regolith.

    There have even been discussions of eventually mining and shipping back to Earth the helium-3 locked in the lunar regolith. Helium-3 (a non-radioactive isotope of helium) could be used as fuel for fusion reactors to produce vast amounts of energy at very low environmental cost – although fusion as a power source has not yet been demonstrated, and the volume of extractable helium-3 is unknown. Nonetheless, even as the true costs and benefits of lunar ISRU remain to be seen, there is little reason to think that the considerable current interest in mining the Moon won’t continue.

    It’s worth noting that the Moon may not be a particularly suitable destination for mining other valuable metals such as gold, platinum or rare earth elements. This is because of the process of differentiation, in which relatively heavy materials sink and lighter materials rise when a planetary body is partially or almost fully molten.

    This is basically what goes on if you shake a test tube filled with sand and water. At first everything is mixed together, but the sand eventually separates from the liquid and sinks to the bottom of the tube. And just as for Earth, most of the Moon’s inventory of heavy and valuable metals is likely deep in the mantle or even the core, where it is essentially impossible to access. Indeed, it’s because minor bodies such as asteroids generally don’t undergo differentiation that they’re such promising targets for mineral exploration and extraction.

    Artist’s impression of In Situ Resource Utilisation. Credit: Universe Today

    Lunar formation

    Apollo 17 astronaut Harrison H. Schmitt standing beside a boulder on the lunar surface. Credit: NASA

    Indeed, the Moon holds a special place in planetary science because it is the only other body in the solar system where human beings have set foot. The NASA Apollo program in the 1960s and 70s saw a total of 12 astronauts walk, bounce and rove on the surface. The rock samples they brought back and the experiments they left there have enabled a greater understanding, not only of our Moon but of how planets form in general, than would ever have been possible otherwise.

    From those missions, and others over the ensuing decades, scientists have learned a great deal about the Moon. We’ve discovered that, instead of growing from a cloud of dust and ice as the planets in the solar system did, our nearest neighbor is probably the result of a giant impact between the proto-Earth and a Mars-sized object. That collision ejected a huge volume of debris, some of which later coalesced into the Moon. From analyses of lunar samples, advanced computer modeling and comparisons with other planets in the solar system, we’ve learned, among many other things, that colossal impacts could be the rule, not the exception, in the early days of this and other planetary systems.

    Carrying out scientific research on the Moon would yield dramatic increases in our understanding of how our natural satellite came to be, and what processes operate on and within the surface to make it look the way it does.

    The coming decades hold the promise of a new era of lunar exploration, with humans living there for extended periods of time enabled by the extraction and use of the Moon’s natural resources. With steady, determined effort, then, the Moon can become not only a home to future explorers, but the perfect stepping stone from which to take our next giant leap.

    Feature Image Credit: SciTechDaily

  • New ‘Drone Rules’ are set to transform the drone business in India

    New ‘Drone Rules’ are set to transform the drone business in India

    Not many would know that Goldman Sachs has predicted that in the next five years the drone market will be worth over a hundred billion US dollars. India became an IT hub in the 1990s, and Indian programmers were sought after during the dot-com boom. This was not because of some great policy decisions taken at the time but rather because there was no policy on the subject. There were times when computers gathered dust in some ministries because the minister felt computers were sinister machines that could take away people’s livelihoods.

    ‘Drones’ are said to be the biggest technology disruption the world has seen since IT and the dot-com boom, touching the lives of people in all spheres. Transportation of goods, surveillance and surveying, along with forays into newer areas like agriculture and marine operations, are some of the areas where drones are already making waves.

    The recent ‘Draft Drone Rules’, released for public comments by the civil aviation ministry, are a welcome change from the previous regime, which gave the impression that obtaining a license would be a herculean task. Some companies, like AutomicroUAS Aerotech Pvt Ltd and many others, did obtain licenses under the provisions of the previous policy. The new draft policy is a more user- and business-friendly drone policy. This is the first – and a very good – decision by the new civil aviation minister, Jyotiraditya Scindia, after assuming office. Some of the highlights of the new drone policy are:

    • For drones weighing up to 500 kg, the Aircraft Rules, 1937 are no longer applicable. This is a significant change because those rules were written specifically for aircraft that carry humans.
    • A significant number of people fly nano and micro drones in India, including operators of model aircraft. Drones are now ubiquitous at marriage parties, and drone shots are increasingly used in the entertainment field. These people can now fly such drones/model aircraft without a drone pilot license. This single step will not only bolster self-employment but also reduce unemployment in the country. Being a drone pilot is also seen as one of the coolest jobs today.
    • Drone imports will still be controlled by the DGFT (Directorate General of Foreign Trade). At present, this could be seen as a bit of an impediment for entrepreneurs who depend on imports of certain drone parts. In the long run, however, this provision could bolster the making of those parts in India and selling them abroad. Easing imports of drones/drone parts now and bringing in stricter rules over time would have been a better option. The government could look at this aspect to promote innovators and children who are looking to learn, for whom importing certain critical drone components is vital. It is highly recommended that DGFT control of drone imports be done away with for the time being.
    • The creation of drone corridors is likely to change the face of the Indian economy. Logistics operations, last-mile connectivity, short-haul movement of goods between towns, and the cost of connectivity between places are set to change dramatically. This change alone, in my opinion, is likely to have a significant impact in times to come. Not many have realized the power of creating drone corridors, and what remains to be seen is how the government takes this rule forward in improving logistics connectivity and creating drone highways.
    • The provision in the rules for a drone research and development organisation is futuristic and is likely to change the face of the drone industry in India. Correctly harnessed and nurtured, it could enable the development of many centres of excellence for drones. The government needs to create an equivalent of ‘Silicon Valley’ for drones, so that organisations dealing with hardware, software, artificial intelligence et cetera can come together and take this endeavour forward.
    • Several companies across the world are working on unmanned traffic management (UTM), including an Indian company called Avianco. These companies could now collaborate with the Government of India in providing unmanned traffic information, and could work as service providers for tracking drones as well as giving drone operators simple NPNT (‘no permission, no takeoff’) clearance, which is one of the provisions in the new drone policy.
    • Third-party drone insurance, as specified in the rules, could be adequate. However, drones are costly equipment – readers would be surprised to know that many drones cost more than small hatchback cars – so owners may want to go for comprehensive insurance. This is a huge opportunity for insurance and insurance-facilitation companies like TropoGo in the area of drone insurance. In times to come, the number of drone insurance policies may well overtake the number of vehicle insurance policies in the world, since drones are set to replace much of the traditional workforce and many industries.
    • The ‘Drone Promotion Council’ specified in these rules should have come up yesterday, but it’s never too late. Countries that miss this ‘drone bus’ may get left behind in overall economic progress in times to come. Therefore, setting up the council is the need of the hour.

    The new drone policy of India is a welcome change. It is among the most well-thought-out and simplified policies India has seen in recent times. It aligns with Prime Minister Modi‘s vision for India in terms of reducing unemployment, improving the ease of doing business, promoting self-employment, taking India digital, and becoming a technology leader in the world. What the future holds will depend entirely on how these rules are interpreted and implemented, efficiently and without the usual horrors of the red tape of the past.

    Image Credit: www.geospatialworld.net