Category: Artificial Intelligence & Robotics

  • The Challenges of AI-enabled Underwater Platforms


    Vijay Sakhuja  Aug 03, 2018

    The Chinese People’s Liberation Army Navy (PLAN) is likely to acquire a new type of submarine by the early 2020s. According to the South China Morning Post, the Shenyang Institute of Automation under the Chinese Academy of Sciences (CAS) is developing a series of extra-large unmanned underwater vehicles (XLUUVs) that will feature Artificial Intelligence (AI). The vessels will be capable of performing a number of tasks without “human intervention,” will “handle their assignments and return to base on their own,” and will carry out reconnaissance, surveillance, and combat operations against enemy targets, as well as activities such as whale tracking. It will be possible to integrate these vessels with other manned and unmanned platforms and systems at sea, in the air, and on land to carry out coordinated missions.

    Lin Yang, the project director and a marine technology specialist, has noted that Chinese interest in these platforms is prompted by US plans to acquire XLUUVs capable of carrying “a variety of payloads, from sensors to weapons.” Two prototype units have been contracted, one to Lockheed Martin and the other to Boeing, which have been granted US$ 43.2 million and US$ 42.3 million, respectively, for research, design, and testing in 2020. The winner will receive orders for the production of up to five platforms. Unlike China and the US, Russia is developing the Status-6 autonomous torpedo, designed to deliver a 100-megaton warhead reportedly capable of “wiping out all living things” within a 1,500 km radius.

    These developments are clear signs of the role of AI-enabled underwater platforms and weapons in the future, and add a new dimension to underwater operations. There are at least four issues concerning them that merit attention.

    First is naval warfare. Navies have traditionally employed conventional submarines for intelligence gathering, laying mines, attacking enemy submarines and ships, and, more recently, conducting strikes against shore targets using land attack cruise missiles. The usual tactic for a conventional submarine has been to ‘lie in wait’ at the entrance to harbours or close to choke points and attack the enemy. Like their conventional counterparts, AI-enabled platforms can serve as scouts, and smaller platforms can masquerade as decoys to attract the enemy, forcing it to expose its position. If necessary, the AI tool kit should be able to detect and track the enemy, accelerate to high speed, and attack like a torpedo.

    It is useful to mention that the US’ XLUUVs will “operate autonomously for weeks or even months, periodically phoning home to check for new orders,” giving the US Navy a significant advantage in tactical operations. Similarly, the Status-6 autonomous torpedo can be used by the Russian defence ministry’s special division for deep-sea research to deliver “deep-sea equipment” or install surveillance devices.

    Second, the XLUUVs may entail new legal challenges. A debate is raging over regulating lethal autonomous weapon systems (LAWS), including a call to ban fully autonomous weapon systems centered on the principle of non-delegation of the authority to kill to non-human mechanisms. A global campaign – the Campaign to Stop Killer Robots – has called for an international ban on ‘killer robots’ and “a treaty for emerging weapons.” There is a belief that morality and generally accepted ethics need to be injected into the use of AI-enabled weapon systems, given that “inanimate machines cannot understand or respect the value of life.” If XLUUVs are put to combat operations, it would amount to the weaponisation of AI and would allow humans to absolve themselves of the moral consequences of killing or of using these platforms for self-defence.

    It is important to mention that engineers and scientists from the technology industry signed a pledge in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) calling upon “governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons.” They have since been joined by organisations such as Google DeepMind, the XPRIZE Foundation, University College London, ClearPath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society.

    Third, the XLUUVs rely primarily on AI to conduct operations. These platforms would transit long distances through a variety of undersea topography – ridges, seamounts, trenches, rocks, slopes, and basins – and would be vulnerable to collisions, to detection by civilian research and survey vessels, enemy submarines and warships, and underwater military detection systems including those used for seismic warnings. Further, underwater activity such as the laying of oil and gas pipelines and fiber optic cables can impact their safety. Besides, natural phenomena such as currents and tides can cause drift and make it difficult to hold position at the designated destination.

    Fourth is the impact of AI-enabled underwater platforms on the marine environment, particularly marine life such as whales, sharks, dolphins, and other migratory species. Sonar transmissions by XLUUVs can damage mammals’ sensory organs, resulting in disorientation or death. Whales may even mistake sonar waves for those of an attacker, and panic can drive them towards the platform.

    The development of XLUUVs presents clear dangers and could have potentially destabilising consequences for all countries. Further, their impact on marine life and the associated ecosystem – already under stress from pollution and plastic waste – does not appear to have been taken into consideration. Finally, an international treaty on emerging AI-enabled underwater platforms needs to be prepared, debated, and signed.

    This article was originally published in IPCS.

    http://ipcs.org/comm_select.php?articleNo=5497

  • PLA Navy and Robotics


    Vijay Sakhuja December 03, 2017

    Early this year, the UK Royal United Services Institute announced that by 2020 China’s naval order of battle would number 500 ships, comprising aircraft carriers, nuclear and conventional submarines, frigates and destroyers, amphibious ships, and logistic vessels. Further, Chinese defence spending would increase from US$ 123 billion in 2012 to US$ 233 billion by 2020. Clearly, China is on its way to building a large and powerful navy; in the last eight years, Chinese naval shipyards have built 83 ships. The speed of production has been characterized as ‘making dumplings’.

    One of the significant features of this naval build-up is investment in science and innovation led by digital technologies. Among these, the ‘robotic revolution’ merits attention. Till about 2013, China was the top importer, accounting for 20 per cent of industrial robots produced globally (36,560 units, as compared to Japan’s 26,015 units, with the US in third place at 23,679 units). Since then, Chinese demand for robots has increased: in 2016 it installed 90,000 imported units, nearly a third of the global total, which is expected to rise to 160,000 units by 2019.

    While imports have proved useful, the Chinese government has invested in the development of indigenous robots to support the country’s US$ 11 billion robot market. The plans aim to ensure that China-branded robots constitute over 50 per cent of total sales by 2020, up from 31 per cent in 2016. The country plans to produce 100,000 robots annually by 2020, compared with 33,000 in 2015. This is in line with the country’s “Made in China 2025” strategy, led by industrialization and informatization and focused on innovation, smart technology, the mobile Internet, cloud computing, big data, and the Internet of Things.

    No doubt robot-led industrialization has boosted China’s production and export competitiveness in a number of sectors such as car manufacturing, electronics, appliances, logistics, and the food industry, and the military has not lagged behind in its use either. In June 2017, the state-owned China Electronic Technology Group demonstrated 119 tiny propeller aircraft (the X-6 Skywalker, a commercial model) loaded with software and sensors capable of communicating with other drones in the swarm.

    There are nearly 1,300 drones currently in operation with the PLA and the PLA Air Force, but the use of robots in the naval domain is more recent. Perhaps it was the discovery in 2015 of a torpedo-like spy device off Hainan province that provided the requisite impetus to invest in unmanned platforms. The spy device, of US origin, was confirmed as an intelligence gathering system to obtain information on Chinese naval operations in the South China Sea.

    China’s advancements in underwater platform technology have been demonstrated by the indigenously built Haiyi-7000 (Chinese for “sea wings”) unmanned platform, which dived to a depth of 5,751 meters in the Mariana Trench in the western Pacific. Apparently, the technology for the platform was obtained from the US.

    The PLA Navy has been quite enthusiastic about using unmanned platforms. In 2016, a naval exhibit showcased Chinese plans to build an Underwater Great Wall comprising sensors moored to the ocean bed 3,000 meters deep, working in tandem with autonomous underwater vehicles (AUVs) launched from torpedo tubes, surface ships, missiles, and aircraft, to monitor underwater vessel movements, including tracking enemy submarines, particularly those of the US and Japan. Although China is yet to develop mature technology for underwater drones, Chinese scientists are working to build swarms of cheap, 3D-printed autonomous underwater robots connected through underwater communication and datalink technologies, as well as precise navigation systems and multiple sensor payloads.

    Another noteworthy demonstration of Chinese interest in robot ships is the development of the stealthy robotic trimaran warship D3000. According to the China Aerospace and Science Technology Corporation, a Chinese defense contractor, this vessel is designed to operate autonomously for months, or as part of a larger task force with manned ships. The D3000 can serve as a mothership for other unmanned systems and pass data on targets or unfriendly objects to ships and aircraft to work out firing solutions.

    China is also offering foreign customers new unmanned systems that are still undergoing testing or have only just entered service with the Chinese military. For instance, China showcased a 42-foot trimaran High Speed Intercept Boat in Malaysia in 2016. The vessel can be armed, achieve speeds of 80 knots, and potentially operate in unmanned swarms.

    The concept of operations for these USVs involves undertaking various missions and tasks, “including escort, interdiction of civilian freighters, patrolling offshore assets and working in a system of systems with other unmanned systems, including drones and submersibles”.

    In 2014, President Xi Jinping, during a speech to the Chinese Academy of Sciences, called for a “robot revolution” to raise industrial productivity, and it is fair to argue that Chinese shipyards, naval research centers, and the PLA Navy are “certainly not going to sit the robotic revolution out”. The Chinese government has characterized the robotic industry as the “jewel in the crown of manufacturing”. This is best demonstrated by the importance of robotics in the 13th Five-Year Plan (2016-2020), the Made in China 2025 program, the Robotics Industry Development Plan, and the Three-Year Guidance for Internet Plus Artificial Intelligence Plan (2016-2018). In essence, China is exhibiting a high degree of confidence in its ability to develop modern unmanned naval technologies.

    Dr Vijay Sakhuja is a co-founder and trustee of TPF.

  • Artificial Intelligence: The Good and the Evil


    Vijay Sakhuja   June 02, 2018

    The jubilation over the opportunities presented by artificial intelligence is quite telling, and its use has found favour among a number of stakeholders. Researchers and proponents believe that future AI-enabled machines will restructure many sectors of industry such as transportation, health, science, and finance, and automate many human tasks, including in restaurants. Intelligent machines will be at the forefront, and according to Google’s director of engineering Ray Kurzweil, by 2019 ‘computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans’. In essence, technology developers are now working to teach the machines and, through their own efforts, make artificial intelligence as good as or even better than human-level intelligence.
    Amid this euphoria, there is also a strong belief that an uncontrolled and ‘runaway’ march of artificial intelligence towards final maturation could be catastrophic and invite dystopian problems. Elon Musk, CEO of Tesla and SpaceX, has cautioned that artificial intelligence is a ‘fundamental risk to the existence of human civilization’ and that “we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late.”
    The military domain is also in the throes of a transformation led by disruptive technologies such as artificial intelligence, big data, quantum computing, and deep machine learning, to name a few. Robots are believed to be a panacea for a number of military tasks and missions, including warfighting by ‘killer robots’ as fully autonomous weapons. The adverse impact of fully autonomous weapons such as killer robots is not yet fully understood.
    However, there have been some positive developments in this regard. For instance, nearly 4,000 employees of Google submitted a letter to their leadership in April 2018 stating that the company should not develop technologies that would take it into the ‘business of war’. They urged that the ongoing Project Maven be stopped and that the company “draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
    Project Maven, formally known as the Algorithmic Warfare Cross-Functional Team, is a U.S. Department of Defense (DoD) program that uses artificial intelligence and machine learning to help analyze the huge amounts of surveillance footage captured by drones. The project will enable the Pentagon to “deploy machine-learning techniques that internet companies use to distinguish cats and cars to spot and track objects of military interest, such as people, vehicles, and buildings.” Further, it will be possible to “automatically annotate objects such as boats, trucks, and buildings on digital maps.” The DoD plans to deploy image-analysis technologies onboard unarmed and armed drones, after which it will be “only a short step to autonomous drones authorized to kill without human supervision or meaningful human control.” The initial plan was to have the system ready by December 2017, but the project has run into difficulty after Google employees raised their objections.
    In this context, the global movement against killer robots, led by the Campaign to Stop Killer Robots since 2013, has found favour among at least 28 countries. They are seeking an international treaty or instrument whereby human control exists over any lethal function of a weapon system. Their voice has gained significant momentum over the last five years, and the global coalition against killer robots now comprises 64 international, regional, and national non-governmental organizations (NGOs) in 28 countries that call for a preemptive ban on fully autonomous weapons.
    While that may be the shape of things to come, the fear is that technology developers may not be able to determine what is ‘good’ and what is ‘evil’. Issues of ethics and morality are fast taking precedence, and the Google employees’ call to rein in artificial intelligence and control its future development merits attention.
    Last month, on May 14, scholars, academics, and researchers who study, teach about, and develop information technology came out in support of the Google employees and expressed concern that Google had “moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally.” It is now reported that Google and its parent company Alphabet have taken note of these issues and are beginning to address some ethical questions related to the “development of artificial intelligence (AI) and machine learning, but, as yet, have not taken a position on the unchecked use of autonomy and AI in weapon systems.”
    The question before the technology developer is therefore not about its ability to produce high-end technology, but how to teach morality and ethics to the machines. It is fair to argue that an uncontrolled coalescence of artificial intelligence and self-learning machines would cause great harm to society, particularly in the context of killer robots and drones that have caught the fancy of a few militaries.
    Dr Vijay Sakhuja is a founding member of The Peninsula Foundation, Chennai.