5G/Internet of Things Good Contention Answers

China 5G in Europe bad

China-US 5G & AI arms race now

Gerstein, July 5, 2019, Daniel M. Gerstein served as the undersecretary (acting) and deputy undersecretary in the Department of Homeland Security’s Science and Technology Directorate from 2011–2014. Gerstein’s latest book, The Story of Technology: How We Got Here and What the Future Holds, will be published by Prometheus Books in August 2019, Is the United States Ready for a Tech War?, https://nationalinterest.org/feature/united-states-ready-tech-war-65506
The early skirmish lines of this growing tech war can be seen in several key areas. In the next generation of wireless communications, the 5G competition—which will likely see Internet speeds of one hundred times greater capacity than today—is nothing short of an arms race between China and the United States, with national security and economic consequences at stake. Chinese company Huawei looks positioned to emerge the leader, with the recent Russia-Huawei deal further highlighting this concern…. Artificial intelligence and cyber have also emerged as important components in this tech war. It seems almost no topic is immune from discussions about the potential for AI, from AI-enabled robots that could lead to a workerless society to killer robots on a future battlefield. In cyber, high profile attacks have called into question the security of the Internet, the protection of data that resides in cyberspace, and our very privacy.

Foreign investment in the EU’s 5G networks threatens cyber security

European Commission and HR/VP contribution to the European Council, March 9, 2019, https://ec.europa.eu/commission/sites/beta-political/files/communication-eu-china-a-strategic-outlook.pdf EU-China: A Strategic Outlook
Foreign investment in strategic sectors, acquisitions of critical assets, technologies and infrastructure in the EU, involvement in EU standard-setting and supply of critical equipment can pose risks to the EU’s security. This is particularly relevant for critical infrastructure, such as 5G networks that will be essential for our future and need to be fully secure. 5G networks will provide the future backbone of our societies and economies, connecting billions of objects and systems, including sensitive information and communication technology systems in critical sectors. Any vulnerability in 5G networks could be exploited in order to compromise such systems and digital infrastructure – potentially causing very serious damage.

China could use network dominance to disrupt US military operations and support authoritarianism

Statement of Former US Military Leaders, April 3, 2019, https://www.washingtonpost.com/context/former-u-s-military-leaders-warn-of-risks-to-future-combat-operations-posed-by-chinese-built-5g-networks/?noteId=75d276c8-8ac5-4dc7-98c7-981ec0a5da72&questionId=b41ae8c1-2697-4dd7-9a16-a56318713b37&utm_term=.f2035886abed, Former U.S. military leaders warn of risks to future combat operations posed by Chinese-built 5G networks
As military leaders who have commanded U.S. and allied troops around the world, we have grave concerns about a future where a Chinese-developed 5G network is widely adopted among our allies and partners. Our concerns fall into three main categories: 1. Espionage: Chinese-designed 5G networks will provide near-persistent data transfer back to China that the Chinese government could capture at will. This is not our opinion or even that of our intelligence community, but the directive of China’s 2017 Intelligence Law, which legally requires that “any organization or citizen shall support, assist, and cooperate with” the security services of China’s One-Party State.

  2. Future military operations: The Department of Defense is still considering how it could use future 5G networks to share intelligence or conduct military operations. The immense bandwidth and access potential inherent in commercial 5G systems means effective military operations in the future could benefit from military data being pushed over these networks. There is reason for concern that in the future the U.S. will not be able to use networks that rely on Chinese technology for military operations in the territories of traditional U.S. allies or emerging partners in Europe, Asia, and beyond. While our concern is for future operations, the time for action is now. Physical infrastructure like ports can easily change ownership, but digital infrastructure is more pernicious because, once constructed, there are limited options to reverse course. This is even more true for Chinese telecommunications firms whose systems are not interoperable with other companies’ equipment, cultivating a persistent reliance on the Chinese firm.

  3. Democracy and human rights: The export of China’s 5G technologies and suite of related digital products to other countries will advance a pernicious high-tech authoritarianism. If China is invited by foreign governments to build these networks, Beijing could soon have access to the most private data of billions of people, including social media, medical services, gaming, location services, payment and banking information, and more. This information will give China’s repressive government unprecedented powers of foreign influence to favor authoritarian allies, coerce neighboring countries seeking to preserve their sovereignty, and punish human-rights activists the world over. This will make authoritarian governments more powerful while making liberal states more vulnerable. We believe calls for the intelligence community to produce a “smoking gun” to illustrate Beijing’s pernicious behavior misunderstand the challenge at hand. The Chinese Cyber Security Law and other national strategies like “Military-Civil Fusion” mean that nothing Chinese firms do can be independent of the state. Firms must support the law enforcement, intelligence, and national security interests of the Chinese Communist Party—a system fundamentally antithetical to the privacy and security of Chinese citizens and all those using Chinese networks overseas. The onus should instead be on Beijing to explain why it is prudent for countries to rely on Chinese telecommunications technology when Beijing’s current practices threaten the integrity of personal data, government secrets, military operations, and liberal governance.

Admiral James Stavridis
USN (Ret.) Commander, U.S. European Command; U.S. Southern Command
General Philip Breedlove
USAF (Ret.) Commander, U.S. European Command
Admiral Samuel Locklear III
USN (Ret.) Commander, U.S. Pacific Command
Admiral Timothy J. Keating
USN (Ret.) Commander, U.S. Pacific Command
Lieutenant General James R. Clapper Jr.
USAF (Ret.) Director of National Intelligence
General Keith B. Alexander
USA (Ret.) Commander, U.S. Cyber Command & Director, National Security Agency

Huawei’s EU 5G penetration risks espionage, digital authoritarianism, and information warfare

European Parliament Report, April 5, 2019, 5G in the EU and Chinese telecoms suppliers, https://www.europarl.europa.eu/RegData/etudes/ATAG/2019/637912/EPRS_ATA(2019)637912_EN.pdf
By contrast, Huawei enjoys significant market penetration in the EU on account of its competitive prices and supposedly better quality. Until recently there has been little public awareness in the EU of how the close ties Chinese public and private firms have with the Chinese Communist Party, in order to thrive in the Chinese eco-system, may expose liberal democracies to cyber-attacks, cyber-espionage, digital authoritarianism, and information warfare in the context of 5G. A 2018 US consultancy report points to a range of risk factors associated with Huawei, the most serious concerns being those related to cybersecurity, state-sponsored espionage, military influence and foreign political interference. China uses advanced technologies for the systematic digital surveillance of its population, notably in its restive Xinjiang province, while the EU pursues a human-centric approach to advanced technologies, with the protection of the digital rights of the individual being key. From a legal perspective, Chinese companies and individuals are obliged under penal sanctions to cooperate in intelligence gathering under the Chinese National Intelligence Law as well as under other related Chinese laws. Hence, the US has claimed that China could use Huawei’s 5G network gear as a Trojan horse, by compelling operators to spy, steal corporate, government or military secrets and transmit data to the Chinese authorities.

Allies buying Huawei harms US national security

James Andrew Lewis is a senior vice president at the Center for Strategic and International Studies in Washington, D.C., July 2, 2019, https://www.csis.org/analysis/not-much-concession-huawei, Not Much of a Concession on Huawei
Huawei harms national security primarily when other countries, especially our allies, buy and use its equipment. In the long term, China’s use of espionage and subsidies to build national champions like Huawei must be confronted, but the near-term effect of the president’s G20 announcement is minimal—it does not change the status quo in security. The primary task is to persuade other countries not to buy Huawei. Reminding them of the company’s dependence on U.S. technology will only help.

Being on China’s internet networks threatens cyber security and increases data-driven surveillance

European Law Blog, June 25, 2019, The Road that divided the EU: Italy joins China’s Belt and Road Initiative, https://europeanlawblog.eu/2019/06/25/the-road-that-divided-the-eu-italy-joins-chinas-belt-and-road-initiative/
A number of officials voiced concerns that through developing telecommunications networks China could spy and disrupt European communications[25]. These concerns are especially pertinent in the light of the ongoing cybersecurity controversy surrounding Chinese tech giant Huawei[26]. In order to avoid such adverse effects, ‘[s]ome EU officials advocate the bloc’s right to veto Chinese investments across the region[27]’, pointing out, moreover, that China has a poor reputation regarding transparency and that unfair trade practices will only favour Chinese firms. These concerns are also reflected in the European Commission Report titled ‘EU-China –

The link is magnified, as China needs foreign 5G development to sustain its industry

Jonathan Holslag, 2019, Jonathan Holslag is a Belgian professor, author and policy advisor; he is a professor of international politics at the Free University of Brussels, where he teaches diplomatic history and international politics, and also lectures on geopolitics at various defence academies in Europe, The Silk Road Trap, page number at end of card
It has a mission to increase cross-border land, submarine and trunk networks, and to build so-called telecom hubs and network operation centres. It has a long list of such linkages, with Russia, Mongolia, Kazakhstan, Kyrgyzstan, India, Vietnam, Macau, Myanmar, Nepal, Japan, Singapore and Europe. It states as its objective also to serve as a catalyst for growth of all related industries – business services, equipment producers, infrastructure developers, e-commerce – as well as to make Chinese standards the leading standards in the world, such as the 4G TD-LTE protocol and Huawei’s 5G Protocol.113 The strategy is essentially the same as for the other segments of the sector. A report by the Ministry of Commerce states:   First, the communication infrastructure of many countries along the Belt and Road is backward. Second, the domestic telecommunications industry suffers from excess capacity so that expanding on the international market is the best way out, for instance by promoting telecom services, telecom network construction and supporting and protecting the entire telecom industry chain to go out. Third, telecom operators can also take advantage of the going out of domestic internet and e-commerce enterprises and integrate themselves into a unified platform.114 Holslag, Jonathan. The Silk Road Trap (p. 139). Wiley. Kindle Edition.

Loss of privacy and complete control of big data mean totalitarian social control

Simon Denyer, October 22, 2016, Washington Post, China’s plan to organize its society relies on “big data” to regulate everyone, https://www.washingtonpost.com/world/asia_pacific/chinas-plan-to-organize-its-whole-society-around-big-data-a-rating-for-everyone/2016/10/20/1cd0dd9c-9516-11e6-ae9d-0030ac1899cd_story.html?tid=pm_world_pop_b
Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are. In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date. This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020. It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.” A high-level policy document released in September listed the sanctions that could be imposed on any person or company deemed to have fallen short. The overriding principle: “If trust is broken in one place, restrictions are imposed everywhere.” A whole range of privileges would be denied, while people and companies breaking social trust would also be subject to expanded daily supervision and random inspections. The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.” The government hasn’t announced exactly how the plan will work — for example, how scores will be compiled and different qualities weighted against one another. But the idea is that good behavior will be rewarded and bad behavior punished, with the Communist Party acting as the ultimate judge. This is what China calls “Internet Plus,” but critics call a 21st-century police state. A version of Big Brother? Harnessing the power of big data and the ubiquity of smartphones, e-commerce and social media in a society where 700 million people live large parts of their lives online, the plan will also vacuum up court, police, banking, tax and employment records. Doctors, teachers, local governments and businesses could additionally be scored by citizens for their professionalism and probity. “China is moving towards a totalitarian society, where the government controls and affects individuals’ private lives,” said Beijing-based novelist and social commentator Murong Xuecun. “This is like Big Brother, who has all your information and can harm you in any way he wants.” At the heart of the social credit system is an attempt to control China’s vast, anarchic and poorly regulated market economy, to punish companies selling poisoned food or phony medicine, to expose doctors taking bribes and uncover con men preying on the vulnerable. “Fraud has become ever more common in society,” Lian Weiliang, vice chairman of the National Development and Reform Commission, the country’s main economic planning agency, said in April. “Swindlers have to pay a price.” Yet in Communist China, the plans inevitably take on an authoritarian aspect: This is not just about regulating the economy, but also about creating a new socialist utopia under the Communist Party’s benevolent guidance. 
“A huge part of Chinese political theater is to claim that there is an idealized future, a utopia to head towards,” said Rogier Creemers, a professor of law and governance at Leiden University in the Netherlands. “Now after half a century of Leninism, and with technological developments that allow for the vast collection and processing of information, there is much less distance between the loftiness of the party’s ambition and its hypothetical capability of actually doing something,” he said. But the narrowing of that distance raises expectations, says Creemers, who adds that the party could be biting off more than it can chew. Assigning all of China’s people a social credit rating that weighs up and scores every aspect of their behavior would not only be a gigantic technological challenge but also thoroughly subjective — and could be extremely unpopular. “From a technological feasibility question to a political feasibility question, to actually get to a score, to roll this out across a population of 1.3 billion, that would be a huge challenge,” Creemers said. A target for hackers The Communist Party may be obsessed with control, but it is also sensitive to public opinion, and authorities were forced to backtrack after a pilot project in southern China in 2010 provoked a backlash. That project, launched in Jiangsu province’s Suining County in 2010, gave citizens points for good behavior, up to a maximum of 1,000. But a minor violation of traffic rules would cost someone 20 points, and running a red light, driving while drunk or paying a bribe would cost 50. Some of the penalties showed the party’s desire to regulate its citizens’ private lives — participating in anything deemed to be a cult or failing to care for elderly relatives incurred a 50-point penalty. Other penalties reflected the party’s obsession with maintaining public order and crushing any challenge to its authority — causing a “disturbance” that blocks party or government offices meant 50 points off; using the Internet to falsely accuse others resulted in a 100-point deduction. Winning a “national honor” — such as being classified as a model citizen or worker — added 100 points to someone’s score. On this basis, citizens were classified into four levels: Those given an “A” grade qualified for government support when starting a business and preferential treatment when applying to join the party, government or army; or applying for a promotion. People with “D” grades were excluded from official support or employment. The project provoked comparisons with the “good citizen cards” introduced by Japan’s occupying army in China in the 1930s. On social media, residents protested that this was “society turned upside down,” and it was citizens who should be grading government officials “and not the other way around.” The Suining government later told state media that it had revised the project, still recording social credit scores but abandoning the A-to-D classifications. Officials declined to be interviewed for this article. Despite the outcry in Suining, the central government seems determined to press ahead with its plans. Part of the reason is economic. With few people in China owning credit cards or borrowing money from banks, credit information is scarce. There is no national equivalent of the FICO score widely used in the United States to evaluate consumer credit risks. 
At the same time, the central government aims to police the sort of corporate malfeasance that saw tens of thousands of babies hospitalized after consuming adulterated milk and infant formula in 2008, and millions of children given compromised vaccines this year. Yet it is also an attempt to use the data to enforce a moral authority as designed by the Communist Party. The Cyberspace Administration of China wants anyone demonstrating “dishonest” online behavior blacklisted, while a leading academic has argued that a media blacklist of “irresponsible reporting” would encourage greater self-discipline and morality in journalism. Lester Ross, partner-in-charge of the Beijing office of law firm WilmerHale, says the rules are designed to stop anyone “stepping out of line” and could intimidate lawyers seeking to put forward an aggressive defense of their clients. He sees echoes of the Cultural Revolution, in which Mao Zedong identified “five black categories” of people considered enemies of the revolution, including landlords, rich farmers and rightists, who were singled out for struggle sessions, persecution and re-education. Under the social credit plan, the punishments are less severe — prohibitions on riding in “soft sleeper” class on trains or going first class in planes, for example, or on staying at the finer hotels, traveling abroad or sending children to the best schools — but nonetheless far-reaching. Xuecun’s criticism of the government won him millions of followers on Weibo, China’s equivalent of Twitter, until the censors swung into action. He fears the new social credit plan could bring more problems for those who dare to speak out. “My social-media account has been canceled many times, so the government can say I am a dishonest person,” he said. “Then I can’t go abroad and can’t take the train.” Under government-approved pilot projects, eight private companies have set up credit databases that compile a wide range of online, financial and legal information. One of the most popular is Sesame Credit, part of the giant Alibaba e-commerce company that runs the world’s largest online shopping platform. Tens of millions of users with high scores have been able to rent cars and bicycles without leaving deposits, company officials say, and can avoid long lines at hospitals by paying fees after leaving with a few taps on a smartphone. The Baihe online dating site encourages users to display their Sesame Credit scores to attract potential partners; 15 percent of its users do so. One woman, who works in advertising but declined to be named to protect her privacy, said she had used Baihe for more than two years. Looking for people who display good Sesame Credit scores helps her weed out scammers, she said. “First I will look at his photo, then I will look at his profile,” she said. “He has to use real-name authentication. But I will trust him and talk to him if he has Sesame Credit.” But it is far from clear that the system will be safe from scams. William Glass, a threat intelligence analyst at cybersecurity expert FireEye, says a centralized system would be both vulnerable and immensely attractive to hackers. “There is a big market for this stuff, and as soon as this system sets up, there is great incentive for cybercriminals and even state-backed actors to go in, whether to steal information or even to alter it,” he said. “This system will be the ground truth of who you are. But considering that all this information is stored digitally, it is certainly not immutable, and people can potentially go in and change it.”
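
The Suining pilot described in the card is concrete enough to sketch in code. The deduction and award values below are the ones reported above; the county never published its A-D grade cutoffs, so those bands are placeholders, and all names are hypothetical (a minimal illustrative sketch, not the actual system):

```python
# Minimal sketch of a Suining-style social credit ledger. Deduction and
# award values follow the figures reported in the card; the A-D grade
# cutoffs were never published, so those bands are invented placeholders.

DEDUCTIONS = {
    "minor_traffic_violation": 20,
    "running_red_light": 50,
    "drunk_driving": 50,
    "paying_bribe": 50,
    "cult_participation": 50,
    "neglecting_elderly_relatives": 50,
    "blocking_government_offices": 50,
    "false_online_accusation": 100,
}
AWARDS = {"national_honor": 100}  # e.g., classified as a model citizen or worker

def score(events, start=1000, cap=1000):
    """Start each citizen at 1,000 points, apply adjustments, clamp to [0, cap]."""
    total = start
    for event in events:
        total -= DEDUCTIONS.get(event, 0)
        total += AWARDS.get(event, 0)
    return max(0, min(total, cap))

def grade(points):
    # Placeholder bands: "A" citizens qualified for government support;
    # "D" citizens were excluded from official support or employment.
    for cutoff, letter in ((950, "A"), (850, "B"), (700, "C")):
        if points >= cutoff:
            return letter
    return "D"

s = score(["running_red_light", "minor_traffic_violation", "national_honor"])
print(s, grade(s))  # 1000 A: the honor offsets both penalties
```

Even this toy version makes Creemers’s subjectivity point visible: every number above is a policy choice, and shifting a single cutoff reclassifies citizens wholesale.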

China-US internet bifurcation now

Lee, May 2019, The end of Chimerica: The passing of global economic consensus and the rise of US–China strategic technological competition (2019), Professor John Lee is a senior fellow at the United States Studies Centre at the University of Sydney. He is also a senior fellow at the Hudson Institute in Washington DC. He served as senior adviser to Australian Foreign Minister Julie Bishop from 2016 to 2018, https://s3-ap-southeast-2.amazonaws.com/ad-aspi/2019-04/SI%20136%20The%20end%20of%20Chimerica.pdf?zquA2PovEXIFcI6_E30FH9PuLVx7PMHM
Prominent businesspeople, such as Google’s Eric Schmidt, have already predicted that there’ll be ‘two internets’, one led by the US and the other by China.47 The exclusion of Huawei from the 5G networks of the US, Australia and New Zealand is further evidence of bifurcation.

Government information-sharing requirements threaten the security of other countries’ internet networks if they use Chinese 5G equipment

Office of the Secretary of Defense, January 2019, https://media.defense.gov/2019/May/02/2002127082/-1/-1/1/2019_CHINA_MILITARY_POWER_REPORT.pdf, Annual Report to Congress: Military and Security Developments Involving the People’s Republic of China
Other plans address the development of various sectors of China’s robust Internet ecosystem to include cloud computing, the big data industry, e-commerce, and next-generation broadband wireless communications networks, including fifth-generation (5G) wireless networks. Due to information-sharing requirements with Chinese security services as stipulated in Chinese laws, worldwide expansion of Chinese-made equipment in 5G networks will challenge the security and resiliency of other countries’ networks.

Vs. Health Care

Turn – IoT medical devices create huge cyber security risks, opening networks to attack and risking patient deaths

Bill Siwicki, August 17, 2016, Healthcare IT News, 5 steps to cybersecurity for Internet of Things Medical Devices, https://www.healthcareitnews.com/news/5-steps-cybersecurity-internet-things-medical-devices
The healthcare industry is plagued with data breaches and other cybersecurity nightmares. At the same time, connected medical devices – components of the so-called Internet of Things – are multiplying, opening more holes in security and creating terrible potential for patient casualties. Without doubt, unsecured medical devices currently are putting hospitals and patients at risk, according to “Healthcare’s IoT Dilemma: Connected Medical Devices,” a new report from Forrester Research analyst Chris Sherman. “You have less control over connected medical devices than any other aspect of your technology environment,” the report said. “Many times, vendors control patch and update cycles, and vulnerabilities persist that require segmentation from your network. Considering that many of these devices are in direct contact with patients, this is a major cause for concern.” Additionally, medical devices are vulnerable to four attack scenarios, the report said. “Threats against medical devices include denial-of-service (DoS), patient data theft, therapy manipulation and asset destruction,” the report said. “Each represents risk to your organization, with DoS currently being the most severe.”

Turn – data will be used to discriminate against those with health problems

Kelsey Finch, Westin Research Fellow, International Association of Privacy Professionals, 2015, “Welcome to the Metropticon: Protecting Privacy in a Hyperconnected Town,” FORDHAM URBAN LAW JOURNAL v. 41, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2549&context=ulj
In urban and nonurban settings, big data analysis exacerbates concerns about unfairness and discrimination.118 It allows for granular distinctions to be made between individual characteristics, preferences, and activities. Reports by the Federal Trade Commission, for example, indicate that data brokers regularly categorize consumers by inferred interests, sorting them in categories like “Dog Owner,” “Winter Activity Enthusiast,” “Expectant Parent,” or “Diabetes Interest,” and into age-, ethnicity- and income-focused categories like “Urban Mixers” (which includes “a high concentration of Latinos and African Americans with low incomes”) or “Rural Everlasting” (which includes “single men and women over the age of 66 with ‘low educational attainment and low net worths’”).119 Big data analytics can help mask discriminatory intent behind apparently innocuous mirrors and proxies.120 For example, disparate policies based on location can implicate redlining, the act of denying or increasing the cost of services to residents of neighborhoods comprised mostly of minorities.121 Urban preemptive policing schemes are another area where data-driven policies could mask discriminatory agendas.
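
The card’s “innocuous proxy” mechanism can be made concrete with a toy sketch. All ZIP codes, surcharges, and customer records below are fabricated for illustration: the point is only that a pricing rule which never reads a protected attribute can still reproduce a group disparity when its input is correlated with that attribute.

```python
# Toy redlining example: quote() uses only ZIP code, a facially neutral
# input, yet residential segregation makes it a proxy for race.
# Every ZIP code, surcharge, and record here is fabricated.

BASE_PRICE = 100.0
SURCHARGED_ZIPS = {"60612", "48205"}  # hypothetical "high risk" areas

def quote(zip_code: str) -> float:
    """Price a service from ZIP code alone; no protected attribute is used."""
    return BASE_PRICE * (1.25 if zip_code in SURCHARGED_ZIPS else 1.0)

# A disparate-impact audit over synthetic customers:
customers = [
    {"zip": "60612", "group": "minority"},
    {"zip": "48205", "group": "minority"},
    {"zip": "60614", "group": "non-minority"},
    {"zip": "48009", "group": "non-minority"},
]
averages = {}
for c in customers:
    averages.setdefault(c["group"], []).append(quote(c["zip"]))
for group, prices in sorted(averages.items()):
    print(group, sum(prices) / len(prices))
# minority pays 125.0 on average vs 100.0: disparity with no explicit intent
```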

Turn – economic discrimination against the unhealthy

Scott Peppet, Professor of Law, University of Colorado School of Law, August 2015, Regulating the Internet of Things: First Steps, https://www.texaslrev.com/wp-content/uploads/2015/08/Peppet-93-1.pdf
First, subpart II(A) explores the ways in which the Internet of Things may create new forms of discrimination—including both racial or protected class discrimination and economic discrimination—by revealing so much information about consumers. Computer scientists have long known that the phenomenon of “sensor fusion” dictates that the information from two disconnected sensing devices can, when combined, create greater information than that of either device in isolation.32 Just as two eyes generate depth of field that neither eye alone can perceive, two Internet of Things sensors may reveal unexpected inferences. For example, a fitness monitor’s separate measurements of heart rate and respiration can in combination reveal not only a user’s exercise routine, but also cocaine, heroin, tobacco, and alcohol use, each of which produces unique biometric signatures.33 Sensor fusion means that on the Internet of Things, “every thing may reveal everything.” By this I mean that each type of consumer sensor (e.g., personal health monitor, automobile black box, or smart grid meter) can be used for many purposes beyond that particular sensor’s original use or context, particularly in combination with data from other Internet of Things devices. Soon we may discover that we can infer whether you are a good credit risk or likely to be a good employee from driving data, fitness data, home energy use, or your smartphone’s sensor data. This makes each Internet of Things device—however seemingly small or inconsequential—important as a policy matter, because any device’s data may be used in far-removed contexts to make decisions about insurance, employment, credit, housing, or other sensitive economic issues. Most troubling, this creates the possibility of new forms of racial, gender, or other discrimination against those in protected classes if Internet of Things data can be used as hidden proxies for such characteristics. In addition, such data may lead to new forms of economic discrimination as lenders, employers, insurers, and other economic actors use Internet of Things data to sort and treat differently unwary consumers. Subpart II(A) explores the problem of discrimination created by the Internet of Things, and the ways in which both traditional discrimination law and privacy statutes, such as the Fair Credit Reporting Act (FCRA),34 are currently unprepared to address these new challenges
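
Peppet’s “sensor fusion” point lends itself to a small sketch. Assuming two wearable streams that are individually ambiguous, combining them narrows the inference; the thresholds and the inferred label below are invented for illustration and are not real biometric signatures:

```python
# Toy "sensor fusion": each stream alone is ambiguous, but jointly they
# support a sensitive inference. Thresholds and labels are invented;
# real signatures (per the card) are far more complex.

from statistics import mean

def fuse(heart_rate_bpm, respiration_rpm):
    """Combine two individually innocuous wearable streams into one inference."""
    hr, rr = mean(heart_rate_bpm), mean(respiration_rpm)
    # An elevated heart rate alone could be exercise; slow breathing alone
    # could be sleep. Together they rule out the innocent explanations.
    if hr > 100 and rr < 10:
        return "flag: resting tachycardia with depressed respiration"
    return "no flag"

# One reading stream per sensor, sampled over the same time window:
print(fuse([108, 112, 110, 109], [8, 9, 8, 9]))
```

The policy upshot, as the card argues, is that any single device’s data can matter in far-removed contexts once it is combined with another stream.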

Turn – health data can be used to infer other characteristics, enabling discrimination that laws can’t control

Scott Peppet, Professor of Law, University of Colorado School of Law, August 2015, Regulating the Internet of Things: First Steps, https://www.texaslrev.com/wp-content/uploads/2015/08/Peppet-93-1.pdf
The Legal Problem: Antidiscrimination and Credit Reporting Law Is Unprepared.—There are two main legal implications of the possibility that everything may begin to reveal everything. First, will the Internet of Things lead to new forms of discrimination against protected classes, such as race? Second, will the Internet of Things lead to troubling forms of economic discrimination or sorting? a. Racial & Other Protected Class Discrimination.—If the Internet of Things creates many new data sources from which unexpected inferences can be drawn, and if those inferences are used by economic actors to make decisions, one can immediately see the possibility of seemingly innocuous data being used as a surrogate for racial or other forms of illegal discrimination. One might not know a credit applicant’s race, but one might be able to guess that race based on where and how a person drives, where and how that person lives, or a variety of other habits, behaviors, and characteristics revealed by analysis of data from a myriad of Internet of Things devices. Similarly, it would not be surprising if various sensor devices—a Fitbit, heart-rate tracker, or driving sensor, for example—could easily discern a user’s age, gender, or disabilities. If sensor fusion leads to a world in which “everything reveals everything,” then many different types of devices may reveal sensitive personal characteristics. As a result, the Internet of Things may make possible new forms of obnoxious discrimination. This is a novel problem and one that legal scholars are just beginning to recognize.241 I am not convinced that the most blatant and obnoxious forms of animus-based discrimination are likely to turn to Internet of Things data— if a decision maker wants to discriminate based on race, age, or gender, they likely can do so without the aid of such Internet of Things informational proxies. Nevertheless, the problem is worth considering because traditional antidiscrimination law is in some ways unprepared for these new forms of data. Racial and other forms of discrimination are obviously illegal under Title VII.242 Title I of the Americans with Disabilities Act (ADA) forbids discrimination against those with disabilities,243 and the Genetic Information Nondiscrimination Act (GINA) bars discrimination based on genetic inheritance.244 These traditional antidiscrimination laws leave room, however, for new forms of discrimination based on Internet of Things data. For example, nothing prevents discrimination based on a potential employee’s health status, so long as the employee does not suffer from what the ADA would consider a disability.245 Similarly, antidiscrimination law does not prevent economic sorting based on our personalities, habits, and character traits.246 Employers are free not to hire those with personality traits they don’t like; insurers are free to avoid insuring—or charge more to—those with risk preferences they find too expensive to insure; lenders are free to differentiate between borrowers with traits that suggest trustworthiness versus questionable character.247 As analysis reveals more and more correlations between Internet of Things data, however, this exception or loophole in antidiscrimination law may collapse under its own weight. A decision at least facially based on conduct—such as not to hire a particular employee because of her lack of exercise discipline—may systematically bias an employer against a certain group if that group does not or cannot engage in that conduct as much as others. 
Moreover, seemingly voluntary “conduct” may shade into an immutable trait depending on our understanding of genetic predisposition. Nicotine addiction and obesity, for example, may be less voluntary than biologically determined.248 The level of detail provided by Internet of Things data will allow such fine-grained differentiation that it may easily begin to resemble illegal forms of discrimination. Currently, traditional antidiscrimination law has not yet considered these problems.

AI Turn

5G makes AI work

Ernst & Young, 2019, China is poised to win the 5G race, https://www.ey.com/Publication/vwLUAssets/ey-china-is-poised-to-win-the-5g-race-en/$FILE/ey-china-is-poised-to-win-the-5g-race-en.pdf
And for perhaps the first time in China’s modern history, Huawei’s growing market share and technological prowess are putting a champion of the Chinese government in a position to dominate a next-generation technology. 5G will offer hugely faster data speeds than today’s mobile technology, which is important for consumers. But 5G will also be the technology that ensures artificial intelligence functions seamlessly, that driverless cars don’t crash, that machines in automated factories can communicate flawlessly in real time around the world, and that nearly every device on earth will be wired together.

China wants to expand global data collection to enhance AI

Nigel Inkster (2019) The Huawei Affair and China’s Technology Ambitions, Survival, 61:1, 105-111, DOI: 10.1080/00396338.2019.1568041, Nigel Norman Inkster CMG is the former director of operations and intelligence for the British Secret Intelligence Service, and is currently the Director of Transnational Threats and Political Risk at the International Institute for Strategic Studies
Irrespective of the outcome of the Meng case or the trade talks set up in Buenos Aires, it is clear that strategic competition between the US and China will shape the geopolitics of the twenty-first century in ways that are difficult to predict. A key battleground is the high-technology sector. As a result of US actions, China can be expected to redouble efforts to enhance capabilities where it cannot yet fly solo, while simultaneously seeking to make rapid progress in areas such as artificial intelligence, where its ability to collect and utilise large quantities of data could potentially confer a significant advantage. China will also probably continue to expand its global digital presence and to bind other states to its vision for the future through the provision of equipment and training. At present, a modus vivendi between two competing ideologies and values systems looks elusive.

Ceding AI dominance causes China to disrupt US military superiority, enabling massive increases in effectiveness across the board.

Michael C. Horowitz, 18, [Michael C. Horowitz – former Emory Debater and associate professor of political science and the associate director of Perry World House at the University of Pennsylvania, “The promise and peril of military applications of artificial intelligence”, Bulletin of the Atomic Scientists, 4-23-2018, https://thebulletin.org/landing_article/the-promise-and-peril-of-military-applications-of-artificial-intelligence/] Valiaveedu
The promise of AI—including its ability to improve the speed and accuracy of everything from logistics and battlefield planning to human decision making—is driving militaries around the world to accelerate research and development. Here’s why. Artificial intelligence (AI) is having a moment in the national security space. While the public may still equate the notion of artificial intelligence in the military context with the humanoid robots of the Terminator franchise, there has been a significant growth in discussions about the national security consequences of artificial intelligence. These discussions span academia, business, and governments, from Oxford philosopher Nick Bostrom’s concern about the existential risk to humanity posed by artificial intelligence to Tesla founder Elon Musk’s concern that artificial intelligence could trigger World War III to Vladimir Putin’s statement that leadership in AI will be essential to global power in the 21st century. What does this really mean, especially when you move beyond the rhetoric of revolutionary change and think about the real world consequences of potential applications of artificial intelligence to militaries? Artificial intelligence is not a weapon. Instead, artificial intelligence, from a military perspective, is an enabler, much like electricity and the combustion engine. Thus, the effect of artificial intelligence on military power and international conflict will depend on particular applications of AI for militaries and policymakers. What follows are key issues for thinking about the military consequences of artificial intelligence, including principles for evaluating what artificial intelligence “is” and how it compares to technological changes in the past, what militaries might use artificial intelligence for, potential limitations to the use of artificial intelligence, and then the impact of AI military applications for international politics. The potential promise of AI—including its ability to improve the speed and accuracy of everything from logistics to battlefield planning and to help improve human decision-making—is driving militaries around the world to accelerate their research into and development of AI applications. For the US military, AI offers a new avenue to sustain its military superiority while potentially reducing costs and risk to US soldiers. For others, especially Russia and China, AI offers something potentially even more valuable—the ability to disrupt US military superiority. National competition in AI leadership is as much or more an issue of economic competition and leadership than anything else, but the potential military impact is also clear. There is significant uncertainty about the pace and trajectory of artificial intelligence research, which means it is always possible that the promise of AI will turn into more hype than reality. Moreover, safety and reliability concerns could limit the ways that militaries choose to employ AI. What kind of technology is artificial intelligence?
Artificial intelligence represents the use of machines, or computers, to simulate activities thought to require human intelligence, and there are different AI methods used by researchers, companies, and governments, including machine learning and neural networks. Existing work on the trajectory of AI technology development suggests that, even among AI researchers, there is a great deal of uncertainty about the potential pace of advances in AI. While some researchers believe breakthroughs that could enable artificial general intelligence (AGI) are just a few years away, others think it could be decades, or more, before such a breakthrough occurs. Thus, this article focuses on “narrow” artificial intelligence, or the application of AI to solve specific problems, such as AlphaGo Zero, an AI system designed to defeat the game “Go.” From a historical perspective, it is clear that AI represents a broad technology with the potential, if optimists about technology development are correct, to influence large swaths of the economy and society, depending on the pace of innovation. It is something that could be part of many things, depending on the application, rather than a discrete piece of equipment, such as a rocket or an airplane. Thus, AI is better thought of, for military purposes, as an enabler. What could artificial intelligence mean for militaries? What might militaries do with artificial intelligence, though, and why is this important for international politics? Put another way, what challenges of modern warfare might some militaries believe that artificial intelligence can help them solve? Three potential application areas of AI illustrate why militaries have interest. First, the challenge for many modern militaries when it comes to data is similar to that faced by companies or government in general—there is often too much data, and it is hard to process it fast enough. Narrow AI applications to process information offer the potential to speed up the data interpretation process, freeing human labor for higher level tasks. For example, Project Maven in the United States military seeks to use algorithms to more rapidly interpret imagery from drone surveillance feeds. This type of narrow AI application for militaries has clear commercial analogues and could go well beyond image recognition. From image recognition to processing of publicly available or classified information databases, processing applications of AI could help militaries more accurately and quickly interpret information, which could lead to better decision making. Second, from hypersonics to cyber-attacks, senior military and civilian leaders believe the speed of warfare is increasing. Whether you think about it in terms of an OODA (observe, orient, decide, act) loop or simply the desire to attack an enemy before they know you have arrived, speed can provide an advantage in modern wars. Speed is not just about the velocity of an airplane or a munition, however. Speed is about decision-making. Just as with remotely-piloted systems, aircraft “piloted” by AI and freed from the limitations of protecting a human pilot, could trade many of the advantages that come with human pilots in the cockpit for speed and maneuverability. In the case of air defense, for example, operating at machine speed could enable a system to protect a military base or city more effectively when facing saturation attacks of missiles than a person, whose reflexes could be overwhelmed, no matter how competent he or she is. 
This is already the principle under which Israel’s Iron Dome system operates. Third, AI could enable a variety of new military concepts of operation on the battlefield, such as the oft-discussed “loyal wingman” idea, which posits a human airplane pilot or tank driver who could coordinate a number of uninhabited assets as well. The more complicated the battlespace, however, the more useful it will be for those “wingmen” to have algorithms that help them respond in cases where the coordinating human controller cannot directly guide them. Swarms, similarly, will likely require AI for coordination. Clearly, militaries have incentives to research potential applications of AI that could improve military effectiveness. These incentives are not simply a matter of competitive pressure from other militaries; there are internal political and bureaucratic reasons that impel countries toward autonomous weapon systems. For democracies such as the United States, autonomous systems in theory offer the potential to achieve tasks at lower cost and risk to human personnel.
For example, the US Army Robotics and Autonomous Systems Strategy, published in 2017, specifically references the ability of autonomous systems to increase effectiveness at lower cost and risk. For more autocratic regimes such as China and Russia, AI means control. AI systems could allow autocratic nations to reduce their reliance on people, allowing them to operate their militaries while relying on a smaller, more loyal, part of the population. This discussion of military applications of AI is broader than the question of lethal autonomous weapon systems, which the Convention on Certain Conventional Weapons of the United Nations has debated for several years. One application of AI for military purposes might be the creation of autonomous systems with the ability to use lethal force, but there are many others. Barriers to effective uses of artificial intelligence. Military adoption of AI faces both technological and organizational challenges, and some are the types of first-order concerns about safety and reliability that could derail the enterprise so the vaunted AI-based transformation of modern militaries never really occurs. These technological challenges fall into two broad categories: internal reliability and external exploitation. The specific character of narrow AI systems means they are trained for very particular tasks, whether that is playing chess or interpreting images. In warfare, however, the environment shifts rapidly due to fog and friction, as Clausewitz famously outlined. If the context for the application of a given AI system changes, AI systems may be unable to adapt. This fundamental brittleness thus becomes a risk to the reliability of the system. AI systems deployed against each other on the battlefield could generate complex environments that go beyond the ability of one or more systems to comprehend, further accentuating the brittleness of the systems and increasing the potential for accidents and mistakes. The very nature of AI, which means a machine determining the best action and taking it, may make it hard to predict the behavior of AI systems. For example, when AlphaGo defeated Lee Sedol, one of the best Go players in the world, the second game included a moment when AlphaGo made a move so unusual that Sedol left the room for 15 minutes to consider what had just happened. It turned out that the move was simply something that even an elite human player would not consider, but the machine had figured out. That shows the great potential of AI to improve on decision-making processes. However, militaries run based on reliability and trust—if human operators, whether in a command center or on the battlefield, do not know exactly what an AI will do in a given situation, it could complicate planning, making operations more difficult and accidents more likely. The challenge of programming an AI system for every possible contingency can also undermine reliability. Take an AI system trained to play the game Tetris. The researchers that developed it discovered that the AI had trained itself to pause the game anytime it was about to lose, to fulfill the command that instructed it to maximize the probability of victory with every move. This adaptation by the AI reflects behavioral uncertainty beyond what most militaries would tolerate. Challenges with bias and appropriate training data could further make reliability difficult. Explainability represents another challenge for AI systems. 
It is important for a system to not just be reliable, but be explainable in a way that allows others to have trust. If an AI system behaves a certain way in classifying an image or avoiding adversary radars, but cannot output why it made a particular choice, humans may be less likely to trust it. Reliability is not simply a matter of AI system design. Warfare is a competitive endeavor, and just as militaries and intelligence organizations attempt to hack and disrupt the operations of potential adversaries in peacetime and wartime today, the same would likely be true of a world with AI systems, whether those systems were in a back office in Kansas or deployed on a battlefield. Researchers have already demonstrated the way that image recognition algorithms are susceptible to pixel-level poisoned data that leads to classification problems. Algorithms trained on open-source data could be particularly vulnerable to this challenge as adversaries attempt to “poison” the data that other countries might even be plausibly using to train algorithms for military purposes. This adversarial data problem is significant. Hacking could also lead to the exploitation of algorithms trained on more secure networks, illustrating a critical interaction between cybersecurity and artificial intelligence in the national security realm. When will militaries use artificial intelligence? A key aspect often lost in the public dialogue over AI and weapons is that militaries will not generally want to use AI-based systems unless they are appreciably better than existing systems at achieving a particular task, whether it is interpreting an image, bombing a target, or planning a battle. Given these problems of safety and reliability, which are amplified in a competitive environment in which adversaries attempt to disrupt each other’s systems, what promise exists for AI in the military context? Militaries are unlikely to stop researching AI applications simply because of these safety problems. But these safety and reliability problems could influence the types of AI systems developed, as well as their integration into “regular” military operational planning. Consider three layers of military technological integration—development, deployment, and use. At each of these stages, the potential promise of improved effectiveness, lower risk to human soldiers, and lower cost will compete with the challenges of safety and reliability to influence how militaries behave. Military research in AI is occurring in a competitive environment. China’s aggressive push into AI research across the board has many in the United States worrying, for example, about China surpassing the US in AI capabilities. Given the possible upsides of AI integration, many militaries will fear being left behind by the capacities of other actors. No country wants its fighters, ships, or tanks to be at risk from an adversary swarm, or simply adversary systems that react and shoot faster in a combat environment. Despite the build-up, we are still at the outset of the integration of AI into military systems. 
Missy Cummings, director of Duke’s Humans and Autonomy Lab, argues that despite increases in research and development of autonomous systems by militaries around the world, progress has been “incremental” and organizations are “struggling to make the leap from development to operational implementation.” At the development stage, testing and integration activities should reveal potential safety and reliability challenges and make that a key area of investment for military applications of AI. Now, militaries may decide that those risks are acceptable in some cases, because of the risk of falling behind and the belief that they can correct programming challenges on the fly. Essentially, militaries will weigh the trade-off between reliability and capability in the AI space, with AI systems potentially offering greater capabilities, but with reliability risks. When it comes to deploying or using AI systems, militaries may weigh considerations specific to a particular conflict. As the stakes of a conflict go up, if a military views defeat as more likely, it will naturally become more risk acceptant in its deployment of technologies with great promise, but also with reliability concerns. In contrast, as the stakes decline and a military believes it can win while taking less technological risk, the deployment of AI systems will lag until militaries believe they are as reliable or more reliable than existing systems. The history of military and commercial technology development suggests both reasons for caution and reasons to believe that safety and reliability problems may not lead to a full halt in AI military integration—nor should they, from the perspective of militaries. Karl Benz produced the first motor vehicle powered by a gasoline-powered combustion engine in 1886. This occurred a generation after the first patents were filed for the gas-powered internal combustion engine, and it took another generation after Benz’s automobile for the car to overtake the horse as a dominant means of transportation. What took so long? The answer is safety and reliability. Challenges faced by the early internal combustion engine included reliability, costs, a lack of standardized machinery, manufacturing inconsistency, and constant breakdowns because of the complexity of the automobiles the combustion engine was powering. This familiar story has been repeated whenever new technologies are invented. They face significant safety and reliability issues, while promising greater capabilities relative to the status quo. Sometimes, as with the combustion engine or the airplane, those issues are overcome. Other times, as with military uses of airships or dirigibles, the safety issues loom so large that the technology never becomes reliably more effective than alternatives. AI and the future of war. The consequences of AI for the world will likely exceed the consequences for military power and the future of war. The impact of automation on the future of work could have massive economic and societal consequences that will occur regardless of choices that militaries make about whether to develop, or not, AI applications for specific military areas. Emmanuel Macron, the president of France, recently argued that AI will disrupt business models and jobs at a scale that requires a new French AI strategy to ensure French leadership in AI development. He sounded the alarm, though, on one potential military application of AI, stating that he is “dead against” using AI to kill on the battlefield. Yet France’s intelligence community is already using AI to improve the speed and reliability of data processing, believing that this can help improve the performance of the French military on the battlefield. The potential promise of AI, despite safety and reliability concerns, means leading militaries around the world will certainly see the risks of standing still. From data processing to swarming concepts to battlefield management, AI could help militaries operate faster and more accurately, while putting fewer humans at risk. Or not. The safety and reliability problems endemic to current machine learning and neural network methods mean that adversarial data, among other issues, will present a challenge to many military applications of AI. Senior leaders in the United States national security establishment seem well aware of the risks for the United States in an era of technological change. Secretary of Defense James Mattis recently stated that “it’s still early” but that he is now questioning whether AI could change “the fundamental nature of war.” Yet US investments in AI are relatively modest compared to China, whose national AI strategy is unleashing a wave of investment in Chinese academic, commercial, and military circles. The United States may be more self-aware than Great Britain was a century ago about the ways that broader technological changes could influence its military superiority, but that does not guarantee success.
This is especially true when one considers that the impact of any technology depends principally on how people and organizations decide to use it. From the way the longbow broke the power of mounted knights to the way naval aviation ended the era of the battleship, history is littered with great powers that thought they were still in the lead—right up until a major battlefield defeat. We simply do not know whether the consequences of AI for militaries will be at a similar scale. Given the degree of uncertainty even within the AI research community about the potential for progress, and the safety and reliability challenge, it is possible that, two decades from now, national security analysts will recall the AI “fad.” But given its breadth as a technology, as compared to specific technologies like directed energy, and the degree of commercial energy and investment in AI, it seems more likely that the age of artificial intelligence is likely to shape, at least to some extent, the future of militaries around the world.

Independently, AI dominance is key to military power projection

Todd Probert 18, [Todd Probert, vice president of Mission Support and Modernization for Raytheon Company’s Intelligence, Information and Services business, “US Military Dominance Requires Better Command-and-Control Tools,” Defense One, 4-17-2018, https://www.defenseone.com/ideas/2018/04/us-military-dominance-requires-better-command-and-control-tools/147491/] Valiaveedu
To maintain its position as the world’s dominant military, the U.S. needs new command-and-control technologies that can fully connect and put to use the capabilities of every asset available, regardless of service or domain. These new tools will need to be quickly upgradeable – often on the fly – and resilient enough so commanders can trust the data as it comes in and goes out to individual platforms and units. Forward-thinking leaders are starting to get serious about this need. In a speech to the Air Force Association’s annual Air Warfare Symposium in Orlando, Florida in February, Air Force Chief of Staff Gen. David Goldfein said, “If we are going to fight and win in wars of cognition, we’ve got to ask a different series of questions before starting an acquisition program on any platform, any sensor or any weapon. Does it connect? Good. Does it share? Better. Does it learn? Perfect.” The services have long experimented with the ability to integrate and command platforms from across the services. The post-Vietnam AirLand Battle doctrine and changes in joint operations forced by the 1986 Goldwater-Nichols Act broke domain stovepipes and prompted commanders to fuse the individual services into a single, warfighting whole. But in an era when potential adversaries are catching up with and finding ways to nullify U.S. military capability, our current methods of combining air, land, and sea power — to say nothing of the electronic domain — are no longer good enough. In order to fully connect and integrate the future force, the U.S. military must accelerate the adoption of autonomy, machine learning and artificial intelligence to increase the speed at which data is processed, information distributed and warfighting decisions made. Fortunately, recent breakthroughs in these technologies promise to dramatically improve the ability to connect platforms and shrink the data-to-decision timeline. We are reaching a point where commercial software companies can develop tools and algorithms that allow commanders to make warfighting decisions nearly instantaneously, across every domain, and using whichever platforms can be networked into a battle management system, regardless of service or manufacturer. These software tools won’t remove commanders from the decision-making process; they will simply help them collect and make sense of data in a way that allows for fully informed decisions quicker than a room full of human planners ever could.
Vs. Smart Cities/Driverless Cars

Smart cities are electronic panopticons that are vulnerable to hackers, likely to malfunction, exclude the poor, and discriminate against protected classes

Kelsey Finch, Westin Research Fellow, International Association of Privacy Professionals, 2015, “Welcome to the Metropticon: Protecting Privacy in a Hyperconnected Town,” FORDHAM URBAN LAW JOURNAL v. 41, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2549&context=ulj
Although privacy advocates may yet stand in for Jane Jacobs and other social reformers in this modern urban planning debate, it is far from clear that smart cities are mere panaceas. Smart cities bring cutting-edge monitoring, big data analysis, and innovative management technologies to the world of urban planning, promising to make cities “more livable, more efficient, more sustainable, and perhaps more democratic.”7 Of course, “clever cities will not necessarily be better ones.”8 There is a real risk that, rather than standing as “paragons of democracy, they could turn into electronic panopticons in which everybody is constantly watched.”9 They are vulnerable to attack by malicious hackers or malfunction in their complex systems and software, and they furnish new ways to exclude the poor and covertly discriminate against protected classes.

Smart cities will melt down

Kelsey Finch, Westin Research Fellow, International Association of Privacy Professionals, 2015, “Welcome to the Metropticon: Protecting Privacy in a Hyperconnected Town,” FORDHAM URBAN LAW JOURNAL v. 41, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2549&context=ulj
In his book, Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia, Anthony Townsend expressed alarm about smart cities becoming “buggy, brittle and bugged.”136 As with any complex interrelated technological infrastructure, smart city systems are vulnerable to attacks by hackers or software bugs causing extended blackouts, massive traffic jams, communications shutdowns, or wasteful water spills.137 Given that any device connected to the Internet is exposed to cyberattack, smart cities multiply the potential for security breaches that could impact critical systems. Over the past few years, cyberattacks on supervisory control and data acquisition (SCADA) systems have multiplied in number and sophistication, ranging from a sole hacker disrupting a water utility in Illinois138 to powerful nation states launching a crippling assault on a nuclear reactor in Iran.139

Cyber attacks will wreck smart cities

Kelsey Finch, Westin Research Fellow, International Association of Privacy Professionals, 2015, “Welcome to the Metropticon: Protecting Privacy in a Hyperconnected Town,” FORDHAM URBAN LAW JOURNAL v. 41, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2549&context=ulj
Cyberattacks will affect not only security, but also privacy. Critics argue that “[t]he quest to centrali[z]e the distributed and messy yet highly resilient intelligence of existing cities within a single network or piece of software appears quixotic at best.”140 With millions of citizens, commuters, and visitors interacting with multiple systems to create trillions of data points each day, smart cities will generate a deafeningly noisy data exhaust. This, in turn, will spawn huge quantities of incomplete, imprecise, and conflicting data; biased sampling; and outliers, inevitably yielding correlations that imply spurious causation. Pulling actionable conclusions out of the noise could be daunting. Further, while in other areas such as Netflix film recommendations, the potential harm of an erroneous inference may be as small as an evening wasted, in urban management it could lead to diverting city resources away from the needy or performing unjustified arrests.

Smart cities destroy privacy

Kelsey Finch, Westin Research Fellow, International Association of Privacy Professionals, 2015, “Welcome to the Metropticon: Protecting Privacy in a Hyperconnected Town,” FORDHAM URBAN LAW JOURNAL v. 41, https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=2549&context=ulj, p. 1582
Today, once again a diverse array of urban planners, businesses, technologists, academics, governments, and consumers have begun to join their voices in support of the newest revolution in urban planning: the smart city. Driven by the technological promise of the Internet of Things (the increasing array of objects and devices that communicate with each other over the network) and the intelligent planning systems of big data (the enhanced ability to collect, store, and process massive troves of information), smart city initiatives are equally, if not more, disruptive to the urban existence of today as slum-clearing urban renewal efforts were in the previous century. Smart city technologies thrive on constant, omnipresent data flows captured by cameras and sensors placed throughout the urban landscape. These devices pick up all sorts of behaviors, which can now be cheaply aggregated, stored, and analyzed to draw personal conclusions about city dwellers.5 This ubiquitous surveillance threatens to upset the balance of power between city governments and city residents, and to destroy the sense of privacy and urban anonymity that has defined urban life over the past century.

Smart cities could undermine choices that give meaning to our lives

Courtney Humphries, The Too-Smart City, BOSTON GLOBE (May 19, 2013), available at https://www.bostonglobe.com/ideas/2013/05/18/the-too-smartcity/q87J17qCLwrN90amZ5CoLI/story.html.
Take that smart-parking system at BU. The convenience of hassle-free parking requires people to park only in the spot reserved for them by the system’s software. But, modest as it may sound, the freedom to park where we want—to make our own small daily choices—helps give our lives meaning, and the challenges of city life may well be part of what makes urban existence so attractive. The orderly, manageable city is a vision with enduring appeal, from Plato’s Republic to Songdo, an entirely new smart city constructed near Seoul. But there’s an equally compelling vision of the city as a chaotic and dynamic whirl of activity, an emergent system, an urban jungle at once hostile and full of possibility—a place to lose oneself. Hill points out that efficiency isn’t the reason we like to live in cities, and it’s not the reason we visit them. Tourists come to Boston for the bustling charm of the North End, not the sterile landscape of Government Center. In a city where everything can be sensed, measured, analyzed, and controlled, we risk losing the overlooked benefits of inconvenience. It’s as if cities are one of the last wild places, and one that we’re still trying to tame.
Post-humanism turn

Wearable technology enables post-humanism

David Rose, professor, MIT Media Lab, 2015, Enchanted Objects: Innovation, Design, and the Future of Technology, Kindle edition, page number at end of card
A second possible future is prosthetics—wearable technology. This trajectory locates technology on the person, to fortify and enhance us with more capabilities, to, in a sense, give us superpowers. To make humans superhuman or, indeed, “posthuman.” This path of embedded wearability has some great benefits. I’m inspired, for example, when I see how prosthetics can restore physical capabilities to people who have lost them, enabling people—once considered “disabled”—to walk and run as they couldn’t before, or to see or hear with range or precision they had lost or never had. However, when companies talk about a future of implants and ingestibles for everyone, I get queasy. Like plastic surgery, this future seems irreversible, fraught with unforeseen consequences, and prone to regret rather than enchantment. An early and well-known tech prosthetic was the Sony Walkman. Rose, David. Enchanted Objects: Innovation, Design, and the Future of Technology (pp. 11-12). Scribner. Kindle Edition.
CONTINUES IN FUTURE CHAPTER
THE SINGULARITY AND TRANSHUMANISM As much as we may take this message to heart, the fundamental human drive for perfect health and longevity is as powerful as ever. Humans today are no less prone to believe in potions and elixirs, magic stones, fountains, and youth-giving springs than our ancestors. We now call them medicines and treatments, surgeries and preventions, diet and wellness regimens. Technology plays a huge role in the fulfillment of this drive. Ray Kurzweil, one of the most visible advocates of long life, the “singularity,” and transhumanism (both deal with the eventual fusion of man and machine in everlasting, cyborglike beings), is working on a variety of ways to combat disease and prolong life. He predicts that, within fifteen years, life expectancy will be increasing by one year every year. This will largely be accomplished by steadily eliminating the most common causes of death, such as heart disease and diabetes. (Kurzweil suffers from glucose intolerance and reportedly takes some 150 supplements a day to control it.) Kurzweil predicts that by 2030 or so “we’ll be putting millions of tiny, single-purpose robots called nanobots inside our bodies to augment our immune system and wipe out disease. One scientist has already cured type 1 diabetes in rats with a blood-cell-size device.” 4 By 2050, Kurzweil says, our entire body might be composed of nanobots and we will be completely disease-free. That may not be eternal life, but it ensures a longevity well beyond our current calculation of it. In the quest for immortality, fantasy and the technology overlap in odd ways. Did Michael Jackson really have an oxygen chamber in which he slept to boost his capacities? A man called FM-2030 (born Fereidoun M. Esfandiary) was a transhumanist, author, professor, and former basketball player. Like Kurzweil, he envisioned a new world in which technology and its spread would fundamentally alter our basic human functionings and assumptions. He hoped to live to be at least a hundred, which would have meant celebrating his centenary birthday in 2030. “The name 2030 reflects my conviction that the years around 2030 will be a magical time,” FM-2030 said in an interview on NPR. “In 2030 we will be ageless and everyone will have an excellent chance to live forever. Twenty thirty is a dream and a goal.” 5 He died in 2000, from cancer, and now resides in cryogenic suspension at a facility in Arizona. 6 Most of us do not genuinely yearn for immortality. We’ve seen how hard it is for vampires to cope, at least after those first couple hundred years of constant fun. Philosopher Stephen Cave, author of Immortality, talks about four paths to prolonging life: Staying Alive, Resurrection, Soul, and Legacy. The first of these—staying alive—is where we put a good deal of our daily energy today, and it is the one in which technology and enchanted objects most come into play. Cave argues that the promise of defeating disease and debilitation has “never been more widespread than today. . . . A host of well-credentialed scientists and technologists believe that longevity liftoff is imminent.” 7 I am less concerned with “longevity liftoff” than I am with finding ways for enchanted objects to help us achieve maximum well-being by taking advantage of the health boons already available to us. Rose, David. Enchanted Objects: Innovation, Design, and the Future of Technology (p. 117). Scribner. Kindle Edition.

Human extinction

Allen L. Scarbrough, 2016, The Singularity: Where Man and Machine Meet, Kindle Edition
The singularity, as it is called by others, is coming. It may not be for several decades or it may arrive by 2025, but it will come as surely as the sun rises. Man has set into motion a monster that he can only slightly control and will eventually be unable to control. Let us hope that we are not swept aside by a new form of living entity that considers us the way we have considered most other things, that being as a nuisance or something to exploit. Robots will be able to come up with new forms of torture more painful than anything a human has ever invented using their advanced artificial intelligence? I say, not yet, but it may come to that in the future if we are to survive. In a best case scenario robots and humans will live as hybrids or in symbiotic bliss, but how often has perfection been achieved in human endeavors? We may, to save our own necks, have to stop Scarbrough, Allen L.. The Singularity: Where Man And Machine Meet (Kindle Locations 77-79). UNKNOWN. Kindle Edition.