A.I. Evidence

Universal and Transferable Adversarial Attacks on Aligned Language Models

AI will kill everyone

The Impact of AI and Misplaced Catastrophic Anxieties

The Week, June 17, 2023, https://theweek.com/artificial-intelligence/1024341/ai-the-worst-case-scenario, AI: The worst-case scenario

Artificial intelligence’s architects warn it could cause human “extinction.” How might that happen? Here’s everything you need to know: What are AI experts afraid of? They fear that AI will become so superintelligent and powerful that it becomes autonomous and causes mass social disruption or even the eradication of the human race. More than 350 AI researchers and engineers recently issued a warning that AI poses risks comparable to those of “pandemics and nuclear war.” In a 2022 survey of AI experts, the median odds they placed on AI causing extinction or the “severe disempowerment of the human species” were 1 in 10. “This is not science fiction,” said Geoffrey Hinton, often called the “godfather of AI,” who recently left Google so he could sound a warning about AI’s risks. “A lot of smart people should be putting a lot of effort into figuring out how we deal with the possibility of AI taking over.” When might this happen? Hinton used to think the danger was at least 30 years away, but says AI is evolving into a superintelligence so rapidly that it may be smarter than humans in as little as five years. AI-powered ChatGPT and Bing’s Chatbot already can pass the bar and medical licensing exams, including essay sections, and on IQ tests score in the 99th percentile — genius level. Hinton and other doomsayers fear the moment when “artificial general intelligence,” or AGI, can outperform humans on almost every task. Some AI experts liken that eventuality to the sudden arrival on our planet of a superior alien race. You have “no idea what they’re going to do when they get here, except that they’re going to take over the world,” said computer scientist Stuart Russell, another pioneering AI researcher. How might AI actually harm us? One scenario is that malevolent actors will harness its powers to create novel bioweapons more deadly than natural pandemics. As AI becomes increasingly integrated into the systems that run the world, terrorists or rogue dictators could use AI to shut down financial markets, power grids, and other vital infrastructure, such as water supplies. The global economy could grind to a halt. Authoritarian leaders could use highly realistic AI-generated propaganda and Deep Fakes to stoke civil war or nuclear war between nations. In some scenarios, AI itself could go rogue and decide to free itself from the control of its creators. To rid itself of humans, AI could trick a nation’s leaders into believing an enemy has launched nuclear missiles so that they launch their own. Some say AI could design and create machines or biological organisms like the Terminator from the film series to act out its instructions in the real world. It’s also possible that AI could wipe out humans without malice, as it seeks other goals. How would that work? AI creators themselves don’t fully understand how the programs arrive at their determinations, and an AI tasked with a goal might try to meet it in unpredictable and destructive ways. A theoretical scenario often cited to illustrate that concept is an AI instructed to make as many paper clips as possible. It could commandeer virtually all human resources to the making of paper clips, and when humans try to intervene to stop it, the AI could decide eliminating people is necessary to achieve its goal. A more plausible real-world scenario is that an AI tasked with solving climate change decides that the fastest way to halt carbon emissions is to extinguish humanity. 
“It does exactly what you wanted it to do, but not in the way you wanted it to,” explained Tom Chivers, author of a book on the AI threat. Are these scenarios far-fetched? Some AI experts are highly skeptical AI could cause an apocalypse. They say that our ability to harness AI will evolve as AI does, and that the idea that algorithms and machines will develop a will of their own is an overblown fear influenced by science fiction, not a pragmatic assessment of the technology’s risks. But those sounding the alarm argue that it’s impossible to envision exactly what AI systems far more sophisticated than today’s might do, and that it’s shortsighted and imprudent to dismiss the worst-case scenarios. So, what should we do? That’s a matter of fervent debate among AI experts and public officials. The most extreme Cassandras call for shutting down AI research entirely. There are calls for moratoriums on its development, a government agency that would regulate AI, and an international regulatory body. AI’s mind-boggling ability to tie together all human knowledge, perceive patterns and correlations, and come up with creative solutions is very likely to do much good in the world, from curing diseases to fighting climate change. But creating an intelligence greater than our own also could lead to darker outcomes. “The stakes couldn’t be higher,” said Russell. “How do you maintain power over entities more powerful than you forever? If we don’t control our own civilization, we have no say in whether we continue to exist.” A fear envisioned in fiction Fear of AI vanquishing humans may be novel as a real-world concern, but it’s a long-running theme in novels and movies. In 1818’s “Frankenstein,” Mary Shelley wrote of a scientist who brings to life an intelligent creature who can read and understand human emotions — and eventually destroys his creator. In Isaac Asimov’s 1950 short-story collection “I, Robot,” humans live among sentient robots guided by three Laws of Robotics, the first of which is to never injure a human. Stanley Kubrick’s 1968 film “2001: A Space Odyssey” depicts HAL, a spaceship supercomputer that kills astronauts who decide to disconnect it. Then there’s the “Terminator” franchise and its Skynet, an AI defense system that comes to see humanity as a threat and tries to destroy it in a nuclear attack. No doubt many more AI-inspired projects are on the way. AI pioneer Stuart Russell reports being contacted by a director who wanted his help depicting how a hero programmer could save humanity by outwitting AI. No human could possibly be that smart, Russell told him. “It’s like, I can’t help you with that, sorry,” he said.
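To make the paper-clip scenario above concrete, here is a deliberately minimal sketch (our illustration, not anything from The Week’s article): an optimizer scores actions only by the stated objective, so any harm the objective omits simply never registers. All action names and numbers are hypothetical.

```python
# Toy "specification gaming" demo: the reward counts paper clips and
# nothing else, so side effects are invisible to the optimizer.
actions = {
    # action: (paper_clips_produced, side_effect)
    "run_factory_normally":      (1_000, "none"),
    "buy_more_wire":             (5_000, "none"),
    "convert_all_steel_to_wire": (9_000_000, "collapses every other industry"),
}

def reward(action: str) -> int:
    """The stated objective: maximize paper clips, full stop."""
    clips, _ignored_side_effect = actions[action]
    return clips

best = max(actions, key=reward)
print(best, actions[best])
# -> convert_all_steel_to_wire (9000000, 'collapses every other industry')
```

The optimizer “does exactly what you wanted it to do, but not in the way you wanted it to”: the catastrophic side effect never enters the reward calculation.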

AI will eliminate 80% of jobs

Eugenia Logiuratto, May 8, 2023, Yahoo News, https://news.yahoo.com/ai-could-replace-80-jobs-211900514.html, AI could replace 80% of jobs ‘in next few years’: expert

Artificial intelligence could replace 80 percent of human jobs in the coming years — but that’s a good thing, says US-Brazilian researcher Ben Goertzel, a leading AI guru. Mathematician, cognitive scientist and famed robot-creator Goertzel, 56, is founder and chief executive of SingularityNET, a research group he launched to create “Artificial General Intelligence,” or AGI — artificial intelligence with human cognitive abilities. With his long hair and leopard-print cowboy hat, Goertzel was in provocateur mode last week at Web Summit in Rio de Janeiro, the world’s biggest annual technology conference, where he told AFP in an interview that AGI is just years away and spoke out against recent efforts to curb artificial intelligence research. – As smart as humans? – Q: How far are we from artificial intelligence with human cognitive abilities? “If we want machines to really be as smart as people and to be as agile in dealing with the unknown, then they need to be able to take big leaps beyond their training and programming. And we’re not there yet. But I think there’s reason to believe we’re years rather than decades from getting there.” – AI risk – Q: What do you think of the debate around AI such as ChatGPT and its risks? Should there be a six-month research pause, as some people are advocating? “I don’t think we should pause it because it’s like a dangerous superhuman AI… These are very interesting AI systems, but they’re not capable of becoming like human level general intelligences, because they can’t do complex multi-stage reasoning, like you need to do science. They can’t invent wild new things outside the scope of their training data. “They can also spread misinformation, and people are saying we should pause them because of this. That’s very weird to me. Why haven’t we banned the internet? The internet does exactly this. It gives you way more information at your fingertips. And it spreads bullshit and misinformation. “I think we should have a free society. And just like the internet shouldn’t be banned, we shouldn’t ban this.” – Threat to jobs – Q: Isn’t their potential to replace people’s jobs a threat? “You could probably obsolete maybe 80 percent of jobs that people do, without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature, which are going to follow in the next few years. “I don’t think it’s a threat. I think it’s a benefit. People can find better things to do with their life than work for a living… Pretty much every job involving paperwork should be automatable. “The problem I see is in the interim period, when AIs are obsoleting one human job after another… I don’t know how (to) solve all the social issues.” – AI positives – Q: What can robots do for society today, and what will they be able to do in the future, if AGI is achieved? “You can do a lot of good with AI. “Like Grace, (a robot nurse) we showcased at Web Summit Rio. In the US, a lot of elderly people are sitting lonely in old folks’ homes. And they’re not bad in terms of physical condition — you have medical care and food and big-screen TV — but they’re bad in terms of emotional and social support. So if you inject humanoid robots into it, that will answer your questions, listen to your stories, help you place a call with your kids or order something online, then you’re improving people’s lives. Once you get to an AGI, they’ll be even better companions. “In that case, you’re not eliminating human jobs. 
Because basically, there’s not enough people who want to do nursing and nursing assistant jobs. “I think education will also be an amazing market for humanoid robots, as well as domestic help.” – Regulation – Q: What regulation do we need for AI to have a positive impact? “What you need is society to be developing these AIs to do good things. And the governance of the AIs to be somehow participatory among the population. All these things are technically possible. The problem is that the companies funding most of the AI research don’t care about doing good things. They care about maximizing shareholder value.”

Poverty kills

Brian Contreras, April 23, 2023, https://www.managedhealthcareexecutive.com/view/poverty-is-the-fourth-leading-cause-of-death-in-the-united-states-study-finds, Managed Health Care Executive, Poverty Is the Fourth Leading Cause of Death in the United States, Study Finds

In the United States, poverty was found to be the fourth greatest cause of death. This threat to one’s health is also reported to be far higher in the U.S. compared to other countries. In a recently published research letter in JAMA Network by the University of California, Riverside, in 2019 there were roughly 183,000 deaths associated with poverty in the U.S. among people 15 years and older. This is a significant result as the data is from the year prior to the COVID-19 pandemic, when death rates skyrocketed. To get a better idea of the association between poverty and death in the U.S., researchers analyzed the Panel Study of Income Dynamics 1997-2019 data merged with the Cross-National Equivalent File. Deaths reported in surveys were validated in the National Death Index, a database kept by the National Center for Health Statistics, which tracks deaths and their causes in the U.S. Household income was measured by all income sources including cash and near-cash transfers, taxes and tax credits and was adjusted for household size. With use of leading standards in international poverty research, poverty was measured relatively as less than 50% of the median income. Current poverty was observed simultaneously in each year, and cumulative poverty was the proportion of the past 10 years. Broken down, among those aged 15 years or older, 183,000 deaths were associated with current poverty and 295,431 deaths were associated with cumulative poverty. Current poverty was associated with greater mortality than major causes, such as accidents, lower respiratory diseases, and stroke. Only heart disease, cancer, and smoking were associated with a greater number of deaths than cumulative poverty. Obesity, diabetes, drug overdoses, suicides, firearms, and homicides, among other common causes of death, were less lethal than poverty. “Poverty kills as much as dementia, accidents, stroke, Alzheimer’s, and diabetes,” said David Brady, the study’s lead author and a UCR professor of public policy. “Poverty silently killed 10 times as many people as all the homicides in 2019. And yet, homicide, firearms and suicide get vastly more attention.” Another finding is that people living in poverty – those with incomes less than 50% of the U.S. median income — have roughly the same survival rates until they hit their 40s, after which they die at significantly higher rates than people with more adequate incomes and resources. In addition, these findings have major policy implications, the researchers say. “Because certain ethnic and racial minority groups are far more likely to be in poverty, our estimates can improve understanding of ethnic and racial inequalities in life expectancy,” the paper reads. Brady said the study shows that poverty should get more attention from policymakers as it not only causes great emotional suffering but also comes at a great cost. “If we had less poverty, there’d be a lot better health and well-being, people could work more, and they could be more productive,” Brady said. “All of those are benefits of investing in people through social policies.” According to a White House brief, over the last three decades, American families have experienced a rise in the costs of many necessities that has made it difficult for them to attain economic security. It was estimated, for example, 80% of families saw the share of budgets dedicated to spending on needs such as housing and healthcare increase by more than 7 percentage points between 1984 and 2014. 
Further, a 2019 Pew survey found that 37% of Americans worry about the cost of healthcare for themselves and their families.
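The study’s two poverty measures are easy to make concrete. The sketch below is ours, not the authors’ code: it applies the definitions as reported (relative poverty is income below 50% of the median; cumulative poverty is the share of the past 10 years spent poor). The square-root equivalence scale is an assumption, since the article says only that income was “adjusted for household size.”

```python
import statistics

def equivalized_income(household_income: float, household_size: int) -> float:
    # Square-root equivalence scale -- our assumption; the article says only
    # that income was "adjusted for household size".
    return household_income / household_size ** 0.5

def cumulative_poverty_share(yearly_poverty_flags: list[bool]) -> float:
    # Cumulative poverty: proportion of the past 10 years spent in poverty.
    return sum(yearly_poverty_flags) / len(yearly_poverty_flags)

# Hypothetical households: (total income, household size)
households = [(36_000, 4), (24_000, 1), (62_000, 2), (42_000, 1), (110_000, 2)]
incomes = [equivalized_income(inc, size) for inc, size in households]

# The study's relative standard: income below 50% of the median.
threshold = 0.5 * statistics.median(incomes)  # median 42,000 -> threshold 21,000
print([round(x) for x in incomes if x < threshold])   # [18000]: the 4-person household
print(cumulative_poverty_share([True] * 3 + [False] * 7))  # 0.3: poor 3 of last 10 years
```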

AI delivers poor medical care, destroys democracy, and causes mass death in warfare

Frederik Federspiel, May 9, 2023, Global Health and Development, London School of Hygiene & Tropical Medicine, London, UK, BMJ Global Health, Threats by artificial intelligence to human health and human existence, https://gh.bmj.com/content/bmjgh/8/5/e010435.full.pdf

Abstract

While artificial intelligence (AI) offers promising solutions in healthcare, it also poses a number of threats to human health and well-being via social, political, economic and security-related determinants of health. We describe three such main ways misused narrow AI serves as a threat to human health: through increasing opportunities for control and manipulation of people; enhancing and dehumanising lethal weapon capacity and by rendering human labour increasingly obsolescent. We then examine self-improving ‘artificial general intelligence’ (AGI) and how this could pose an existential threat to humanity itself. Finally, we discuss the critical need for effective regulation, including the prohibition of certain types and applications of AI, and echo calls for a moratorium on the development of self-improving AGI. We ask the medical and public health community to engage in evidence-based advocacy for safe AI, rooted in the precautionary principle.

Introduction

Artificial intelligence (AI) is broadly defined as a machine with the ability to perform tasks such as being able to compute, analyse, reason, learn and discover meaning.1 Its development and application are rapidly advancing in terms of both ‘narrow AI’ where only a limited and focused set of tasks are conducted2 and ‘broad’ or ‘broader’ AI where multiple functions and different tasks are performed.3

AI holds the potential to revolutionise healthcare by improving diagnostics, helping develop new treatments, supporting providers and extending healthcare beyond the health facility and to more people.4–7 These beneficial impacts stem from technological applications such as language processing, decision support tools, image recognition, big data analytics, robotics and more.8–10 There are similar applications of AI in other sectors with the potential to benefit society.

However, as with all technologies, AI can be applied in ways that are detrimental. The risks associated with medicine and healthcare include the potential for AI errors to cause patient harm,11 12 issues with data privacy and security13–15 and the use of AI in ways that will worsen social and health inequalities by either incorporating existing human biases and patterns of discrimination into automated algorithms or by deploying AI in ways that reinforce social inequalities in access to healthcare.16 One example of harm accentuated by incomplete or biased data was the development of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.17 Facial recognition systems have also been shown to be more likely to misclassify gender in subjects who are darker-skinned.18 It has also been shown that populations who are subject to discrimination are under-represented in datasets underlying AI solutions and may thus be denied the full benefits of AI in healthcare.
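The pulse-oximeter failure described above is, mechanically, a subgroup-calibration problem: aggregate accuracy can look acceptable while error concentrates in one group. The audit sketch below is our illustration with synthetic numbers, not data from reference 17; it simply shows the pattern such a check would surface.

```python
# Subgroup bias audit: compare mean device error per group.
# All readings are synthetic, purely to illustrate the failure mode.
readings = [
    # (group, true_SpO2, device_SpO2)
    ("lighter_skin", 88, 89), ("lighter_skin", 92, 91), ("lighter_skin", 86, 87),
    ("darker_skin",  88, 93), ("darker_skin",  90, 95), ("darker_skin",  85, 91),
]

def mean_bias(group: str) -> float:
    errors = [device - true for g, true, device in readings if g == group]
    return sum(errors) / len(errors)

for group in ("lighter_skin", "darker_skin"):
    print(group, f"{mean_bias(group):+.1f}")
# lighter_skin +0.3, darker_skin +5.3: a consistent overestimate for one
# group means its hypoxia is under-detected -- the undertreatment described.
```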

Although there is some acknowledgement of the risks and potential harms associated with the application of AI in medicine and healthcare,11–16 20 there is still little discussion within the health community about the broader and more upstream social, political, economic and security-related threats posed by AI. With the exception of some voices,9 10 the existing health literature examining the risks posed by AI focuses on those associated with the narrow application of AI in the health sector.11–16 20 This paper seeks to help fill this gap. It describes three threats associated with the potential misuse of narrow AI, before examining the potential existential threat of self-improving general-purpose AI, or artificial general intelligence (AGI) (figure 1). It then calls on the medical and public health community to deepen its understanding about the emerging power and transformational potential of AI and to involve itself in current policy debates on how the risks and threats of AI can be mitigated without losing the potential rewards and benefits of AI.

In this section, we describe three sets of threats associated with the misuse of AI, whether it be deliberate, negligent, accidental or because of a failure to anticipate and prepare to adapt to the transformational impacts of AI on society.

The first set of threats comes from the ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras, and to develop highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance. This ability of AI can be put to good use, for example, improving our access to information or countering acts of terrorism. But it can also be misused with grave consequences.

The use of this power to generate commercial revenue for social media platforms, for example, has contributed to the rise in polarisation and extremist views observed in many parts of the world.21 It has also been harnessed by other commercial actors to create a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour. Experimental evidence has shown how AI used at scale on social media platforms provides a potent tool for political candidates to manipulate their way into power,22 23 and it has indeed been used to manipulate political opinion and voter behaviour.24–26 Cases of AI-driven subversion of elections include the 2013 and 2017 Kenyan elections,27 the 2016 US presidential election and the 2017 French presidential election.28 29

When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts.

AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly. This is perhaps best illustrated by China’s Social Credit System, which combines facial recognition software and analysis of ‘big data’ repositories of people’s financial transactions, movements, police records and social relationships to produce assessments of individual behaviour and trustworthiness, which results in the automatic sanction of individuals deemed to have behaved poorly.30 31 Sanctions include fines, denying people access to services such as banking and insurance services, or preventing them from being able to travel or send their children to fee-paying schools. This type of AI application may also exacerbate social and health inequalities and lock people into their existing socioeconomic strata. But China is not alone in the development of AI surveillance. At least 75 countries, ranging from liberal democracies to military regimes, have been expanding such systems.32 Although democracy and rights to privacy and liberty may be eroded or denied without AI, the power of AI makes it easier for authoritarian or totalitarian regimes to be either established or solidified and also for such regimes to be able to target particular individuals or groups in society for persecution and oppression.30 33

The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS). There are many applications of AI in military and defence systems, some of which may be used to promote security and peace. But the risks and threats associated with LAWS outweigh any putative benefits.

Weapons are autonomous in so far as they can locate, select and ‘engage’ human targets without human supervision.34 This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.34–36 Lethal autonomous weapons come in different sizes and forms. But crucially, they include weapons and explosives that may be attached to small, mobile and agile devices (eg, quadcopter drones) with the intelligence and ability to self-pilot and capable of perceiving and navigating their environment. Moreover, such weapons could be cheaply mass-produced and relatively easily set up to kill at an industrial scale.36 37 For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.36
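The shipping-container figure in this paragraph can be sanity-checked with simple volume arithmetic. The container dimension below is our assumption (the paper cites the claim without giving dimensions):

```python
# Back-of-envelope check of "a million tiny drones in one shipping container".
container_m3 = 67.0   # assumed interior volume of a standard 40 ft container
drones = 1_000_000
cm3_per_drone = container_m3 * 1_000_000 / drones
print(f"{cm3_per_drone:.0f} cm^3 per drone")  # ~67 cm^3, roughly a 4 cm cube
```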

As with chemical, biological and nuclear weapons, LAWS present humanity with a new weapon of mass destruction, one that is relatively cheap and that also has the potential to be selective about who or what is targeted. This has deep implications for the future conduct of armed conflict as well as for international, national and personal security more generally. Debates have been taking place in various forums on how to prevent the proliferation of LAWS, and about whether such systems can ever be kept safe from cyber-infiltration or from accidental or deliberate misuse.34–36

The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology. Projections of the speed and scale of job losses due to AI-driven automation range from tens to hundreds of millions over the coming decade.38 Much will depend on the speed of development of AI, robotics and other relevant technologies, as well as policy decisions made by governments and society. However, in a survey of most-cited authors on AI in 2012/2013, participants predicted the full automation of human labour shortly after the end of this century.39 It is already anticipated that in this decade, AI-driven automation will disproportionately impact low/middle-income countries by replacing lower-skilled jobs,40 and then continue up the skill-ladder, replacing larger and larger segments of the global workforce, including in high-income countries.

While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, including harmful consumption of alcohol41–44 and illicit drugs,43 44 being overweight,43 and having lower self-rated quality of life41 45 and health46 and higher levels of depression44 and risk of suicide.41 47 An optimistic vision of a future where human workers are largely replaced by AI-enhanced automation would be one in which improved productivity lifts everyone out of poverty and ends the need for toil and labour. However, the amount of exploitation our planet can sustain for economic production is limited, and there is no guarantee that any of the added productivity from AI would be distributed fairly across society. Thus far, increasing automation has tended to shift income and wealth from labour to the owners of capital, and appears to contribute to the increasing degree of maldistribution of wealth across the globe.48–51 Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health.

The threat of self-improving artificial general intelligence

Self-improving general-purpose AI, or AGI, is a theoretical machine that can learn and perform the full range of tasks that humans can.52 53 By being able to learn and recursively improve its own code, it could improve its capacity to improve itself and could theoretically learn to bypass any constraints in its code and start developing its own purposes, or alternatively it could be equipped with this capacity from the beginning by humans.54 55
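The compounding logic in this paragraph can be expressed as a toy growth model (entirely our illustrative assumption, not anything in the paper): if each round of self-improvement raises the rate of improvement in proportion to current capability, growth accelerates far faster than ordinary exponential growth.

```python
# Toy model: capability improves by a rate that itself scales with capability.
capability = 1.0
base_rate = 0.5   # assumed improvement per step at capability 1.0

for step in range(1, 8):
    capability *= 1 + base_rate * capability  # the improver improves the improver
    print(f"step {step}: capability ~ {capability:,.1f}")
# Gains start modest (1.5, 2.6, 6.1, ...) and then explode within a few
# steps -- the intuition behind "improving its capacity to improve itself".
```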