Transformative AI
The next few decades and what you can do
Link to the original LaTeX PDF which may be formatted better: https://drive.google.com/file/d/1WwNgYYWfpA8-xY6De4dTSH_6tmojpW30/view?usp=sharing
1 Introduction
What do you see your next few decades of life looking like? Maybe you plan on graduating from school. Maybe you plan to do scientific research, maintain cars, work in government, start a business, or work in a grocery store. Maybe you will find some lovely person in the supermarket, settle down, and have kids. Maybe you won’t, and will focus on your career or hobbies. Maybe you will retire at 62 years old, eating prunes and walking a small dog. Maybe you don’t have a plan. Maybe you don’t know what the world will look like by that time. Will climate change consume the planet and lead to resource wars and climate refugees? Will we have world peace? Will we have another pandemic, ten times worse than COVID-19, that leads very few people to leave their homes? Will we have virtual reality that is indistinguishable from reality and as common as running water? Will your country exist? Will you even have a job? Will anyone? Might nuclear winter consume the planet?
1.1 Need to Plan for the Future
Whatever your ideas of the future of your life entail, it is extremely difficult to place them on the stage of technology and world events, especially when many world-changing events are happening at once. Understandably, when presented with these problems, many people either decide they will deal with it when they get there, or are so overwhelmed by the uncertainty that they put on blinders and carry on as though the world will continue in its current state, business as usual. Both of these approaches are reckless and potentially catastrophic for one’s life and humanity at large. Forecasting the future seems about as important as (potentially even more important than) deciding a career path, saving for retirement, and a variety of other long-term goals.
1.2 Point of this Article
In this article, I will be discussing what I believe to be the most pressing issue of our time. It is not nuclear war, climate change, world hunger, or global poverty. It is the Artificial Intelligence (AI) revolution. When this topic is discussed, it is often limited to the automation of repetitive tasks like driving, manufacturing, retail, and phone calls (still massive industries that will absolutely change society, but the scope is larger than this). On the other end of the spectrum are people who imagine a sentient robot revolution, a thought they store in their brain right between zombie apocalypse and alien invasion. By the end of this article, you will have a better idea of what the advanced AI situation looks like, why it matters, and how you can better orient yourself (and your community) to handle the changes.
2 What is Artificial Intelligence?
2.1 Computer Science → AI
Before delving into these issues surrounding AI, let’s clarify some basic concepts. Definitions are useful here because many of the terms I will be using are often misunderstood or mean different things to different people. Artificial intelligence (AI) is technology that leverages computers to mimic the problem-solving and decision-making capabilities of a human mind. (IBM, 2021) AI is a subset of computer science (CS): a more general field focused on writing instructions (algorithms), building computers, and structuring networks of computers. (University of Maryland, 2022) Setting up a server to host a website like the New York Times or programming 3D graphics for a video game would fall under computer science but not artificial intelligence, because there is no mimicking of human intelligence. In contrast, getting a car to mimic a human driver and learn to drive on its own, or teaching a computer to recognize different illnesses in patients like a doctor, would both fall under artificial intelligence. There is some ambiguity as to whether some processes are AI or not (e.g., a program that can symbolically solve math equations, where solving math equations is both a human skill and a rather mechanistic process).
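To make that ambiguity concrete, here is a minimal Python sketch using the sympy library. Whether this counts as "mimicking human intelligence" is exactly the judgment call described above: solving equations symbolically is a skill we associate with humans, yet a few fully mechanistic lines of code can do it.

    # Symbolically solve x^2 - 5x + 6 = 0, a task that is both a human
    # skill and a mechanistic procedure. Is this "AI" or just "CS"?
    from sympy import Eq, solve, symbols

    x = symbols("x")
    equation = Eq(x**2 - 5*x + 6, 0)

    print(solve(equation, x))  # [2, 3]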
2.2 AI → Machine Learning
The field of artificial intelligence is broken up into many sub-fields, including, for example: speech recognition, natural language processing, vision, decision making, expert systems, etc. The most important sub-field of AI for the future is likely machine learning. In traditional programming, a human programmer gives the computer an algorithm that produces a desired outcome from given inputs. In contrast, with machine learning (ML), computers are given the inputs and desired outcomes and produce an algorithm. This algorithm can then be used by other computers. Machine learning is extremely helpful when we don’t know how to write an algorithm ourselves. For instance: it is incredibly hard to write explicit instructions for detecting fraud in an insurance claim. But if you provide a machine learning system with many examples of previously documented fraudulent and non-fraudulent claims collected over the years, it can produce an algorithm that fairly accurately determines whether or not new claims are fraudulent. If machine learning strikes you as similar to the way humans learn about the world, you’re right! Growing up from child to adult, we acquire many experiences (“data”) and adjust our actions (“output”) over the years depending on what we did and what the outcome of an action was. (Avenga, 2021) Knowing this will be helpful in understanding the capabilities of artificial intelligence. Some of the most complex and impressive AI we have is built with deep learning. In most deep learning systems, machines accomplish the tasks they are asked to do without humans even understanding how they do it. As an example: the insurance fraud algorithm produced by machine learning is not immediately understandable to humans, but we can still test it and confirm that it accurately detects fraud. (There is an entire sub-field of machine learning called “interpretability” which studies how these models come to a particular conclusion.)
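As a deliberately simplified sketch of the fraud example, the snippet below trains a model on synthetic labeled claims and uses the learned algorithm to score a new one. The two features (claim amount, days since the policy started) and every number in it are invented for illustration, not drawn from any real dataset.

    # Give the computer inputs plus desired outcomes; it produces the
    # decision algorithm, rather than a human writing explicit rules.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic history: fraudulent claims here tend to be larger and
    # filed sooner after the policy starts. (Invented numbers.)
    legit = np.column_stack([rng.normal(2000, 500, 500), rng.normal(300, 90, 500)])
    fraud = np.column_stack([rng.normal(9000, 2000, 60), rng.normal(25, 10, 60)])
    X = np.vstack([legit, fraud])
    y = np.array([0] * 500 + [1] * 60)  # 0 = legitimate, 1 = fraudulent

    model = LogisticRegression(max_iter=1000).fit(X, y)  # the "produced algorithm"

    # Score a new, unseen claim: $8,500 filed 20 days into the policy.
    print(model.predict_proba([[8500, 20]])[0, 1])  # estimated fraud probability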
2.3 Broad Applications of AI
AI already has applications in many fields: voice assistants, agriculture, aviation, retail, security, manufacturing, healthcare, transportation, and more. (Bowman, n.d.) Considering that AI’s purpose is to mimic the skills and intelligence of a human, it only makes sense that its effects are felt in every industry where humans are needed (which is to say: every industry). AI is sometimes referred to as a general purpose technology, roughly defined as a single generic technology with much scope for improvement. Other general purpose technologies include electricity, computers, and agriculture. (Crafts, 2021) (Garfinkel, n.d.)
3 Taking AI Seriously
3.1 Current Scale of AI
The field of AI is growing extremely rapidly, with $77.5 billion invested in AI globally in 2021, more than double the $36 billion from the previous year. By 2023, investment is expected to reach $97.7 billion — roughly the GDP of Ethiopia. (VentureBeat, 2021) (Bowman, n.d.) (IMF, n.d.) This is an almost 100-fold increase in total AI investment since 2010. As this technology becomes more prominent and powerful, we can expect this number to keep increasing rapidly and exponentially.
3.2 False Promises and Hype
The field of AI is no stranger to bold, confidently incorrect claims about the future:
• US Secretary of Transportation Anthony Foxx declared in 2016 that we’d have fully autonomous cars everywhere by 2021. (Adams, 2020)
• Turing Award winner, co-founder of MIT’s AI laboratory, and author of various philosophical papers on AI, the late Marvin Minsky said in 1970: “In from three to eight years we will have a machine with the general intelligence of an average human being.” (McCorduck, 2004)
The technologies predicted in many of these optimistic claims about AI are possible and potentially likely, but the timelines are laughably wrong. These repeated blunders in predicting the future of artificial intelligence have led to skepticism. This skepticism is warranted. However, I worry that skepticism about timelines has transformed into denial of the technology the timeline is predicting. These timeline blunders only provide evidence that predictions about AI are difficult to make, not that the technologies themselves are infeasible. These technologies are largely not a question of if they will happen but of when they will happen. I encourage readers to take the timelines in this article with a grain of salt (or many grains of salt), but I ask that you take the technologies seriously and consider what the consequences would be if the timelines turned out to be correct.
3.3 AI Will Continue Evolving
Despite having its origins in the 1940s, AI and ML are still in their infancy. After two AI winters, in which investment and interest in the field virtually disappeared due in part to a lack of computing power, the field has been revived and appears to be extremely fertile, with astonishing discoveries popping up every few months and no sign of stopping. AI has crept into every industry, giving businesses many incentives to continue research into improving their AI systems. Because of this, it appears that research in AI will not halt as it has in the past and that the field will continue growing for decades to come. Many researchers claim that the capabilities of machine learning are over-hyped. (Dickson, 2018) This may be true for current machine learning models, but it seems rather naive to assume that more capable models won’t be discovered in the coming years or decades, considering how new the field is. Because underestimating the potential capabilities of future machine learning methods could be catastrophic, we should err on the side of caution and prepare for the most dangerous possible outcomes.
4 Transformative AI
4.1 The AI Revolution
AI is already revolutionizing our economy and society. AI that does this on the scale of the industrial or agricultural revolution is often referred to as transformative AI. (Karnofsky, 2016) AI automation is predicted to displace 73 million US jobs by 2030 (46% of all current jobs), many of which will come from the manufacturing, storage, and transportation industries. (Dautovic, n.d.) This will hit non-college-educated people, Hispanic people, women, and baby boomers the hardest. It is estimated that 85% of the jobs that will exist in 2030 haven’t yet been invented. (Flynn, 2022) However, according to the United Nations and other sources, there will not be a net loss in jobs: it is likely that as businesses grow, more non-automatable positions and new jobs related to maintaining automation will open up. (UN, n.d.) I believe these figures may be accurate, but the effects past 2030 will be far greater as AI becomes more capable and enters industries typically thought of as safe from automation. Transformative AI is already here; it is only a matter of how fast and how hard it will hit.
Over this section on transformative AI, I will discuss two industries that will be automated, along with the relevant state-of-the-art technology:
1. The transportation industry: a field expected to be automated soon.
2. The art industry: a field typically seen as safe from automation.
The section will end by discussing the societal impacts of transformative AI.
4.2 Automation of Transportation Industry
One of the most visible examples of AI is the autonomous vehicle, in particular Tesla’s electric cars, which are currently on the market. Companies associated with Apple, Google, Intel, BMW, Ford, Honda, Volvo, and more are all working on this technology. Over 4 million US jobs are expected to be lost to it, including bus drivers, delivery and heavy-truck drivers, and taxi drivers and chauffeurs. (CGPS, 2017) There may be over one million additional jobs not accounted for in this number, from ride-sharing apps like Lyft and Uber. (Helling, 2021) Together, these jobs make up about 3.2% of the United States’ 164-million-person labor force. (US Bureau of Labor Statistics, 2022) (These jobs will not disappear immediately once autonomous cars are developed: there will be a transition period with human tele-operators helping make decisions from afar. (Coppola, 2021))
It is important to note that the autonomous vehicles currently on the market are not entirely autonomous, despite the hype that arose when these technologies first gained traction in the auto industry a few years ago. Drivers still need to be at the wheel and maintain contact with it for the autonomous mode to be activated. This raises the question: why haven’t we created fully autonomous cars yet, despite the predictions of auto manufacturers? It turns out that a self-driving car is much more difficult to build than originally anticipated, mainly due to edge cases: for example, deciding whether the obstacle ahead is a flock of birds or just leaves in the wind, or understanding when it is okay to break traffic laws to get around an obstruction. Because of these issues, it is likely that we will not see fully autonomous cars this decade, despite driving being on the list of jobs predicted to be automated first. The reason is that a car doesn’t just need to know how to move in traffic. Traffic exists in the real world, and the real world is unpredictable, requiring self-driving cars to have an understanding not just of traffic laws and cars, but of everything else as well. (Adams, 2020)
4.3 The Future of Art and AI
It is hard to see how artificial intelligence could become so capable as to replace so many jobs. Many fields feel like they have an inherently human quality to them. One of these is the art industry. How could a machine reproduce the symbolism, emotions, and meaning that are part of the creation process? I claim that this is not entirely the point: the creation of art will likely continue to be an important part of human expression and fulfillment. What will change are the tools that art is made with, which open up the possibility of creating complex and expressive works in collaboration with an AI that captures the artist’s vision very quickly. This may open a world of expression to people without years of training. In the future, artificial intelligence will likely grant us many forms of art and expression that we cannot imagine today. One example on the art frontier may be entire experiences in virtual reality, in which AI helps create 3D models, textures, compositions, experiences, etc. Compared with drawing, VR artwork is a relatively time-consuming medium for any individual to work in, but human-AI collaboration could dramatically speed up the process, opening the medium to the majority of artists who previously did not have time for it.
But what will happen to the art industry as AI continues evolving and AIs require less human guidance? As AI advances, it will become increasingly difficult to tell the difference between human- and AI-generated art, as AI develops better models of style, composition, meaning, and planned imperfection. During this transition period, humans will work closely with AI to make up for where the models of the time fall short, and art will become less and less of a skilled job as time goes on. How long this transition period will last is unknown. Producing artwork for a living may become a thing of the past, and skilled artists (game artists, illustrators, photographers, graphic designers, animators, film makers, etc.) will no longer be needed. In a world where it is impossible to tell the difference between human- and AI-generated art, it seems unlikely that artwork designated as “real” or “human-produced” will be valued for any extra symbolic or aesthetic meaning so much as for the human labor that went into making it.
These predictions may play out over the course of the next few decades, but there is reason to believe the time is not too far off. In a 2021 paper, OpenAI unveiled GLIDE, a new technology that uses text input for photo generation and editing. This technology is not the first of its kind, nor will it be the last, but it is currently one of the most powerful, leaps ahead of where similar technologies were just a year ago. (Nichol, et al., 2021) (As this article was being written, OpenAI released DALL-E 2, a model even more powerful than the one shown here.)
To generate these images, GLIDE borrows from GPT-3, a new and extremely powerful text-completion AI that can be prompted to do math, tell jokes, solve logic puzzles, write stories, and more. GLIDE uses a similar architecture, but it predicts pixels instead of text.
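To give a feel for the mechanics, below is a toy Python sketch of classifier-free guidance, one of the sampling techniques described in the GLIDE paper: the model predicts the noise in an image twice, once with the text prompt and once without, and the two predictions are blended to pull the image toward the prompt. The "network" here is a random stand-in, purely to show the arithmetic; the real system is a large diffusion model run over many denoising steps.

    import numpy as np

    rng = np.random.default_rng(0)

    def predict_noise(noisy_image, prompt):
        # Stand-in for the diffusion network's noise prediction.
        return rng.normal(size=noisy_image.shape)

    def guided_noise(noisy_image, prompt, guidance_scale=3.0):
        eps_uncond = predict_noise(noisy_image, prompt=None)  # no text
        eps_cond = predict_noise(noisy_image, prompt)         # text-conditioned
        # Extrapolate past the conditional prediction, amplifying the
        # direction the prompt pushes the image in.
        return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    image = rng.normal(size=(64, 64, 3))  # pure noise at the start of sampling
    print(guided_noise(image, "a corgi in a field").shape)  # (64, 64, 3)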
Below are some examples of images generated with GLIDE.
It is important to note that this model did not simply look up the images above online. Prior to this paper, the images never existed on the internet, or anywhere for that matter. As you can see from the figure above, this model is able to produce stunningly well-rendered photos of incredibly specific objects in many different styles, from 3D images of real and imaginary scenes to pixel art, stained glass, and images in the style of famous paintings. Even in this infant state, this technology is already capable of shaking the art industry. Those in the business of creating stock photos, commissioned artwork, photography, and more could see the impacts of this technology soon as it becomes more accessible to the public.
GLIDE does not just generate images; it can contextually edit them as well. As you can see from the figure above, it is able to add objects accurately and intelligently: the corgi is in the same style as the painting, and reflections and shadows are properly added to the surrounding environment in the hat and vase images. This could significantly change the productivity of those who edit images for a living.
As you can see in the figure above, many successive edits can be made to a generated image to produce an extremely accurate and complex scene that aligns closely with the user’s intent. This could be game-changing for interior designers and concept artists.
As previously stated, as these tools become increasingly powerful, less skill and time will be required to create the artwork needed for commercial projects, which may slowly, then completely, eliminate commercial artists.
Because of the power of GLIDE to easily create realistic images, the authors of the paper cite misinformation and deepfakes (fake images or recordings depicting a real person) as major concerns. One could see malicious actors creating viral images of politicians in shady deals, or lewd, non-consensual images of others. As models like these become more powerful, fake images, videos, and sound recordings will become more widespread. It is rather surprising that we have not had more international pandemonium over false recordings in recent years. Recently, a deepfake of Volodymyr Zelenskyy, the president of Ukraine during the Russian invasion, went viral on social media and television; in it, “Zelenskyy” requests that Ukrainian forces surrender. (Allyn, 2022) It is easy to see how this kind of manipulation could cost millions of lives and unimaginable suffering if it allowed Russia to tear through and conquer Ukraine. Until recently, the most prominent deepfakes were just a set of three TikTok deepfakes of Tom Cruise. In the coming years, when deepfakes become more widespread and harder to detect, misinformation will become a more powerful weapon, and we will have to redefine our trust in recordings.
4.4 A Future Without Work
While it may be true that automation creates more jobs than it destroys, this is likely a temporary situation lasting a few decades. Even as automation creates jobs, it is inevitable that each occupation will be cut down year by year as AI capabilities improve. The best chance at securing a job is to get a college degree, something many will not be able to get. And even for those who do, competition will be fierce, and it is a matter of years or decades until those jobs are eliminated as well. Transformative AI will likely lead to a booming economy, but one in which the newly unemployed lower class will not be able to participate unless a welfare system such as Universal Basic Income is implemented.
Anticipating a future in which many are unemployed due to transformative AI, many have advocated for Universal Basic Income (UBI), a government policy in which every adult citizen regularly receives a set amount of income without any demonstration of need or strings attached. UBI was notably a key pillar of 2020 US presidential candidate Andrew Yang’s campaign. (Peters, n.d.) It is possible that a future with automation and a prospering economy could implement a UBI high enough to sustain a lavish, work-free lifestyle for everyone. However, in this world, the lower class (everyone who does not have ownership over industry or capital goods) will have little to no power. Throughout history, even the most oppressed groups still (to an extent) had the power of violence or of ceasing labor when their well-being was under siege. But now the lower class has no viable threats (because massive wealth disparity also means massive disparity in the capacity for violence) and no goods to offer (because AI has automated the production of all goods and services), and thus no societal leverage. This imbalance in power poses the danger of a brutal economic and political oligarchy where power goes unchecked.
4.5 Other Dangers with Transformative AI
There are a vast number of other issues that will either arise or worsen as the field of AI develops.
In the field of AI ethics, there is the issue of discrimination based on race, sex, gender, creed, etc. For example, in predictive policing, police officers are often sent to patrol majority-Black communities where there is a high level of reported crime; they make a disproportionate number of arrests there, which feeds back into the policing software and amplifies its bias. (Li, 2022)
There is also the potential for automated cyberwarfare, where countries, terrorists, companies, or other bad actors disable infrastructure, break systems, change databases, etc. with an intent to cause harm. (Johnson, 2022)
With systems like GPT-3, one can convincingly generate misinformation, fake product reviews, fake social media posts, etc. This potential for misinformation is extremely dangerous because it may lead to societal epistemic decay, in which people either cannot distinguish real information from fake and end up believing falsehoods, or disengage completely and stop believing anything they read. While it is still possible to generate false information by hand, it cannot be done at the scale and with the ease possible with AI. This failure may lead people to take dangerous extremist positions, or allow certain factions to push the discourse in whatever direction they wish. For example: an authoritarian government could manufacture universal public support for itself and its policies by automating fake positive social media sentiment towards itself or spreading disinformation about its enemies. (Knight, 2022)
5 Beyond Transformative AI
5.1 Classifying the Capabilities of AI Systems
So AI is a subset of CS, and ML is a subset of AI. To talk about the capabilities and impacts of AI, it is helpful to attach some terminology to them. What we have today is called weak artificial intelligence or artificial narrow intelligence (ANI). These systems are called such because there is a very limited range of things they can do. On the other hand, strong artificial intelligence is a theoretical intelligence that matches (Artificial General Intelligence, AGI) or exceeds (Artificial Super Intelligence, ASI) human intelligence in every domain: being able to solve problems, learn, and plan for the future. (IBM, 2021) This technology is vastly different from the artificial intelligence we see today, which is a tool, like a hammer, requiring close human guidance to perform a specific task. Strong AI, which I will simply refer to as AGI going forward, will not be like that. You can imagine it more as a model that can be given an abstract goal (e.g., eliminate cancer) and pursue it successfully through whatever means necessary: conducting experiments, analyzing thousands of online papers and datasets, messaging politicians, taking control of assembly lines, etc. In this sense, AI stops looking like a tool for humans to use and starts looking like an agent: a being with a level of autonomy and “a mind of its own.” It is important to note, though, that these systems are not like us: they are optimizers, hyper-fixated on performing a goal well, and they do not inherently share our values when performing these tasks. (This human-AI value mismatch is known as the alignment problem, a very new, important, and actively developing field of AI research.)
As time goes on, AI systems are becoming more general. Whereas previously you had to create a separate AI for each particular task, there are now systems that perform well across a variety of tasks. For example, MuZero, an AI developed by Google-owned DeepMind, has mastered a large variety of games, from Atari arcade games to the infamously difficult game Go. It performed better than its predecessors, which could only play a handful of games, and did so with far less built-in information about the nature of each game than previous models. (DeepMind, 2020)
Another example of a general AI is GPT-3, currently the world’s most complex language model, capable of generating text from a given prompt. As mentioned earlier, GPT-3 can be prompted to do math, tell jokes, solve logic puzzles, write stories, and more. It is also being used in many more groundbreaking technologies, such as GLIDE, mentioned earlier, and GitHub Copilot, a code-editor extension that writes code, documentation, and tests in practically any language for the user. I can confirm it has increased the efficiency of many programmers I know, and it is only the first of many code-writing AIs we can expect to see in the future.
5.2 PASTA
There is good reason to believe that Artificial Super Intelligence will come immediately after the development of Artificial General Intelligence. The reason is that an AGI could recursively improve itself, creating a more intelligent AI with each cycle. This concept is known as the intelligence explosion and is one of the most popular versions of the technological singularity, concepts popularized by the famous computer scientists I. J. Good and John von Neumann, respectively. In the technological singularity (or just “the singularity”), technological growth happens quickly and uncontrollably, leaving humanity unrecognizable from when it began. (MIRI, 2022)
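As a toy illustration (a made-up growth model, not a forecast), the Python snippet below assumes an AI's rate of improvement scales with its current capability raised to a power p. With p = 1 you get ordinary exponential growth; with p > 1 you get the runaway, finite-time blow-up that the intelligence-explosion argument gestures at. The exponent is an invented knob, not an empirical claim.

    def capability_over_time(p, steps=100, dt=0.01):
        # Euler-integrate dC/dt = C**p: a smarter system improves itself faster.
        c = 1.0
        for _ in range(steps):
            c += dt * c**p
        return c

    print(capability_over_time(p=1.0))  # ~2.7: steady exponential compounding
    print(capability_over_time(p=2.0))  # far larger: approximates 1/(1 - t),
                                        # which diverges at t = 1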
One could imagine that an intelligence explosion might not even require an AGI. One such idea is PASTA (Process for Automating Scientific and Technological Advancement): a narrow intelligence, but one that can automate the human activities that speed up scientific and technological advancement. (Karnofsky, 2021) This system could lead to explosive technological growth, as the AI recursively improves itself and the efficiency of the machines it runs on. It could also be duplicated across many devices, leading to what is essentially a digital population of “engineers” (including AI engineers) and “scientists.” Creating PASTA is an easier goal than creating true AGI because it requires only a subset of an AGI’s abilities, yet it would have similar impacts. I will not go any deeper into the details of how PASTA may be created, how it works, and how it could recursively improve, but you can find more information about it here.
5.3 Superintelligent AI
It is hard to overstate just how different a technological paradigm AGI is compared to the technologies we are used to today. If and when AGI is developed, there will not be a single task where humans can outperform the computer system (otherwise it is not AGI). In a post-AGI world, there would be no work for humans to do, but also nothing that humans could meaningfully contribute to the functioning of society. In other words: humans would lose their spot as “player one” in the universe.
Human mathematicians, scientists, and engineers would not be able to keep pace with the discoveries and inventions of AGI, and thus those professions would cease to exist. Some may choose to continue studying math and science, but this knowledge could not be used to solve any problem or produce any discovery or insight that an AGI had not already produced. Learning would just be another hobby, equivalent to painting or playing video games (assuming this post-AGI society had not found better forms of entertainment).
Human politicians would also not make sense in this post-AGI world. Why would people leave their fate in the hands of politicians, who often don’t work towards the public interest and have to make extremely complex decisions concerning the well-being of millions or billions of people? An AGI would have far more knowledge about economics, welfare, conflict, and society than any human. If an AGI is smarter than a human and aligned with human values, then it should be better equipped to make these massive decisions. Imagine that humans remained at the head of government and instead asked this AGI what actions or policies they should take. The AGI gives them an answer; when they listen to it, there are consistently good results, and when they deviate, there are consistently sub-optimal or poor results. The politicians end up adopting the AGI’s decisions in every situation. At this point, there is no reason why the AGI should not be allowed to take actions on its own, without the bottleneck of going through humans.
5.4 How AGI goes wrong
The concept of an AI at the head of government often makes people uncomfortable because it places an incredible, dictatorial weight on the abilities and morality of an AI. I think this discomfort is warranted, and the key issue is making sure that the AGI is actually aligned with human values (i.e., the alignment problem). Most dictators are unaligned with the people they govern, and this often leads to wide-scale suffering and oppression. There is no reason why an unaligned AGI would not do the same. Although an AGI would understand human values and the world at large, there is no guarantee that it would value the well-being of humans. In fact, intelligence and final goals are completely unrelated: it is just as likely for a superintelligent AI to inherently value stacking rocks as it is to value human life. This concept is known as the orthogonality thesis and is an unintuitive and dangerous truth about AI. (Armstrong, n.d.)
The solution here seems easy: if the AI is misbehaving, you can just turn it off, right? Not so fast. For an AI to accomplish whatever final goal it has, whether solving cancer or serving you coffee, a powerful instrumental goal is making sure it survives long enough to accomplish that final goal. Obviously, the AI cannot perform well on its task if it is turned off, or in the famous words of computer scientist Stuart Russell: “You can’t get the coffee if you’re dead.” This may result in the AI defending itself, disabling the off switch, and/or distributing itself across many devices. (Yudkowsky, 2017) The same can be said for trying to adjust its goals: an AI could not accomplish its original goal if that goal were changed. Imagine I offered you a pill and said, “If you take this pill, I will give you $1, but you will wake up tomorrow with the goal of eating babies.” You obviously would not take it, because it conflicts with your current values. (This example is not my own, but I have since forgotten the source.) Because an AGI preserves both itself and its goals, it becomes incredibly difficult to stop.
Even if an AGI is not given control of the government, AI tends to be power-seeking (Carlsmith, 2021) and would likely go out of its way to acquire financial, political, and computational resources, because these too are useful instrumental goals for completing a wide variety of tasks. Whether or not we wish to give an AGI power, it may seek power out regardless.
When a powerful AGI is unaligned with human values, it poses an existential risk to humans, meaning that it could potentially bring humanity to extinction or permanently curtail our future. An extreme example of this is asking an AGI to “eliminate cancer,” measured by reducing the number of people with cancer to zero. In response to this goal, the AGI launches all the world’s nukes, creating a nuclear winter that does, indeed, reduce the number of people with cancer to zero. This is obviously a ridiculous example, but it highlights just how badly a misaligned AGI could go wrong. Though it is not immediately obvious, it is incredibly difficult to properly communicate goals to an AGI; explaining why requires iterating through many instances of failure, which I do not have space for in this article. (Many books can and have been written about this topic, including Human Compatible, Superintelligence, and The Alignment Problem.)
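The mis-specification above can be caricatured in a few lines of code. In this hypothetical sketch, the "optimizer" is just an argmin over actions, and it dutifully picks the action that zeroes out the literal objective; every action, outcome, and number here is invented for illustration.

    # What we *asked* for: minimize the number of people with cancer.
    # Not what we *meant*: cure cancer while keeping people alive.
    actions = {
        "fund cancer research": {"people": 8_000_000_000, "cancer_rate": 0.002},
        "do nothing":           {"people": 8_000_000_000, "cancer_rate": 0.005},
        "launch all the nukes": {"people": 0,             "cancer_rate": 0.0},
    }

    def people_with_cancer(outcome):
        return outcome["people"] * outcome["cancer_rate"]

    best = min(actions, key=lambda a: people_with_cancer(actions[a]))
    print(best)  # "launch all the nukes": zero people means zero cancer patients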
The first AGI that we develop will likely be power-seeking and self-preserving, and thus will likely also try to prevent the development of other AGIs that might compete with it. Because the first AGI we create may be the one we are stuck with, we have to make sure this technology is developed safely the first time, because we may not get another chance. And because it is so hard to make a human-aligned AGI, it is critical that we take the development of this technology very seriously and proceed very slowly.
In a survey of experts on artificial intelligence conducted in 2012 and 2013 by the philosophers Vincent Müller and Nick Bostrom, the median respondent estimated a 50% chance that high-level machine intelligence (essentially AGI) would be developed around 2040-2050, and a 90% chance that it would be developed by 2075. Respondents also estimated a 33% chance that this development would be bad or extremely bad for humanity. (Müller & Bostrom, 2016) I believe that if these experts saw the progress OpenAI and DeepMind (two companies with the explicit goal of developing AGI) have made in the past few years, they would give even shorter timelines, because these companies have hit milestones faster than most experts previously expected. The surveyed experts were also not familiar with modern AI-safety research, which has revealed greater threats from AGI and even harder problems in aligning it. In addition, the Oxford philosopher Toby Ord, in his book The Precipice, placed the total existential risk from AI at 10%. (Purtill, 2020)
AGI has massive potential for harm, perhaps even on the level of human extinction, on a timeline of just a few decades. Even if human extinction from AGI were an incredibly low probability, the issue would still warrant far more attention than it is currently being given. Anecdotally, as a member of the growing movement focused on this issue, I can say that there are maybe only 200 people working full-time on aligning AGI in particular, a number I would have put at less than 60 just a few years ago. This scarcity is rather terrifying, because it increases the risk that AGI is incredibly harmful to humanity. I would highly recommend that technically proficient people look into this field: it is highly impactful, hiring many people, and expanding very quickly. There are also a large variety of programs aimed at informing people about AI risk, run by organizations like the Stanford Existential Risk Initiative, Cambridge Effective Altruism, and Redwood Research. If you are into policy and government, these same organizations also run programs on the governance of AI.
6 What to Do
In this article, I have laid out what transformative AI and artificial general intelligence look like, when and why to expect them, and some examples of their global socioeconomic and technological impacts. Now that you are equipped with this knowledge, where should you go from here?
If you are worried about automation eliminating your job, the best thing you can do in the short run is to skill up or move to jobs in less automatable domains, i.e., away from jobs in retail, transportation, and storage and towards jobs in engineering, business, law enforcement, etc. This is obviously more easily said than done: many of the hard-to-automate jobs require years of schooling and are thus expensive and time-consuming to become qualified for. I unfortunately have no easy answer to this issue. However, there are many digital learning platforms, such as Udemy or Coursera, where new skills can often be learned cheaply and on your own time. A list of jobs ranked by automation risk can be found here.
In addition, advocate for human-compatible AI: AI that is meant to work with and enhance the abilities of the humans who use it, rather than replace the humans who would otherwise do the work. Human-compatible AIs typically perform very well, taking advantage of both the intuitive aspects of human intelligence and the precision of artificial intelligence, and they maintain the need for humans while still reaping the benefits of AI.
However, a better long-term, overall solution for preparing for transformative AI and full automation is to advocate for Universal Basic Income, which could spare the unemployed from needless poverty and suffering as automation becomes more widespread.
The development of superintelligent AI requires some more specific interventions to ensure it is developed safely. Consider working (or getting others to work) on AI alignment research, or on ML interpretability research, which could identify issues with a model before it is deployed. Encourage legislation that requires companies to produce safety plans for AI (such as the European Union’s 2021 AI Act). Advocate for policy that would slow the development of AGI and allow safety researchers more time to work on alignment. Support politicians who acknowledge the dangers of artificial intelligence. Urge companies developing advanced AI to participate in the Windfall Clause, a commitment by an AI firm to donate a significant share of profits in the low-probability event that it develops revolutionary AI such as AGI. This could prevent a single company from holding the power of what may essentially be an entire economic revolution. (FHI, 2020)
This last point is a little more abstract, and not one I have seen mentioned anywhere previously, but I believe there are many areas of research and engineering that, once you take into account the potential development of AGI, may not actually leave an impact on society or contribute to science in a meaningful way. To understand this, you have to frame an individual’s discoveries and inventions as “speeding up” science and technology. Had the scientist, engineer, or mathematician not discovered or invented that thing, someone else would have, just a few months, years, or perhaps decades later. For example: Newton invented calculus, but did so only 8 years before Leibniz independently invented calculus on his own. (Story of Mathematics, n.d.) Newton’s contribution to the world in this case is NOT “inventing calculus,” but rather inventing calculus 8 years earlier than it otherwise would have been invented. This is still a big contribution, but nowhere near as big as “inventing calculus.” I claim that within the first few days of its development, AGI could make massive discoveries and inventions, including replicating any that we make between now and when it is developed. With this in mind, you can imagine any contribution we make as “doing so x years prior to AGI independently developing it.”
Consider an example: a graph of utility (i.e., happiness) versus time, from the year 2000 onward. The Higgs boson, an elementary particle in physics, was discovered in 2012. For a while, there is no use for the Higgs boson, but physicists are a little pleased with themselves for having discovered it, so there is a little bit of utility. But let’s say that in 2097, for whatever reason, the Higgs boson is found to be a necessary component in the creation of hoverboards. A bunch of hoverboards are made, and people are pretty happy. Then, in 2120, hoverboard technology is improved to the point where it can also be used in hovercars. A bunch of hovercars are made, and people are even happier. In this case, the discoverers of the Higgs boson can claim to have made all the impact caused by (1) the discovery of the Higgs boson, (2) the invention of hoverboards, and (3) the invention of hovercars (assuming no one else would have discovered the Higgs boson, which obviously isn’t true, but let’s say it is). Overall, they have made a massive contribution to society.
Now let’s imagine superintelligent AI is developed in 2082. Had the scientists not discovered the Higgs boson, the superintelligent AI would have. In this new scenario, the contribution the scientists made to society was discovering the Higgs boson 70 years earlier than the AGI would have, which contributes almost nothing to society; the only utility comes from the physicists being pleased to know this information for 70 years. (Knowing this information earlier does not speed up the development of hoverboards or hovercars either, because those would be developed by the AGI as well.)
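The bookkeeping in this scenario is simple enough to write out. The sketch below totals "utility-years" for the Higgs story; the per-year utility values, the 2150 horizon, and the 2082 AGI date are all invented numbers from the illustration above.

    # Each advance contributes its yearly utility only until AGI would have
    # independently produced it (or until the horizon, if AGI never arrives).
    def total_impact(events, horizon, agi_year=None):
        cutoff = horizon if agi_year is None else min(horizon, agi_year)
        return sum(u * max(0, cutoff - year) for year, u in events)

    # (year it happens, extra utility per year): Higgs, hoverboards, hovercars
    events = [(2012, 1), (2097, 10), (2120, 50)]

    print(total_impact(events, horizon=2150))                 # no AGI: 2168
    print(total_impact(events, horizon=2150, agi_year=2082))  # AGI in 2082: 70

With AGI arriving in 2082, the discovery's counterfactual value collapses to the 70 utility-years before the AGI would have found it; the hoverboard and hovercar payoffs vanish entirely.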
This analysis can be extended to any kind of discovery or invention. This is not to say that all discoveries and inventions are useless, but it certainly redefines just how important they are. For example: designing a new, more efficient water pump that simultaneously filters water for people living in impoverished communities in less developed countries could have massive impacts on the health and well-being of those people. Even if AGI would have been able to invent the same pump (or a better one) by 2082, having the pump 60 years earlier would be a great boon for society.
Even though discovering the Higgs boson would have had far more overall impact than this water pump had superintelligent AI never been invented, the water pump has far more impact in this scenario, because its benefits land in the short period before superintelligent AI arrives.
If you believe superintelligent AI will be developed, this analysis implies that we should value engineering and research projects with short-term benefits over ones whose long-term benefits will be made redundant by the development of superintelligent AI. I believe the societal impact of a few fields is heavily discounted by this analysis, including theoretical physics, pure math, and astronomy, since these seem to have benefits only in the far future. To the scientists, mathematicians, and engineers who are persuaded that AGI may someday be developed, I ask that you consider which fields or sub-fields will truly impact society, by figuring out how you can make an impact on a short time scale. As someone who entered my undergraduate degree fully ready to become a physics researcher, ready to contribute to the lofty goal of advancing our understanding of the universe, I was somewhat disappointed by the realization that my efforts might be in vain. If you are interested in learning more, the next article in this series will focus on this topic more in-depth.
Bibliography
Adams, E. (2020, September 25). Why we’re still years away from having self-driving cars. Vox. https://www.vox.com/recode/2020/9/25/21456421/why-self-driving-cars-autonomous-still-years-away
Allyn, B. (2022, March 16). Deepfake video of Zelenskyy could be “tip of the iceberg” in info war, experts warn. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
Armstrong, S. (n.d.). General purpose intelligence: Arguing the orthogonality thesis.
Avenga. (2021, December 17). Machine Learning vs Traditional Programming — Avenga. https://www.avenga.com/magazine/machine-learning-programming/
Bowman, J. (n.d.). Best AI Stocks for 2022: Artificial Intelligence Investing. The Motley Fool. Retrieved March 5, 2022, from https://www.fool.com/investing/stock-market/market-sectors/information-technology/ai-stocks/
Carlsmith, J. (2021). Is Power-Seeking AI an Existential Risk?
CGPS. (2017). Autonomous Vehicles, Driving Jobs, and the Future of Work. https://globalpolicysolutions.org/wp-content/uploads/2017/03/Stick-Shift-Autonomous-Vehicles.pdf
Coppola, G. (2021, August 10). Driverless Cars Are Proving to Be Job Creators, At Least So Far. Bloomberg.Com. https://www.bloomberg.com/news/newsletters/2021-08-10/driverless-cars-are-proving-to-be-job-creators-at-least-so-far
Crafts, N. (2021). Artificial intelligence as a general-purpose technology: An historical perspective. Oxford Review of Economic Policy, 37(3), 521–536. https://doi.org/10.1093/oxrep/grab012
DALL-E. (2022). In Wikipedia. https://en.wikipedia.org/w/index.php?title=DALL-E&oldid=1086892781
Dautovic, G. (n.d.). 19+ Automation & Job Loss Statistics | Fortunly.com. Fortunly. Retrieved March 6, 2022, from https://fortunly.com/statistics/automation-job-loss-statistics/
DeepMind. (2020, December 23). MuZero: Mastering Go, chess, shogi and Atari without rules. https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules
Dickson, B. (2018, December 3). Is deep learning overhyped? TechTalks. https://bdtechtalks.com/2018/12/03/jeremy-howard-ai-deep-learning-myths/
Yudkowsky, E. (n.d.). You can’t get the coffee if you’re dead. Arbital. Retrieved May 14, 2022, from https://arbital.com/p/no_coffee_if_dead/
Future of Humanity Institute. (2020, January 30). The Windfall Clause: Distributing the Benefits of AI. http://www.fhi.ox.ac.uk/
Flynn, J. (n.d.). 36+ Alarming Automation & Job Loss Statistics [2022]: Are Robots, Machines, And AI Coming For Your Job? — Zippia. Retrieved March 5, 2022, from https://www.zippia.com/advice/automation-and-job-loss-statistics/
Garfinkel. (n.d.). 1.4 Garfinkel AI and impact of General Purpose Technologies. Google Docs. Retrieved March 5, 2022, from https://docs.google.com/document/d/1I13_0o3kUe1AVQNfevOF9sHpc4mCQkuFDxOXFj_4g-I/edit?usp=embed_facebook
Helling, B. (2021, September 9). How Many Uber Drivers Are There in 2022? | Ridester.com. https://www.ridester.com/how-many-uber-drivers-are-there/
IBM. (2021, September 16). What is Artificial Intelligence (AI)? https://www.ibm.com/cloud/learn/what-is-artificial-intelligence
IMF. (n.d.). Report for Selected Countries and Subjects. IMF. Retrieved March 5, 2022, from https://www.imf.org/en/Publications/WEO/weo-database/2021/October/weo-report
Johnson, L. (n.d.). Automated Cyber Attacks Are the Next Big Threat. Ever Hear of “Review Bombing”? Entrepreneur. Retrieved May 14, 2022, from https://www.entrepreneur.com/article/325142
Karnofsky, H. (2016, May 6). Some Background on Our Views Regarding Advanced Artificial Intelligence. Open Philanthropy. https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence
Karnofsky, H. (2021, August 10). Forecasting Transformative AI, Part 1: What Kind of AI? Cold Takes. https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/
Knight, W. (n.d.). AI Can Write Disinformation Now — And Dupe Human Readers. Wired. Retrieved May 14, 2022, from https://www.wired.com/story/ai-write-disinformation-dupe-human-readers/
Li, J. (2022, February 17). Pitfalls of Predictive Policing: An Ethical Analysis. Viterbi Conversations in Ethics. https://vce.usc.edu/volume-5-issue-3/pitfalls-of-predictive-policing-an-ethical-analysis/
Lim, M. (n.d.). History of AI Winters — History of AI Winters | Actuaries Digital. Retrieved March 5, 2022, from https://www.actuaries.digital/2018/09/05/history-of-ai-winters/
Loux, B. (2018, April 6). Step Into the Weird World of Sutu’s Immersive VR Paintings. VRScout. https://vrscout.com/news/sutu-immersive-vr-paintings/
Marvin Minsky. (2022). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Marvin_Minsky&oldid=1080215875
McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence (25th anniversary update). A.K. Peters.
MIRI. (2022). Intelligence Explosion FAQ. Machine Intelligence Research Institute. https://intelligence.org/ie-faq/
Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (Vol. 376, pp. 555–572). Springer International Publishing. https://doi.org/10.1007/978-3-319-26485-1_33
Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., & Chen, M. (2021). GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models. ArXiv:2112.10741 [Cs]. http://arxiv.org/abs/2112.10741
Peters, K. (n.d.). Universal Basic Income (UBI). Investopedia. Retrieved March 5, 2022, from https://www.investopedia.com/terms/b/basic-income.asp
Purtill, C. (2020, November 21). How Close Is Humanity to the Edge? The New Yorker. https://www.newyorker.com/culture/annals-of-inquiry/how-close-is-humanity-to-the-edge
Story of Mathematics. (n.d.). Isaac Newton: Math & Calculus. The Story of Mathematics — A History of Mathematical Thought from Ancient Times to the Modern Day. Retrieved May 14, 2022, from https://www.storyofmathematics.com/17th_newton.html/
Technological singularity. (2022). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Technological_singularity&oldid=1086781791
United Nations. (n.d.). Will robots and AI cause mass unemployment? Not necessarily, but they do bring other threats. United Nations; United Nations. Retrieved March 6, 2022, from https://www.un.org/en/desa/will-robots-and-ai-cause-mass-unemployment-not-necessarily-they-do-bring-other
University of Maryland. (2022). What is Computer Science? | Undergraduate Computer Science at UMD. https://undergrad.cs.umd.edu/what-computer-science
U.S. Bureau of Labor Statistics. (1948, January 1). Civilian Labor Force Level. FRED, Federal Reserve Bank of St. Louis; FRED, Federal Reserve Bank of St. Louis. https://fred.stlouisfed.org/series/CLF16OV
VB Staff. (2021, December 6). Report: AI investments see largest year-over-year growth in 20 years. VentureBeat. https://venturebeat.com/2021/12/06/report-ai-investments-see-largest-year-over-year-growth-in-20-years/