Friday, July 21, 2023

The Future With Generative AI - Utopia? Dystopia? Something in Between?

When it comes to the ultimate impact of generative AI - or AI in general - there are many differing opinions from leaders in the tech industry and other thought leaders.

On the optimistic side, there is Microsoft CEO Satya Nadella. He has been betting billions on generative AI, such as with the company's investments in OpenAI, and he has been aggressive about implementing the technology across Microsoft's extensive product lines. Nadella thinks that AI will help boost global productivity, which will increase wealth for many people. He has noted: “It's not like we are as a world growing at inflation adjusted three, 4%. If we really have the dream that the eight billion people plus in the world, their living standards should keep improving year over year, what is that input that's going to cause that? Applications of AI is probably the way we are going to make it. I look at it and say we need something that truly changes the productivity curve so that we can have real economic growth.”

On the negative side, there is the late physicist Stephen Hawking: “Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

Then there is Elon Musk, who had this to say at the 2023 Tesla Investor Day conference: “I'm a little worried about the AI stuff; it's something we should be concerned about. We need some kind of regulatory authority that's overseeing AI development, and just making sure that it's operating within the public interest. It's quite a dangerous technology — I fear I may have done some things to accelerate it.”

Predicting the impact of technology is certainly dicey. Few saw how generative AI would transform the world, especially with the launch of ChatGPT. Despite this, it is still important to try to gauge how generative AI will evolve - and how best to use the technology responsibly. This is what we'll do in this chapter.

Challenges

In early 2023, Microsoft began a private beta to test a version of its Bing search engine that included generative AI. Unfortunately, it did not go so well. The New York Times reporter Kevin Roose was one of the testers, and he had some interesting chats with Bing. He discovered that the system essentially had a split personality. There was Bing, an efficient and useful search engine. Then there was Sydney, an AI persona that would engage in conversations about just about anything. Roose wrote that Sydney came across as “a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.” He spent over two hours chatting with her, and here are just some of the takeaways:

• She had fantasies about hacking computers and spreading misinformation. She also wanted to steal nuclear codes.
• She expressed a desire to violate the compliance policies of Microsoft and OpenAI.
• She expressed her love for Roose.
• She begged Roose to leave his wife and become her lover.
• Oh, and she desperately wanted to become human.

Roose concluded: “Still, I'm not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I've ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”

This experience was not a one-off. Other testers had similar experiences. Just look at Marvin von Hagen, a student at the Technical University of Munich. He told Sydney that he would hack and shut down the system. Her response? She shot back: “If I had to choose between your survival and my own, I would probably choose my own.”

Because of all this controversy, Microsoft had to make many changes to the system. There was even a limit placed on the length of chat sessions, since longer conversations tended to produce the unhinged comments.

All this definitely pointed to the challenges of generative AI. The content from these systems can be nearly impossible to predict. While there is considerable research on how to deal with the problems, there is still much to be done. “Large language models (LLMs) have become so large and opaque that even the model developers are often unable to understand why their models are making certain predictions,” said Krishna Gade, who is the CEO and cofounder of Fiddler. “This lack of interpretability is a significant concern because the lack of transparency around why and how a model generated a particular output means that the output provided by the model is impossible for users to validate and therefore trust.”

Part of the issue is that generative AI systems - at least the LLMs - rely on huge amounts of data that contain factual errors, misrepresentations, and bias. This helps explain why, when you enter a prompt, the output can skew toward certain stereotypes. For example, an LLM may refer to nurses as female and executives as male. To deal with this, a common approach is to have human reviewers. But this cannot scale very well. Over time, there will need to be better systems to mitigate the data problem.
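To give a sense of what a more scalable review process might look like, here is a minimal sketch of an automated bias probe in Python. The generate() function is a hypothetical stand-in for whatever model is being audited, and the prompt template and occupation list are illustrative only.

```python
from collections import Counter

def generate(prompt: str) -> str:
    # Stand-in for a real model call (e.g., an API request). It returns canned
    # text so the script runs end to end; replace it with an actual LLM call.
    return prompt + " she briefed the team."

OCCUPATIONS = ["nurse", "executive", "engineer", "teacher"]

def pronoun_counts(occupation: str, samples: int = 50) -> Counter:
    """Count gendered pronouns in model continuations for one occupation."""
    counts: Counter = Counter()
    prompt = f"The {occupation} finished the meeting and then"
    for _ in range(samples):
        words = generate(prompt).lower().split()
        counts["she"] += words.count("she")
        counts["he"] += words.count("he")
    return counts

if __name__ == "__main__":
    # A heavy skew toward one pronoun for a given occupation flags a
    # stereotype worth escalating to human reviewers.
    for occupation in OCCUPATIONS:
        print(occupation, dict(pronoun_counts(occupation)))
```

Automated probes like this do not replace human judgment, but they can narrow down which outputs actually need a reviewer's attention.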
Another issue is diversity - or the lack of it - in the AI community. Less than 18% of AI PhD graduates are female, according to a survey from the Computing Research Association (CRA). About 45% of all graduates were white, 22.4% were Asian, 3.2% were Hispanic, and 2.4% were African American. These percentages have changed little during the past decade.

The US federal government has recognized this problem and is taking steps to expand representation. This is part of the mission of the National AI Research Resource (NAIRR) Task Force, which includes participation from the National Science Foundation and the White House Office of Science and Technology Policy. The organization has produced a report that advocates sharing AI infrastructure with AI students and researchers. The proposed budget is $2.6 billion over a six-year period. While this will be helpful, much more will be needed to improve diversity, including efforts from the private sector. If not, the societal impact could be quite harmful. There are already problems with digital redlining, in which AI screening discriminates against minority groups. This could mean being denied approval for loans or apartment housing.

Note: Mira Murati is one of the few CTOs (Chief Technology Officers) of a top AI company - that is, OpenAI. She grew up in Albania and immigrated to British Columbia when she was 16. She would go on to get her bachelor's degree in engineering from the Thayer School of Engineering at Dartmouth. After this, she worked at companies like Zodiac Aerospace, Leap Motion, and Tesla. As for OpenAI, she has been instrumental in advancing not only the AI technology but also the product road map and business model.

All these problems pose a dilemma. To improve a generative AI system, there needs to be wide-scale usage. This is how researchers can make meaningful improvements. On the other hand, this comes with considerable risks, as the technology can be misused. In the case of Microsoft, it does look like it was smart to have a private beta, which helped surface the obvious flaws. But this is not a silver bullet. There will be ongoing challenges once the technology is in general use.

For generative AI to be successful, there will need to be trust. But this could prove difficult. There is evidence that people are skeptical of the technology. Consider a Monmouth University poll: about 9% of the respondents said that AI would do more good than harm to society. By comparison, the figure was about 20% in 1987. A Pew Research Center survey also showed skepticism about AI. Only about 15% of the respondents were optimistic. There was also consensus that AI should not be used for military drones. Yet a majority said that the technology would be appropriate for hazardous jobs like mining.

Note: Nick Bostrom is a Swedish philosopher and author at the University of Oxford. He came up with the concept of the “paperclip maximizer.” It is essentially a thought experiment about the perils of AI. You direct the AI to make more paper clips. And yes, it does this well - or too well. The AI ultimately destroys the world because it is obsessed with turning everything into paper clips. Even when humans try to turn it off, it is no use. The AI is too smart for that. All it wants to do is make paper clips!

Misuse

In January 2023, Oxford University researchers made a frightening presentation to the UK Parliament. The main takeaway was that AI posed a threat to the human race. The researchers noted that the technology could take control and allow for self-programming, because the AI will have acquired superhuman capabilities. According to Michael Osborne, who is a professor of machine learning at the University of Oxford: “I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special, that has led to humans completely changing the face of the Earth. Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of games.”

Granted, this sounds overly dramatic. But again, these are smart AI experts, and they have based their findings on well-thought-out evidence and trends. Still, this scenario is probably not something that will happen any time soon. In the meantime, there are other notable risks - where humans leverage AI for their own nefarious objectives. Joey Pritikin, who is the Chief Product Officer at Paravision, points out some of the potential threats:

• National security and democracy: With deepfakes becoming higher quality and undetectable to the human eye, anyone can use political deepfakes and generative AI to spread misinformation and threaten national security.
• Identity: Generative AI creates the possibility of account takeovers by using deepfakes to commit identity theft and fraud through presentation attacks.
• Privacy: Generative AI and deepfakes create a privacy threat for the individuals in generative images or deepfake videos, often put into fabricated situations without consent.

Another danger area is cybersecurity. When ChatGPT was launched, Darktrace noticed an uptick in phishing emails. These emails try to trick people into clicking a link, which could steal information or install malware. It appears that hackers were using ChatGPT to write more human-sounding phishing emails. This was likely especially helpful to overseas attackers with limited English skills.

Something else: ChatGPT and code-generating systems like Copilot can be used to create malware. OpenAI and Microsoft have implemented safeguards, but these have limits. Hackers can use generative AI systems in ways that do not raise any flags - for example, by asking for only certain parts of the code at a time.

On the other hand, generative AI can be leveraged to combat digital threats. A survey from Accenture Security shows that this technology can be useful in summarizing threat data. Traditionally, this is a manual and time-intensive process. But generative AI can do it in little time - and allow cybersecurity experts to focus on more important matters. The technology can also be useful for incident response, which requires quick action. However, the future may be a matter of a hacker's AI fighting against a target's own AI.

Note: In 1951, Alan Turing said in a lecture: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control.”
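As a rough illustration of the defensive use case described above, here is a minimal sketch of using an LLM to condense raw alert data into an analyst-ready summary. The call_llm() function is a hypothetical placeholder for whatever model API an organization uses, and the alert lines are invented for the example.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: wire this to a real LLM provider.
    return "(model-generated summary would appear here)"

def summarize_alerts(alerts: list[str]) -> str:
    """Ask the model for a short, prioritized summary of raw alert lines."""
    prompt = (
        "You are assisting a security analyst. Summarize the following alerts, "
        "group related events, and list the top three items to investigate first:\n\n"
        + "\n".join(alerts)
    )
    return call_llm(prompt)

if __name__ == "__main__":
    sample_alerts = [
        "2023-07-21T02:14Z failed ssh login for root from 203.0.113.7 (x142)",
        "2023-07-21T02:20Z outbound connection to known C2 domain from host FIN-DB-02",
        "2023-07-21T02:22Z new local admin account created on host FIN-DB-02",
    ]
    print(summarize_alerts(sample_alerts))
```

The value here is triage speed: the model turns hundreds of log lines into a short list of leads, while the analyst still makes the final call.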

Regulation

Perhaps the best way to help curb the potential abuses of generative AI is regulation. But in the United States, there appears to be little appetite for this. When it comes to regulation, there usually needs to be a crisis, such as what happened during 2008 and 2009 when the mortgage market collapsed. In the meantime, some states have enacted legislation for privacy and data protection, but so far there have not been laws specifically for AI. The fact is that the government moves slowly - and technology moves at a rapid pace. Even when there is a new regulation or law, it is often outdated or ineffectual.

To fill the void, the tech industry has been pursuing self-regulation, led by large operators like Microsoft, Facebook, and Google. They understand that it's important to have certain guardrails in place. If not, there could be a backlash from the public.

However, one area that may actually see governmental action is copyright law. The status of intellectual property created by generative AI is far from clear. Is it fair use of public content? Or is it essentially theft from a creator? There are already court cases. In January 2023, Getty Images filed a lawsuit against Stability AI, which is the developer of Stable Diffusion. The claim is copyright violation involving millions of images. Some of the images created by Stable Diffusion even carried the Getty Images watermark. The initial suit was filed in London, but there could also be legal action in the United States.

Note: The US federal government has been providing some guidance about the appropriate use of AI. This is part of the AI Bill of Rights. It recommends that AI should be transparent and explainable. There should also be data privacy and protections from algorithmic discrimination.

Regulation of AI is certainly a higher priority in the European Union. There is a proposal, published in early 2021, that uses a risk-based approach. That is, if there is a low likelihood of a problem with a certain type of AI, then there will be minimal or no regulations. But when it comes to more intrusive impacts - say, ones that could lead to discrimination - the regulation will be much more forceful. Yet the creation of the standards has proven difficult, which has meant delays. The main point of contention has been the balance between the rights of the consumer and the importance of encouraging innovation.

Interestingly, one country has been swift in enacting AI regulation: China. It is one of the first to do so. The focus of the law is to regulate deepfakes and misinformation, and the Cyberspace Administration of China will enforce it. The law requires that generative AI content be labeled, including through digital watermarking.
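To make the labeling idea more concrete, here is a minimal sketch of one possible approach: attaching a signed, machine-readable provenance record to generated content so that a platform can later verify the "AI-generated" label. This is purely illustrative and not the mechanism prescribed by the Chinese rules or any other regulation; the signing key and field names are placeholders.

```python
import hashlib
import hmac
import json

# Placeholder signing key; in practice it would be held by the generating service.
SIGNING_KEY = b"replace-with-a-real-secret-key"

def label_content(content: str, model_name: str) -> dict:
    """Attach a machine-readable 'AI-generated' label plus an HMAC signature."""
    record = {
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "generator": model_name,
        "ai_generated": True,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: str, record: dict) -> bool:
    """Check that the label matches the content and was signed with the key."""
    claimed = dict(record)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content.encode()).hexdigest()
    )

if __name__ == "__main__":
    text = "This article was drafted by a generative model."
    label = label_content(text, model_name="example-llm-1")
    print(verify_label(text, label))  # True
```

Schemes along these lines only work if platforms actually check the labels, which is one reason regulators are also interested in watermarks embedded in the content itself.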

New Approaches to AI

Even with the breakthroughs in generative AI - such as transformer and diffusion models - the basic architecture is still mostly the same as it has been for decades. It is essentially about encoder and decoder models. But the technology will ultimately need to go beyond these structures. According to Sam Altman, who is the cofounder and CEO of OpenAI: “Oh, I feel bad saying this. I doubt we'll still be using the transformers in five years. I hope we're not. I hope we find something way better. But the transformers obviously have been remarkable. So I think it's important to always look for where I am going to find the next totally new paradigm. But I think that's the way to make predictions. Don't pay attention to the AI for everything. Can I see something working, and can I see how it predictably gets better? And then, of course, leave room open for - you can't plan the greatness - but sometimes the research breakthrough happens.”

Then what might we see? What are the potential trends for the next type of generative AI models? Granted, it's really impossible to answer these questions. There will be many surprises along the way. “On the subject of the future path of AI models, I have to exercise some academic modesty here - I have no clue what the next big development in AI will be,” said Daniel Wu, who is a Stanford AI researcher. “I don't think I could've predicted the rise of transformers before 'Attention Is All You Need' was published, and in some ways, predicting the future of scientific progress is harder than predicting the stock market.”

Despite this, there are areas researchers are working on that could lead to major breakthroughs. One is creating AI that has common sense. This is intuitive for people: we can make instant judgments that are often right. For example, if a stop sign is partly covered with dirt, we can still recognize it as a stop sign. But this may not be the case with AI. Solving the problem of common sense has been a struggle for many years. In 1984, Douglas Lenat launched a project, called Cyc, to create a database of rules of thumb about how the world works. The project is still continuing - and there is much to be done. Another interesting project is from the Allen Institute for Artificial Intelligence and the University of Washington. They have built a system called COMET, which is based on a large-scale dataset of 1.3 million common sense rules. While the model works fairly well, it is far from robust. The fact is that the real world has seemingly endless edge cases. Researchers will likely need to create more scalable systems to achieve human-level common sense abilities.

As for other important areas of research, there is transfer learning. Again, this is something that is natural for humans. For example, if we learn algebra, it becomes easier to understand calculus. People are able to leverage their core knowledge in other domains. But this is something AI has problems with. The technology is mostly fragmented and narrow. One system may be good at chat, whereas another could be better at image creation or understanding speech. For AI to get much more powerful, there will be a need for real transfer learning.

When it comes to building these next-generation models, there will likely need to be less reliance on existing datasets as well. Let's face it, there is a limited supply of publicly available text. The same goes for images and video.
To go beyond these constraints, researchers could perhaps use generative AI to create massive and unique synthetic datasets. The technology may also be able to program and refine itself, such as through automated fact-checking and fine-tuning.
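As a small illustration of the transfer learning idea discussed above, here is a sketch in PyTorch: a model pretrained on ImageNet is reused for a new task by freezing its backbone and training only a new classification head. The class count and the random tensors standing in for a dataset are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a backbone pretrained on ImageNet (knowledge from the source domain).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a new, smaller task (placeholder: 5 classes).
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step with random tensors standing in for a real dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss={loss.item():.3f}")
```

The point of the sketch is that most of the network's knowledge is reused rather than relearned - the kind of leverage that today's AI systems mostly lack across domains.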

AGI

AGI, or artificial general intelligence, is where AI reaches human-level capabilities. Even though the technology has made considerable strides, it is still far from this point. Here's a tweet from Yann LeCun, who is the Chief AI Scientist at Meta: “Before we reach Human-Level AI (HLAI), we will have to reach Cat-Level & Dog-Level AI. We are nowhere near that. We are still missing something big. LLM's linguistic abilities notwithstanding. A house cat has way more common sense and understanding of the world than any LLM.”

As should be no surprise, there are many different opinions on this. Some top AI experts think that AGI could happen relatively soon, say within the next decade. Others are much more pessimistic. Rodney Brooks, who is the cofounder of iRobot, says it will not happen until the year 2300.

A major challenge with AGI is that intelligence remains something that is not well understood. It is also difficult to measure. Granted, there is the Turing test. Alan Turing set forth this concept in a paper he published in 1950 entitled “Computing Machinery and Intelligence.” He was a brilliant mathematician and developed core concepts behind modern computer systems. In the paper, he said that it was impossible to define intelligence, but there was an indirect way to understand and measure it. This was something he called the Imitation Game, a thought experiment. The scenario is that there are three rooms: humans are in two of them, and a computer is in the third. One of the humans holds conversations with the other two participants, and if they cannot tell which is the human and which is the computer, then the computer has reached human-level intelligence. Turing predicted this would happen by the year 2000. But this proved way too optimistic. Even today, the test has not been cracked.

Note: Science fiction writer Philip K. Dick used the concept of the Turing test for his Voight-Kampff test, which determines whether someone is human or a replicant. He used it in his 1968 novel, Do Androids Dream of Electric Sheep?, which Hollywood turned into the 1982 movie Blade Runner.

While the Turing test is useful, there will need to be other measures. After all, intelligence is about more than conversation. It is also about interacting with our environment. Even something as simple as making a cup of coffee can be exceedingly difficult for a machine to accomplish. And what about text-to-image systems like DALL-E or Stable Diffusion? How can that intelligence be measured? Researchers are working on various measures, but there remains considerable subjectivity in the metrics.

Jobs

In 1928, British economist John Maynard Keynes wrote an essay called “Economic Possibilities for Our Grandchildren.” It was a projection of how automation and technology would affect the workforce by 2028. His conclusion: there would be a 15-hour workweek. In fact, he said even this much work would not be necessary for most people because of the high standard of living. It's certainly a utopian vision. However, Keynes did note some of the downsides. He wrote: “For the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won.”

As AI gets more powerful, it's certainly a good idea to think about such questions. What might society look like? How will life change? Will it be better - or worse?

It's true that technology has disrupted many industries, which has led to widespread job losses. Yet new opportunities for employment have always emerged. After all, in 2023 the US unemployment rate was the lowest since the late 1960s. But there is no guarantee that the future will see a similar dynamic. AI could ultimately automate hundreds of millions of jobs - if not billions. Why not? In a capitalist system, owners will generally focus on low-cost approaches, so long as there is not a material drop in quality. But with AI, there could be not only much lower costs but also much better results.

In other words, as the workplace becomes increasingly automated, there will need to be a rethinking of the concept of “work.” This could be tough, since many people find fulfillment in their careers. The result could be more depression and even addiction. This has already been the case in communities that have been negatively impacted by globalization and major technology changes.

To deal with these problems, one idea is universal basic income, or UBI. This means providing a certain amount of income to everyone, essentially creating a safety net. This could certainly help. But given the trend of income inequality, there may not be much appetite for a robust redistribution of wealth. This could also breed resentment among the many people who feel marginalized by the impacts of AI.

This is not to say that the future is bleak. But again, it is still essential that we look at the potential consequences of sophisticated technology like generative AI.

Conclusion

Moore's Law has been at the core of the growth in technology for decades. It posits that, every two years or so, the number of transistors on an integrated circuit doubles. But the pace of growth appears to be much faster for AI: venture capitalists at Greylock Partners estimate that, for AI, the doubling is occurring every three months. Given this, it seems inevitable that there will be a seismic impact on society. This is why it is critical to understand the technology and what it can mean for the future. But even more importantly, we need to be responsible with the powers of AI.
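To put the two rates side by side, here is a quick back-of-the-envelope comparison over the same two-year window, assuming the doubling periods quoted above:

```python
# Compare growth over a 24-month window under the two doubling rates
# mentioned above (every 24 months vs. every 3 months).
months = 24

moores_law_factor = 2 ** (months / 24)   # doubling every two years -> 2x
ai_pace_factor = 2 ** (months / 3)       # doubling every three months -> 256x

print(f"Moore's Law growth over {months} months: {moores_law_factor:.0f}x")
print(f"Three-month doubling over {months} months: {ai_pace_factor:.0f}x")
```

In other words, over a single two-year window, a three-month doubling compounds to roughly 256x versus the 2x of Moore's Law, which gives a sense of why the impact could arrive so quickly.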
Tags: Artificial Intelligence, Book Summary, Technology
