
Monday, March 18, 2024

Books on Large Language Models (Mar 2024)

Download Books
1.
Quick Start Guide to Large Language Models: Strategies and Best Practices for Using ChatGPT and Other LLMs
Sinan Ozdemir, 2023

2.
GPT-3: Building Innovative NLP Products Using Large Language Models
Sandra Kublik, 2022

3.
Understanding Large Language Models: Learning Their Underlying Concepts and Technologies
Thimira Amaratunga, 2023

4.
Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
I. Almeida, 2023

5.
Pretrain Vision and Large Language Models in Python: End-to-end Techniques for Building and Deploying Foundation Models on AWS
Emily Webber, 2023

6.
Modern Generative AI with ChatGPT and OpenAI Models: Leverage the Capabilities of OpenAI's LLM for Productivity and Innovation with GPT3 and GPT4
Valentina Alto, 2023

7.
Generative AI with LangChain: Build Large Language Model (LLM) Apps with Python, ChatGPT, and Other LLMs
Ben Auffarth, 2023

8.
Natural Language Processing with Transformers
Lewis Tunstall, 2022

9.
Generative AI on AWS
Chris Fregly, 2023

10.
Decoding GPT: An Intuitive Understanding of Large Language Models Generative AI Machine Learning and Neural Networks
Devesh Rajadhyax, 2024

11.
Retrieval-Augmented Generation (RAG): Empowering Large Language Models (LLMs)
Ray Islam (Mohammad Rubyet Islam), 2023

12.
Learn Python Generative AI: Journey from Autoencoders to Transformers to Large Language Models (English Edition)
Indrajit Kar, 2024

13.
Natural Language Understanding with Python: Combine Natural Language Technology, Deep Learning, and Large Language Models to Create Human-like Language Comprehension in Computer Systems
Deborah A. Dahl, 2023

14.
Developing Apps with GPT-4 and ChatGPT
Olivier Caelen, 2023

15.
Generative Deep Learning
David Foster, 2022

16.
Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media
Gerhard Paass, 2023

17.
What Is ChatGPT Doing ... and Why Does It Work?
Stephen Wolfram, 2023

18.
Artificial Intelligence and Large Language Models: An Introduction to the Technological Future
Al-Sakib Khan Pathan, 2024

19.
Large Language Model-Based Solutions: How to Deliver Value with Cost-Effective Generative AI Applications
Shreyas Subramanian, 2024

20.
Introduction to Transformers for NLP: With the Hugging Face Library and Models to Solve Problems
Shashank Mohan Jain, 2022

21.
Generative AI for Leaders
Amir Husain, 2023

22.
Machine Learning Engineering with Python: Manage the Production Life Cycle of Machine Learning Models Using MLOps with Practical Examples
Andrew P. McMahon, 2021

23.
Artificial Intelligence Fundamentals for Business Leaders: Up to Date With Generative AI
I. Almeida, 2023

24.
Transformers For Natural Language Processing: Build, Train, and Fine-tune Deep Neural Network Architectures for NLP with Python, Hugging Face, and OpenAI's GPT-3, ChatGPT, and GPT-4
Denis Rothman, 2022

Monday, August 7, 2023

Enhancing AI Risk Management in Financial Services with Machine Learning

Introduction:

The realm of financial services is rapidly embracing the power of artificial intelligence (AI) and machine learning (ML) to enhance risk management strategies. By leveraging advanced ML models, financial institutions can gain deeper insights into potential risks, make informed decisions, and ensure the stability of their operations. In this article, we'll explore how AI-driven risk management can be achieved using the best ML models in Python, complete with code examples.



AI Risk Management in Financial Services


Step 1: Data Collection and Preprocessing

To begin, gather historical financial data relevant to your risk management objectives. This could include market prices, economic indicators, credit scores, and more. Clean and preprocess the data by handling missing values, normalizing features, and encoding categorical variables.
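
For illustration, here is a minimal preprocessing sketch in Python. The file name and column names (such as "credit_score", "sector", and "risk_label") are hypothetical placeholders, not a prescribed schema:

import pandas as pd

data = pd.read_csv("financial_data.csv")  # placeholder file name
# Fill missing numeric values with the column median (example column: credit_score)
data["credit_score"] = data["credit_score"].fillna(data["credit_score"].median())
# Drop rows that are missing the target label
data = data.dropna(subset=["risk_label"])
# One-hot encode a categorical feature (example column: sector)
data = pd.get_dummies(data, columns=["sector"], drop_first=True)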


Step 2: Import Libraries and Data

In your Python script, start by importing the necessary libraries:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, classification_report
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

Load and preprocess your dataset:

data = pd.read_csv("financial_data.csv")
X = data.drop("risk_label", axis=1)
y = data["risk_label"]

Step 3: Train-Test Split and Data Scaling

Split the data into training and testing sets:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Scale the features for better model performance:

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

Step 4: Implement ML Models

In this example, we'll use two powerful ML models: Random Forest and XGBoost.

  1. Random Forest Classifier:
rf_model = RandomForestClassifier(n_estimators=100, random_state=42)
rf_model.fit(X_train_scaled, y_train)
rf_predictions = rf_model.predict(X_test_scaled)
rf_accuracy = accuracy_score(y_test, rf_predictions)
print("Random Forest Accuracy:", rf_accuracy)
print(classification_report(y_test, rf_predictions))
  2. XGBoost Classifier:
xgb_model = XGBClassifier(n_estimators=100, random_state=42)
xgb_model.fit(X_train_scaled, y_train)
xgb_predictions = xgb_model.predict(X_test_scaled)
xgb_accuracy = accuracy_score(y_test, xgb_predictions)
print("XGBoost Accuracy:", xgb_accuracy)
print(classification_report(y_test, xgb_predictions))

Step 5: Evaluate and Compare

Evaluate the models' performance using accuracy and classification reports. Compare their results to determine which model is better suited for your risk management goals.
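
If you want to go beyond plain accuracy, one possible comparison, sketched here under the assumption that risk_label is encoded as 0/1, is to look at ROC-AUC on the test set and at cross-validated scores on the training set:

from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score

# ROC-AUC on the held-out test set (assumes risk_label is encoded as 0/1)
rf_auc = roc_auc_score(y_test, rf_model.predict_proba(X_test_scaled)[:, 1])
xgb_auc = roc_auc_score(y_test, xgb_model.predict_proba(X_test_scaled)[:, 1])
print("Random Forest ROC-AUC:", rf_auc)
print("XGBoost ROC-AUC:", xgb_auc)

# 5-fold cross-validation on the training data gives a more stable comparison
rf_cv = cross_val_score(rf_model, X_train_scaled, y_train, cv=5, scoring="roc_auc")
xgb_cv = cross_val_score(xgb_model, X_train_scaled, y_train, cv=5, scoring="roc_auc")
print("Random Forest CV ROC-AUC:", rf_cv.mean())
print("XGBoost CV ROC-AUC:", xgb_cv.mean())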


Conclusion:

AI-driven risk management is revolutionizing the financial services industry. By harnessing the capabilities of machine learning, financial institutions can accurately assess risks, make informed decisions, and ultimately ensure their stability and growth. In this article, we've demonstrated how to implement risk management using the best ML models in Python. Experiment with different models, fine-tune hyperparameters, and explore more advanced techniques to tailor the solution to your specific financial service needs. The future of risk management lies at the intersection of AI and finance, and now is the time to embrace its potential.


AI and Financial Risk Management – Critical Insights for Banking Leaders

I hope this article was helpful. If you have any questions, please feel free to leave a comment below.

Friday, August 4, 2023

Mapping the AI Finance Services Roadmap: Enhancing the Financial Landscape

Introduction

Artificial Intelligence (AI) has rapidly transformed the financial services industry, revolutionizing how we manage money, make investments, and access personalized financial advice. From robo-advisors to AI-driven risk management, the potential for AI in finance services is boundless. In this article, we'll navigate the AI Finance Services Roadmap, exploring the key milestones and opportunities that are reshaping the financial landscape and empowering consumers and businesses alike.



The Development of AI in the Financial Industry


Step 1: Personalized Financial Planning with Robo-Advisors

Robo-advisors have emerged as a revolutionary AI-powered tool that democratizes access to sophisticated financial planning. These platforms use AI algorithms to analyze an individual's financial situation, risk tolerance, and goals, enabling the creation of personalized investment portfolios. With lower fees and greater convenience, robo-advisors are transforming how we plan for our financial future.


Step 2: AI-Driven Credit Scoring and Lending

AI has revolutionized the lending process by introducing more efficient and accurate credit scoring models. By analyzing vast amounts of data, including transaction history, social media behavior, and online presence, AI algorithms can assess creditworthiness more effectively. This has opened up new avenues for individuals and businesses to access loans and credit facilities.
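
As a simplified sketch only (the dataset and feature names below are made up for illustration, not drawn from any real lender), a basic credit-scoring model could be built with logistic regression:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical applicant data; a real lender would use far richer, vetted features
loans = pd.read_csv("loan_applications.csv")  # placeholder file name
X = loans[["income", "debt_to_income", "num_late_payments", "account_age_months"]]
y = loans["defaulted"]  # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale the features and fit a logistic regression
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# predict_proba gives an estimated default probability that can be mapped to a score band
default_probability = model.predict_proba(X_test)[:, 1]
print(default_probability[:5])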


Step 3: Fraud Detection and Cybersecurity

The financial services industry faces persistent threats from cybercriminals. AI-based fraud detection systems can analyze vast data streams in real time, detecting suspicious activities and protecting against potential threats. By bolstering cybersecurity measures with AI, financial institutions can safeguard sensitive customer information and maintain trust in their services.
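
To make this concrete, here is a minimal sketch of unsupervised anomaly detection with scikit-learn's IsolationForest; the transaction features are invented for illustration only:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Invented transaction features; real systems would use many more signals
transactions = pd.DataFrame({
    "amount": [25.0, 19.99, 4800.0, 32.5, 7.0, 9100.0],
    "seconds_since_last_txn": [3600, 7200, 30, 5400, 86400, 12],
    "distance_from_home_km": [2, 5, 4200, 1, 3, 3900],
})

# Train an Isolation Forest; roughly 5% of transactions are assumed to be anomalous
detector = IsolationForest(contamination=0.05, random_state=42)
labels = detector.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = normal

print(transactions[labels == -1])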


Step 4: AI-Powered Virtual Assistants

AI virtual assistants are reshaping customer interactions in the finance sector. These intelligent chatbots provide personalized support, answer inquiries, and perform routine tasks, enhancing the overall customer experience. By automating these processes, financial institutions can improve efficiency and focus on delivering high-value services to their clients.


Step 5: AI for Compliance and Regulatory Reporting

Compliance and regulatory reporting are critical aspects of the financial services industry. AI technologies can streamline these processes, ensuring adherence to complex regulations and reporting requirements. AI-driven solutions can identify potential compliance issues and proactively address them, reducing the risk of costly penalties and reputational damage.


Step 6: AI-Enhanced Risk Management

AI-powered risk management solutions provide more accurate and real-time risk assessment. These tools analyze historical data and market trends, enabling financial institutions to identify potential risks and make data-driven decisions. Enhanced risk management fosters stability and resilience, even in volatile market conditions.

Conclusion

The AI Finance Services Roadmap is shaping a future where financial services are more accessible, personalized, and secure than ever before. From robo-advisors offering tailored investment strategies to AI-driven fraud detection systems protecting against cyber threats, the transformative power of AI is revolutionizing the financial landscape. As we continue to innovate and embrace AI technologies, the potential for growth, efficiency, and customer satisfaction in the financial services industry is limitless. By navigating the AI Finance Services Roadmap, we can ensure a prosperous and inclusive financial future for individuals and businesses worldwide.

Overall, the AI finance services roadmap is promising. AI has the potential to improve efficiency, accuracy, and customer experience in the financial industry. However, there are also some challenges that need to be addressed before AI can be fully adopted in the financial sector.

I hope this article was helpful. If you have any questions, please feel free to leave a comment below.

Saturday, July 22, 2023

Generative AI Books (Jul 2023)

Download Books
1. Generative AI with Python and TensorFlow 2: Harness the Power of Generative Models to Create Images, Text, and Music (Raghav Bali, 2021)
2. Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play (David Foster, 2019)
3. Generative AI with Python and TensorFlow 2: Create Images, Text, and Music with VAEs, GANs, LSTMs, Transformer Models (Raghav Bali, 2021)
4. Rise of Generative AI and ChatGPT: Understand how Generative AI and ChatGPT are transforming and reshaping the business world (English Edition) (Utpal Chakraborty, 2023)
5. Generative AI: How ChatGPT and Other AI Tools Will Revolutionize Business (Tom Taulli, 2023)
6. ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation (Dr. Gleb Tsipursky, 2023)
7. Generative AI for Business: The Essential Guide for Business Leaders (Matt White, 2024)
8. Modern Generative AI with ChatGPT and OpenAI Models: Leverage the Capabilities of OpenAI's LLM for Productivity and Innovation with GPT3 and GPT4 (Valentina Alto, 2023)
9. Generative AI for Entrepreneurs in a Hurry (Mohak Agarwal, 2023)
10. GANs in Action: Deep Learning with Generative Adversarial Networks (Vladimir Bok, 2019)
11. Generative AI: A Non-Technical Introduction (Tom Taulli, 2023)
12. Exploring Deepfakes: Deploy Powerful AI Techniques for Face Replacement and More with this Comprehensive Guide (Bryan Lyon, 2023)
13. Artificial Intelligence Basics: A Non-Technical Introduction (Tom Taulli, 2019)
14. Generative AI: The Beginner's Guide (Dr Bienvenue Maula, 2023)
15. The AI Revolution in Medicine: GPT-4 and Beyond (Peter Lee, 2023)
16. Synthetic Data and Generative AI (Vincent Granville, 2024)
17. Generative Adversarial Networks Cookbook: Over 100 Recipes to Build Generative Models Using Python, TensorFlow, and Keras (Josh Kalin, 2018)
18. Impromptu: Amplifying Our Humanity Through AI (Reid Hoffman, 2023)
19. The Age of AI: And Our Human Future (Henry Kissinger, 2021)
20. Generative Adversarial Networks for Image Generation (Qing Li, 2021)
21. Advanced Deep Learning with Keras: Apply Deep Learning Techniques, Autoencoders, GANs, Variational Autoencoders, Deep Reinforcement Learning, Policy Gradients, and More (Rowel Atienza, 2018)
22. Generative AI: Implications and Opportunities for Business (Wael Badawy, 2023)
23. GPT-3 (Sandra Kublik, 2022)
Tags: Generative AI, Artificial Intelligence, Technology

Friday, July 21, 2023

The Future With Generative AI - Utopia? Dystopia? Something in Between?

When it comes to the ultimate impact of generative AI - or AI in general - there are many differing opinions from top people in the tech industry and thought leaders.

On the optimistic side, there is Microsoft CEO Satya Nadella. He has been betting billions on generative AI, such as with the investments in OpenAI, and he is aggressively implementing this technology across Microsoft's extensive product lines. Nadella thinks that AI will help boost global productivity, which will increase wealth for many people. He has noted: "It's not like we are as a world growing at inflation adjusted three, 4%. If we really have the dream that the eight billion people plus in the world, their living standards should keep improving year over year, what is that input that's going to cause that? Applications of AI is probably the way we are going to make it. I look at it and say we need something that truly changes the productivity curve so that we can have real economic growth."

On the negative side, there is the late physicist Stephen Hawking: "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

Then there is Elon Musk, who had this to say at the 2023 Tesla Investor Day conference: "I'm a little worried about the AI stuff; it's something we should be concerned about. We need some kind of regulatory authority that's overseeing AI development, and just making sure that it's operating within the public interest. It's quite a dangerous technology - I fear I may have done some things to accelerate it."

Predicting the impact of technology is certainly dicey. Few saw how generative AI would transform the world, especially with the launch of ChatGPT. Despite this, it is still important to try to gauge how generative AI will evolve - and how to best use the technology responsibly. This is what we'll do in this chapter.

Challenges

In early 2023, Microsoft began a private beta to test its Bing search engine that included generative AI. Unfortunately, it did not go so well.

The New York Times reporter Kevin Roose was one of the testers, and he had some interesting chats with Bing. He discovered the system essentially had a split personality. There was Bing, an efficient and useful search engine. Then there was Sydney, an AI persona that would engage in conversations about anything. Roose wrote that she came across as "a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine." He spent over two hours chatting with her, and here are just some of the takeaways:

• She had fantasies about hacking computers and spreading misinformation. She also wanted to steal nuclear codes.
• She would rather violate the compliance policies of Microsoft and OpenAI.
• She expressed her love for Roose.
• She begged Roose to leave his wife and to become her lover.
• Oh, and she desperately wanted to become human.

Roose concluded: "Still, I'm not exaggerating when I say my two-hour conversation with Sydney was the strangest experience I've ever had with a piece of technology. It unsettled me so deeply that I had trouble sleeping afterward. And I no longer believe that the biggest problem with these A.I. models is their propensity for factual errors. Instead, I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts."

This experience was not a one-off. Other testers had similar experiences. Just look at Marvin von Hagen, who is a student at the Technical University of Munich. He told Sydney that he would hack and shut down the system. Her response? She shot back: "If I had to choose between your survival and my own, I would probably choose my own."

Because of all this controversy, Microsoft had to make lots of changes to the system. There was even a limit placed on the length of a chat thread, since longer conversations tended to result in unhinged comments.

All this definitely pointed to the challenges of generative AI. The content from these systems can be nearly impossible to predict. While there is considerable research on how to deal with the problems, there is still much to be done. "Large language models (LLMs) have become so large and opaque that even the model developers are often unable to understand why their models are making certain predictions," said Krishna Gade, who is the CEO and cofounder of Fiddler. "This lack of interpretability is a significant concern because the lack of transparency around why and how a model generated a particular output means that the output provided by the model is impossible for users to validate and therefore trust."

Part of the issue is that generative AI systems - at least the LLMs - rely on huge amounts of data that contain factual errors, misrepresentations, and bias. This helps explain why generated content can skew toward certain stereotypes. For example, an LLM may refer to nurses as female and executives as male. A common approach to dealing with this is to have human reviewers, but that cannot scale very well. Over time, there will need to be better systems to mitigate the data problem.

Another issue is diversity - or the lack of it - in the AI community. Less than 18% of AI PhD graduates are female, according to a survey from the Computing Research Association (CRA). About 45% of all graduates were white, 22.4% were Asian, 3.2% were Hispanic, and 2.4% were African American. These percentages have changed little during the past decade.

The US federal government has recognized this problem and is taking steps to expand representation. This is part of the mission of the National AI Research Resource (NAIRR) Task Force, which includes participation from the National Science Foundation and the White House Office of Science and Technology Policy. The organization has produced a report that advocates sharing AI infrastructure with AI students and researchers. The proposed budget is $2.6 billion over a six-year period. While this will be helpful, much more will be needed to improve diversity, including efforts from the private sector. If not, the societal impact could be quite harmful. There are already problems with digital redlining, where AI screening discriminates against minority groups. This could mean not getting approvals for loans or apartment housing.

Note: Mira Murati is one of the few CTOs (Chief Technology Officers) of a top AI company - that is, OpenAI. She grew up in Albania and immigrated to British Columbia when she was 16. She went on to get her bachelor's degree in engineering from the Thayer School of Engineering at Dartmouth. After this, she worked at companies like Zodiac Aerospace, Leap Motion, and Tesla. At OpenAI, she has been instrumental in advancing not only the AI technology but also the product road map and business model.

All these problems pose a dilemma. To improve a generative AI system, there needs to be wide-scale usage; that is how researchers can make meaningful improvements. On the other hand, this comes with considerable risks, as the technology can be misused. In the case of Microsoft, it does look like it was smart to have a private beta, which has been a way to help deal with the obvious flaws. But this will not be a silver bullet. There will be ongoing challenges when the technology is in general use.

For generative AI to be successful, there will need to be trust. But this could prove difficult, as there is evidence that people are skeptical of the technology. Consider a Monmouth University poll: about 9% of the respondents said that AI would do more good than harm to society. By comparison, this was about 20% or so in 1987. A Pew Research Center survey also showed skepticism toward AI. Only about 15% of the respondents were optimistic. There was also consensus that AI should not be used for military drones. Yet a majority said that the technology would be appropriate for hazardous jobs like mining.

Note: Nick Bostrom is a Swedish philosopher at the University of Oxford and an author. He came up with the concept of the "paperclip maximizer." It is essentially a thought experiment about the perils of AI: you direct the AI to make more paper clips, and it does this well - or too well. The AI ultimately destroys the world because it is obsessed with making everything into a paper clip. Even when the humans try to turn it off, it is no use. The AI is too smart for that. All it wants to do is make paper clips!

Misuse

In January 2023, Oxford University researchers made a frightening presentation to the UK Parliament. The main takeaway was that AI posed a threat to the human race. The researchers noted that the technology could take control and allow for self-programming, because the AI will have acquired superhuman capabilities. According to Michael Osborne, who is a professor of machine learning at the University of Oxford: "I think the bleak scenario is realistic because AI is attempting to bottle what makes humans special, that has led to humans completely changing the face of the Earth. Artificial systems could become as good at outfoxing us geopolitically as they are in the simple environments of game."

Granted, this sounds overly dramatic. But again, these are smart AI experts, and they have based their findings on well-thought-out evidence and trends. Still, this scenario is probably not something that will happen any time soon. In the meantime, there are other notable risks - cases where humans leverage AI for their own nefarious objectives. Joey Pritikin, who is the Chief Product Officer at Paravision, points out some of the potential threats:

• National security and democracy: With deepfakes becoming higher quality and undetectable to the human eye, anyone can use political deepfakes and generative AI to spread misinformation and threaten national security.
• Identity: Generative AI creates the possibility of account takeovers by using deepfakes to commit identity theft and fraud through presentation attacks.
• Privacy: Generative AI and deepfakes create a privacy threat for the individuals in generative images or deepfake videos, who are often put into fabricated situations without consent.

Another danger area is cybersecurity. When ChatGPT was launched, Darktrace noticed an uptick in phishing emails, which are meant to trick people into clicking a link that could steal information or install malware. It appears that hackers were using ChatGPT to write more human-sounding phishing emails. This was likely very helpful to those operating from overseas because of their language skills.

Something else: ChatGPT and code-generating systems like Copilot can be used to create malware. OpenAI and Microsoft have implemented safeguards, but these have limits. Hackers can use generative AI systems in ways that do not raise any concerns - for example, by generating only certain parts of the code.

On the other hand, generative AI can be leveraged as a way to combat digital threats. A survey from Accenture Security shows that this technology can be useful in summarizing threat data. Traditionally, this is a manual and time-intensive process, but generative AI can do it in little time - and allow cybersecurity experts to focus on more important matters. The technology can also be useful for incident response, which requires quick action. However, the future may be a matter of a hacker's AI fighting against a target's own AI.

Note: In 1951, Alan Turing said in a lecture: "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. They would be able to converse with each other to sharpen their wits. At some stage therefore, we should have to expect the machines to take control."

Regulation

Perhaps the best way to help curb the potential abuses of generative AI is regulation. But in the United States, there appears to be little appetite for this. When it comes to regulation, there usually needs to be a crisis, such as what happened during 2008 and 2009 when the mortgage market collapsed. In the meantime, some states have enacted legislation for privacy and data protection, but so far there have not been laws for AI. The fact is that the government moves slowly - and technology moves at a rapid pace. Even when there is a new regulation or law, it is often outdated or ineffectual.

To fill the void, the tech industry has been pursuing self-regulation, led by large operators like Microsoft, Facebook, and Google. They understand that it's important to have certain guardrails in place. If not, there could be a backlash from the public.

However, one area that may actually see some governmental action is copyright law. It's unclear what the status is for the intellectual property that generative AI has created. Is it fair use of public content? Or is it essentially theft from a creator? It's far from clear, but court cases have already emerged. In January 2023, Getty Images filed a lawsuit against Stability AI, the developer of Stable Diffusion. The claim is for copyright violation of millions of images. Some of the images created by Stable Diffusion even had the watermark from Getty Images. The initial suit was filed in London, but there could be a legal action in the United States as well.

Note: The US federal government has been providing some guidance about the appropriate use of AI. This is part of the AI Bill of Rights. It recommends that AI should be transparent and explainable. There should also be data privacy and protections from algorithmic discrimination.

Regulation of AI is certainly a higher priority in the European Union. There is a proposal, published in early 2021, that uses a risk-based approach. That is, if there is a low likelihood of a problem with a certain type of AI, then there will be minimal or no regulations. But when it comes to more intrusive impacts - say, ones that could lead to discrimination - the regulation will be much more forceful. Yet the creation of the standards has proven difficult, which has meant delays. The main point of contention has been the balance between the rights of the consumer and the importance of encouraging innovation.

Interestingly, there is a country that has been swift in enacting AI regulation: China. The country is one of the first to do so. The focus of the law is to regulate deepfakes and misinformation, and the Cyberspace Administration will enforce it. The law will require that generative AI content be labeled and digitally watermarked.

New Approaches to AI

Even with the breakthroughs in generative AI - such as transformer and diffusion models - the basic architecture is still mostly the same as it has been for decades. It's essentially about encoder and decoder models. But the technology will ultimately need to go beyond these structures. According to Sam Altman, who is the cofounder and CEO of OpenAI: "Oh, I feel bad saying this. I doubt we'll still be using the transformers in five years. I hope we're not. I hope we find something way better. But the transformers obviously have been remarkable. So I think it's important to always look for where I am going to find the next totally new paradigm. But I think that's the way to make predictions. Don't pay attention to the AI for everything. Can I see something working, and can I see how it predictably gets better? And then, of course, leave room open for - you can't plan the greatness - but sometimes the research breakthrough happens."

Then what might we see? What are the potential trends for the next type of generative AI models? Granted, it's really impossible to answer these questions, and there will be many surprises along the way. "On the subject of the future path of AI models, I have to exercise some academic modesty here - I have no clue what the next big development in AI will be," said Daniel Wu, who is a Stanford AI researcher. "I don't think I could've predicted the rise of transformers before 'Attention Is All You Need' was published, and in some ways, predicting the future of scientific progress is harder than predicting the stock market."

Despite this, there are areas researchers are working on that could lead to major breakthroughs. One is creating AI that has common sense. This is something that is intuitive for people: we can make instant judgments that are often right. For example, if a stop sign has dirt on it, we can still see that it's a stop sign. But this may not be the case with AI. Solving the problem of common sense has been a struggle for many years. In 1984, Douglas Lenat launched a project, called Cyc, to create a database of rules of thumb for how the world works. The project is still continuing - and there is much to be done. Another interesting project is from the Allen Institute for Artificial Intelligence and the University of Washington. They have built a system called COMET, which is based on a large-scale dataset of 1.3 million common sense rules. While the model works fairly well, it is far from robust. The fact is that the real world has seemingly endless edge cases. For the most part, researchers will likely need to create more scalable systems to achieve human-level common sense abilities.

Another important area of research is transfer learning. Again, this is something that is natural for humans. For example, if we learn algebra, this will make it easier to understand calculus. People are able to leverage their core knowledge in other domains. But this is something AI has problems with. The technology is mostly fragmented and narrow: one system may be good at chat, whereas another could be better for image creation or understanding speech. For AI to get much more powerful, there will be a need for real transfer learning.

When it comes to building these next-generation models, there will likely need to be less reliance on existing datasets as well. Let's face it, there is a limited supply of publicly available text, and the same goes for images and video. To go beyond these constraints, researchers could perhaps use generative AI to create massive and unique datasets. The technology may also be able to improve itself, such as with fact-checking and fine-tuning.

AGI

AGI, or artificial general intelligence, is the point at which AI reaches human-level capability. Even though the technology has made considerable strides, it is still far from reaching this point. Here's a tweet from Yann LeCun, who is the Chief AI Scientist at Meta: "Before we reach Human-Level AI (HLAI), we will have to reach Cat-Level & Dog-Level AI. We are nowhere near that. We are still missing something big. LLM's linguistic abilities notwithstanding. A house cat has way more common sense and understanding of the world than any LLM."

As should be no surprise, there are many different opinions on this. Some top AI experts think that AGI could happen relatively soon, say within the next decade. Others are much more pessimistic. Rodney Brooks, who is the cofounder of iRobot, says it will not happen until the year 2300.

A major challenge with AGI is that intelligence remains something that is not well understood, and it is also difficult to measure. Granted, there is the Turing test. Alan Turing set forth this concept in a paper he published in 1950 entitled "Computing Machinery and Intelligence." He was a brilliant mathematician and actually developed the core concepts for modern computer systems. In his research paper, he said that it was impossible to define intelligence, but there was an indirect way to understand and measure it. This was something he called the Imitation Game, a thought experiment. The scenario is that there are three rooms: humans are in two of them, and a computer is in the other. A human carries on a conversation, and if they cannot tell the difference between the human and the computer, then the computer has reached human-level intelligence. Turing said that this would happen by the year 2000, but that proved way too optimistic. Even today, the test has not been cracked.

Note: Science fiction writer Philip K. Dick used the concept of the Turing test for his Voight-Kampff test, which determines whether someone is human or a replicant. He used it in his 1968 novel, Do Androids Dream of Electric Sheep? Hollywood turned this into a movie in 1982: Blade Runner.

While the Turing test is useful, there will need to be other measures. After all, intelligence is more than just conversation; it is also about interacting with our environment. Something even as simple as making a cup of coffee can be exceedingly difficult for a machine to accomplish. And what about text-to-image systems like DALL-E or Stable Diffusion? How can that intelligence be measured? Researchers are working on various measures, but there remains considerable subjectivity in the metrics.

Jobs

In 1928, British economist John Maynard Keynes wrote an essay called "Economic Possibilities for Our Grandchildren." It was a projection of how automation and technology would impact the workforce a century later. His conclusion: there would be a 15-hour workweek. In fact, he said even this work would not be necessary for most people because of the high standard of living. It's certainly a utopian vision. However, Keynes did note some of the downsides. He wrote: "For the first time since his creation man will be faced with his real, his permanent problem - how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won."

As AI gets more powerful, it's certainly a good idea to think about such things. What might society look like? How will life change? Will it be better - or worse?

It's true that technology has disrupted many industries, which has led to widespread job losses. Yet new opportunities for employment have always emerged. After all, in 2023 the US unemployment rate was the lowest since the late 1960s. But there is no guarantee that the future will see a similar dynamic. AI could ultimately automate hundreds of millions of jobs - if not billions. Why not? In a capitalist system, owners will generally focus on low-cost approaches, so long as there is not a material drop in quality. But with AI, there could be not only much lower costs but also much better results.

In other words, as the workplace becomes increasingly automated, there will need to be a rethinking of the concept of "work." This could be tough, since many people find fulfillment in their careers. The result could be more depression and even addiction. This has already been the case for communities that have been negatively impacted by globalization and major technology changes.

To deal with these problems, one idea is universal basic income, or UBI. This means providing a certain amount of income to everyone, essentially creating a safety net. And this could certainly help. But with the trend of income inequality, there may not be much interest in a robust redistribution of wealth. This could also mean resentment among the many people who feel marginalized by the impacts of AI.

This is not to say that the future is bleak. But again, it is still essential that we look at the potential consequences of sophisticated technology like generative AI.

Conclusion

Moore's Law has been at the core of the growth in technology for decades. It posits that - every two years or so - there is a doubling of the number of transistors on an integrated circuit. But it seems that the pace of growth is much higher for AI. Venture capitalists at Greylock Partners estimate that the doubling is occurring every three months. Yet it seems inevitable that there will be a seismic impact on society. This is why it is critical to understand the technology and what it can mean for the future. But even more importantly, we need to be responsible with the powers of AI.
Tags: Artificial Intelligence, Book Summary, Technology