AI in 2023
Here’s a 5-minute video that I made for our executive team to better understand what AI can do today, and where it’s heading this next year.
The stunning advances in AI this year further underline the imperative for every company to invest in incorporating these capabilities into their business.
Over the past several years, AI models have demonstrated the ability to see, hear, and speak - as well as or better than humans. AI has bested experts in almost every type of game from board games to video games to game shows. And robotics has exceeded human-level ability to navigate complex environments.
Until now, it’s been easy to dismiss these as capable computers whose disruption was limited to certain industries and endeavors.
However, in the past few months, OpenAI and other leading companies have released AI models that can create content as well as or better than people. AI can create art and music, write poetry and essays, write code, find software issues, answer questions, offer advice, and more.
A few years ago, this was not the type of disruption that was widely discussed. Instead, disruptive automation was expected to more narrowly target menial and repetitive tasks - not human-level creativity.
The big advance that happened over this summer is generative AI, which can create realistic images, write poetry and software, and more. Twitter, Reddit, and YouTube are full of examples that are exceptionally good.
A few months ago, realistic image generators such as Stable Diffusion, DALL-E 2, and MidJourney were publicly released - either via API or as open-source models - and the Internet went wild. HuggingFace, an AI community for sharing models and datasets, is approaching 100,000 models, with many uploaded or updated in the past year. A variety of these are variations of the open-source image generators, fine-tuned for special cases such as generating images of cars, realistic photos, architecture, and more.
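To give a sense of how accessible these models have become, here’s a minimal sketch of generating an image with one of the open-source Stable Diffusion checkpoints through Hugging Face’s diffusers library (the checkpoint id and prompt below are purely illustrative):

```python
# A minimal sketch: generate an image with an open-source Stable Diffusion
# checkpoint hosted on HuggingFace. Checkpoint id and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # an illustrative public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU turns minutes into seconds

image = pipe("a photorealistic vintage sports car on a coastal road at sunset").images[0]
image.save("car.png")
```

Swapping in one of the community’s fine-tuned checkpoints is largely a matter of changing the model id.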
And this past week, OpenAI released ChatGPT, an iteration of GPT-3.5 trained to answer questions in a conversational manner (and with some memory of the earlier conversation). GPT-3.5 itself was an iteration of GPT-3 trained to follow instructions, and it was the cornerstone of a huge improvement in the model’s ability to generate useful text and code.
ChatGPT has received so much press because it’s amazingly good. It’s also currently free and has a chat-like interface, making it accessible to a wide audience. It’s so good that individuals have used it to get passing scores on college entrance exams and to produce passing college-level essays.
In the same way that many of us begin with a Google search or a calculator when doing our jobs, we should also have these new generative AI tools available in our daily work. And in a growing set of use cases, we should start with AI and then improve on the outcome, instead of the reverse.
If you experiment with these more innovative uses, be aware that because these models are creative and were trained on large swaths of internet text and images, the results are sometimes inflammatory or inaccurate.
In short, where we’re at right now with this latest generation is that a human needs to decide if the result is fit for publishing - but generally the first draft is rather good.
In considering the scale of possible disruption of AI, the level of technology that’s publicly available this year is analogous to the mobile revolution that occurred after the release of the first iPhone. That is, eventually most people in the world would access the internet via a smartphone - and companies that wanted to reach those people would need to offer a mobile solution.
Similarly, most people will expect some level of intelligence in the software they interact with, and companies that want to keep their business will need to offer that.
By intelligent, I mean that customers will expect the software they interact with to be personalized, to be generative, and to act on their behalf.
Personalization requires that the software automatically tailors the experience to meet the individual’s expectations and needs. Increasingly, this is accomplished by recording the user’s interaction with the software and then applying machine learning to anticipate and recommend the best options for the individual.
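As a rough illustration of the idea (not any particular product’s implementation), the sketch below logs which features hypothetical users interact with and uses simple co-occurrence counts - a stand-in for a real machine-learning recommender - to suggest what a given user is likely to want next:

```python
# A toy sketch of interaction-driven personalization (all names and data are
# hypothetical): log what each user touches, then recommend features that
# co-occur with the ones they already use.
from collections import Counter, defaultdict

interaction_log = {
    "alice": ["dashboard", "export_csv", "forecast"],
    "bob":   ["dashboard", "forecast", "alerts"],
    "carol": ["export_csv", "forecast", "alerts"],
}

# Count how often each pair of features appears together in a user's history.
co_occurrence = defaultdict(Counter)
for features in interaction_log.values():
    for a in features:
        for b in features:
            if a != b:
                co_occurrence[a][b] += 1

def recommend(user, k=2):
    """Suggest up to k features most associated with what this user already uses."""
    seen = set(interaction_log[user])
    scores = Counter()
    for feature in seen:
        scores.update(co_occurrence[feature])
    return [f for f, _ in scores.most_common() if f not in seen][:k]

print(recommend("alice"))  # ['alerts']
```

Real systems replace the counting with trained models, but the loop is the same: record, learn, anticipate.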
Generative software employs AI to help the individual create something new, better and faster than they could otherwise. The latest AI models can create images, text, video, music, and more, often with little more than a text prompt describing the desired result. Going forward, amateurs and professionals alike will expect the software they work with to create at least a first draft, if not the final result, based on their instructions.
Automation is where the software acts on behalf of the individual to streamline and automate repetitive tasks. Traditionally, this has been thought of as taking some set of actions at the individual’s direction. As AI continues to advance, software should be able to proactively aid the user - perhaps still asking the individual for permission - without being prompted.
Note: Intelligent software is a somewhat amorphous term with a variety of definitions. The above definition has been chosen in line with the latest advances in AI.
As we increase our adoption of AI, it’s imperative to understand the many ways it can go wrong.
There are several potential dangers to using AI in business.
An important consideration is bias in AI - such as sexism, racism, ageism, and ableism - which can go undetected until reported by the individual who was harmed.
Biases can be introduced at several points in how an AI system is designed, trained, and deployed.
Unfortunately, bias is common in AI and can have profound consequences.
It’s important for companies using AI to ensure they employ tools designed to evaluate fairness and to introduce safeguards and controls that reduce the risk of harm to individuals. Companies must also have a plan to address bias if it’s found and to offer redress and remedies if an algorithm causes harm.
To safeguard the use of AI, one suggestion is to create an AI ethics committee that would establish ethical guidelines and best practices for AI’s use within the company. One approach is to staff the committee with a diverse group of individuals, including representatives from different departments within the company as well as external experts on AI ethics.
The committee would be responsible for regularly reviewing and updating the company’s AI ethics guidelines and for conducting regular audits to ensure that the company’s use of AI is in line with them. Additionally, the committee should provide training and education to employees on ethical AI practices and could serve as a resource for employees who have questions or concerns about the ethical use of AI in their work.
Overall, the goal of this governance model would be to ensure that the company’s use of AI is transparent, responsible, and aligned with the organization’s values and principles.
It’s widely expected that the billions of dollars of investment in AI will continue to grow.
The key drivers of this investment are Google, Microsoft, Amazon, NVIDIA, OpenAI, and DeepMind, along with other contributing organizations such as Stability.AI and Midjourney, and substantial collaboration by open-source contributors across the world.
Key expectations for 2023:
Something exciting and perhaps incredible is happening in AI right now. In short, we’re seeing massive leaps in AI where the best models can generate human-level art, music, poetry, stories, and more.
Given the rapid advances and often stunning examples, it’s useful to consider how disruptive this might be. To frame this, we can take inspiration from the recent past.
In 2007, the introduction of the iPhone started what later became known as the Mobile Revolution. That is, eventually most people in the world would access the internet via a smartphone - and companies that wanted to reach those people would need to offer a mobile solution.
A few years later, in 2011, Marc Andreessen wrote a significant thought-piece titled “Software is eating the world”. In that essay, he made a point that’s now taken for granted:
But too much of the debate is still around financial valuation, as opposed to the underlying intrinsic value of the best of Silicon Valley’s new companies. My own theory is that we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.
More and more major businesses and industries are being run on software and delivered as online services — from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.
Link here: https://a16z.com/2011/08/20/why-software-is-eating-the-world/.
What we’ve seen over the past few years is AI surpass human-level capabilities in seeing, hearing, and speaking. And this year, we’ve witnessed AI matching and exceeding human-level capabilities in generating text, images, videos, and more.
An interesting question then is whether the current AI revolution is more like the mobile revolution - the parallel being that every company needs to adopt AI - or whether it’s more substantial than that, with almost every business and industry eventually being disrupted by AI.
Last year, Sam Altman, CEO of OpenAI, wrote “Moore’s Law for Everything”. In this passionate essay, he asserts that over the coming decades, nearly everything will be changed and disrupted by AI.
The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.
Link here: https://moores.samaltman.com.
As noted, to get to the point of near total disruption, where “AI eats the world”, we’ll need AI that can create, understand, reason, and think. Sam Altman’s position seems to be that this is a matter of “when”, not “if”.
With the AIs released just in the past several months, such as DALL-E 2, MidJourney, and Stable Diffusion, it’s clear that AIs can create. And with ChatGPT, more often than not, it appears that the AI understands what is being asked and can reply appropriately.
But no publicly-accessible model currently exhibits true reasoning and thinking.
As such, we’re not there yet; the current impact of AI is more like the mobile revolution. And yet, it’s exciting to see the rapid progress of AI and the possibility ahead.
At the point where we do have AI models that can understand, reason, and think, this will change from an AI revolution to a path of disruption more analogous to AI eating the world.
And we may not be as far away from that future as we think.
The United Nations recently estimated that the world’s population had surpassed 8 billion people. That number has doubled in the past 50 years.
It took until 1804 to surpass 1 billion people and another 123 years to double that (1927). From there, it took only about 50 years to double again to 4 billion (1974).
In 1974, one might have said that it would only take 25 years to reach 8 billion - but we know now that it took nearly 50 years, reaching that milestone this year (2022).
This graph illustrates the curve, which also shows that the growth rate has begun to slow over the past decade or so.
It’s reasonable to ask what’s ahead for growth… which the UN has modeled in the following:
Historically, this growth predominated in Europe and the Americas and then shifted to Asia, notably India. The next growth region is widely predicted to be Africa over the coming decades.
Back in 2011, when the world hit 7 billion people, the BBC ran several stories exploring what might happen as this population continued to grow. The top themes were a tendency for families to have only a single child (partly responsible for the slowing replacement rate), an aging population (due to the slowing replacement rate), and concerns of water and food shortages.
In contrast, the reaction this year was much more moderate. In large part, the above concerns haven’t changed, with the major exception that many now believe the rate of growth will continue to slow, with an expected peak of around 10 billion.
In short: happy 8 billion world! We did it!
The beginning of AI could be characterized as sophisticated pattern recognition. ML can identify patterns in data, such as classifying whether an image is of a dog or a cat, or whether a radiograph contains cancer. This is extremely valuable for a broad class of problems.
Building on this, once a pattern can be reliably identified, a prediction can be made about what’s next. For example, based on a set of real estate sales data, a prediction can be made about the sale price of the next home on the market.
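As a toy example of that kind of prediction, here’s a minimal sketch using scikit-learn with made-up sales figures:

```python
# A minimal prediction sketch with made-up sales data: fit a simple model on
# past sales, then estimate the price of the next listing.
from sklearn.linear_model import LinearRegression

# Features: [square feet, bedrooms]; target: sale price in dollars.
past_sales  = [[1400, 3], [1800, 4], [1100, 2], [2400, 4], [1600, 3]]
sale_prices = [310_000, 420_000, 240_000, 560_000, 355_000]

model = LinearRegression().fit(past_sales, sale_prices)

next_listing = [[2000, 3]]
print(f"Estimated price: ${model.predict(next_listing)[0]:,.0f}")
```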
Arguably, detecting cancer is of greater importance than home values. But in terms of comparing artificial intelligence to human intelligence, predicting the future is a higher order capability than pattern recognition.
Recently, we’ve seen an explosion of the next evolution in AI: generation of speech, images, video, and more. AI has been generating this type of content for some time, but over the summer, we saw the release of AI models that moved from okay to very good.
To follow the prior line of reasoning, the ability to predict what’s next is a necessary precursor to generating something new. A painting is a flow of color, strokes, and patterns (or pixels, as the case may be) that requires continuous “what’s next” decision-making.
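A toy sketch makes the point concrete: a model that only ever learns “what word tends to come next” can already generate new sentences by applying that single prediction over and over (real generative models are vastly more sophisticated, but the principle is the same):

```python
# A toy sketch of prediction-as-generation: learn which word tends to follow
# which, then produce a new sentence by repeatedly sampling what's next.
import random
from collections import defaultdict

corpus = "the model paints the sky and the model paints the sea and the sky glows"
words = corpus.split()

# Learn simple next-word statistics (a bigram table).
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

def generate(start="the", length=8):
    out = [start]
    for _ in range(length):
        choices = next_words.get(out[-1])
        if not choices:  # reached a word that never had a successor
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(generate())  # e.g. "the sky and the model paints the sea and"
```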
If you agree with this progression - pattern recognition -> prediction -> generation - then we may speculate on how AI will evolve. The ability of AI to generate music, art, and novels is a precursor to bi-directional communication.
There’s a strong argument that the current state of AI is only piecing together a set of known concepts and presenting them as new. However, what AI can do is consistently surprise and delight. And if creativity is the act of creating something new, then AI meets that standard today.
But, what AI can’t do yet, is reason and communicate its rationale. AI lacks context and goals; self-awareness and self-direction. However, AI is evolving at an extremely rapid pace. And importantly, extensive research is under way on how to make neural networks recursive and to use AI to make AI.
Some have characterized the next step of evolution as the Third Wave of AI (or Cognitive AI): https://www.darpa.mil/attachments/AIFull.pdf. The suggestion is that AI will be able to contextually adapt to what it’s presented - to reason and respond.
Importantly, this step probably won’t be a full AGI with complete autonomy and its own open-ended goals, but it likely will feel very real to us in a broad set of situations.
As such, it’s reasonable to suggest that the next step of evolution for AI is Cognition.
Skeptical of AI moving to Cognition? Here’s Intel asserting it will happen in the next three years:
By 2025, it is expected that human-centric Cognitive AI systems with higher machine intelligence will emerge. Machines will be able to understand language, integrate commonsense knowledge and reasoning, and adapt to new circumstances. https://www.intel.com/content/www/us/en/research/blogs/higher-machine-intelligence-for-next-gen-ai.html
At a recent AI meetup, we were asked if anyone knew a good AI joke. This seemed like a good opportunity to test if AI can tell jokes.
I started with a simple prompt using OpenAI’s GPT-3 Davinci-2 model: “tell me a joke about artificial intelligence”.
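For anyone who wants to try this themselves, the equivalent API call looks roughly like the sketch below, using the openai Python library as it stood in late 2022 (the sampling parameters are my own assumptions):

```python
# A rough sketch of the call behind this experiment, using the openai Python
# library as it existed in late 2022. Sampling parameters are assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    model="text-davinci-002",  # the "Davinci-2" model referenced above
    prompt="tell me a joke about artificial intelligence",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```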
That provided this (not very funny) option:
Why did the robot cross the road?
To get to the other side.
With these AI engines, you get what you give. A little more thought produced something better, but still not quite right: “Robin Williams is telling a joke about artificial intelligence to Steve Martin.”
“I was reading about this artificial intelligence thing,” Williams says. “It’s incredible. They say that by 2030, AI will be able to do everything humans can do, but better. They’ll be able to think faster, be more efficient, and never get tired.
“So I was thinking, what if we just put all our eggs in one basket and let the machines take over? Then we could all just sit around and do nothing all day!”
Martin laughs and says, “Yeah, but what would we do for entertainment?”
After a few tries, I ended with a prompt that was more descriptive than before: “Robin Williams and Steve Martin are both sitting on a stool in front of a microphone, telling a very funny story about artificial intelligence.”
Steve Martin:
So, there was this artificial intelligence researcher, who was working on a project to create a super intelligent computer. He was working day and night, and he finally succeeded. The computer was incredibly intelligent, and it could talk and understand everything.
The researcher was so excited that he decided to show it off to the world. He took it to a press conference, and he put it on stage in front of all the reporters.
The computer started talking, and it was so intelligent that it completely fooled everyone. It seemed like it was a real person.
The reporter asked it a question, and the computer answered perfectly. Another reporter asked a question, and the computer answered perfectly again. Everyone was amazed.
But then, one of the reporters asked the computer a question that it couldn’t answer. The computer hesitated for a moment, and then it said, “I’m sorry, I don’t know the answer to that.”
The reporter looked at the researcher and said, “See? Your computer is not as intelligent as you think it is.”
The researcher was shocked. He didn’t know what to say.
But then the computer spoke up again and said, “Actually, I do know the answer to that question. I just didn’t want to tell you because it’s not polite to show off.”
Humor is tough, and often there’s more to comedy than just the joke: the delivery, timing, and body language. But several iterations were thought-provoking and a few were laugh-out-loud funny.
Regardless, the AI-generated attempts are impressive!
HN user EddieDante:
It’s called “Medium” because the posts are neither rare nor well-done.
Brilliant!
I’d heard of the disasters happening in our national parks, but I didn’t really know why:
Climate change has increased temperatures across the United States. But because so many national parks are at high elevations, in the arid Southwest or in the Arctic, they are being disproportionately affected by global warming. A 2018 study found that temperatures in national parks are rising at twice the rate as the country as a whole.
The examples of what’s happening at our parks are devastating.
…climate change caused by humans—is well-established science. The evidence is overwhelming, and attempted rebuttals are incomplete, flawed, or fabricated.
When I read this, I wondered if this is still a contested issue - and, yes, it is. Fourteen percent of American adults don’t agree that global warming is happening and about that same number are unsure.
Source: Yale Climate Opinion Maps 2021
McKinsey surveyed 25,000 Americans. Some care should be taken with the conclusions, as this is an online survey and may underrepresent “people with lower incomes, less education, people living in rural areas, or people aged 65 and older”.
However, the results are still informative:
Notably, of those looking for a job, the top reasons were pay and career opportunities, which were more important than the number three reason of working remotely.
Beyond these top three, it’s a great reminder that leaders need to focus on creating purpose and building a great team to keep their high performers.