Ron Lancaster

Thoughts on tech and leadership

The stunning advances in AI this year further underline the imperative for every company to invest in incorporating these capabilities into its business.

Over the past several years, AI models have demonstrated the ability to see, hear, and speak - as well as or better than humans. AI has bested experts in almost every type of game, from board games to video games to game shows. And robotics has exceeded human-level ability to navigate complex environments.

Until now, it’s been easy to dismiss these advances as merely capable computers whose disruption was limited to certain industries and endeavors.

However, in the past few months, OpenAI and other leading companies have released AI models that can create content as well as or better than people. AI can create art and music, write poetry and essays, write code, find software issues, answer questions, offer advice, and more.

A few years ago, this was not the type of disruption that was widely discussed. Instead, disruptive automation was expected to more narrowly target menial and repetitive tasks - not human-level creativity.

Using generative technology today

The big advance that arrived over the summer is generative AI: models that can create realistic images, write poetry and software, and more. Twitter, Reddit, and YouTube are full of examples that are exceptionally good.

A few months ago, realistic image generators such as Stable Diffusion, DALL-E 2, and Midjourney were publicly released - either via API or as open-source models - and the Internet went wild. HuggingFace, an AI community for sharing models and datasets, is approaching 100,000 models, with many uploaded or updated in the past year. A variety of these are variations of the open-source image generators, fine-tuned for special cases such as generating images of cars, realistic photos, architecture, and more.
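
If you want to experiment with one of the open-source image generators yourself, the sketch below shows roughly what that looks like using Hugging Face’s diffusers library. Treat it as a minimal sketch: the checkpoint name and prompt are merely illustrative, and any of the fine-tuned variants on the hub can be substituted.

```python
# Minimal sketch: generating an image with an open-source Stable Diffusion
# checkpoint from the Hugging Face hub.
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# "CompVis/stable-diffusion-v1-4" is one publicly hosted checkpoint;
# fine-tuned variants from the hub can be dropped in here instead.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is effectively required for reasonable speed

prompt = "a photorealistic red sports car on a mountain road at sunset"
image = pipe(prompt).images[0]  # the pipeline returns a list of PIL images
image.save("car.png")
```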

And this past week, OpenAI released ChatGPT, an iteration of GPT-3.5 trained to answer questions in a conversational manner (and with some memory of the earlier conversation). GPT-3.5 itself was an iteration of GPT-3 trained to follow instructions, and was the cornerstone of a huge improvement in the algorithm’s ability to generate useful text and code.
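
To make the instruction-following behavior concrete, here is roughly what a call to an instruction-tuned GPT-3.5 model looks like through OpenAI’s Python library. ChatGPT itself is currently available only through its web interface, so this sketch uses text-davinci-003, the instruction-tuned model exposed via the API; the prompt is illustrative.

```python
# Sketch: calling an instruction-tuned GPT-3.5 model via OpenAI's
# Python library (pip install openai).
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",  # an instruction-tuned GPT-3.5 model
    prompt="Write a four-line poem about winter in the style of Robert Frost.",
    max_tokens=100,
    temperature=0.7,  # higher values produce more varied output
)
print(response.choices[0].text.strip())
```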

ChatGPT has received so much press because it’s amazingly good. It’s also currently free and has a chat-like interface, making it accessible to a wide audience. It’s so good that individuals have used ChatGPT to earn passing scores on college entrance exams and to produce passing college-level essays.

In the same way that many of us begin with a Google search or a calculator when doing our jobs, we should also have these new generative AI tools available in our daily work. And in a growing set of use cases, we should start with AI and then improve on the outcome, instead of the reverse.

If you experiment with these more innovative uses, be aware that because these models are creative and were trained on large swaths of internet text and images, the results are sometimes inflammatory or inaccurate.

In short, with this latest generation a human still needs to decide whether the result is fit for publishing - but the first draft is generally rather good.

Intelligent software

In considering the scale of possible disruption from AI, the technology that became publicly available this year is analogous to the first iPhone and the mobile revolution that followed it. That is, eventually most people in the world would access the internet via a smartphone - and companies that wanted to reach those people would need to offer a mobile solution.

Similarly, most people will come to expect some level of intelligence in the software they interact with, and companies that want to keep their business will need to offer it.

By intelligent, I mean that customers will expect the software they interact with to be personalized, to be generative, and to act on their behalf.

Personalization requires that the software automatically tailors the experience to meet the individual’s expectations and needs. Increasingly, this is accomplished by recording the user’s interactions with the software and then applying machine learning to anticipate and recommend the best options for that individual.
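
As a toy illustration of this pattern, the sketch below builds a tiny item-to-item recommender from a made-up table of recorded interactions. Real systems use far richer signals and models, but the shape is the same: log interactions, compute similarity, rank the unseen options.

```python
# Toy sketch of interaction-based personalization: recommend items
# similar to the ones a user has already engaged with.
import numpy as np

# Rows = users, columns = items; 1 = the user interacted with the item.
# (In practice this comes from logged clicks, views, purchases, etc.)
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
])

# Item-to-item cosine similarity over the interaction columns.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(user: int, top_n: int = 2) -> list[int]:
    """Score unseen items by their similarity to the user's history."""
    seen = interactions[user].astype(bool)
    scores = similarity[:, seen].sum(axis=1)
    scores[seen] = -np.inf  # never re-recommend items already seen
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user=0))  # the two unseen items, ranked by similarity
```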

Generative software employs AI to help the individual create something new, better and faster than they otherwise could. The latest AI models can create images, text, video, music, and more - often with little more than a text prompt describing the desired result. Going forward, amateurs and professionals alike will expect the software they work with to create at least a first draft, if not the final result, based on their instruction.

Automation is where the software acts on behalf of the individual to streamline repetitive tasks. Traditionally, this has meant taking some set of actions at the individual’s direction. As AI continues to advance, software should be able to aid the user proactively - perhaps still asking the individual for permission - without being prompted.
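
A hypothetical sketch of that permission-gated pattern: the software notices a repeated action in its log and offers to take it over. The action names and threshold here are invented for illustration.

```python
# Hypothetical sketch: software notices a repetitive action and
# proactively offers to automate it - with the user's permission.
from collections import Counter

# In a real product this log would come from observed user behavior.
action_log = ["archive:newsletter", "reply:boss", "archive:newsletter",
              "archive:newsletter"]

def suggest_automation(log: list[str], threshold: int = 3) -> None:
    action, count = Counter(log).most_common(1)[0]
    if count >= threshold:
        # Proactive, but still asks the individual before acting for them.
        answer = input(f"You've done '{action}' {count} times. Automate it? [y/N] ")
        if answer.strip().lower().startswith("y"):
            print(f"OK - '{action}' will now run automatically.")

suggest_automation(action_log)
```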

Note: Intelligent software is a somewhat amorphous term with a variety of definitions. The above definition has been chosen in line with the latest advances in AI.

Dangers in using AI

As we increase our adoption of AI, it’s imperative to understand the many ways it can go wrong.

There are several potential dangers to using AI in business, including:

  • Ethical concerns: The use of AI can raise ethical concerns, particularly if it is used to make decisions that affect people’s lives, such as in the criminal justice system or healthcare. Ensuring that AI systems are fair, transparent, and accountable is essential to avoid ethical violations.
  • Security: AI systems can be vulnerable to security threats, such as hacking or other forms of cyber-attack. This can lead to sensitive business data being exposed or stolen, or the AI system itself being compromised.
  • Lack of accountability: Because AI systems are often complex and opaque, it can be difficult to determine who is responsible when things go wrong. This lack of accountability can create legal and ethical issues.

An important consideration is bias in AI - such as sexism, racism, ageism, and ableism - which can go undetected until reported by an individual who was harmed.

Biases in AI are most often introduced through:

  1. The data used to train the AI system is biased: If the data used to train an AI system is biased, the system will learn to make decisions that reflect that bias.
  2. The algorithms used to process the data are biased: AI algorithms can sometimes be biased because they are based on assumptions or stereotypes that are not true or fair.
  3. Human bias is introduced during the development or deployment of the AI system: Even if the data and algorithms used to build an AI system are not inherently biased, human bias can still be introduced during these stages.

Unfortunately, bias is common in AI and can have profound consequences.

It’s important for companies employing AI to ensure they use tools designed to evaluate fairness and to introduce safeguards and controls that reduce the risk of harm to individuals. Companies must also have a plan to address bias if it’s found, and to offer redress and remedies if the algorithm causes harm.
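
As one concrete example of such a tool, the fairlearn library can compute group-fairness metrics over a model’s decisions. The sketch below checks demographic parity - whether two groups receive positive decisions at the same rate - on fabricated data.

```python
# Sketch of a simple fairness check using the fairlearn library
# (pip install fairlearn). All data here is fabricated for illustration.
from fairlearn.metrics import demographic_parity_difference

# Model decisions (1 = e.g. loan approved) and the protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# 0.0 means both groups receive positive decisions at the same rate;
# larger values indicate a disparity worth investigating.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"demographic parity difference: {dpd:.2f}")  # 0.50 on this toy data
```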

Ethical AI

To safeguard the use of AI, consider creating an AI ethics committee to establish ethical guidelines and best practices for its use within the company. One approach is to compose the committee of a diverse group of individuals, including representatives from different departments within the company and external experts on AI ethics.

The committee would be responsible for regularly reviewing and updating the company’s AI ethics guidelines and for conducting regular audits to ensure that the company’s use of AI is in line with them. Additionally, the committee should provide training and education to employees on ethical AI practices and could serve as a resource for employees who have questions or concerns about the ethical use of AI in their work.

Overall, the goal of this governance model is to ensure that the company’s use of AI is transparent, responsible, and aligned with the organization’s values and principles.

Looking ahead - AI in 2023

It’s widely expected that the billions of dollars of investment in AI will continue to grow.

The key drivers of this investment are Google, Microsoft, Amazon, NVIDIA, OpenAI, and DeepMind, along with other contributing organizations such as Stability AI and Midjourney, and substantial collaborations by open-source contributors across the world.

Key expectations for 2023:

  • Open-source collaborators will continue to replicate the major accomplishments of the big providers noted above - though lagging them by several months.
  • Text-to-video is expected to reach the resolution and frame rates needed for production use.
  • OpenAI is expected to release GPT-4, their next iteration of the already impressive GPT-3.5 noted earlier. Many believe this will be transformative in its impact and significantly alter what is thought to be possible.
  • Google Pathways and DeepMind Gato2 might be made publicly available in some form. What’s interesting about these two approaches is that they are generalized - that is, the AI can carry out a wide variety of tasks. Instead of focusing on one task, such as generating text or images, these algorithms can perform many tasks successfully.