The Augmented Age

The future is now.

[Image: The Augmented Age - Human History]

There have been four major historical eras defined by the way we work:

  • The Hunter-Gatherer Age lasted several million years.

  • The Agricultural Age lasted several thousand years.

  • The Industrial Age lasted a couple of centuries.

  • The Information Age has lasted just a few decades.

And today, we're on the cusp of our next great era as a species: the Augmented Age.

"In this new era, your natural human capabilities are going to be augmented by computational systems that help you think, robotic systems that help you make, and a digital nervous system that connects you to the world far beyond your natural senses", Maurice Conti.

By way of example, most of us are already augmented today: when we want the answer to a question, we likely use our smartphones for instant access to the world's knowledge.

And this augmentation is rapidly improving, in large part because of the explosive growth of artificial intelligence applied to practical problems. The growth is explosive not only in the number of applications, but also in the advances of the underlying techniques. Every day brings news of another feat achieved by artificial intelligence.

An exponential rate of change

Reviewing the timelines of the major eras above, we are immediately struck by the sense that the pace of change is increasing - and increasing at an exponential rate.

To really get a sense of exponential growth, we can use a thought experiment: would you prefer to get a million dollars right now, or instead, a penny that doubles in value every day for a month? If you chose the million dollars then, sadly, you're missing out, because a penny doubling every day for 30 days is worth over $5 million (536,870,912 pennies). And to drive the point home: if the month has 31 days, the value is over $10 million. That's the power of compounding.
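If you'd like to check the arithmetic yourself, it takes only a few lines of Python (day n of the month is worth 2^(n-1) pennies):

```python
# Value of a penny that doubles every day: day n is worth 2**(n - 1) cents.
def penny_dollars(days: int) -> float:
    return 2 ** (days - 1) / 100  # convert cents to dollars

print(penny_dollars(30))  # 5368709.12  -> over $5 million
print(penny_dollars(31))  # 10737418.24 -> over $10 million
```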

We can apply this same rate of change to advances in artificial intelligence. One of the arguments underpinning the exponential growth of AI is that computing power doubles roughly every two years (Moore's Law). And compute power is one of the drivers of capability (though certainly not the only one).

Wait But Why illustrates this rate of change using the analogy of AI as a train:

[Image: The Augmented Age - Human-Level Intelligence Station]

Imagine that AI is a train way off in the distance. It's gotten off to a slow start, but because technology advances at an exponential pace, we can assume that AI will advance similarly. As the train picks up speed - doubling every couple of years - we look down the tracks and watch it jump from a speck on the horizon to looming large. And then, in the blink of an eye, artificial intelligence surpasses human intelligence and just keeps on doubling.

This is why AI is the key agent of change driving the Augmented Age.

What's different this time?

"For the last three-and-a-half million years, the tools that we've had have been completely passive. They do exactly what we tell them and nothing more. Our very first tool only cut where we struck it. The chisel only carves where the artist points it. And even our most advanced tools do nothing without our explicit direction", Maurice Conti.

This time, it's different: for the first time in human history, our tools are no longer passive - they actively assist us in achieving our goals.

1.0 - Artificial Intelligence

In a number of areas - including seeing, hearing, and speaking - artificial intelligence is matching or exceeding human capability.

1.1 - Seeing

In 2015, AI could correctly identify images 96.5% of the time, compared to humans at 94.9%.

[Image: The Augmented Age - Seeing]

AI is FDA approved: in April 2018, for the first time, the US FDA approved an AI diagnostic device that doesn't need a specialized doctor to interpret the results.

We have ample evidence that AI can exceed the human ability to recognize images. By way of example, TechCrunch recently reported on that FDA approval: the device detects diabetic retinopathy by analyzing an image of the patient's retina, with no specialist required to interpret the results.

Of course, what's remarkable isn't just that the software is good at detection - it's that it's accurate enough not to require a human.
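To make the "seeing" capability concrete, here's a minimal sketch of image classification using an off-the-shelf pretrained network (torchvision's ResNet-50; the filename and the top-5 printout are placeholders). To be clear, the FDA-approved device uses a purpose-built model trained on retinal scans - this general ImageNet classifier just illustrates the underlying technique:

```python
# Minimal sketch: classify an image with a pretrained network (torchvision ResNet-50).
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(pretrained=True)  # weights trained on ImageNet
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)

top5 = torch.topk(probabilities, k=5)
print(top5.indices, top5.values)  # the five most likely ImageNet classes
```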

1.2 - Hearing

In 2017, speech recognition surpassed 95% accuracy.

[Image: The Augmented Age - Hearing]

In the case of recognizing speech, AI has surpassed 95% accuracy and is closing in on human parity. One enterprising company, Waverly Labs, is using this technology to offer real-time language translation.

Here's how it works, as described by the company: "Using specially designed noise-canceling microphones, the Pilot earpiece recognizes speech and filters out ambient noise. The Pilot App then uses speech recognition, machine translation and speech synthesis to translate what was spoken. The second Pilot earpiece plays the newly translated speech for the other person."

It's like the Babel fish made real (just, without the fish).
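That description is three well-understood stages chained together: speech recognition, machine translation, and speech synthesis. Here's a hypothetical skeleton of such a pipeline in Python - the method bodies are placeholders, since Waverly Labs' actual models are proprietary:

```python
# Hypothetical skeleton of a Pilot-style translation pipeline.
# Each stage is a placeholder where a real model would plug in.
from dataclasses import dataclass

@dataclass
class TranslationPipeline:
    source_lang: str
    target_lang: str

    def transcribe(self, audio: bytes) -> str:
        """Speech recognition: source-language audio -> text."""
        raise NotImplementedError  # plug in any ASR model

    def translate(self, text: str) -> str:
        """Machine translation: source-language text -> target-language text."""
        raise NotImplementedError  # plug in any MT model

    def synthesize(self, text: str) -> bytes:
        """Speech synthesis: target-language text -> audio for the earpiece."""
        raise NotImplementedError  # plug in any TTS model

    def run(self, audio: bytes) -> bytes:
        return self.synthesize(self.translate(self.transcribe(audio)))
```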

1.3 - Speaking

In late 2017, AI-generated voice was demonstrated to be indistinguishable from humans.

Computer-generated speech - even speech that sounds human - has become so common as to be unremarkable.

As such, if you'll excuse the shift in perspective, the video above illustrates a computer generating video to match human speech (arguably a more difficult task than generating lifelike speech). That is, instead of AI mimicking human speech, the researchers started from recorded human speech and used AI to visually mimic a person talking.

Note: you may have spotted instances where the lip-synching didn't perfectly match the speech. But as asserted earlier, technology is changing at a fierce pace; it's only a matter of time before video generated in this manner is indistinguishable from the real thing. And if you're hopeful that AI will evolve fast enough to detect this type of manipulation, then I admire your optimism - I view this as an arms race with the advantage to the forgers.

2.0 - Advanced Robotics

A second enabler of the Augmented Age is advanced robotics, with rapidly improving abilities to perceive, interact, and evolve.

2.1 - Perceiving

22 billion sensors will be used in the automotive industry per year by 2020.

[Image: How self-driving cars see the world]

When we think of robots, we often conjure the image of an autonomous machine that resembles a human. But if we think of a robot as "a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer," then a self-driving car fits the category. And autonomous vehicles are where the action and investment are today.

Of course, one of the critical requirements for an autonomous vehicle is the ability to sense, and make sense of, the world around it. Technologies powering automotive perception include onboard cameras, radar, lidar, and more. And as we've seen with smartphones (and virtually every technology curve), the components continue to improve, shrink, and, most importantly, exponentially drop in cost - often toward near $0.

That's just what will happen with lidar: today, a 64-laser lidar system costs close to $75,000. But as Ars Technica notes, "The bottom line is that while bringing lidar costs down will take a significant amount of difficult engineering work, there don't seem to be any fundamental barriers to bringing the cost of high-quality lidar down below $1,000—and eventually below $100." Already, systems with fewer lasers are being offered for less than $10,000.

And as these prices drop, lidar will become available for a variety of uses beyond vehicles, including drones for agriculture, drones for deep-sea exploration, smart cities, smart retail, and more.
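The core of lidar ranging, by the way, is simple physics: a pulse of light travels out to an object and back, so the distance is the speed of light times the round-trip time, divided by two. A toy sketch (the 200-nanosecond echo is just an example value):

```python
# Toy lidar ranging: distance from a pulse's round-trip time (d = c * t / 2).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_echo(round_trip_seconds: float) -> float:
    """Distance in meters to the object that reflected the pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~200 nanoseconds hit something ~30 meters away.
print(range_from_echo(200e-9))  # ~29.98
```

A real unit fires millions of such pulses per second and assembles the returns into a 3-D point cloud of the car's surroundings.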

2.2 - Interacting

The number of industrial robots will increase to ~2.6 million units by 2019.

If you're a fan of Black Mirror, then you may be familiar with the Season 4 episode "Metalhead," in which killer robot dogs chase human survivors. The episode was actually inspired by work from Boston Dynamics - whose video of a robot dog opening a door went viral on social media.

In the clip above, you can see two robot dogs collaborating to open the door - a marked improvement over the original video. It's a terrific example of the rapid advancement in robots' ability to navigate and interact with the world.

2.3 - Evolving

The human brain has ~86 billion neurons. Modern AI uses ~86 million.

One of the ways in which robotics is rapidly evolving is by training the AI "brains" of robots in a virtual environment. Take, for example, the task of training a robot to pick up differently shaped blocks. Doing so in physical space requires physical time. What researchers have demonstrated is that performing these tasks in virtual spaces allows training to be sped up by a factor of up to 50. It's fairly clear that this will be a preferred method for the future evolution of AI in robotics.
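As a back-of-the-envelope illustration (the task size below is hypothetical; only the up-to-50x factor comes from the research):

```python
# Hypothetical task: a grasping skill that needs 1,000 robot-hours of practice.
physical_hours = 1_000
speedup = 50  # up-to-50x reduction reported for simulation-based training

print(physical_hours / speedup, "hours of simulated practice")  # 20.0 hours
```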

Another way that robots are evolving is analogous to how smartphones have evolved - with apps. For example, Alexa skills have exploded and now top 25,000 in the US. The video above is Misty Robotics' announcement of this approach to advancing and evolving robotics.

This approach is particularly interesting because it harnesses the ingenuity of large numbers of people and companies. And, of course, AI can be a key utility driving those "skills".

3.0 - Human Augmentation

Humans + Robots + AI = Better Together

3.1 - Informing

Moore's Law: computational resources double every two years.

[Image: The Augmented Age - Informing]

One of the clearest uses for AI is to surface useful information for us to decide or act upon. The amount of information available to us is vast, continues to grow at a phenomenal rate, and demands more than simple filtering to ensure that only what's pertinent is highlighted.

An example of surfacing relevant information from a sea of data is China's use of facial recognition to capture criminals. Earlier this month, it was reported that a man attending a concert with 60,000 attendees was arrested after being identified by facial recognition. Today, China has over 170 million CCTV cameras, with plans to more than double that number in the next three years.
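At its core, that kind of search is an embedding comparison: map each face to a feature vector, then measure its similarity against a watchlist. A minimal sketch, assuming a hypothetical `embed` function standing in for a trained face-recognition model:

```python
# Sketch of face matching at scale via cosine similarity of embeddings.
import numpy as np

def embed(face_crop) -> np.ndarray:
    """Hypothetical: map a face image to a unit-length feature vector."""
    raise NotImplementedError  # a trained face-recognition CNN goes here

def best_match(query: np.ndarray, watchlist: np.ndarray, threshold: float = 0.6):
    """Index of the closest watchlist entry, or None if below threshold.

    Rows of `watchlist` are unit-length embeddings, so the dot product
    with a unit-length query is the cosine similarity. The 0.6 threshold
    is illustrative; real systems tune it to trade off false matches.
    """
    scores = watchlist @ query
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```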

3.2 - Deciding

Law of Accelerating Returns: technological change is exponential.

[Image: The Augmented Age - Deciding]

I'm willing to bet you'll let an AI make decisions for you.

Undecided? Consider whether you've ever used a mapping application to plan a trip to a place you've never been before. And if you have, did you follow the directions? If so, then you've already allowed algorithms to make decisions on your behalf. Going forward, it's just a matter of how often, and how serious the consequences of those decisions are.
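Under the hood, that routing decision is a shortest-path computation over a road graph. Here's a minimal Dijkstra sketch over a toy graph (the roads and travel times are made up):

```python
# Minimal Dijkstra shortest-path over a toy road graph (weights = minutes).
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, minutes), ...]} -> (total_minutes, route)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, route + [neighbor]))
    return float("inf"), []

roads = {
    "home": [("highway", 10), ("back road", 15)],
    "highway": [("office", 20)],
    "back road": [("office", 12)],
}
print(shortest_path(roads, "home", "office"))  # (27, ['home', 'back road', 'office'])
```

Real mapping services layer live traffic, predictions, and personalization on top, but the decision they hand you starts with this kind of algorithm.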

Taking a more serious example: would you prefer that a doctor diagnose your cancer, or an artificial intelligence? According to Samuel Nussbaum of WellPoint, the average diagnostic accuracy rate for lung cancer among human physicians is 50%. In comparison, IBM Watson's diagnostic accuracy for lung cancer is 90%.

This is in part because the AI was trained on over 600,000 unique types of medical evidence and over 1.5 million patient records. Additionally, new medical information is published with such frequency that physicians would have to read for at least 160 hours every week to stay current.

It's possible that, in the future, it will be considered unethical for doctors not to consult an AI before prescribing a treatment plan.

3.3 - Accomplishing

Martec’s Law: technology changes exponentially, but organizations change logarithmically.

As noted earlier, for the first time in human history, our tools are no longer passive, but actively assist us in accomplishing our goals.

Since robots and AI have been with us for decades, we already find them assisting us in a wide variety of ways: moving product in Amazon warehouses, filming us with drones, and, in some cases, maybe even fostering an over-reliance on robots.

Festo, a German robotics company, has been doing amazing work in mimicking animals, including ants, butterflies, spiders, flying jellyfish, penguins, kangaroos, seagulls, and even a bionic flying fox.

And looking forward: in the video above, Festo's Bionic Workplace hints at the kind of Humans + Robots + AI collaboration the future may hold.

4.0 - A Framework for the Augmented Age

Artificial Intelligence + Advanced Robotics = Human Augmentation

I'm hopeful that this has given you a useful framework for reasoning about the changes we'll see during this next Augmented Age:

  1. Artificial Intelligence - Seeing, Hearing, Speaking

  2. Advanced Robotics - Perceiving, Interacting, Evolving

  3. Human Augmentation - Informing, Deciding, Accomplishing

In closing, I'll leave you with this thought: we're living in that magical time where the science fiction stories of our childhood are becoming the reality of today. It's astonishing to think of the ways that computers match or exceed human abilities, including speaking, hearing, and seeing.

Building on these capabilities will enable us to improve lives. I'm excited about what the future brings.


Get In Touch

If you're in the Twin Cities, drop me a note and we can get coffee sometime. Otherwise, check out the links at the bottom of this page for Twitter and LinkedIn. Lastly, you can contact me on the About page of this site.

 

Sources and References

Maurice Conti: https://www.ted.com/talks/maurice_conti_the_incredible_inventions_of_intuitive_ai/transcript#t-31406

Penny Doubling: https://answers.yahoo.com/question/index?qid=1006052529497

Wait But Why: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

1.0 - Artificial Intelligence

1.1 - Seeing

The Microsoft creation got a 4.94 percent error rate for the correct classification of images in the 2012 version of the widely recognized ImageNet dataset, compared with a 5.1 percent error rate among humans, according to the paper. The challenge involved identifying objects in the images and then correctly selecting the most accurate categories for the images, out of 1,000 options. Categories included “hatchet,” “geyser,” and “microwave.”

Statistic: https://venturebeat.com/2015/02/09/microsoft-researchers-say-their-newest-deep-learning-system-beats-humans-and-google/

Statistic: https://venturebeat.com/2015/12/10/microsoft-beats-google-intel-tencent-and-qualcomm-in-image-recognition-competition/

The AI100 index estimates that object recognition in 2014 reached an accuracy rate of 95 percent, which Stanford believes is equivalent to human performance. Since then, the accuracy level has increased steadily and is closing in on 100 percent. Speech recognition hit the 95 percent level in 2017, according to the index.

Source: https://www.theverge.com/2018/4/11/17224984/artificial-intelligence-idxdr-fda-eye-disease-diabetic-rethinopathy

1.2 - Hearing

Statistic: https://www.veritone.com/insights/standford-index-ai-image-speech-recognition-equals-betters-human-levels/

Pilot by Waverly Labs: https://www.waverlylabs.com

1.3 - Speaking

A research paper published by Google this month—which has not been peer reviewed—details a text-to-speech system called Tacotron 2, which claims near-human accuracy at imitating audio of a person speaking from text.

Statistic: https://qz.com/1165775/googles-voice-generating-ai-is-now-indistinguishable-from-humans/

Video: https://www.youtube.com/watch?v=AmUC4m6w1wo

If AI can be used to face swap, can't it also be used to detect when such a practice occurs? The team, led by Andreas Rossler at the Technical University of Munich, developed machine learning that is able to automatically detect when videos are face swapped. They trained the algorithm using a large set of face swaps that they made themselves, creating the largest database of these kind of images available. They then trained the algorithm, called XceptionNet, to detect the face swaps.

Source: https://www.engadget.com/2018/04/11/machine-learning-face-swaps-xceptionnet/

XceptionNet clearly outperforms its rival techniques in detecting this kind of fake video, but it also actually improves the quality of the forgeries. Rossler's team can use the biggest hallmarks of a face swap to make the manipulation more seamless.

Source: https://www.engadget.com/2018/04/11/machine-learning-face-swaps-xceptionnet/

2.0 - Advanced Robotics

2.1 - Perceiving

Statistic: http://www.automotivesensors2017.com

Image: https://www.leafandcore.com/2016/03/02/a-google-self-driving-car-may-have-caused-an-accident/

Lidar works much like radar, but instead of sending out radio waves it emits pulses of infrared light—aka lasers invisible to the human eye—and measures how long they take to come back after hitting nearby objects. It does this millions of times a second, then compiles the results into a so-called point cloud, which works like a 3-D map of the world in real time—a map so detailed it can be used not just to spot objects but to identify them. Once it can identify objects, the car's computer can predict how they will behave, and thus how it should drive.

Self-driving cars use other sensors to see, notably radars and cameras, but laser vision is hard to match. Radars are reliable, but don't offer the resolution needed to pick out things like arms and legs. Cameras deliver the detail, but require machine-learning-powered software that can translate 2-D images into 3-D understanding.

Source: https://www.wired.com/story/lidar-self-driving-cars-luminar-video/

Definition: https://en.oxforddictionaries.com/definition/robot

Source: https://arstechnica.com/cars/2018/01/driving-around-without-a-driver-lidar-technology-explained/

Source: https://www.iotforall.com/lidar-technology/

2.2 - Interacting

Statistic: https://www.automation.com/automation-news/industry/ifr-report-14-million-new-industrial-robots-in-factories-by-2019

Creepy Robot Dog: https://www.youtube.com/watch?v=skFlAnvPSNQ

The robot is full of lethal tricks, ranging from operating a car to re-charging from the sun. Yet perhaps the eeriest moment is when the overturned robot simply pushes itself back upright to regain its footing — as that’s something we’ve actually seen robots do in Boston Dynamics online videos. It’s perhaps the most chilling vision yet of the well-worn killer robot trope since the robot’s mechanics overlay so closely with real footage we’ve seen.

Source: http://ew.com/tv/2017/12/29/black-mirror-metalhead-interview/

2.3 - Evolving

Statistic: http://www.nybooks.com/articles/2016/11/24/86-billion-neurons-herculano-houzel/

Deriving the number of neurons in AI systems from this number is a stretch since these AIs emulate certain types of connections and sub assemblies of neurons, but let's continue...

Statistic: https://ai.stackexchange.com/questions/2330/when-will-the-number-of-neurons-in-ai-systems-equal-the-human-brain

By using synthetic data and domain adaptation we are able to reduce the number of real-world samples required to achieve a given level of performance by up to 50 times, using only randomly generated objects in simulation. This means that we have no prior information about the objects in the real world, other than pre-specified size limits for the graspable objects.

Source: https://research.googleblog.com/2017/10/closing-simulation-to-reality-gap-for.html

Elon Musk's artificial intelligence platform OpenAI introduced a new program to train robots entirely in simulation. Now they've added a new algorithm, named one-shot imitation learning, which will only require humans to demonstrate a task once in VR for a robot to learn it. Teaching robots entirely in simulation could allow researchers to train them for complex tasks without needing physical elements at all. That would let humans safely and easily approximate extreme environments like arctic waters or areas soaked in nuclear radiation -- or even other planets.

Source: https://www.engadget.com/2017/05/16/openai-s-new-system-lets-you-train-robots-entirely-in-vr/

Voicebot was the first to report right before CES 2017 that Amazon had reached a new milestone with 7,053 Alexa skills for U.S. users. Since that time Amazon Alexa skills grew 266% to start 2018 at 25,784 in the U.S. Alexa skill growth has also risen quickly in the UK and Germany.

Source: https://www.voicebot.ai/2018/01/08/amazon-closes-year-with-266-alexa-skill-growth-u-s/

Misty Robotics: https://vimeo.com/267366586

Disclosure: Ben Edwards, Head of Community at Misty Robotics, is a friend of mine, and I'm excited about what they are doing.

3.0 - Human Augmentation

3.1 - Informing

Moore's Law: https://en.wikipedia.org/wiki/Moore%27s_law

Chinese police have used facial recognition technology to locate and arrest a man who was among a crowd of 60,000 concert goers.

China has a huge surveillance network of over 170 million CCTV cameras.

Source: http://www.bbc.com/news/world-asia-china-43751276

For 40-year-old Mao Ya, the facial recognition camera that allows access to her apartment house is simply a useful convenience. “If I am carrying shopping bags in both hands, I just have to look ahead and the door swings open,” she said. “And my 5-year-old daughter can just look up at the camera and get in. It’s good for kids because they often lose their keys.”

But for the police, the cameras that replaced the residents’ old entry cards serve quite a different purpose.

The pilot in Chongqing forms one tiny part of an ambitious plan, known as “Xue Liang,” which can be translated as “Sharp Eyes.” The intent is to connect the security cameras that already scan roads, shopping malls and transport hubs with private cameras on compounds and buildings, and integrate them into one nationwide surveillance and data-sharing platform.

It will use facial recognition and artificial intelligence to analyze and understand the mountain of incoming video evidence; to track suspects, spot suspicious behaviors and even predict crime; to coordinate the work of emergency services; and to monitor the comings and goings of the country’s 1.4 billion people, official documents and security industry reports show.

At the back end, these efforts merge with a vast database of information on every citizen, a “Police Cloud” that aims to scoop up such data as criminal and medical records, travel bookings, online purchase and even social media comments — and link it to everyone’s identity card and face.

A goal of all of these interlocking efforts: to track where people are, what they are up to, what they believe and who they associate with — and ultimately even to assign them a single “social credit” score based on whether the government and their fellow citizens consider them trustworthy.

Source: https://www.washingtonpost.com/news/world/wp/2018/01/07/feature/in-china-facial-recognition-is-sharp-end-of-a-drive-for-total-surveillance/

3.2 - Deciding

Law of Accelerating Returns: http://www.kurzweilai.net/the-law-of-accelerating-returns

Image: https://jbchicago.com/googles-waze-app-changing-advertising-game/

Waze has struck a data-sharing agreement with Waycare, an artificial intelligence-based traffic management startup, the two companies announced today. The deal will allow them to combine anonymized navigation information crowdsourced from the 100 million drivers who use Waze with Waycare’s proprietary traffic analytics.

Source: https://techcrunch.com/2018/04/26/waze-signs-data-sharing-deal-with-ai-based-traffic-management-startup-waycare/

Approximately two years ago, IBM researchers announced that Watson had the same level of knowledge as a second-year med-school student. As of now, Watson has assimilated over 600,000 unique types of medical evidence. In addition, Watson’s database includes two million pages sourced from a variety of different medical journals. To improve the link between symptoms and a diagnosis, Watson also has the ability to search through 1.5 million patient records to learn from previous diagnoses. This amount of information is more than any human physician can learn in a lifetime.

According to a study by Sloan-Kettering, only one-fifth of knowledge used by physicians when diagnosing a patient is based on trial-based information. To stay on top of new medical knowledge as it is published, physicians would have to read for at least 160 hours every week. Since Watson can absorb this information faster than a human, it could potentially revolutionize the current model of healthcare.

According to Samuel Nessbaum of Wellpoint, Watson’s diagnostic accuracy rate for lung cancer is 90%. In comparison, the average diagnostic accuracy rate for lung cancer for human physicians is only 50%.

Source: http://republic-of-innovation.ch/ibms-watson-could-diagnose-cancer-better-than-doctors/

In February, Medicare announced that it would pay for an annual lung cancer screening test for certain long-term smokers. Medicare's decision was partly a response to a 2011 study showing that screenings with the technique could reduce lung cancer deaths by 20 percent.

But as more and more people are getting screened for lung cancer, other doctors worry the test is doing more harm than good. "It's the two-edged sword," says Dr. H. Gilbert Welch. That's because some cancers grow slowly and never become dangerous, he says.

Source: https://www.npr.org/sections/health-shots/2015/04/13/398101515/why-some-doctors-are-hesitant-to-screen-smokers-for-lung-cancer

3.3 - Accomplishing

Martec's Law: https://chiefmartec.com/2016/11/martecs-law-great-management-challenge-21st-century/

Source: https://spectrum.ieee.org/automaton/robotics/robotics-hardware/festo-bionic-learning-network-rolling-spider-flying-fox

A central part of the working environment is the BionicCobot. The pneumatic lightweight robot is based on the human arm in terms of its anatomical construction and – like its biological model – solves many tasks with the help of its flexible and sensitive movements. Due to its flexibility and intuitive operability, the BionicCobot can interact directly and safely with people. In doing so, it supports workers doing monotonous jobs and takes over tasks that are dangerous for humans.

Source: https://www.festo.com/group/en/cms/13112.htm