Google Lens brings search to the physical world

Here is Sundar Pichai, on Google Lens, at Google's I/O keynote yesterday (text from Stratechery):

We are clearly at an inflection point with vision, and so today, we are announcing a new initiative called Google Lens. Google Lens is a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on that information. We’ll ship it first in Google Assistant and Photos, and then other products.

How does it work? If you run into something and you want to know what it is, say a flower, you can invoke Google Lens, point your phone at it and we can tell you what flower it is…Or if you’re walking on a street downtown and you see a set of restaurants across you, you can point your phone, because we know where you are, and we have our Knowledge Graph, and we know what you’re looking at, we can give you the right information in a meaningful way.

As you can see, we are beginning to understand images and videos. All of Google was built because we started understanding text and web pages, so the fact that computers can understand images and videos has profound implications for our core mission.

And Ben Thompson's reaction:

The profundity cannot be overstated: by bringing the power of search into the physical world, Google is effectively increasing the addressable market of searchable data by a massive amount, and all of that data gets added back into that virtuous cycle. The potential upside is about more than data though: being the point of interaction with the physical world opens the door to many more applications, from things like QR codes to payments.

Ben's excitement is contagious: AI becomes a transport layer for information between the digital and physical worlds. We're on the cusp of amazing change.
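
To make the "point your phone at a flower" idea concrete, here is a minimal sketch of vision-based labeling using an off-the-shelf ImageNet classifier. This is an illustrative assumption, not Google Lens's actual pipeline; the model, labels, and file path are placeholders.

```python
# A toy illustration of "point your camera at a flower and get a label":
# classify a single photo with a pretrained ImageNet model.
from PIL import Image
import torch
from torchvision import models

# Pretrained ResNet-50 plus the preprocessing it expects.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def label_photo(path: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the model's top-k (category, probability) guesses for an image."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: [1, 3, H, W]
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][i], p.item())
            for p, i in zip(top.values, top.indices)]

if __name__ == "__main__":
    # "daisy.jpg" stands in for whatever photo you point the camera at.
    for category, prob in label_photo("daisy.jpg"):
        print(f"{category}: {prob:.1%}")
```

The interesting part is everything this sketch leaves out: Lens pairs the recognition step with location and the Knowledge Graph, which is what turns a label into a search result you can act on.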