AI Advances In Translation And Image Recognition
Google Search was built on the insight that understanding the links between webpages leads to dramatically better search results. We’ve made remarkable advances over the past 22 years, and Search now helps billions of people. To improve Search even further, we need to deepen our understanding of language and context. Doing so requires advances in some of the most challenging areas of AI, and I want to talk about a few today, starting with translation.
We learn and understand best in our native languages. So, 15 years ago, we set out to translate the web -- an incredibly ambitious goal at the time. Today, hundreds of millions of people use Google Translate each month across more than 100 languages. Last month alone, we translated more than 20 billion web pages in Chrome. With Google Assistant’s interpreter mode, you can have a conversation with someone speaking a foreign language, and usage is up four times from just a year ago. While there is still work to do, we are getting closer to having a universal translator in your pocket.
At the same time, advances in machine learning have led to tremendous breakthroughs in image recognition. In 2014, we first trained a production system to automatically label images -- a step change in computers’ understanding of visual information -- and it allowed us to imagine and launch Google Photos. Today, we can surface and share a memory, reminding you of some of the best moments in your life. Last month alone, more than 2 billion memories were viewed.
Image recognition also means you can use Google Lens to take a photo of a math problem and get help solving it. I wish I had this when I was in school. Lens is used more than 3 billion times each month. We can now be as helpful with images as we are with text.