Almost two decades ago, the head of Google's then nascent enterprise division referred to the firm's search service as "an uber-command line interface to the world."
These days, Google is interested in other modes of input too. On Thursday, Google announced changes to its search service that emphasize visual input via Google Lens, either as a complement to text keywords or as an alternative, and audio input. And it's using machine learning to reorganize how some search results get presented.
The transition to multi-modal search input has been ongoing for some time. Google Lens, introduced in 2017 as an app for the Pixel 2 and Pixel XL phones, was decoupled from the Google Camera app in late 2022. The image recognition software, which traces its roots back to Google Goggles in 2010, was integrated into the reverse image search function of Google Images in 2022.
Last year, Lens found a home in Google Bard, an AI chatbot model that has since been renamed Gemini. Presently, the image recognition software can be accessed from the camera icon within the Google App's search box.
Earlier this year, Google linked Lens with the generative AI used in its AI Overviews - AI-generated, possibly errant summaries hoisted to the top of results pages - so that users can point their mobile phone cameras at things and have Google Search analyze the resulting image as a search query.
According to the Chocolate Factory, Lens queries are one of the fastest growing query types on Search, particularly among young users (18-24) - a demographic sought after by advertisers. Google claims that people use Lens for nearly 20 billion visual searches a month, and that 20 percent of those are shopping-related.
Encouraged by such enthusiasm, the search-ads-and-apps biz is expanding its visual query capabilities with support for video analysis. "We previewed our video understanding capabilities at I/O, and now you can use Lens to search by taking a video, and asking questions about the moving objects that you see," explained Liz Reid, VP and head of Google Search, in a blog post provided to The Register.
This capability is being made available globally via the Google App, for Android and iOS users who participate in the Search Labs "AI Overviews and more" experiment - presently English only.
The Google App will also accept voice input for Lens - the idea being that you can aim your camera, hold the shutter button, and ask a question related to the image.
As might be expected, Lens has also been enhanced to return more shopping-related info, like item price at various retailers, reviews, and where the item can be purchased.
Lens searches are not stored by default, though the user can elect to keep a Visual Search History. Videos recorded with Lens are not saved, even with Visual Search History active. Also, Lens does not use facial recognition, making it not very useful for identifying people.
Google's infatuation with AI extends to search result page layout: machine learning will now arrange results on the page - initially for meal and recipe queries on mobile devices in the US - rather than segmenting them by content type, such as websites, videos, and so on.
"In our testing, people have found AI-organized search results pages more helpful," insists Reid. "And with AI-organized search results pages, we're bringing people more diverse content formats and sites, creating even more opportunities for content to be discovered."
Meanwhile, Google is being more deliberate in the organization of AI Overviews, the snapshot summaries of search results placed at the top of result pages. Besides driving little referral traffic to websites, AI Overviews can't be turned off and, as Google puts it, "can and will make mistakes."
At least now, AI Overviews will have better attribution.
"We've been testing a new design for AI Overviews that adds prominent links to supporting web pages directly within the text of an AI Overview," said Reid. "In our tests, we've seen that this improved experience has driven an increase in traffic to supporting websites compared to the previous design, and people are finding it easier to visit sites that interest them."
Also, AI Overview is getting ads; after all, someone's got to pay for all that AI.
Google is making this search experience available globally, or at least everywhere that AI Overviews are offered. ®