Even so, Google’s own internal research has shown that a significant portion of younger internet users prefer searching for information on social apps, such as Instagram and TikTok, over Google products. Those shifting user preferences help explain why Google is expanding its basic search tools to include more multi-modal forms of communication and information from more “authentic” sources, such as social media posts.

“The way that people search and author information was never meant to be constrained to typing words,” Cathy Edwards, Google VP and GM of Search, told reporters ahead of Google’s Search On event on Wednesday. “There’s so much information out there on the web that comes in different formats and from different voices that have different authority and expertise.”

During Wednesday’s event, Google announced it will present Search users with content from “creators on the open web.” If a user searches for the name of a specific city, for instance, the results may include visual stories and short videos from people who have visited that place.

“We can really see, as we enter this new era of search, that you’ll be able to find exactly what you’re looking for by combining images, sounds, text and speech, organized in a way that makes sense to you,” Edwards said. “And that ultimately helps you make sense of the world.”

Edwards acknowledged that “there’s some really good content” on TikTok, since it has reduced the barriers to entry for content creation. “We are looking at more ways to bring that into our search results,” she said.

Google’s “new era of search” also includes a greater emphasis on community-led conversations on forums like Reddit, another platform that serves as an alternative to Google Search. Google is launching a new Search feature called Discussions in Forums that brings those results in.
“There are people who obviously are keen to see more results from Reddit and other community forums in our results,” Edwards said. “Fundamentally, this is just about giving people what they want, when and what’s most helpful to them when they come to us.”

At the same time, Edwards said Google is focused on ensuring that users can find “both the authentic information and the authoritative information” on its search tools.

“We also see in our research that people come to Google specifically to verify their claims and… to help them decide whether they want to believe something that they might have found on a social feed,” she said. “They really trust that they’ll be able to find high quality information on Google, and I think that’s really important, too.”

Google’s increased focus on multi-modal search isn’t just about the results; it’s also about how people ask questions. Earlier this year, Google introduced multisearch in beta, allowing users to search a topic using both images and text simultaneously. In the coming months, Google will expand that capability to more than 70 languages.

Meanwhile, Google is also improving its Lens capabilities. People already use Google to translate text in images more than one billion times a month, across more than 100 languages. Soon, if you point your camera at an image containing text, Lens will be able to translate that text and overlay the translation onto the image itself. If you are reading the label on a bag of chips, for instance, the text would appear in your preferred language right on the bag.

“Instead of covering up the original text, we actually erase it and then rebuild the pixels underneath with an AI-generated background,” Edwards explained. “And then we overlay the translated text on top of the image. So it really feels like you’re just looking at that product package with the translated text.” This feature is launching later this year.
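Edwards's description amounts to a three-step pipeline: mask the region holding the original text, regenerate plausible background pixels underneath it, and draw the translation on top. The following is only a minimal illustrative sketch of that idea, not Google's implementation: where Lens uses a learned ("AI-generated") inpainting model, this toy version fills the erased region with the average of the pixels bordering it, and all function names and the tiny grayscale "photo" are invented for illustration.

```python
# Toy sketch of the erase-and-overlay idea: mask the text region,
# fill it from surrounding pixels (standing in for learned inpainting),
# then attach the translated string as the overlay. Illustrative only.

def inpaint_region(image, mask):
    """Fill masked pixels with the mean of unmasked pixels that border them."""
    rows, cols = len(image), len(image[0])
    border = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                continue
            # an unmasked pixel touching the masked region contributes to the fill
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and mask[nr][nc]:
                    border.append(image[r][c])
                    break
    fill = sum(border) // len(border) if border else 0
    return [[fill if mask[r][c] else image[r][c] for c in range(cols)]
            for r in range(rows)]

def overlay_translation(image, mask, translated_text):
    """Erase the original text region, then pair the cleaned image with
    the translation to be rendered on top (glyph drawing omitted)."""
    cleaned = inpaint_region(image, mask)
    return cleaned, translated_text

# A 4x4 grayscale "photo": background value 100, with bright source
# text (255) occupying the masked 2x2 center block.
photo = [[100, 100, 100, 100],
         [100, 255, 255, 100],
         [100, 255, 255, 100],
         [100, 100, 100, 100]]
text_mask = [[False, False, False, False],
             [False, True,  True,  False],
             [False, True,  True,  False],
             [False, False, False, False]]

cleaned, label = overlay_translation(photo, text_mask, "chips")
# The erased block now matches the surrounding background (100),
# so the translated label sits on a seamless backdrop.
```

The design point the sketch captures is why erasing beats covering: once the original glyphs are removed and the background rebuilt, the translated text can be rendered as if it were printed on the package, rather than sitting in an opaque box over it.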