Google, Meta, Microsoft Modality Features Show Different Future For Advertising


by Laurie Sullivan, Staff Writer @lauriesullivan, July 2, 2024


Google plans to introduce a set of new machine learning (ML) features under the Google AI brand. One, expected next month for the latest Pixel phone, would offer a feature similar to Microsoft’s Recall.

Microsoft has put Recall, which the company initially installed on Copilot Plus PCs running Windows 11, on hold after security researchers revealed it could expose stored information to any attacker with access to the machine.

The Google feature, Pixel Screenshots, is part of the forthcoming Google Pixel 9 series, which the company is expected to announce in August along with other AI experiences, according to Android Authority.

Google’s feature focuses on privacy. Unlike Microsoft Recall, which automatically captures everything someone works on, it will only process the screenshots a user takes deliberately, according to the report.

Using AI, the app adds metadata to identify the content and context, such as app names and web links. A local AI technology, such as a multimodal version of Gemini Nano, will likely process the data.

This will enable the owner of the data and device to query and search for specific screenshots by content.
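Google has not published how Pixel Screenshots stores or indexes this metadata, but the idea described above can be sketched as a simple local index: each screenshot carries model-generated tags (app name, recognized text, links), and a search runs only against that on-device metadata. The `Screenshot` structure and `search` function below are hypothetical illustrations, not Google's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Screenshot:
    """One user-taken screenshot plus metadata a local model might attach."""
    path: str
    app_name: str                       # e.g. app identified by the on-device model
    text: str                           # text recognized in the image
    links: list = field(default_factory=list)  # web links detected in the image

def search(shots, query):
    """Return screenshots whose metadata mentions the query (case-insensitive)."""
    q = query.lower()
    return [
        s for s in shots
        if q in s.app_name.lower()
        or q in s.text.lower()
        or any(q in link.lower() for link in s.links)
    ]

# Example: two locally stored screenshots with hypothetical metadata.
shots = [
    Screenshot("shot1.png", "Chrome", "Flight confirmation AA123",
               links=["https://example.com/itinerary"]),
    Screenshot("shot2.png", "Maps", "Directions to the conference center"),
]

print([s.path for s in search(shots, "flight")])  # ['shot1.png']
```

Because both the metadata extraction and the query run on the device, nothing needs to leave the phone — which is the privacy distinction the report draws between Pixel Screenshots and Recall.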

If and when Microsoft releases Recall, it will automatically capture everything on the device, allowing users to find information within a file or document.

But unlike Google, Microsoft’s feature has been seen as eroding privacy after it was revealed that any attacker with access to the computer could read stored information.

People will begin interacting differently with these devices, reshaping everything from advertising to content.

Some believe Google’s event will also include information about its glasses project. Project Astra, introduced earlier this year at Google I/O, could become the foundation for future internet-connected and augmented-reality glasses.

The project uses Gemini multimodal models to process audio, video, and text data simultaneously.

“If and when a real fully holographic every day pair of glasses and lenses comes out, and is genuinely adopted by mass consumers that purchase products through them, I imagine them to become another device in the complex and convoluted buying journey,” said Mike Gullaksen, CEO at NP Digital. “The amount of structured data required from the product and brand side required to have the correct information display on any sort of lens would require the brand to be fairly advanced. They would need to be able to track these conversions correctly.”

Gullaksen said the image included in this post is from 2016, when he spoke about Microsoft HoloLens and Google was on the heels of releasing Google Lens.

“Eight years have gone by, and we are still where we are,” he said.

Years ago, in the conference room at San Diego-based Covario, where Gullaksen served as co-CEO, I tried an Oculus headset the agency had on hand. The agency showed the device to brands as a way for marketers to start thinking about other platforms on which to run ads.

That same day in March 2014, Facebook announced its acquisition of Oculus. I believe it changed the direction of Facebook, prompting founder Mark Zuckerberg to create Meta, the holding company for Facebook, Instagram and other platforms.

A network of AI agents has become Zuckerberg’s vision for giving creators and advertisers a strong connection with fans and consumers. The agents would run on AI-supported devices, such as smart glasses, that people would interact with.

Meta is preparing for its annual Connect conference, where it will likely demonstrate a neural interface wristband and holographic smart glasses built on its underlying Llama technology. Zuckerberg recently spoke with Kane Sutter, known as Kallaway on YouTube and social media, disclosing some details of Meta’s AI and XR roadmap.

He spoke about the lessons Meta learned during the Ray-Ban partnership, the future of smartphones, the neural interface wristbands, and powerful new AR glasses. The wristband, for example, could control everything in the home.

Zuckerberg forecasts a future with “display-less glasses” using AI that can capture content, listen to audio books, play music, and take phone calls. He said smart glasses would come with a type of lite-AR visualizations, “where it’s not going to be full-holographic display in the sense that it’s not going to be your full field of view as a hologram, but a little bit of a heads up.”

This technology will never replace phones, Zuckerberg said; phones will simply remain in users’ pockets more often. Ten years from now, people will still have phones, but their use will be more intentional rather than a reflex or a reaction.
