
Google To Start Marking AI-Generated Images In Search


Google has announced that it will soon start marking and labeling AI-generated images in Google Search. But its implementation isn’t as flawless as it looks.

The AI craze has taken over the world in recent years, fueled by artificial intelligence giving quick answers to complex questions in seconds. Work that can take a human hours or days is done by AI in minutes.

AI isn’t limited to text, either. It is also used for media generation and editing, including images and videos. Image generators are among the most widely used AI tools.

However, AI-generated images and videos have their downsides too. They are often used to spread misinformation, because many of these tools produce images of such high quality that it is very hard for ordinary people to tell whether they are AI-generated or real.

So much so that AI-generated images now regularly appear in, and sometimes flood, Google image search results. It looks like Google is doing more to make people aware of them.

Google To Mark AI-Generated Images


In an official blog post (via TechRadar), Google has announced that it will soon mark AI-generated images in Search.

But it’s not going to be as flawless as it seems. Some background: in February this year, Google joined the steering committee of the C2PA (Coalition for Content Provenance and Authenticity).

C2PA is an association of many companies, including Amazon, Intel, Microsoft, OpenAI, Sony and others. It aims to develop technical standards for marking media content online to prevent the spread of misinformation.

Google plans to implement the labeling through C2PA metadata embedded in images. In simple words, only images containing C2PA metadata will be marked as AI-generated or edited for now. A sketch of what such a metadata check might look like follows below.
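To make the idea concrete, here is a minimal sketch (our illustration, not Google's pipeline) of detecting C2PA metadata in a JPEG. C2PA manifests are embedded as JUMBF boxes in APP11 (0xFFEB) marker segments; this heuristic simply scans the pre-image segments for the "c2pa" label. A real verifier would parse and cryptographically validate the manifest with a proper C2PA library (such as the open-source c2patool) instead.

```python
# Sketch: heuristically check whether a JPEG carries a C2PA manifest.
# C2PA data lives in JUMBF boxes inside APP11 (0xFFEB) marker segments.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":        # SOI marker: not a JPEG
            return False
        while True:
            head = f.read(2)
            if len(head) < 2 or head[0] != 0xFF:
                return False                # malformed file or EOF
            marker = head[1]
            if marker in (0xDA, 0xD9):      # SOS/EOI: metadata segments are over
                return False
            size = f.read(2)
            if len(size) < 2:
                return False
            (length,) = struct.unpack(">H", size)
            payload = f.read(length - 2)
            # APP11 segments hold JUMBF boxes; C2PA manifests carry a "c2pa" label.
            if marker == 0xEB and b"c2pa" in payload:
                return True

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

Note that this only detects the presence of a manifest; it says nothing about whether the manifest is authentic, which is why validation against the C2PA Trust list (mentioned in Google's announcement below) matters.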

In our view, this is a flawed implementation. It will only be useful if every AI-generation tool and image editor joins the C2PA or adopts its standards. For example, Twitter’s (now X.com) Grok AI doesn’t intend to support it.

How will Google show whether an image is AI-generated? Through the “About this image” screen, which may require additional clicks to reach.

Implementation

The official announcement states that the feature will roll out over the coming months to Google Search and Google Ads, and will later also come to YouTube for content captured with a camera. It reads:

Over the coming months, we’ll bring this latest version of Content Credentials to a few of our key products:

  • Search: If an image contains C2PA metadata, people will be able to use our “About this image” feature to see if it was created or edited with AI tools. “About this image” helps provide people with context about the images they see online and is accessible in Google Images, Lens and Circle to Search.
  • Ads: Our ad systems are starting to integrate C2PA metadata. Our goal is to ramp this up over time and use C2PA signals to inform how we enforce key policies.

We’re also exploring ways to relay C2PA information to viewers on YouTube when content is captured with a camera, and we’ll have more updates on that later in the year.

We will ensure that our implementations validate content against the forthcoming C2PA Trust list, which allows platforms to confirm the content’s origin. For example, if the data shows an image was taken by a specific camera model, the trust list helps validate that this piece of information is accurate.

These are just a few of the ways we’re thinking about implementing content provenance technology today, and we’ll continue to bring it to more products over time.

Google acknowledges that this alone isn’t enough. It’s encouraging more services and hardware to support the C2PA standards.

Google is also trying to bring SynthID, its embedded watermarking solution for text, audio, images and video, to more AI-generation tools, and it is joining other groups working to improve AI safety.

Conclusion

In our view, this is a welcome move by Google. Marking AI-generated images is a must when the spread of fake AI-generated media is so prevalent online.

But relying on metadata alone for labeling is not enough. That metadata can easily be removed by editing or re-saving the image, assuming the image carried any such metadata in the first place. A sketch of how trivially re-encoding strips it follows below.
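As a minimal sketch of why metadata-only labeling is fragile: re-encoding a JPEG with a common library such as Pillow writes fresh marker segments and silently drops the APP11/JUMBF (C2PA) data unless it is explicitly carried over. The file names here are hypothetical.

```python
# Sketch: re-saving an image discards its embedded provenance metadata.
from PIL import Image

def strip_provenance(src: str, dst: str) -> None:
    im = Image.open(src)
    im.save(dst, quality=90)   # pixels survive; C2PA/EXIF segments do not

strip_provenance("labeled.jpg", "unlabeled.jpg")
# has_c2pa_manifest("unlabeled.jpg") from the earlier sketch now returns
# False, even though the picture looks identical to the eye.
```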

What is needed is an implementation that combines metadata, enforced watermarking and AI-detection tools. Such a combination would be far more robust and trustworthy.

Beyond that, we feel Google should build its own tools to easily identify fake AI-generated images, and promote those tools openly.

Still, there is a long way to go to reduce the problems created by AI, which is a great tool in itself but is always open to misuse. This move is one small step toward making its use safer for everyone.
