
As the media landscape shifts, publishers are increasingly turning to subscription models to sustain their businesses. For those who have not yet adopted this approach, the decision is not a matter of if, but when. Yet, the process is not as straightforward as one might assume. Simply placing a subscription form on a site is not sufficient. The implementation of paywalls requires a strategic approach, taking into account the full spectrum of a website’s traffic flows and revenue streams. Each paywall must be uniquely crafted to fit the publisher’s content and audience. This article examines the various paywall types and their impact on search engine visibility, especially considering Google’s ranking algorithms. It provides publishers with the necessary insights to navigate the complexities of content monetization in an online arena that is both competitive and dynamic.

Types of Paywalls

When it comes to setting up a subscription service, publishers have several options on how to restrict content to their audience. Let’s look at the common ones:

Hard Paywall

A “hard” paywall means that you can’t see any of the content unless you pay first. This might make it tougher to get people to buy because they can’t try it out for free like they can with a “soft” paywall. However, if the content is really special or different in a way that people can’t find anywhere else, they might be willing to pay for it right away.


Freemium

A Freemium model is when a website (or app) provides some content for free but charges for more detailed or advanced material. Publishers who use this approach give you basic articles at no cost, and if you want more in-depth content or special features, you have to pay.

Metered Paywall (Recommended by Google)

A metered paywall lets you see some content for free for a little while or a few times before you have to pay. Usually, this limit resets every month. Lots of news publishers do this. They might let you read 5 articles for free each month and then ask you to pay if you want to read more. Software companies (SaaS) also use metered paywalls for their subscriptions. For example, your subscription might let you download 20 reports each month. Once you hit that number, you need to pay if you want more.

People often mix up “soft paywall” and “metered paywall,” but they’re not exactly the same. A “soft paywall” typically means there’s a special part of the website with extra good content that you need to pay for, while a “metered paywall” means you can see a certain amount of content before you have to start paying.

Lead-in (Recommended by Google)

This method is like a strict paywall. You get to see the title and the first bit of the article, maybe the first paragraph or 100 words (it’s up to the publisher how much to show). It’s a middle ground that lets you gauge how good the article might be without getting the whole thing for free.

Even though this is okay with Google’s rules, it can annoy people who click on the article from a search and then leave right away because they can’t read the whole thing without paying.


Dynamic Paywall

Dynamic paywalls use what they know about each reader to offer a more personal experience. They look at what you do, what you’re interested in, and how you interact with the content to figure out the best way to get you to subscribe. This is really helpful for news publishers because it lets them see what their readers like best, how often they visit, and even what devices they use.

These smart paywalls are all about the reader. They’re not a one-size-fits-all barrier; they adapt to what you seem to want and need. If someone is really engaged with the content and likely to subscribe, the paywall will notice and adjust the number of free articles they see before asking them to pay. This way, it tries to turn more readers into paying subscribers by asking for a subscription at exactly the right time.
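As an illustration only, a dynamic meter might map engagement signals to a free-article quota. The signals, weights, and thresholds below are invented for this sketch, not a known publisher algorithm:

```python
# Hypothetical dynamic-meter sketch: all weights and thresholds are
# invented for illustration, not taken from any real paywall product.
def free_article_quota(visits_per_week: float,
                       avg_read_depth: float,   # 0.0-1.0 share of article read
                       is_returning: bool) -> int:
    """Estimate engagement and map it to a free-article allowance."""
    score = 0.5 * min(visits_per_week / 7, 1.0) + 0.4 * avg_read_depth
    if is_returning:
        score += 0.1
    if score >= 0.7:       # highly engaged: ask for payment sooner
        return 1
    if score >= 0.4:       # moderately engaged
        return 3
    return 5               # casual reader: keep sampling generous
```

The point of the design is the inversion: the more likely a reader is to subscribe, the fewer free articles they see before the subscription prompt.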

Each type of paywall has its own way of balancing reader access with revenue goals, and the choice depends on what fits best with a publisher’s content strategy and their understanding of the audience’s reading habits.

Flexible Sampling and Google

In 2017, Google phased out First Click Free (FCF) and introduced Flexible Sampling as its replacement.

Google doesn’t automatically dislike content that you have to pay to access. It’s fine with this kind of content as long as the website tells Google that it’s behind a paywall.

Publishers who use paywalls aren’t left out of Google’s search results. Their subscriber-only content can still show up on various surfaces, such as the Top Stories carousel, Google News, Discover, and the regular list of search results.

IMPORTANT: publishers must set up their paywalled content so that Googlebot can access it. Only then can Google include it in search results and apply the right factors to decide how well it should rank.

Paywall Implementations and Their SEO Impact

Different technical setups can influence how well a site with a paywall performs in search rankings. Here’s a rundown of various implementations and how they might affect a site’s visibility in search results:

Structured Data

  • isAccessibleForFree

First, you have to make sure Google knows when an article is behind a paywall so it won’t confuse your paywall with something sneaky like cloaking.

You do this by using structured data for news articles. When you set up the structured data for an article that’s behind a paywall, you have to specify that it’s not free. You do this by setting the “isAccessibleForFree” attribute to “false,” which tells Google that the article is either fully or partially paywalled.

Usually, news publishers mix free and paid content. The value of “isAccessibleForFree” is a boolean: if the content is free, set it to “true”, and if the content is paid, set it to “false”.

  • hasPart

By putting the “hasPart” item into your NewsArticle structured data, you can show where the paywall begins. You do this with the “cssSelector” attribute, which should have the CSS class from your article page that marks where the paywalled content starts.
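Putting the two properties together, the markup for a partially paywalled article could look like this (the headline and the `.paywall` selector are placeholders; use the CSS class that actually wraps your paid section):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example paywalled article",
  "isAccessibleForFree": false,
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": false,
    "cssSelector": ".paywall"
  }
}
</script>
```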

Googlebot verification

By verifying Googlebot, you can let Google crawl and index your content, including the paywalled sections, while keeping it hidden from regular visitors. There are a couple of options to distinguish Googlebot from a regular user.

  • IP verification – Fortunately, Google provides a JSON file with the IP addresses its crawler uses. This makes it easy and safe to check whether a visitor is actually Google and decide what kind of content to show them on the website.


  • User-Agent paywall – When a paywall checks the User-Agent header, the website shows different things to regular visitors and to Googlebot. Normal visitors see a version that asks for payment to access content, without any free parts. When Google’s crawler visits, it sees a version with the full article and all the detailed information it needs to list the article in search results. The downside is that the content is not 100% protected from scrapers, because the User-Agent header is easy to spoof.
  • JavaScript paywall – This method verifies the user client-side (in the browser) with JavaScript. The HTML document contains the full article, and depending on where the request comes from, the JavaScript hides the paid content and shows the barrier. Googlebot first crawls pages without executing the JavaScript and parses only the HTML document, so with good server-side rendering this is not an issue for news publishers. The benefit is the flexibility JavaScript provides for conversion optimisation, but at the same time, simply disabling JavaScript in the browser makes the content accessible.

Based on my experience, the best and most secure approach is to cross-check the User-Agent and the IP address of the request. If the publisher is more conversion-oriented, I would suggest a JavaScript paywall because of its flexibility, but sessions with JavaScript disabled should be monitored, and if they increase, action should be taken.
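A minimal sketch of this cross-check, assuming Python on the server side. The URL is Google’s published Googlebot IP-range list; the function names and the sample range in the usage note are illustrative:

```python
import ipaddress
import json
import urllib.request

# Google publishes the IP ranges Googlebot crawls from as a JSON file.
GOOGLEBOT_RANGES_URL = (
    "https://developers.google.com/search/apis/ipranges/googlebot.json"
)

def load_googlebot_networks(url: str = GOOGLEBOT_RANGES_URL):
    """Fetch and parse Google's published Googlebot IP ranges."""
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return [
        ipaddress.ip_network(p.get("ipv4Prefix") or p.get("ipv6Prefix"))
        for p in data["prefixes"]
    ]

def is_verified_googlebot(user_agent: str, client_ip: str, networks) -> bool:
    """Cross-check the User-Agent claim against the source IP.

    A scraper can fake the User-Agent string, but it cannot easily send
    requests from Google's published IP ranges, so both checks must pass
    before the full article is served.
    """
    if "Googlebot" not in user_agent:
        return False
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in networks)
```

A request passing `is_verified_googlebot` would be served the full HTML; every other visitor gets the lead-in plus the subscription barrier. In production the ranges should be cached and refreshed periodically rather than fetched per request.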

No Archive tag


Search engines like Google might save a copy of your web pages. This saved version is called a “cached” page. Google can show a link to this cached page in its search results, which allows people to see the content of the page even if the original page is not available anymore.

If you don’t want Google to save and show a cached copy of your pages, you need to tell Google not to do this. You can do this by setting a rule on your website. If you don’t set this rule, Google might save a copy of your page and people could find and view this saved version by clicking on the “Cached” link in Google’s search results. This means that even if you have a paywall or special content that you only want paying users to see, they might still see it for free through the cached link.

HTTP header

HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
X-Robots-Tag: noarchive

Robots tag

<!DOCTYPE html> 
<meta name="robots" content="noarchive">

Rich Results Test for paywalled and subscription content

Google’s Rich Results Test has been upgraded to validate structured data for paywalled content. Since October 2023, the tool shows whether a site is using the proper structured data for paywalled content.

What are the potential downsides of setting up a paywall?


Different (poor) user behavior

Let’s address a big issue: user reactions to paywalled sites. Even if a paywall is easy to bypass, user behavior matters.

User experience is a ranking factor and I believe that part of this is how often users click on the “Back” button and return to the search results page (SERP) after visiting a site. If users frequently go back to SERP quickly, it’s a bad sign for the site, suggesting the content didn’t meet their needs.

Websites with high ‘return to SERP’ rates might rank lower on Google over time. Google aims to provide the best search results, and sites that users quickly leave don’t fit the bill.

The real SEO challenge with paywalls is that they can increase ‘return to SERP’ instances, leading to less visibility on Google.

To counter this, publishers can use ‘First Click Free’ tactics or set sensible paywall limits for visitors coming from Google, letting them read full articles and reducing ‘return to SERP’ events, and thus avoid long-term SEO harm from paywalls.

Lower CTR than the average

During the NESS conference in 2022, SEO experts from The Times reported a decrease in click-through rates (CTR) as users became aware that the website’s content was behind a paywall, leading them to avoid clicking on the search results.

Based on my experience with paywalled content, the proportion between free and paid content should be monitored, and it is a good option, again, to implement “First Click Free”, which will minimise this behaviour.

Fewer backlinks

Websites operating behind paywalls often experience a reduction in backlinks because their content is inaccessible to a wide audience. This observation aligns with the findings presented in BuzzSumo’s case study, “We Analyzed Millions Of Publisher Links. Here’s How To Syndicate Your Content & PR For Free” which provides insights into effective strategies for content syndication and public relations without incurring costs.


Flexible sampling and SEO – Opinion

Paywalls are becoming more popular but, at the same time, more complicated: they help publishers monetise their content, yet the implementation should follow Google’s SEO requirements while also accounting for user experience and behaviour to avoid any damage. If you set them up right, letting Google’s crawler see your full articles and links, they won’t hurt your website’s search engine rankings.


Svetoslav Petkov
Passionate SEO specialist who really enjoys growing websites. In my free time, I’m a Python enthusiast trying to make my/our lives easier.