Research: Is content uniqueness really that important?

This article will be useful if you anxiously fight for every percentage point of content uniqueness, treating this factor as the most important of the hundreds that search engines use. After reviewing the tested hypotheses and examples, you will free up time for developing your site instead of chasing 100% unique pages.


Where did the race for unique content come from?

I decided to study this question in the autumn of 2015, after an article by Denis Saveliev came to my attention. The comments there were mostly skeptical: "that's not how it works for me", "where's the proof", "show examples"…

Among the many SEO beliefs, content uniqueness still occupies one of the first places. When a former classmate who wants to build a site tells me about the 100% importance of unique content, I can understand it. But the same misconception is still found on the sites of well-known companies and services:

Content must be your own – this is the basis of promotion

Search engines added fuel to the fire too: Google launched authorship confirmation for texts by linking a site with Google+ (the service no longer works), and Yandex added the "Original Texts" tool to Yandex.Webmaster. As a regular on several webmaster forums, I have seen many threads there about how important it is to use these tools.

Although Yandex honestly wrote:

We'll try, but we can't guarantee it

I even came across a study of this service's effectiveness. More than 15,000 articles submitted to "Original Texts" were checked: the percentage of text uniqueness was calculated, and positions were measured to see who ranks higher, the owner of the original text or the site that copied it.

A lot of work was done.

With all due respect to the work done in the study, even the graphs above show that in 14% and 7% of cases the sites that copied the text outranked the original source. Later in this article I will explain why this happens.

If you repeat something long enough, it starts to sound like the truth. The more people talked about content uniqueness, the more theories arose:

For a site to rank high, the text must be unique.

For more traffic, the images on the site must be unique.

The site design must be unique; template-based sites are not ranked highly.

Video should also be unique.

A small spoiler: these statements are not quite correct. I came across the site below when I started studying what role unique content plays for search engines. Its text uniqueness is close to zero; it is a typical doorway with a mishmash of foreign text:

The domain was registered in March 2016. What do you think of this traffic growth chart? From zero to half a million in a year and a half. Do you see the same with your unique content?

So what fed the myth about the need for unique text content?

It was fueled by copywriting exchanges (logically enough: this is their bread and butter).

Search engines excluded similar pages within one site from the results (considering them duplicates).

Webmasters hungry for easy money tried to simply copy someone else's success, stealing texts and photos en masse. Such sites naturally fell under filters and search engine bans (as duplicates lacking added value).

The search engines themselves declared the importance of unique material and even made tools to indicate the original source.

SEO blogs and advertising agencies also proclaimed the importance of unique material almost unanimously.

Not least important are the novelty of the web industry itself and the inexperience of site owners. Almost every web specialist or studio has encountered clients who say "make me the same site, only for my company". Besides being unethical and illegal, this devalues the work of professionals: how can site development cost tens of thousands (or several hundred thousand) rubles if you can just take and copy?

If you show such a client a premium template for their chosen CMS for $70 on ThemeForest, or offer to build a similar (but unique) site from scratch for $500-5,000 (to pay for a designer, layout developer, programmer and project manager), the choice will be obvious. This gave rise to another misconception: that unique design is also important (after all, if it is not, why pay more?).

Main types of content

In competitive topics it is difficult to imagine a top-3 page with text alone. Usually the following materials are used (in order of decreasing importance and increasing cost):

Text.

Photos. This also includes infographics, icons, etc.

Tables. These can be price tables, comparisons, price lists, etc.

Video. Although video marketing gains momentum every year, pages without video are still more common. This is due to the high cost and time required to produce it.

Here is an example of a good page – everything is used:

This is about 20% of all the content on the page.

With small budgets, text + photos are used. This is what gives rise to "clones": the sites seem different, and the content seems different, yet everything is very similar. It happens because instead of the chain:

Defining the target audience of the site.

Initial collection of interest groups, selection of suitable ones, competitor analysis.

Site structure, distribution of interest groups across pages.

In-depth analysis of each interest group, page planning.

Page creation.

Improving the page as statistics accumulate.

one of these variants is used:

Take a couple of competitors and do the same.

Texts are ordered on the cheap; photos are taken from Google or someone else's site.

Text is where things go wrong most often, because even decent copywriting is not cheap.

To get a good text, you need either a genuinely interested copywriter, or a clear brief plus an example of "how it should be done".

Without this, for a small price the customer (web studio or site owner, it does not matter) will at best get a compilation from a couple of sources, and at worst a rewrite of whatever site the copywriter happened upon. And this is not because copywriters are bad; it is because everyone wants to earn a decent living. For a cheap order with a vague brief, nobody will dig into the topic, outline the article, pick good sources and define the target audience for the customer.

You have probably seen examples of pages using the same photos. It is considered "good" form on sites about repair, design or construction services to place a photo from the first or second page of image search results:

I see all these guys at the construction site next door.

And with all this, a ruthless battle for content uniqueness rages across the Runet. The text may be bad, but it is unique. The picture may be inappropriate, but it has been flipped, cropped and recolored, so it is unique too.

Examples of commercial niches and sites with non-unique content

To see that the role of unique content is exaggerated, it is enough to study commercial sites in the same niche, especially online stores with a large number of items. If you compare the content of product cards for technical goods, it turns out the data is almost identical:

The technical specifications are the same.

Photos may be the same (if the manufacturer allows them to be used).

If the product is not a bestseller, there will be few or no reviews.

Description: at best, a rewrite from the official site with very low text uniqueness.

And this does not at all prevent such sites from occupying the top positions:

The description coincides 30-90% with texts on other sites

You can find as many examples as you like, especially in topics where there is simply nothing to write about. It is hard to come up with a unique description for:

building materials (drywall, nails, sanitary fittings, metal mesh, etc.);

products for needlework (sewing and knitting threads, paint-by-number kits, beads, crochet hooks, needle sets, fabrics, etc.);

fishing accessories (hooks, wobblers, cords, sinkers, etc.).

Now imagine an online store with more than 100,000 items and more than 500 categories. Even automated, template-based generation of descriptions is a huge amount of work (a minimal sketch of the approach is below). That is why most online stores promote categories without bothering about the uniqueness of product cards, which can bring in more than 50% of traffic while differing from competitors' by less than 10%.
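To make this concrete, here is a minimal sketch of what template-based description generation could look like; the product attributes and wording are invented for the example:

```python
# A minimal sketch of template-based product descriptions.
# Attribute names and wording are invented for illustration.
TEMPLATE = (
    "{name} by {brand} is available in our store. "
    "Weight: {weight} kg, warranty: {warranty} months. "
    "Delivery across the country, pickup available."
)

products = [
    {"name": "Drywall sheet 12.5 mm", "brand": "Knauf", "weight": 25, "warranty": 12},
    {"name": "Galvanized mesh 50x50", "brand": "NoName", "weight": 8, "warranty": 6},
]

for product in products:
    print(TEMPLATE.format(**product))
```

Even with such a template, filling in attributes for 100,000 items is the real work, and the uniqueness of the resulting texts is low by design.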

Even so, this does not stop their owners from pestering copywriters for an extra 3% of uniqueness in Advego, eTXT, Text.ru or their analogues.

Other types of sites

If a site is filled by a single company, you can somehow keep track of uniqueness. But what about sites whose materials are created by users? Here are examples of UGC sites that live quietly without worrying about it:

Message Boards.

Catalogues of real estate and organizations.

Sites for job/vacancy search.

Question-answer services.

Forums.

Casual sites that publish the same memes, stories, etc.

News sites.

Aggregators of any type (zero uniqueness, the site collects data from other sites).

Cartographic services.

I am sure you can find more examples where the content is the same and the sites still collect good traffic. Feel free to share interesting ones in the comments.

My three experiments on creating a site with zero or minimal uniqueness

As I wrote earlier, I had these thoughts in the autumn of 2015.

Site #1

In October 2015, the domain was registered and a 30-page website was created. The project was done without technical SEO, i.e.:

the site was not added to the Yandex and Google webmaster tools,

service pages were not closed from indexing,

usability stayed at the level of a free CMS template, without corrections,

meta tags were generated automatically by the site engine.

The content source was high-traffic pages of popular sites (i.e., with high trust, plenty of links, etc.), selected through competitor analysis services.

The hypothesis was as follows: added value is what matters to a search engine. Therefore, for each page something like a digest was written, describing what the content is about and how it will be useful to the user. Then came the content from the source, plus a link to the original.

After that, I forgot about the site and remembered it only when reminded to renew the domain. Here is its traffic over the first year of its life:

Each page contained about 5% unique content, namely the introduction. The results inspired me, so the domain was renewed and given more attention. In February 2017 I decided to expand the experiment. Instead of hunting for high-traffic articles on trusted thematic sites, I collected all the materials on every issue related to financial reporting. The niche is competitive, and useful, relevant materials have to be dug out bit by bit. What was done:

Interest groups on the project's topic were identified.

For each group a page was created, containing an introduction and the most complete information on the issue, with references to sources.

All the SEO tasks skipped at the start were completed.

If in 2015-2016 the share of unique content per document was about 5%, it has now fallen to 1-2% or less. The non-text content was not unique either. From February to April, about 70 pages were added, and site traffic grew 2.5 times.

Growth from 65 to 165 visitors per day in 2 months

In May, I decided to check how important it is to have each target query at least once:

In the meta tags (title and description).

In the H1.

In H2-H6 level headers and page text.

A module for checking semantics was installed on the site. A semantic core was compiled for the now year-and-a-half-old site, and queries were assigned to pages through the module (a rough sketch of such an on-page check follows the chart below). Despite the summer decline in the niche, May-August gave a 3+ times increase in traffic over April. Judging by the growth curve, in September the site will break through 20,000 visitors.

Growth forecast for September
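For illustration, a rough sketch of such a check, assuming the `requests` and `beautifulsoup4` packages; the URL and query are placeholders:

```python
# A rough sketch: does a query occur in the key zones of a page
# (title/description meta tags, H1, H2-H6, body text)?
import requests
from bs4 import BeautifulSoup

def query_zones(url: str, query: str) -> dict:
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    q = query.lower()

    title = soup.title.get_text() if soup.title else ""
    meta = soup.find("meta", attrs={"name": "description"})
    description = meta.get("content", "") if meta else ""
    h1 = " ".join(h.get_text() for h in soup.find_all("h1"))
    h2_h6 = " ".join(h.get_text() for h in soup.find_all(["h2", "h3", "h4", "h5", "h6"]))

    return {
        "title": q in title.lower(),
        "description": q in description.lower(),
        "h1": q in h1.lower(),
        "h2-h6": q in h2_h6.lower(),
        "text": q in soup.get_text().lower(),
    }

print(query_zones("https://example.com/page", "financial reporting"))
```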

In my opinion, the traffic growth and excellent behavioral factors (bounce rate below 10%, more than 5 minutes on site) already show that uniqueness is not the most important signal.

But my opinion is only personal, and I wanted to know what the search engines think about the site. How can you get a manual quality check from search engine representatives? By submitting the site to the Yandex Advertising Network and Google AdSense.

The site was accepted into both advertising systems. What makes it special is that the site was submitted from a new account, i.e. it passed a deeper inspection. By all appearances, the site will cross the threshold of 1,000 visitors per day by the end of the year and keep growing.

Conclusions on this project:

In 2015, minimal effort was enough to get visitors.

New relevant (relevant, not unique) content increases traffic if the site has good behavioral metrics.

Growth in 2017 came largely from the domain's history: accumulated behavioral data and a narrow niche.

“Just improve your site” works several times worse than “Just improve your site, but don’t forget about the full disclosure of the topic, SEO and semantics”.

Search engines do not consider the uniqueness of content to be a cornerstone.

The reason the site received, receives and will continue to receive visitors is good behavioral factors, which are formed:

by the page design: large type, readable layout, photos, video, infographics and other elements that give the user all the information;

because all pages are longreads: other things being equal, time on page and scroll depth are higher, since even skimming such a page takes more time;

and most importantly, by answering all the user's questions in one place.

From experience with other sites, peak traffic is reached in 6-12 months, depending on page quality and topic competitiveness. So by March 2018 the projected traffic is about 2,000 visitors per day: 30 times more than in February 2017.

Another small discovery I made while writing this article: over time, content gains text uniqueness. Six months ago, only 3-4 materials could boast uniqueness above 3% (but below 10%). Now more than half of the texts reach such figures:

Some of the source domains were not renewed, and their content disappeared. If you look at the chart and assume the data will change little in the future, it turns out the chance of content becoming unique within two years is about 60%.

Sites #2 and #3

Given the success of experiment #1, it was decided to check whether it could be repeated in today's realities. Two competitive niches were chosen: construction and legal services.

Both sites were originally designed with the first experiment in mind:

Semantics were selected for each page.

A free but convenient responsive template was chosen.

The sites had no errors preventing indexing (pages getting into search results) or ranking.

All materials are longreads that fully answer the user's query.

Content uniqueness is below 3%.

10 pages were placed on each project.

Both sites showed an interesting effect: with zero pages in the Yandex index, there was a surge of traffic from that search engine. This is most likely due to the "shuffling" of new sites' pages into the results to assess user behavior.

On the legal site, the surge is more clearly visible

After entering the index, pages compete in search "on general terms", which leads to a drop in traffic. The subsequent dynamics are the same as on site #1: the first 3-4 months go to accumulating statistics, with a slight increase in visitors.

In November-December, growth of 2-5 times is expected

Other projects from 2016-2017 (with 100% unique content) that I consider successful show similar charts: initial traffic, a decline in the second month, growth in months 3-6.

From six months of age, a site's traffic grows several times over and keeps growing if you keep developing the site.

What conclusions can be drawn:

There have been no fundamental changes in the ranking of non-unique content for the last 2 years.

The growth dynamics are the same as on 100% unique sites.

For new areas, it is worth preparing stub sites of 5-50 pages and letting them sit for 3-6 months or more; this will speed up the project's development later.

Closer to the end of the year, I will try to post charts for all three projects in the comments to this article, to see how my expectations match reality.

Site #4

This is just a 50-page stub site. The content is not unique; the site is a directory on a narrow subject. Webmaster forums and SEO blogs have discussed dozens of times how catalogues and aggregators take the top of the results, pushing out company sites. But personal experience is always better than someone else's. In August 2016 the domain was registered, and about 50 pages were placed on it.

One of the threads on the Searchengines forum

What hypotheses had to be tested:

How the project's development would be affected by a stub site (i.e., search engines determining the project's topic and accumulating user behavior data).

How soon the site would begin to "push out" company sites in the results.

Whether it is possible to take away part of the companies' branded traffic.

Which is better: developing the project "for queries" or "for the niche".

The first year of the project's life was nothing special. The hardest part was gathering the information and presenting it in a uniform format. To tell the truth, I did not think the project would grow at all on those 50 pages published in August 2016. I was wrong:

Setting aside all the informational queries in the niche, the main commercial queries are searched by 15-30 thousand people per month across Russia. The niche is competitive: more than 1,000 companies offer their services. The cost of attracting clients is insane, with more than 100 advertisers in contextual advertising.

Meanwhile the stub, slapped together on the knee, collects about 200 visitors per month (attracting that many visitors through advertising would cost $200-500).

In August 2017, data on the major companies was collected, and more than 2,000 pages were added to the site.

Page growth graph in search

It turned out that in addition to the 50 pages, the CMS had generated more than 400 pages in the form of RSS feeds, print versions, PDF versions and other garbage, which search engines usually throw out of the index themselves if the webmaster does not do it first.

All the hypotheses were tested a week after the pages hit the index.

How the presence of a stub (i.e., search engines determining the project's topic and accumulating user behavior data) affected the project's development:

Even in a very competitive niche, another site will collect its share of visitors.

If a site has good behavioral metrics, its traffic will grow (even without new materials).

New pages quickly get to the search index and collect traffic.

Here is the increase in traffic immediately after the new pages were added:

Astrologers have announced indexing week. Visitors from search have doubled.

The site, made as an experiment, began to collect about 5% of all visitors in the niche. Let's move on to the next point: how soon does the site begin to "push out" company sites in the results?

Immediately after the new pages were added, the site began to climb from positions 10-50 into the top 3, most often to first position:

Data from Yandex.Webmaster

The picture in Google is similar, though there the site mostly took positions 3 to 10. This did not prevent it from taking some of the branded traffic:

Branded queries are highlighted in red

And the last hypothesis: which is better, developing the project for specific user queries or for the niche as a whole? Definitely, it is better to make a project that covers all possible interest groups. The 2,000+ pages added in August were selected on the principle of "take everything that might be searched for at least once in a few years".

This approach has fully justified itself. If this project develops along the same curve as the others, it will reach peak traffic just in spring-summer 2018, in time for the new season. According to my forecast, in traffic it will trail only the niche leaders with multi-million marketing budgets and sites that have been developing rapidly since the 2000s.

In figures: about 20% of the niche's traffic, under a pessimistic forecast.

What conclusions can be drawn from this experiment:

Thematic directories (catalogues, aggregators) still outrank companies' authoritative sites in the results, despite meager uniqueness.

The main thing is to think like a user and give them the right information.

To make such sites, you need to be able to search and structure the data.

Not a single link was bought, yet there is referral traffic: visitors spread a useful site around on their own.

In August, several new companies asked to be added to the directory and inquired about advertising options and traffic. The project was not promoted in any way. I would venture that they found it in the search results and decided to get listed "because our competitors are there".

A little bonus for those who like to read…

Neither this site nor any of its competitors in the niche uses a technique called "silo architecture" in Western SEO parlance, or in our case tagging. This is a vast topic, and I recommend reading a dedicated article to understand it. In brief: create new content based on existing data and show it to the target audience. Here are some examples, with a small sketch after them:

On a property rental site, specify the nearest school/university for each listing and create pages like "Rent an N-room apartment near school NN".

For a construction company, present the portfolio not as a pile of photos but as pages like "design and renovation of a small studio", "converting an apartment into a studio", etc.
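A minimal sketch of how such tag pages could be generated from existing listing data; the field names, titles and URL pattern are invented for illustration:

```python
# A minimal sketch of generating tag pages from existing listings.
# Field names and URL patterns are invented for illustration.
from collections import defaultdict

listings = [
    {"id": 101, "rooms": 1, "nearest_school": "School No. 17"},
    {"id": 102, "rooms": 2, "nearest_school": "School No. 17"},
    {"id": 103, "rooms": 1, "nearest_school": "Lyceum No. 3"},
]

# Group listings by (rooms, school): one tag page per combination.
tag_pages = defaultdict(list)
for listing in listings:
    tag_pages[(listing["rooms"], listing["nearest_school"])].append(listing["id"])

for (rooms, school), ids in tag_pages.items():
    slug = school.lower().replace(" ", "-").replace(".", "")
    print(f"Rent a {rooms}-room apartment near {school}",
          f"/rent/{rooms}-room-near-{slug}/", ids)
```

No new content is written by hand here: the pages are assembled from data the site already has.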

In the comments you can write your niche, and I will try to find a few options for you.

Here is an example from a competitive topic where neither the catalogues nor the manufacturers' websites bother with such a trifle as their clients' interests:

A directory with a "Wood-look PVC windows in Chelyabinsk" page would outrank them all

On the first page of results, only one site makes it at all clear that the company provides the desired service. And this is while full wood-look imitation (with lamination of the sash edges) gives a markup of 20-50% per window.

So, returning to the "directory built on non-unique data" project, I cannot even guess what share of potential customers could be drawn to such a site by adding pages like these. Time will tell, and you can find out…

In what cases content may be non-unique, and why search engines will not impose sanctions for it

First let's analyze the text component of pages, then images and video content. Text uniqueness is checked by means of shingles: the entire text is divided into overlapping fragments (usually 3 to 10 words each), and each fragment is checked for whether the same 3 (4, 5, 10) words occur in the same order on other sites.

For example, a text containing the phrase "International Children's Day" cannot be 100% unique with a shingle length of 4, because this word combination occurs hundreds of thousands of times on forums, news sites, calendars, etc. A minimal sketch of the check is below.
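A minimal sketch of such a check, assuming simple word-level shingles of fixed length and two plain-text documents:

```python
# A minimal sketch of shingle-based uniqueness between two texts.
def shingles(text: str, size: int = 4) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def uniqueness(text: str, other: str, size: int = 4) -> float:
    own = shingles(text, size)
    if not own:
        return 0.0
    matched = own & shingles(other, size)
    return 100.0 * (1 - len(matched) / len(own))

a = "Happy International Children's Day to all our readers and friends"
b = "We wish you a happy International Children's Day today"
print(f"{uniqueness(a, b):.1f}% unique")  # the shared phrase lowers the score
```

Real checkers crawl the web for matches and normalize the text first, but the principle is the same.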

At the same time, in hundreds of niches text content simply cannot be unique. Here are the first examples that came to mind:

The texts of laws, court decisions, legal wording and quotations from them. You cannot just take and rewrite the Civil Code or all the arbitration court decisions of the Moscow region. Websites and Q&A services that translate from legalese into human language quote laws, by-laws and court practice in similar cases over and over. Is this non-unique content? Yes. Have websites been banned or downgraded for it? No.

Online libraries. In essence, all that distinguishes such sites is the number of books, how they are split up (the whole book on one page, or 10,000 words per page) and perhaps forums, "best of" lists, etc. Within a large site, is more than 99% of the content (texts of books and poems) non-unique? Yes. Are such projects shown in search engines and do they get visitors from there? Yes. Would you read Pushkin's poems in a rewrite with boosted uniqueness? I wouldn't.

Song lyrics and translations. Even if we assume that none of the translations on a site are copied to similar projects, the original lyrics remain. It turns out such sites cannot boast uniqueness above 50% and are judged by other parameters: convenience, user behavior, number of songs. Is only one lyrics site left at the top, with all the rest blacklisted by Yandex and Google? No.

Reference information that does not change, or rarely becomes outdated: postal codes, distances between settlements, last year's currency and stock rates, etc. Do hundreds of sites offer the same information? Yes. Does each of them get its share of visitors? Mostly, yes.

More examples can be found in the reference section of any site rating:

The LiveInternet rating

It turns out that despite the abundance of sites with little or no uniqueness, the myth of its importance continues to live and grow, and not only for texts.


The uniqueness of graphic content (photos, images, infographics, etc.) matters for attracting image-search traffic, but it does not affect the site as a whole. And even to get that "picture" traffic, besides owning original images you need to optimize the alt and title attributes, the text near the image and the page itself.

Search engines apply no automatic sanctions or filters to sites that use non-unique images. I will cover the risks of using other people's images a little later.

The same is true for video content. Whether it is someone else's video or your own makes no difference to the site's ranking.

YouTube allows embedding a video if its author does not object.

If a search engine were against posting other people's content on site pages, it would not offer such functionality in its own service.

Added value

Let's see what the key principles of the world's leading search engine are:

Do you see any percentages of text uniqueness or anything similar here?

Even for the universally hated generated sites, heatedly called doorways or trash sites, there is a set of rules:

As long as you edit it, generate to your heart's content.

Here is what Yandex offers website owners:

Everything has to be original, otherwise the site will not be ranked high.

The problem is that Yandex's help gives no definition of originality or uniqueness that can be measured in any metric. You will not find definitions like "unique content is content that occupies at least N% of the page's source code in characters, has uniqueness of at least NN%, and can be measured with service XYZ". In effect these are unverifiable recommendations, which the following image illustrates:

My personal experience and analysis of various sites show that "originality" should rather be understood as the project's "highlight" or special feature, not as "96%+ text uniqueness according to some service".

I will not name specific domains. You can easily find for yourself both sites that copy and rewrite information from other sites, and sites that are pure content aggregators, all collecting from 1,000 to 15,000 visitors per day.

User convenience is much more important than original content. And convenience is created through added value, which takes completely different forms.

1. Convenience of content consumption

If you take required content (for example, polling station addresses from the CEC website) and bring it into a convenient form with address search, it will not be unique content. But it will be more convenient than this:

Quite the UI designer you have there

You may have read books online once or searched for lyrics (or translations) of songs. In that case, you probably had a favorite site that you went to without searching. You went because of the convenience, not the uniqueness, of the content.

2. Combining data from different sources into a single format

Such sites include any aggregators, from a catalogue of hotels in Sochi to an aggregator of e-currency exchangers.

3. Data compilation

The simplest example is a recipe site where, in addition to the ingredients, the calorie content, protein-fat-carbohydrate breakdown and cooking time are specified.

What is the main content here?

For a lady who counts calories and weighs her portion on a scale, the calorie content of the dish matters more than the uniqueness of the recipe. She may not even need the recipe itself.

4. Services that add value to non-unique content

A simple example: a calculator for the approximate cost of owning a car, taking into account:

the annual depreciation of its value over N years;

the cost of fuel when driving M kilometres every year;

parking charges for X days a month at a price of Y;

the cost of insurance for P years of driving experience.

Would such a service be in demand? Probably yes. Can you find something like it? A quick search turned up nothing for me. Would any data here be unique? No, it is a simple formula calculation, as the sketch below shows.
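A rough sketch of such a calculator, assuming simple linear depreciation; all parameter values are invented for illustration:

```python
# A rough sketch of a cost-of-ownership calculator; linear depreciation
# is assumed, and all the values below are invented for illustration.
def ownership_cost(price: float, years: int, depreciation_rate: float,
                   km_per_year: float, fuel_per_100km: float, fuel_price: float,
                   parking_days_per_month: int, parking_price: float,
                   insurance_per_year: float) -> float:
    depreciation = price * depreciation_rate * years
    fuel = km_per_year / 100 * fuel_per_100km * fuel_price * years
    parking = parking_days_per_month * 12 * parking_price * years
    insurance = insurance_per_year * years
    return depreciation + fuel + parking + insurance

total = ownership_cost(price=1_500_000, years=3, depreciation_rate=0.10,
                       km_per_year=15_000, fuel_per_100km=8.5, fuel_price=45,
                       parking_days_per_month=20, parking_price=150,
                       insurance_per_year=40_000)
print(f"Approximate 3-year cost of ownership: {total:,.0f}")
```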

5. Simply the completeness of the information

For example, an SEO specialist may want to know how to close an online store's filtering results from search engines via robots.txt. First, the top 10 will also contain pages listing ALL the directives possible in robots.txt, including the desired one. Second, all these articles coincide by 10-90%, because the command examples are either identical or very similar. Third, the user is more likely to stay on the page with the most examples (in case something else proves useful) rather than on the most unique one. The directives in question might look like the fragment below.
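For reference, such directives might look like this (the `filter` parameter name is invented for the example; the `*` wildcard is supported by both Yandex and Google):

```
User-agent: *
# Close filtering results like /catalog/?filter=color-red from indexing
Disallow: /*?filter=
Disallow: /*&filter=
```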

6. User involvement

Imagine two different sites with the same content (for example, one of the laws of the Russian Federation): on one, a lawyer answers users' questions; on the other, you cannot even leave a comment. Which would you prefer? You could argue that comments are unique content, and that is true. But initially both sites are in an equal position, if only because the laws of the Russian Federation appear not on two but on hundreds of sites, so the starting uniqueness is near zero for both projects.

Another example is a video on the page's topic. To paraphrase the proverb, "better to see once than to read a hundred times", especially if the text is as "clear" as this one:

That is, if you take a car's operation and repair manual, put it on pages and add a relevant video to each, you get a good and useful site. With zero uniqueness (neither the texts nor the videos are your own), but with excellent behavioral factors.

7. First come, first served

This mainly applies to news and event content. A site whose new pages gain visitors faster will get the lion's share of traffic. The initial traffic can come from social networks, YouTube or entertainment sites.

Click-through rate plays a very important role here. If the original source has a bland snippet while the site that copied the page writes a clickable title and uses micro markup, the uniqueness of the content will not matter: users will click through not to the original but to the copy. This also adds weight to the copy if it turns out to be the last click in the results. A sketch of such markup is below.
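As an illustration, schema.org Article micro markup in JSON-LD might be generated like this (all values are placeholders for the example):

```python
# A sketch of schema.org Article micro markup (JSON-LD);
# every value below is a placeholder.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Is content uniqueness really that important?",
    "datePublished": "2017-09-01",
    "author": {"@type": "Person", "name": "Site Author"},
    "image": "https://example.com/cover.jpg",
}

snippet = ('<script type="application/ld+json">'
           + json.dumps(article, ensure_ascii=False)
           + "</script>")
print(snippet)  # paste into the page's <head>
```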

8. Data volume

A simple example: a site with 50,000 song lyrics will gather more visitors than a site with 500. This is especially visible in non-commercial and casual niches (online libraries, flash games, etc.): the leading sites accumulate more content. Not more unique content, just more pages in principle.

In commercial niches, online hypermarkets are a good example. In a narrow segment (tents and sleeping bags) they lose to a specialized travel equipment store, but overall they receive hundreds or thousands of times more visitors and profit, even though their product descriptions may coincide by 30-100%.

9. Changing the format of content

You can extract the subtitles from a video, add frames to them, and get unique content. This is what Moz does by publishing video transcriptions in text form.

Three formats of presenting the same content

Various video sites work on the same principle: selecting thematic videos and adding descriptions to them. There are also more "parasitic" sites, which provide added value only to search engines. For example:

Instagram clones

Since Instagram itself is hardly optimized for search, unofficial fans help it along. As you understand, there is no unique content there.

10. Unique added value

All of examples 1-9 come down to convenience, volume or merging data, without creating any significant amount of new content. The opposite example is content that presents data in a format most people can understand. For example, when I was choosing an inexpensive DSLR, the technical specifications told me nothing. What a non-professional understands is sample photos taken with the particular model:

This page could be expanded many times over by adding more material (technical specifications, YouTube videos shot with this model, photos of the device itself, etc.), and this would increase its traffic.

In online electronics stores where visitors can vote on the usefulness of reviews, the most upvoted ones are usually written by people for people: "Bought the laptop; you can watch movies for 4 hours without recharging; the screen doesn't glare in the sun; it doesn't run hot." Try getting the same information out of the specifications.

Non-search risks of using non-unique content

Although using other people's materials carries no direct risk of a ban or filter from search engines, a particularly zealous copy-paster faces a different danger.

Meme

Among Russian search queries with the word "free", the leading companions are "download", "watch" and "online". This is possible thanks to near-zero copyright literacy. Meanwhile, for a single copyright violation (copying someone else's site, text, photo or video), under Russian law you can receive a claim for 10,000 to 5,000,000 rubles.

Legal practice in such cases is already established; you can easily get a claim for 200-300 thousand rubles just for borrowing someone's photo set. The Runet is full of high-profile stories: both site owners and owners of VKontakte communities have been caught, and individuals have faced lawsuits for millions.

Surprisingly, despite all this, there are still plenty of unafraid webmasters who just take pictures from Google: "so what, they're in the public domain". Over the last 5 years I have repeatedly watched epic threads of 40-50 pages on various forums, the essence of which boils down to five stages:

"I got a letter saying I have other people's photos; they want NNN thousand."

A sensible lawyer comments along the lines of "try to settle pre-trial, it will be cheaper".

100500 comments from couch lawyers: "they won't prove it", "it will blow over" and so on.

The brave thread author removes the photos and tells the claimants to get lost.

A subpoena, court, and the thread author really does pay NNN thousand, or slightly less.

And for projects with hundreds of thousands of dollars invested, a fine of 100 thousand rubles per photo will seem like a fairy tale compared to regular removal of content following complaints. Google provides a tool for this:

A competitor's online store took your pictures and climbed into the top? File a complaint and knock it out of Google.

It is precisely because of copyright risks that commercial projects find it harder to do without unique content. There is no guarantee that the site owner will not receive a claim for a round sum six months after taking photos "from the Internet". And what if it is an online store, with not a dozen but thousands of copied photos?

The same applies to text. Earlier in the article I gave an example where you could simply swap in your own company name. For that, too, you can be hit for a round sum, if the text turns out to have been typed up from a booklet brought in by a competitor's manager, and this is proven in court.

I recommend reviewing your policy for working with site content, especially where contractors are involved. Regardless of who runs your site or social media account, it is the owner of the account or the site's domain who will answer in court.

Ask where the data added to your project comes from.

Conclusions and checklist

For search engines, what matters is not the uniqueness of content but user satisfaction. If data is not unique but provides more value than the original source, such content will receive its share of the audience, perhaps a larger one than the original.

Checklist:

If you take data for your project "from the Internet", make sure it is either in the public domain or the author has allowed its use under certain conditions.

What matters is benefit, not uniqueness.

Search engines' authorship attribution tools do not guarantee that the original source keeps the top positions.

If your content has been pulled onto other sites without permission, you can either sue or have it removed from Google.

If a copier ranks above the source, it is better in some way. Find out how, and make your project better.

Make content harder to steal: watermark photos and videos (a small sketch follows this checklist). Take photos that make no sense to steal. Text should not be impersonal: link to your other materials, mention features that only you have.

In popular niches, content uniqueness drops not only because of theft. When there are 50 articles about the new iPhone, their uniqueness is higher than when there are 50,000.

Check your site's materials: is there a risk of a lawsuit over the images on the company blog or home page?

Create a content policy that excludes copyright infringement and requires compliance from employees and contractors.

It may happen that you cannot force someone else's site to remove your content. Don't be discouraged. You will achieve far more by developing your project than by chasing everyone whose text coincides 10% with yours.
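As promised above, a minimal watermarking sketch using Pillow; the file names and watermark text are placeholders:

```python
# A minimal watermarking sketch using Pillow (pip install pillow).
# File names and the watermark text are placeholders.
from PIL import Image, ImageDraw, ImageFont

def watermark(src: str, dst: str, text: str = "example.com") -> None:
    base = Image.open(src).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Semi-transparent text near the bottom-right corner.
    x, y = base.width - 10 * len(text), base.height - 30
    draw.text((x, y), text, font=font, fill=(255, 255, 255, 128))
    Image.alpha_composite(base, overlay).convert("RGB").save(dst)

watermark("photo.jpg", "photo_marked.jpg")
```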

Questions and opinions are welcome in the comments. Remember: you can describe your niche in the comments, and I will try to find a few options to add value or expand your existing content.