Compiling the semantic core: how to build it, how to work with it in practice, and what it consists of

Novice webmasters faced with the need to create a semantic core often do not know where to start, although there is nothing complicated about the process. Simply put, you need to collect a list of the key phrases Internet users type when searching for information related to your site.

The more complete and accurate this list is, the easier it is for a copywriter to write a good text, and the easier it is for you to reach high positions for the queries you need. This article discusses how to compile a large, high-quality semantic core and what to do with it afterwards so that the site reaches the top and gathers plenty of traffic.

The semantic core is a set of key phrases grouped by meaning, where each group reflects one need or desire of the user (the intent) - that is, what a person has in mind when typing a query into the search bar.

The whole process of creating a core can be represented in 4 steps:

  1. We face a task or problem;
  2. We formulate in our head how to find its solution through search;
  3. We type the query into Yandex or Google; other people do the same;
  4. The most frequent variants of these queries end up in analytics services and become key phrases, which we collect and group by need. The result of all these manipulations is a semantic core.

Is it necessary to select key phrases, or can you do without it?

Previously, semantics were compiled in order to find the most frequent keywords on a topic, write them into the text and get good search visibility for them. For the last 5 years, search engines have been moving to a model where the relevance of a document to a query is assessed not by the number of words and the variety of their variations in the text, but by how well the intent is covered.

Google started this in 2013 with the Hummingbird algorithm; Yandex followed in 2016 and 2017 with the Palekh and Korolev technologies, respectively.

Texts written without a semantic core cannot fully cover the topic, which means you will not be able to compete with the top for high-frequency and medium-frequency queries. Betting only on low-frequency queries makes no sense either: on their own they bring too little traffic.

If you want to successfully promote yourself or your product on the Internet, you need to learn how to compile correct semantics that fully covers user needs.

Search query classification

Let's look at 3 parameters by which keywords are classified.

By frequency:

  • High-frequency (HF) - phrases that define the topic. They consist of 1-2 words. On average, such queries start from 1,000-3,000 searches per month and can reach hundreds of thousands of impressions, depending on the topic. Site home pages are most often optimized for them.
  • Medium-frequency (MF) - separate directions within the topic. They mostly contain 2-3 words, with an exact frequency of 500 to 1,000. These are usually commercial site categories or topics for large informational articles.
  • Low-frequency (LF) - queries aimed at a specific answer to a question. As a rule, 3-4 words or more. These can be product cards or article topics; on average, 50 to 500 people search for them per month.
  • When analyzing metrics or data from statistics counters, you may come across one more type - micro-LF keys. These are phrases asked on search perhaps once. There is no point in optimizing a page for them; it is enough to rank in the top for the LF queries that include them.
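To make the classification tangible, here is a minimal sketch in Python that buckets keys by exact monthly frequency. The thresholds are the illustrative ones from this article and vary by topic:

```python
# Illustrative sketch: bucketing keys by exact monthly frequency.
# The thresholds are examples from the article and depend on the topic.
def frequency_band(exact_freq: int) -> str:
    if exact_freq >= 1000:
        return "HF"        # topic-defining phrases
    if exact_freq >= 500:
        return "MF"        # sub-directions within the topic
    if exact_freq >= 50:
        return "LF"        # specific questions / product cards
    return "micro-LF"      # one-off phrases, not worth a dedicated page

print(frequency_band(2500))  # -> HF
```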



By competitiveness:

  • Highly competitive (HC);
  • Medium competitive (MC);
  • Low competitive (LC).

By type of need:

  • Navigational. These express the user's desire to find a specific Internet resource or information on it;
  • Informational. These are characterized by a need for information as the answer to the query;
  • Transactional. These are directly related to the desire to make a purchase;
  • Fuzzy or general. Those for which it is difficult to determine the intent precisely;
  • Geo-dependent and geo-independent. These reflect the need to find information or complete a transaction in one's own city, or without any regional reference.


Depending on the type of site, the following recommendations apply when selecting key phrases for the semantic core.

  1. Information resource. The main emphasis should be on finding article topics in the form of MF and LF queries with low competition. It is recommended to cover the topic broadly and deeply, optimizing each page for a large number of LF keys.
  2. Online store or commercial site. We collect HF, MF and LF queries, segmenting them as precisely as possible so that all the phrases in a cluster are transactional and belong to one intent. We focus on finding well-converting, low-competition LF keywords.

How to compose a large semantic core correctly - step-by-step instructions

We have reached the main part of the article, where I will walk through the main stages you need to go through to build the core of a future site.
To make the process clearer, every step is illustrated with examples.

Search for basic phrases

Work on the core begins with selecting a primary list of basic words and phrases (HF) that best characterize the topic and are used in a broad sense. They are also called markers.

These can be names of business directions, types of products, or popular queries from the topic. As a rule, they consist of 1-2 words and get tens, sometimes hundreds of thousands of impressions per month. It is better not to take overly broad keys, so as not to drown in negative keywords at the expansion stage.

The most convenient way to select marker phrases is Yandex Wordstat. Typing a query into it, in the left column we see the phrases that contain it; in the right column, similar queries, among which you can often find suitable topics for expansion. The service also shows the base frequency of a phrase, that is, how many times per month it was asked in all word forms and with any words added to it.

By itself, this frequency says little, so to get more accurate values you need to use operators. Let's look at what they are and what they are for.

Yandex Wordstat operators:

1) "..." - quotes. A query in quotation marks allows you to track how many times a phrase was searched in Yandex with all its word forms, but without adding other words (tails).

2)! - Exclamation point. Using it before each word in the request, we fix its form and get the number of impressions in the search for a key phrase only in the specified word form, but with a tail.

3) "! ...! ...! ..." - quotes and an exclamation mark before each word. The most important operator for the optimizer. It allows you to understand how many times a keyword is requested per month strictly for a given phrase, as it is written, without adding any words.

4) +. Yandex Wordstat does not take into account prepositions and pronouns when making a request. If you need him to show them, we put a plus sign in front of them.

5) -. The second most important operator. With its help, words that do not fit are quickly eliminated. To apply it, after the analyzed phrase, put a minus and a stop word. If there are several of them we repeat the procedure.

6) (… |…). If you need to get data from Yandex Wordstat for several phrases at the same time, we enclose them in brackets and separate them with a forward slash. In practice, the method is rarely used.
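To make the operator syntax concrete, here is a small sketch (Python, names are illustrative) that builds the operator-wrapped variants of one phrase; the frequencies themselves must still be requested from Wordstat:

```python
# A sketch of the Wordstat operator syntax applied to one phrase.
# It only builds query strings; frequencies come from Wordstat itself.
phrase = "semantic core"

quoted      = f'"{phrase}"'                                # word forms, no tails
fixed_forms = " ".join("!" + w for w in phrase.split())    # exact forms + tails
exact       = '"' + " ".join("!" + w for w in phrase.split()) + '"'  # strict match

print(quoted)       # "semantic core"
print(fixed_forms)  # !semantic !core
print(exact)        # "!semantic !core"
```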

For convenient work with the service, I recommend installing the special "Wordstat Assistant" browser extension. It works in Mozilla Firefox, Google Chrome and Yandex Browser, and lets you copy phrases and their frequencies with one click on the "+" or "Add all" icon.


Let's say we decided to start our own SEO blog. Let's choose six basic phrases for it:

  • semantic core;
  • optimization;
  • copywriting;
  • promotion;
  • monetization;
  • Direct

Search for synonyms

When formulating a query to a search engine, users may use words that are similar in meaning but different in spelling.

For example, "car" and "automobile".

It is important to find as many synonyms for the main words as possible in order to increase the coverage of the future semantic core. If this is not done, then during parsing we will miss a whole layer of key phrases that reveal user needs.

What we use:

  • brainstorming;
  • the right column of Yandex Wordstat;
  • queries typed in a different alphabet (for example, brand names in Cyrillic);
  • special terms, abbreviations and slang expressions from the subject area;
  • the Yandex and Google blocks "searched together with" the query;
  • competitor snippets.

As a result of all these actions, we get the following list of phrases for our chosen topic:


Expanding the basic queries

Let's parse these keywords to identify the basic needs of people in this area.
The most convenient way to do this is the Key Collector program, but if you are reluctant to pay 1,800 rubles for a license, use its free analogue - Slovoeb.

Its functionality is certainly weaker, but it is suitable for small projects.
If you do not want to dig into desktop software, you can use the Just-Magic and Rush Analytics services. Still, it is better to spend a little time and master the programs.

I will show how everything works in Key Collector, but if you work with Slovoeb, everything will be clear too: the program interfaces are similar.

Procedure:

1) Add the list of basic phrases to the program and collect their base and exact frequencies. If we plan to promote in a specific region, we specify it; for informational sites this is usually unnecessary.


2) Parse the left column of Yandex Wordstat for the added words to get all the queries from our topic.


3) At the output we got 3,374 phrases. Let's collect their exact frequency, as in step 1.


4) Let's check whether the list contains keys with zero base frequency.


If there are any, delete them and move on to the next step.
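This zero-frequency cleanup is easy to script as well. A minimal sketch, assuming the phrase list was exported to CSV with base_freq and exact_freq columns (the column names are assumptions):

```python
# A minimal sketch of step 4: dropping keys with zero base frequency
# from an exported phrase list (column names are assumptions).
import pandas as pd

df = pd.read_csv("phrases.csv")   # columns: phrase, base_freq, exact_freq
df = df[df["base_freq"] > 0]      # remove zero-frequency keys
df.to_csv("phrases_clean.csv", index=False)
```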

Negative keywords

Many people neglect collecting negative keywords and simply delete unsuitable phrases instead. Later, though, you will find that keeping such a list is convenient and really saves time.

Open the Data -> Analysis tab in Key Collector. Select grouping by individual words and scroll through the list of keys. If you see an unsuitable phrase, click the blue icon and add the word, with all its word forms, to the stop words.

In Slovoeb, work with stop words is implemented in a more simplified form, but you can also build your own list of unsuitable phrases and apply it to the key list.

Do not forget to use sorting by base frequency and by the number of phrases. This option helps you quickly shrink the list of original phrases or filter out rare ones.

After compiling the list of stop words, we apply it to our project and proceed to collecting search suggestions.
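For readers who prefer scripting, here is a rough sketch of what applying a stop-word list boils down to; the whole-word matching mimics the behavior described above, and the word list is hypothetical:

```python
# A sketch of applying a stop-word list: any phrase containing a
# stop word as a whole word is flagged for removal.
stop_words = {"free", "download", "used"}   # hypothetical list

def is_blocked(phrase: str) -> bool:
    return any(word in phrase.lower().split() for word in stop_words)

phrases = ["buy bedding", "download bedding catalog free"]
kept = [p for p in phrases if not is_blocked(p)]
print(kept)  # ['buy bedding']
```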

Parsing search suggestions

When you type a query into Yandex or Google, the search engine suggests continuations based on the most popular phrases users enter. These keywords are called search suggestions.

Many of them never appear in Wordstat, so when building the semantics it is essential to collect such queries as well.

By default, Key Collector parses them with iteration of endings, the Cyrillic and Latin alphabets, and with a space after each phrase. If you are ready to sacrifice quantity to speed the process up considerably, tick the box "Collect only TOP suggestions without brute force and space after the phrase".


Among search suggestions you can often find phrases with good frequency and competition ten times lower than in Wordstat, so in narrow niches I recommend collecting as many words as possible.
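As a sketch of what suggestion parsers do under the hood, the snippet below pulls suggestions from Google's widely used but unofficial and undocumented suggest endpoint; treat the endpoint and response format as assumptions, and remember that tools like Key Collector do this at scale with proxies:

```python
# A rough sketch of collecting search suggestions via Google's
# unofficial suggest endpoint (an assumption; respect rate limits).
import requests

def google_suggestions(query: str) -> list[str]:
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()[1]   # response shape: [query, [suggestions, ...]]

print(google_suggestions("semantic core"))
```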

The time needed to parse suggestions depends directly on the number of simultaneous calls to the search engines' servers. Key Collector supports at most 50 threads, but to parse in this mode you will need the same number of proxies and Yandex accounts.

For our project, collecting suggestions yielded 29,595 unique phrases. The whole process took a little over 2 hours on 10 threads; with 50 threads we would fit into about 25 minutes.


Determining the base and exact frequency for all phrases

For further work it is important to determine the base and exact frequencies and weed out all the zeros. Keep queries with a low number of impressions if they are targeted: they help you understand the intent better and build a more complete article structure than what is currently in the top.

Before collecting the frequencies, we first filter out everything unnecessary:

  • repetitions of words;
  • keys containing special characters;
  • duplicate phrases (via the Implicit Duplicates Analysis tool).
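A rough sketch of this pre-filtering in Python, assuming a CSV export with a phrase column (the column name and normalization rules are assumptions):

```python
# A sketch of the pre-filtering step: dropping repeated words, phrases
# with stray symbols, and implicit duplicates (same words, other order).
import re
import pandas as pd

df = pd.read_csv("phrases.csv")

def has_repeated_words(p: str) -> bool:
    words = p.lower().split()
    return len(words) != len(set(words))

def has_bad_symbols(p: str) -> bool:
    return bool(re.search(r"[^\w\s-]", p))

df = df[~df["phrase"].apply(has_repeated_words)]
df = df[~df["phrase"].apply(has_bad_symbols)]
# implicit duplicates: normalize word order, keep the first occurrence
df["norm"] = df["phrase"].apply(lambda p: " ".join(sorted(p.lower().split())))
df = df.drop_duplicates(subset="norm").drop(columns="norm")
```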


For the remaining phrases, we determine the exact and base frequency.

a) for phrases of up to 7 words:

  • Select them with the filter "Phrase consists of no more than 7 words";
  • Open the "Collect from Yandex.Direct" window by clicking the "D" icon;
  • Specify the region if necessary;
  • Choose the guaranteed impressions mode;
  • Set the collection period to 1 month and tick the required frequency types;
  • Click "Get data".


b) for phrases of 8 words or more:

  • Set a filter on the "Phrase" column - "consists of at least 8 words";
  • Specify the region below if you need to promote in a specific city;
  • Click the magnifying glass and select "Collect all types of frequencies".


Cleaning keywords from garbage

After receiving the impression counts for our keys, we can start filtering out the ones that do not fit.

Let's go through the steps:

1. Go to "Group Analysis" in Key Collector and sort the keys by the number of words. The task is to find non-target but frequent words and add them to the stop words.
We do everything the same way as in the "Negative keywords" section.


2. Apply all the found stop words to our phrase list, then look through it to make sure no target queries are lost. After checking, click "Delete marked phrases".


3. Filter out dummy phrases - ones that are rarely used in exact match but have a high base frequency. To do this, in the Key Collector settings, under "KEY & SERP", insert the calculation formula KEY 1 = (YandexWordstatBaseFreq) / (YandexWordstatQuotePointFreq) and save the changes.


4. Calculate KEY 1 and delete the phrases for which this parameter is 100 or more.
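The same dummy-phrase filter is trivial to reproduce outside Key Collector. A minimal sketch, assuming a CSV with base_freq and exact_freq columns:

```python
# A sketch of the dummy-phrase filter: KEY 1 = base / exact frequency;
# phrases asked often with tails but almost never verbatim are dropped.
import pandas as pd

df = pd.read_csv("phrases.csv")   # columns: phrase, base_freq, exact_freq
df = df[df["exact_freq"] > 0]     # avoid division by zero
df["key1"] = df["base_freq"] / df["exact_freq"]
df = df[df["key1"] < 100]         # keep only non-dummy phrases
```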


The remaining keys need to be grouped by landing page.

Clustering

The distribution of queries into groups begins with clustering phrases by the search top, using the free program Majento Clusterizer. I recommend KeyAssort, a paid analogue with wider functionality and faster operation, but for a small core the free tool is quite enough. The only caveat: to work in either of them you will need to buy XML limits. The average price is 5 rubles per 1,000 requests, so processing an average core of 20-30 thousand keys will cost 100-150 rubles. The address of the service used is shown in the screenshot below.


The essence of this clustering method is to group phrases whose Yandex top 10 results share:

  • common URLs with each other (Hard);
  • common URLs with the most frequent query in the group (Soft).

Depending on the number of such matches across different sites, clustering thresholds are distinguished: 2, 3, 4 ... 10.

The advantage of this method is that phrases are grouped by people's needs rather than merely by synonyms. This lets you see immediately which keywords can be used on one landing page.
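To illustrate the Hard method, here is a simplified sketch: two phrases land in one group if their top 10 results share at least the threshold number of URLs. Real tools fetch the top 10 via XML limits; here the SERPs are assumed to be already collected, and all names are illustrative:

```python
# A simplified sketch of Hard clustering: merge phrases whose top-10
# SERPs share at least `threshold` URLs.
from itertools import combinations

serps = {                                 # phrase -> set of top-10 URLs
    "buy blanket": {"a.com", "b.com", "c.com", "d.com"},
    "blanket price": {"a.com", "b.com", "c.com", "e.com"},
    "wash blanket": {"x.com", "y.com", "z.com"},
}

def hard_clusters(serps: dict, threshold: int = 3) -> list:
    clusters = [{p} for p in serps]
    for p1, p2 in combinations(serps, 2):
        if len(serps[p1] & serps[p2]) >= threshold:
            c1 = next(c for c in clusters if p1 in c)
            c2 = next(c for c in clusters if p2 in c)
            if c1 is not c2:
                c1 |= c2
                clusters.remove(c2)
    return clusters

print(hard_clusters(serps))
# -> [{'buy blanket', 'blanket price'}, {'wash blanket'}]
```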

For informational sites, the following works well:

  • Soft with a threshold of 3-4, followed by manual cleanup;
  • Hard with a threshold of 3, followed by merging clusters by meaning.

Online stores and commercial sites are, as a rule, promoted with Hard clustering at a threshold of 3. The topic is extensive, so I will cover it in a separate article.

For our project, grouping by the Hard method at a threshold of 3 produced 317 groups.


Competition check

There is no point in promoting highly competitive queries: it is hard to reach the top, and without the top there will be no traffic to the article. To understand which topics are profitable to write on, we use the following method:

We look at the exact total frequency of the group of phrases the article is written for and at the Mutagen competition score. For informational sites, I recommend taking on topics with a total exact frequency of 300 or more and a competition coefficient from 1 to 12 inclusive.

In commercial topics, focus on the margin of the product or service and on what the competitors in the top 10 are doing. Even 5-10 targeted queries per month can justify a separate page.

How to check query competition:

a) manually, by entering the phrase into the service itself or via bulk tasks;


b) in batch mode, through the Key Collector program.


Topic selection and grouping

Let's review each group obtained for our project after clustering and select topics for the site.
Majento, unlike KeyAssort, cannot export data on the number of impressions for each phrase, so we will have to collect them again through Key Collector.

Instructions:

1) Export all groups from Majento in CSV format;
2) Concatenate the phrases in Excel using the "group:key" mask;
3) Load the resulting list into Key Collector. In the settings, the "Group:Key" import mode must be ticked, and monitoring of phrase presence in other groups must be off;


4) Collect the base and exact frequencies for the keywords in the newly created groups. (If you use KeyAssort, this is unnecessary: the program can work with additional columns.)
5) Look for clusters with a unique intent that contain at least 3 phrases and more than 300 impressions in total across all queries. Then check the 3-4 most frequent phrases for Mutagen competition; if among them there are keys with competition below 12, we take the topic into work;

6) Look through the remaining groups. If phrases are close in meaning and can be covered on one page, merge them. For groups carrying new meanings, assess the prospects by total phrase frequency: if it is below 150 per month, set the group aside until you have gone through the entire core. Perhaps it can be merged with another cluster to reach 300 exact impressions - the minimum worth taking an article into work for. To speed up manual grouping, use the auxiliary tools - the quick filter and the frequency dictionary - which help you quickly find suitable phrases in other clusters;


Attention! How do you know whether clusters can be merged? Take 2 frequent keys chosen in step 5 for the landing page and 1 query from the new group.
Add them to Arsenkin's "Upload Top 10" tool, specifying the region if needed. Then look at the number of color-coded intersections of the third phrase with the other two. We merge the groups if there are 3 or more; if there are no matches or only one, the intents are different and merging is impossible; with exactly 2 intersections, inspect the search results manually and use logic.
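The same merge test can be written down as a tiny decision function; the data here is illustrative, and the thresholds are the ones from the paragraph above:

```python
# A sketch of the merge test: compare a candidate phrase's top 10
# with the anchor keys' top 10 and decide by shared-URL count.
def can_merge(anchor_serps: list, candidate_serp: set) -> str:
    overlap = max(len(candidate_serp & s) for s in anchor_serps)
    if overlap >= 3:
        return "merge"
    if overlap <= 1:
        return "different intent"
    return "check the SERP manually"   # exactly 2 intersections

anchors = [{"a.com", "b.com", "c.com"}, {"a.com", "b.com", "d.com"}]
print(can_merge(anchors, {"a.com", "b.com", "c.com", "x.com"}))  # merge
```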

7) After grouping the keys, we get a list of promising article topics and the semantics for them.


Deleting queries for other content types

When compiling the semantic core, it is important to understand that blogs and content sites do not need commercial queries, just as online stores do not need informational ones.

We go through each group and clean out everything unnecessary. If the intent of a query cannot be determined precisely, we compare the search results or use the tools:

  • the commercialization check from Pixel Tools (free, but with a daily limit of checks);
  • the Just-Magic service - clustering with the "check query commerciality" option enabled (paid, the cost depends on the tariff).

After that, we move on to the last stage.

Optimizing phrases

We polish the semantic core so that the SEO specialist and the copywriter find it convenient to work with later. To do this, in each group we keep the key phrases that fully reflect people's needs and contain as many synonyms of the main phrases as possible.

Algorithm of actions:

  • Sort the keywords in Excel or Key Collector alphabetically, from A to Z;
  • Choose those that reveal the topic from different angles and in different words. All else being equal, keep phrases with a higher exact frequency or a lower KEY 1 (the ratio of the base frequency to the exact one);
  • Remove keywords with fewer than 7 impressions per month that carry no new meanings and contain no unique synonyms.
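As a sketch, the same final polish in Python, assuming a CSV with group, phrase, base_freq and exact_freq columns (the column names are assumptions):

```python
# A sketch of the final polish: drop phrases under 7 exact impressions,
# compute KEY 1 as a tie-breaking metric, sort alphabetically per group.
import pandas as pd

df = pd.read_csv("grouped_phrases.csv")   # group, phrase, base_freq, exact_freq
df = df[df["exact_freq"] >= 7]            # drop phrases under 7 exact impressions
df["key1"] = df["base_freq"] / df["exact_freq"]
df = df.sort_values(["group", "phrase"])  # A to Z inside each group
df.to_csv("semantic_core.csv", index=False)
```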

Here is an example of what a well-composed semantic core looks like:

In red, I marked phrases that do not fit the intent. If you neglect my recommendations on manual grouping and do not check compatibility, the page will end up optimized for incompatible key phrases, and you will never see high positions for the promoted queries.

Final checklist

  1. We select the main high-frequency queries that define the topic;
  2. We look for synonyms for them, using the left and right columns of Wordstat, competitors' sites and their snippets;
  3. We expand the collected queries by parsing the left column of Wordstat;
  4. We prepare a list of stop words and apply it to the collected phrases;
  5. We parse Yandex and Google suggestions;
  6. We collect the base and exact frequencies;
  7. We expand the list of negative keywords and clean out garbage and dummy queries;
  8. We cluster via Majento or KeyAssort: for informational sites in Soft mode with a threshold of 3-4, for commercial resources in Hard mode with a threshold of 3;
  9. We import the data into Key Collector and determine the competition of 3-4 phrases for each cluster with a unique intent;
  10. We select topics and decide on landing pages for the queries, based on the total number of exact impressions for all phrases in a cluster (from 300 for informational sites) and the Mutagen competition of the most frequent ones (up to 12);
  11. For each suitable page, we look for other clusters with similar user needs. If they can be covered on one page, we merge them. When the need is unclear, or we suspect that a different content type or page should answer it, we check the search results or use the Pixel Tools or Just-Magic tools. For content sites the core should consist of informational queries; for commercial sites, of transactional ones. We delete the rest;
  12. We sort the keys in each group alphabetically and keep those that describe the topic from different angles and in different words. All else being equal, priority goes to queries with a lower base-to-exact frequency ratio and a higher number of exact impressions per month.

What to do with the SEO core after it is created

We made a list of keys, handed it to the author, and he wrote an excellent article that fully covers all the meanings. If only - one can dream... A sensible text will come out only if the copywriter clearly understands what you want from him and how he can check his own work.

Let's analyze the 4 components which, worked through properly, are guaranteed to bring plenty of targeted traffic to the article:

A good structure. We analyze the queries selected for the landing page and identify what needs people have in the topic. Then we write an article outline that fully answers them. The task is to ensure that visitors to the site get a comprehensive, in-depth answer matching the semantics you compiled. This yields good behavioral metrics and high relevance to the intent. Once you have made the plan, look at competitors' sites by typing in the main query you are promoting. Do it exactly in this order: first work it out yourself, then see what others have and refine if necessary.

Optimization for keys. The article itself is optimized for the 1-2 most frequent keys with Mutagen competition up to 12. Another 2-3 mid-frequency phrases can be used as headings in diluted form, that is, with extra unrelated words inserted, or using synonyms and word forms. From the low-frequency phrases we pull out the unique part - the tail - and embed it evenly in the text; the search engines will find and stitch everything together themselves.

Synonyms for the basic queries. We list them separately from the semantic core and instruct the copywriter to use them evenly throughout the text. This lowers the density of the main words while keeping the text optimized enough to reach the top.

Theme-setting phrases. LSI words by themselves do not promote the page, but their presence suggests that the text most likely belongs to the "pen" of an expert, which is a plus for content quality. To find thematic phrases, we use the "Terms of Reference for a Copywriter" tool from Pixel Tools.


An alternative method of selecting key phrases using competitor analysis services

There is a quick approach to building the semantic core that works for both novice and advanced users. Its essence is that we initially select keys not for the entire site or a category, but for a specific article or landing page.

It can be implemented in 2 ways, which differ in how we choose topics for the page and how deeply we expand the key phrases:

  • by parsing primary keys;
  • based on competitor analysis.

Each of them can be implemented at a simpler or a more advanced level. Let's take a look at all the options.

Without using programs

A copywriter or webmaster often has no desire to deal with the interfaces of numerous programs but still needs good topics and key phrases for them.
This method is exactly for beginners and those who do not want to bother: all actions are performed without additional software, using simple and understandable services.

What you need:

  • the Keys.so competitor analysis service - 1,500 rubles (15% off with the "altblog" promo code);
  • Mutagen - query competitiveness checks at 30 kopecks per check, base and exact frequency collection at 2 kopecks per check;
  • Bukvarix - the free version, or a business account for 995 rubles (currently discounted to 695 rubles).

Option 1. Selecting a topic by parsing basic phrases:

  1. We select the main keys of the topic in a broad sense, using brainstorming and the left and right columns of Yandex Wordstat;
  2. Then we look for synonyms for them, using the methods mentioned earlier;
  3. We feed all the collected marker queries into Bukvarix (a paid tariff is required) in the extended mode "Search by keyword list";
  4. We specify in the filter: "!Exact !frequency" from 50, number of words from 3;
  5. We export the entire list to Excel;
  6. We select all the keywords and send them for grouping to the Kulakov Clusterizer service. For a regional site, select the desired city. Leave the clustering threshold at 2 for informational sites; set it to 3 for commercial ones;
  7. After grouping, we select topics for articles by examining the resulting clusters. We take those with 3 or more phrases and a unique intent. Analyzing the URLs of the top-ranking sites in the "Competitors" column (on the right in Kulakov's table) helps to better understand people's needs. Do not forget to check Mutagen competition: probe 2-3 queries from the cluster, and if all of them score above 12, the topic is not worth taking;
  8. We have decided on the name of the future landing page; it remains to select key phrases for it;
  9. From the "Competitors" field, we copy 3 URLs with the appropriate page type (for an informational site, links to articles; for a commercial one, links to stores);
  10. We insert them one by one into Keys.so and download all their key phrases;
  11. We combine the lists in Excel and delete duplicates;
  12. The service's data alone is not enough, so we expand it, using Bukvarix again;
  13. We send the resulting list for clustering to the Kulakov Clusterizer;
  14. We select the query groups that suit the landing page, judging by intent;
  15. We collect the base and exact frequencies via Mutagen in the "Mass tasks" mode;
  16. We export the list with the refined impression data to Excel and remove the zeros for both frequency types;
  17. Also in Excel, we add a column with the ratio of the base frequency to the exact one and keep only the keys for which this ratio is below 100;
  18. We delete queries implying a different content type;
  19. We keep the phrases that reveal the main intent as fully as possible and in different words;
  20. We repeat steps 8-19 for the remaining topics.

Option 2. Choosing a topic through competitor analysis:

1. We look for top sites in our topic by entering high-frequency queries and reviewing the results with Arsenkin's "Top-10 Analysis" tool; finding 1-2 suitable resources is enough.
If we are promoting a site in a specific city, we specify the region;
2. We go to the Keys.so service, enter the URLs of the sites we found, and see which competitor pages bring the most traffic;
3. We check the 3-5 highest exact-frequency queries among them for competitiveness. If all the phrases score above 12, it is better to look for a less competitive topic;
4. If we need more sites to analyze, we open the "Competitors" tab and set the parameters: similarity - 3, thematicity - 10, sorting the data in descending order of traffic;
5. Once the topic is chosen, we type its name into the search results and copy 3 URLs from the top;
6. Then we repeat steps 10-19 from option 1.

Using Key Collector or Slovoeb

This method differs from the previous one only in using the Key Collector program for some operations and in a deeper expansion of the keys.

What you need:

  • Key Collector program - 1800 rubles;
  • all the same services as in the previous method.

"Advanced - 1"

  1. Parse the left and right Yandex Wordstat columns for the entire phrase list;
  2. Collect the exact and base frequencies through Key Collector;
  3. Calculate the KEY 1 indicator;
  4. Delete zero-frequency queries and those with KEY 1 > 100;
  5. Then proceed exactly as in steps 18-19 of option 1.

"Advanced - 2"

  1. We perform steps 1-5 as in option 2;
  2. We collect keys from Keys.so for each URL;
  3. We remove duplicates in Key Collector;
  4. We repeat steps 1-4 of the "Advanced - 1" method.

Now let's compare the number of keys obtained and their total exact frequency when the core is collected by the different methods:

As the table shows, the best result came from the alternative method of building a core for a page - "Advanced 1, 2". It yielded 34% more target keys, and the total traffic across the cluster turned out 51% higher than with the classical method.

The screenshots below show what the finished core looks like in each case. I took phrases with 7 or more exact impressions per month so that the keyword quality can be judged. For the full semantics, see the table at the "View" link.

A)


B)


C)

Now you know that the most common way of doing things - the way everyone does it - is not always the most correct, and you should not dismiss the other methods either. Much depends on the topic itself. For commercial sites, where there are not that many keys, the classic variant is quite sufficient. On informational sites you can also get excellent results if you draw up the copywriter's brief correctly, build a good structure and handle the SEO optimization. We will cover all of this in detail in the following articles.

3 common mistakes when creating a semantic core

1. Collecting phrases from Wordstat only. Parsing Wordstat alone is not enough for a good result!
More than 70% of the queries people enter rarely, or only periodically, never get there at all. Yet among them there are often key phrases with good conversion and genuinely low competition. How not to miss them? Be sure to collect search suggestions and combine data from different sources (site counters, statistics services and keyword databases).

2. Mixing informational and commercial queries on one page. We have already discussed that key phrases differ by the type of need. If a visitor who wants to make a purchase lands on your site and sees an article page in response to his query, will he be satisfied? No! Search engines reason the same way when ranking a page, so you can immediately forget about the top for MF and HF phrases. Therefore, if in doubt about a query's type, check the search results or use the Pixel Tools or Just-Magic tools to determine its commerciality.

3. Choosing highly competitive queries for promotion. Positions for HF, highly competitive phrases depend 60-70% on behavioral factors, and to earn those you need to get into the top. The more applicants, the longer the queue and the higher the requirements for the sites in it. Everything is like in life or sports: becoming world champion is much harder than earning the same title in your city.
So it is better to enter a quiet niche rather than an overheated one.

Getting into the top used to be even harder: sites held their places on a first-come, first-served basis, the leaders stayed in front, and they could only be displaced by accumulating behavioral factors - but how do you accumulate them from the second or third page? Yandex broke this vicious circle in the summer of 2015 by introducing the "multi-armed bandit" algorithm. Its essence is precisely to randomly raise and lower site positions in order to find out whether more worthy candidates for the top have appeared.

How much money do you need to start?

To answer this question, let's calculate the cost of the necessary arsenal of programs and services for preparing and grouping key phrases for 100 articles.

The bare minimum (suitable for the classic option):

1. Slovoeb - free
2. Majento Clusterizer - free
3. Captcha recognition - 30 rubles
4. XML limits - 70 rubles
5. Mutagen competition checks - 10 free checks per day
6. If you are in no hurry and ready to spend 20-30 hours on parsing, you can do without proxies.
—————————
The total is 100 rubles. If you enter the captchas yourself and get XML limits in exchange for those from your own site, you can actually prepare the core for free. You will just need an extra day to set up and master the programs and another 3-4 days to wait for the parsing results.

The standard semantist's set (for the advanced and classic methods):

1. Key Collector - 1,900 rubles
2. KeyAssort - 1,700 rubles
3. Bukvarix (business account) - 650 rubles
4. The Keys.so competitor analysis service - 1,500 rubles
5. 5 proxies - 350 rubles per month
6. Anti-captcha - about 30 rubles
7. XML limits - about 80 rubles
8. Mutagen competition checks (1 check = 30 kopecks) - about 200 rubles
———————-
The total is 6,410 rubles. You can, of course, do without KeyAssort by replacing it with Majento Clusterizer, and use Slovoeb instead of Key Collector - then 2,810 rubles will be enough.

Is it worth entrusting the development of the core to a "pro", or is it better to figure it out and do it yourself?

If a person regularly does what he loves and keeps improving at it, then, logically, his results should be decidedly better than a beginner's. With keyword selection, however, it turns out exactly the opposite.

Why does a beginner do better than a professional in 90% of cases?

It is all about the approach. The semantist's task is not to build the best possible core for you, but to finish his work in the shortest possible time and at a quality level you will accept.

If you do everything yourself, following the algorithms described above, the result will be an order of magnitude better, for two reasons:

  • You understand the topic. You know the needs of your clients or site users and can, at the very start, maximize the marker queries for parsing by using many synonyms and specific terms.
  • You care about doing it well. The business owner, or an employee of his company, will naturally approach the matter more responsibly and try to do everything to the maximum. The more complete the core and the more low-competition queries in it, the more targeted traffic it gathers, meaning higher profit with the same investment in content.

How do you find the remaining 10% who will build the core better than you?

Look for companies where key-phrase selection is the core competence, and agree in advance on the result you want: like everyone else's, or the maximum. In the second case it will cost 2-3 times more, but in the long term it will pay off many times over. For those who want to order the service from me: all the necessary information and conditions are available. I guarantee the quality!

Why it is so important to work through the semantics fully

Here, as in any field, the principle of the "good and bad choice" applies. What is its essence?
Every day we are faced with choices:

  • to date someone who seems fine but does not excite us, or to work on ourselves and build a harmonious relationship with the person we need;
  • to keep doing a job we dislike, or to find what the soul is drawn to and make it a profession;
  • to rent premises for a store in a low-traffic location, or to wait until a suitable spot becomes free;
  • to hire not the best sales manager, but the one who performed best at today's interview.

Everything seems clear. But look at it from the other side, treating each choice as an investment in the future. This is where it gets interesting!

You saved 3-5 thousand on the semantic core? Happy days! But look at what this leads to next:

a) for information sites:

  • Traffic losses of at least 1.5x with the same investment in content. Comparing different methods of collecting key phrases, we have already established empirically that the alternative method yields 51% more;
  • The project sinks faster in the search results: competitors easily overtake us by covering the intent more fully.

b) for commercial projects:

  • Fewer leads. If our semantics is like everyone else's, we promote on the same queries as our competitors, and a large number of offers against constant demand reduces each player's share of the market;
  • Lower conversion. Specific queries convert into sales better; by skimping on the core, we lose the most converting keys;
  • Harder promotion. The more contenders for the top, the higher the requirements for each candidate.

I wish you to always make the good choice and invest only at a profit!

P.S. For the bonus "How to write a good article with bad semantics", as well as other life hacks for promotion and making money online, see my group.


Like almost all other webmasters, I compile the semantic core with the Key Collector program - by far the best program for the job. How to use it is a topic for a separate article, although the Internet is full of information on the matter: I recommend, for example, the manual by Dmitry Sidash (sidash.ru).

Since the question was raised about an example of compiling a core, here is one.

Key List

Suppose we have a site about British cats. I type the phrase "British cat" into the "List of phrases" field and click the "Parse" button.

I get a long list of phrases, which begins as follows (the phrase and its frequency are given):

british cats 75553
british cats photo 12421
british fold cat 7273
british cats cattery 5545
british cats 4763
british shorthair 3571
british cat colors 3474
british cats price 2461
british blue cat 2302
british fold cat photo 2224
british cats mating 1888
british cats character 1394
british cat buy 1179
british cats buy 1179
longhair british cat 1083
british cat pregnancy 974
british chinchilla cat 969
british cats photo 953
british cat cattery moscow 886
british cat color photo 882
british cats grooming 855
british shorthair cat photo 840
scottish and british cats 763
names of british cats 762
cat british blue photo 723
photo of british blue cat 723
british cat black 699
how to feed british cats 678

The list itself is much longer; I have only shown its beginning.

Key grouping

Based on this list, my site will have articles about the varieties of these cats (fold, blue, shorthair, longhair), an article about their pregnancy, about what to feed them, about names, and so on down the list.

For each article, one main query is taken (= the topic of the article). However, the article is not limited to that one query: relevant related queries are added to it, as well as variations and word forms of the main query, which can be found further down the list in Key Collector.

For example, the word "fold" yields the following keys:

british fold 7273
british fold cat photo 2224
british fold cat price 513
cat breed british fold 418
british blue fold cat 224
scottish fold and british cats 190
british fold cats photo 169
british fold cat photo price 160
buy british fold cat photo 129
british fold cats character 112
british fold cat grooming 112
mating british fold cats 98
british shorthair fold cat 83
color of british fold cats 79

To avoid overspam (which can result from using too many keys in the text, in the heading, etc.), I would not take all of them with the main query included; but the individual words from them (photo, buy, character, grooming, etc.) make sense to use in the article, so that it ranks for a larger number of low-frequency queries.

Thus, a group of keywords is formed for the article about fold cats, which we will use in it. Keyword groups for the other articles are formed the same way - and that is the answer to the question of how to create a site's semantic core.

Frequency and competition

There is also an important point concerning the exact frequency and competition: they must be collected in Key Collector. To do this, tick all the queries and, on the "Yandex.Wordstat frequencies" tab, click the "Collect "!" frequencies" button. The exact frequency of each phrase will be shown (that is, with exactly this word order and these word forms), which is a much more accurate indicator than the base frequency.

To check competition in the same Key Collector, click the "Get data for Yandex" button (or for Google), then click "Calculate KEI from available data". As a result, the program will collect how many home pages for the query are in the TOP-10 (the more there are, the harder it is to get in) and how many pages in the TOP-10 contain such a title (likewise, the more, the harder it is to break into the top).

Then we act according to our strategy. If we want to create a comprehensive cat site, the exact frequency and competition matter less. If we only need to publish a few articles, we take the queries with the highest frequency and, at the same time, the lowest competition, and write articles based on them.

This article explains how to compile a semantic core on your own so that your online store reaches the first positions in search results. Finding keywords is not such a simple process: it takes care and a fair amount of time. But if you are ready to move forward and grow your business, this article is for you. It describes the methods of collecting keywords in detail, as well as the tools that can help you.

Why do it? The answer is banal: so that search engines "fall in love" with the site, and so that users searching for specific keywords are shown your resource.

And forming the semantic core is the first, but very important and confident, step toward that goal!

The next step is to create a kind of skeleton, that is, to distribute the selected "keys" across specific pages of the site. Only after that should you move to the next level: writing and publishing articles and tags.

Note that the web offers several definitions of the concept of the semantic core (hereinafter SC).

In general they are similar, and summing them up we can formulate the following: a set of keywords (as well as related phrases and forms) for website promotion. Such words accurately characterize the site's focus, reflect users' interests and correspond to the company's activities.

Our article gives an example of forming an SC for an online bedding store. The whole process is divided into five sequential steps.

1) Collecting basic queries

Here we mean all the phrases that match the store's lines of business. That is why it is so important to think through, as precisely as possible, the phrases that best characterize the goods presented in the catalog.

Of course, this is sometimes difficult to do. But the right column of Wordstat.Yandex comes to the rescue: it contains the phrases users most often enter along with the phrase you have chosen.

Watch the video on working with Wordstat (only 13 minutes)

To get the results, enter the desired phrase in the service's search box and click the "Select" button.

To avoid copying all the queries manually, we recommend the Wordstat Helper extension, built specifically for the Mozilla Firefox and Google Chrome browsers. It greatly simplifies the work of selecting words; see the screenshot below for how it works.

Save the selected words in a separate document. Then brainstorm and add the phrases you come up with.

2) How to expand the SC: three options

The first step is relatively straightforward, although it requires attentiveness. The second, however, calls for active brainwork: each selected phrase is the seed of a future group of search queries you will promote on.

To collect such a group, you must use:

  • synonyms;
  • paraphrasing.

In order not to "download" at this stage, use special applications or services. How to do this is described in detail below.

How to expand your SEO with Google Keyword Planner

We go to the thematic chapter (called the Keyword Planner) and picks those phrases that most accurately characterize the group of queries you are interested in. Do not touch other parameters and click on the "Get ..." button.

After that, just download the results.

How to expand the SC using Serpstat (ex Prodvigator)

You can also use another similar service, one that performs competitor analysis. After all, competitors are the best source of the keywords you need.

The Serpstat service (formerly Prodvigator) lets you determine exactly which key queries your competitors used to become search-engine leaders. There are other such services too - decide for yourself which one to use.

In order to select search queries, you need to:

  • enter a query;
  • specify the promotion region you are interested in;
  • click the "Search" button;
  • when the search finishes, select the "Search queries" option.


After that, click on the "Export Table" button.

How to compose the semantic core: how to expand the SC with Key Collector / Slovoeb

Do you have a large store with a huge number of products? In that case you need the Key Collector program.

If you are just beginning to learn the science of keyword selection and semantic core building, we recommend another tool with a dissonant name: Slovoeb. Its advantage is that it is completely free.

Download the application, go to the Yandex.Direct settings and enter the username/password of a Yandex mailbox.

After that:

  • open a new project;
  • go to the Data tab;
  • click the "Add phrases" option there;
  • specify the promotion region you are interested in;
  • enter the queries generated earlier.

After that, start collecting the SC from Wordstat.Yandex. To do this:

  • go to the "Data collection" section;
  • select the "Batch collection of words from the left column" section;
  • a new window will appear on the screen;
  • in it, do as shown in the screenshot below;


Note that Key Collector is an excellent tool for large projects; with its help it is also easy to collect statistics from services that analyze competitors' sites, such as SEMrush, SpyWords, Serpstat (ex Prodvigator) and many others.

3) Deleting unnecessary "keys"

So, the base has been formed, and the volume of collected "keys" is more than solid. But if you analyze them (here that simply means reading them carefully), you will see that not all of them match your store's theme exactly, which means "non-target" users would land on the site through them.

Such words should be deleted.

Here is another example: say the site sells bedding, but the assortment simply does not include the fabric such linen is sewn from. Then everything related to fabrics must be removed.

By the way, the full list of such words has to be formed manually; no "automation" will help here. Naturally, it will take a fair amount of time, and to avoid missing anything we recommend a full brainstorming session.

Here are the types of words that will be irrelevant for online stores:

  • names and mentions of competing stores;
  • cities and regions where you do not operate or deliver goods;
  • all words and phrases containing "free", "old", "used", "download", etc.;
  • names of brands not represented in your store;
  • "keys" containing typos;
  • repeated words.

Now let's see how to delete all the unneeded words.

Form a list

Open Slovoeb, select the "Data" section, go to the "Stop words" tab and enter the manually selected words into it. You can type the words in by hand or simply upload a prepared file with them.


Thus you will quickly remove from your list the stop words that do not match the store's subject matter or specifics.

How to build a semantic core: the quick filter

You now have a kind of draft list. Analyze it carefully and start deleting unnecessary words manually. The same Slovoeb service helps streamline this task. Here is the sequence of steps to follow:

  • take the first unnecessary word in your list - say, the city of Kiev;
  • type it into the search (number 1 on the screenshot);
  • mark the matching lines;
  • right-click on them and delete;
  • press Enter in the search field to return to the original list.

Repeat these steps as many times as needed until you have reviewed as much of the word list as possible.

4) How to compose the semantic core: grouping queries

To understand how to distribute words across specific pages, group all the queries you have selected into so-called semantic clusters.

A cluster is a group of "keys" similar in theme and meaning, formed as a multilevel structure. Say the first-level cluster is the search query "bedding"; the second-level clusters would then be queries like "blankets", "bed sheets" and so on.
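If it helps to see the idea in data form, here is one way to picture such a cluster as a nested structure (the names are purely illustrative):

```python
# One way to model a semantic cluster: a nested structure where each
# level narrows the need. All names are illustrative.
cluster = {
    "bedding": {                      # level 1: category query
        "blankets": [                 # level 2: product-type queries
            "buy wool blanket",
            "blanket 200x220 price",
        ],
        "bed sheets": [
            "cotton bed sheets",
            "buy bed sheets online",
        ],
    },
}
```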

In most cases, clusters are defined by brainstorming. It is important to know your assortment and product features well, and also to take into account how competitors' structures are built.

The next thing to pay special attention to: the last level of a cluster should contain only queries that correspond to exactly one need of potential customers - that is, to one specific type of goods.

Here, the same Slovoeb service and the quick-filter option described above come to your aid again: they will help you sort the search queries into specific categories.

To perform this sorting, follow a few simple steps. First, enter into the service's search bar the keyword that will be used in the name of:

  • a category;
  • a landing page, etc.

For example, it could be a bedding brand. In the results, mark the phrases that suit you and copy them.

The phrases you do not need - simply select them with the right mouse button and delete.


In the right part of the service menu, create a new group and give it a fitting name, for example, the brand name.

To move the selected phrases into this group, select the Data line and click on "Add phrases". See the screenshot for details.

Pressing Enter in the search box returns you to the original word list. Follow the same procedure for all the other queries.

The system displays all the selected phrases in alphabetical order, which makes them easier to work with: you can quickly determine what can be deleted, or group words into a specific group.

We should add that manual grouping also takes a fair amount of time, especially when there are very many key phrases. Therefore, we recommend automated paid programs, such as:

  • Key Collector;
  • Rush Analytics;
  • Just-Magic and others.

There is also the completely free script from Devaka.ru. By the way, note that some types of queries will often have to be merged.

After all, there is no point in piling up a huge number of categories on the site that differ only in names like "Beautiful bedding" and "Fashionable bedding".

To determine the importance of each key phrase for a particular category, simply transfer the phrases into the Google planner, as shown in the screenshot.

Thus you can determine how much demand a particular search query has. By frequency of use, all queries fall into several categories:

  • high-frequency;
  • mid-frequency;
  • low-frequency;
  • and even micro-low-frequency.

However, it is important to understand that there are no exact numbers defining a query's membership in a particular group. Here you should consider the topic of both the site and the query. In one case, a query with a frequency of up to 800 per month can be considered low-frequency; in another, a query with a frequency of up to 150 will be high-frequency.

The highest-frequency queries of all those selected will later go into tags, while the lowest-frequency ones are best used to optimize specific store pages. Since competition among such queries is low, it is often enough to fill those subsections with quality text descriptions for the page to appear at the forefront of the search results.

All of the above will let you form a clear structure containing:

  • all the necessary and important categories - to visualize the "skeleton" of your store, use the auxiliary XMind service;
  • landing pages;
  • pages with information important to the user - for example, contact details, delivery terms, etc.

How to expand the semantic core: an alternative method

As the site develops and the store expands, the semantic core will grow too. To keep up, monitor and collect key phrases within each group: this greatly simplifies and speeds up the expansion of the core.

To collect similar queries and search suggestions, use additional services, including:

  • Serpstat (formerly Prodvigator);
  • Ubersuggest;
  • Keyword Tool;
  • and others.

The screenshot below shows how to use the Serpstat service.
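For a quick look at suggestions without a paid service, you can query Google's public suggest endpoint directly. A minimal sketch is below; note that this endpoint is unofficial and undocumented, so it may change or throttle you at any time.

```python
import requests

def google_suggestions(seed: str, lang: str = "en") -> list[str]:
    """Fetch autocomplete suggestions for a seed phrase (unofficial endpoint)."""
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": seed, "hl": lang},
        timeout=10,
    )
    resp.raise_for_status()
    # Response format: [seed, [suggestion1, suggestion2, ...]]
    return resp.json()[1]

print(google_suggestions("car rental"))
```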

How to compose a semantic core: what to do after following our instructions

So, in order to independently form the semantic core for an online store, you need to perform a number of sequential steps.

It all starts with selecting the keywords that can only be used when searching for your products; these will become the main group of queries. Then expand the semantic core using search engine tools. It is also worth analyzing competing sites at this stage.

The next steps will be like this:

  • analysis of all selected search queries;
  • removal of queries that do not match the profile of your store;
  • grouping of queries;
  • formation of the site structure;
  • constant tracking of search queries and expansion of the semantic core.

The method of compiling a semantic core for an online store presented in this article is far from the only correct one; there are others. But we have tried to present the most convenient approach.

Naturally, indicators such as the quality of text descriptions, articles, tags and store structure are also important for promotion. But we will cover this in a separate article.

In order not to miss new and useful articles, be sure to subscribe to our newsletter!

Not in training yet? Sign up right now, and in 4 days you will have your own website.

If you can't make it yourself, we'll make it for you!

What is the semantic core of the site?

The semantic core of a site is a collection of keywords and phrases for which the resource is promoted in search engines and which indicate that the site belongs to a certain subject area.

For successful promotion in search engines, keywords must be correctly grouped, distributed across the pages of the site, and contained in a certain form in the meta tags (title, description, keywords), as well as in the H1-H6 headings. At the same time, overspam must be avoided so as not to "fly off" into Baden-Baden.

In this article, we will try to look at the issue not only from a technical point of view, but also to look at the problem through the eyes of business owners and marketers.

How can the semantic core be collected?

  • Manual - feasible for small sites (up to 1,000 keywords).
  • Automatic - programs do not always correctly determine the context of a query, so problems may arise with the distribution of keywords across pages.
  • Semi-automatic - phrases and frequencies are collected automatically, while the distribution of phrases and the final revision are done manually.

In this article, we will consider a semi-automatic approach to creating a semantic core, since it is the most effective.

In addition, there are two typical cases when compiling a semantic core:

  • for a site with a ready-made structure;
  • for a new site.

The second option is preferable, since it makes it possible to create an ideal site structure for search engines.

What does the process of compiling a semantic core consist of?

The work on the formation of the semantic core is divided into the following stages:

  1. Highlighting the directions in which the site will be promoted.
  2. Collection of keywords, analysis of similar queries and search suggestions.
  3. Frequency parsing, filtering out "empty" queries.
  4. Clustering (grouping) queries.
  5. Distribution of queries across site pages (drawing up an ideal site structure).
  6. Recommendations for use.

The better you make the site's core (and quality here means the breadth and depth of the semantics), the more powerful and reliable a stream of search traffic you can send to the site, and the more customers you will attract.
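Before diving into the details, here is how these stages line up as a pipeline. The function names and stub data below are our own sketch, not a real library; each placeholder stands for the manual work and the tools described further on.

```python
def collect_keywords(directions: list[str]) -> list[str]:
    """Stage 2 placeholder: Wordstat, AdWords, suggestions, competitor analysis."""
    return ["rent a car", "car rental moscow"]       # stub data

def parse_frequency(phrases: list[str]) -> dict[str, int]:
    """Stage 3 placeholder: exact frequency of each phrase."""
    return {p: 100 for p in phrases}                 # stub data

def cluster(freqs: dict[str, int]) -> dict[str, list[str]]:
    """Stage 4 placeholder: group phrases by intent."""
    return {"car rental": list(freqs)}               # stub data

def build_core(directions: list[str]) -> dict[str, list[str]]:
    phrases = collect_keywords(directions)             # stage 2
    freqs = parse_frequency(phrases)                   # stage 3
    alive = {p: f for p, f in freqs.items() if f > 0}  # drop "empty" queries
    return cluster(alive)                              # stages 4-5

print(build_core(["car rental"]))
```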

How to compose the semantic core of the site

So, let's take a closer look at each point with various examples.

At the first step, it is important to determine which products and services on the site will be promoted in Yandex and Google search results.

Example 1. Let's say a site offers two services: computer repair at home and training to work with Word / Excel at home. It was decided that the training is no longer in demand, so there is no point in promoting it, and therefore in collecting semantics for it. Another important point: you need to collect not only queries containing "home computer repair", but also "laptop repair", "PC repair" and others.

Example 2. A company is engaged in low-rise construction but builds only wooden houses. Accordingly, there is no need to collect queries and semantics for "construction of aerated concrete houses" or "building brick houses".

Collecting semantics

We will look at the two main sources of keywords: Yandex and Google. We will show how to collect semantics for free and briefly review paid services that speed up and automate this process.

In Yandex, key phrases are collected from the Yandex.Wordstat service; in Google, through the query statistics in Google AdWords. If available, you can use data from Yandex.Webmaster and Yandex.Metrica, Google Search Console and Google Analytics as additional sources of semantics.

Collecting keywords from Yandex.Wordstat

Collecting queries from Wordstat can be considered free: to view the service's data, you only need a Yandex account. Go to wordstat.yandex.ru and enter a keyword. Let's consider the example of collecting semantics for a car rental company website.

What do we see in this screenshot?

  1. Left column. Here is the main query and its variations with a "tail". Opposite each query is a number indicating how many times, in total, the query in all its forms has been used by various users.
  2. Right column. Queries similar to the main one and their overall frequency indicators. Here we see that a person who wants to rent a car can phrase the query in several different ways besides "car rental". This is very important data to pay attention to so as not to miss a single query.
  3. Regionality and history. By choosing one of the options, you can check the distribution of queries by region, the number of queries in a particular region or city, and how the numbers change over time or with the seasons.
  4. Devices from which the query was made. By switching tabs, you can see which devices people search from most often.

Check different variants of key phrases and record the obtained data in Excel or Google Sheets. For convenience, install the Yandex Wordstat Helper plugin: after installation, plus signs appear next to the search phrases, and clicking them copies the words along with the frequency indicator, so you no longer need to select and paste everything manually.
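If you prefer to keep the table outside a spreadsheet editor, the same record-keeping takes a few lines with pandas. The figures below are invented; writing .xlsx additionally requires the openpyxl package.

```python
import pandas as pd

rows = [
    ("rent a car", 9500),        # invented figures for illustration
    ("car rental moscow", 4200),
    ("rent a car cheap", 1300),
]
df = pd.DataFrame(rows, columns=["phrase", "frequency"])
df.sort_values("frequency", ascending=False).to_excel(
    "semantic_core.xlsx", index=False)
```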

Collecting keywords from Google AdWords

Unfortunately, Google does not have an open source of search queries with frequency indicators, so here you need a workaround. For this, you need a working Google AdWords account.

Register a Google AdWords account and top up the balance by the minimum possible amount - 300 rubles (an account with an inactive budget displays only approximate data). After that, go to "Tools" - "Keyword Planner".

A new page will open where in the "Search for new keywords by phrase, site or category" tab, enter your keyword.

Scroll down, click "Get options" and see something like this.

  1. Basic query and the average number of searches per month. If the account is unpaid, you will see approximate data, that is, the average number of searches. When the account is funded, exact data is shown, as well as the dynamics of changes in the frequency of the entered keyword.
  2. Relevant keywords. This is the same as similar queries in Yandex Wordstat.
  3. Downloading data. This tool is convenient in that the obtained data can be downloaded.

We've covered working with two main sources of statistics for search queries. Now let's move on to automating this process, because collecting semantics manually takes too much time.

Programs and services for collecting keywords

Key Collector

The program is installed on the computer. Work accounts, from which statistics will be collected, are connected to it. Then a new project and a folder for keywords are created.

We select "Batch collection of words from the left column of Yandex.Wordstat", enter the requests for which we collect data.

An example is shown in the screenshot; in fact, for a more complete semantic core you would additionally need to collect all the query variants with car makes and classes, for example, "bmw for rent", "rent a toyota with an option to buy", "rent an SUV" and so on.

SlovoEB

A free analogue of the previous program. This can be considered both a plus (you do not need to pay) and a minus (the functionality is significantly reduced).

For collecting keywords, the steps are the same.

Rush-analytics.ru

Online service. Its main advantage is that you do not need to download or install anything: register and use it. The service is paid, but upon registration you get 200 coins on your account, which is quite enough to collect a small semantic core (up to 5,000 queries) and parse frequencies.

The downside is that it collects semantics only from Wordstat.

Checking the frequency of keywords and queries

Checking the exact frequency with Wordstat operators (the phrase in quotation marks, an exclamation mark before each word), we again notice a decrease in the number of impressions. Let's go further and try another word form of the same query.

We note that in the singular this query is searched for by far fewer users, which means the initial query has higher priority for us.

Such manipulations must be carried out with every word and phrase. Queries whose exact frequency is zero (when checked with quotation marks and an exclamation mark) are eliminated: "0" means that no one enters such queries, and they occur only as parts of others. The point of compiling a semantic core is to select the queries people actually use to search. All remaining queries are then placed in an Excel table, grouped by meaning and distributed across the pages of the site.
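In table form this cleanup is a one-line filter. A minimal sketch with pandas, assuming you recorded both the broad and the exact frequency; the numbers are invented:

```python
import pandas as pd

df = pd.DataFrame(
    [("rent a car",             9500, 2100),
     ("car rental online 24/7",  140,    0)],   # occurs only inside other queries
    columns=["phrase", "base_freq", "exact_freq"],
)

core = df[df["exact_freq"] > 0]   # drop phrases nobody types in this exact form
print(core)
```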

Doing this manually for every phrase is simply unrealistic, so there are many services on the Internet, paid and free, that allow you to do it automatically. Here are a few:

  • megaindex.com;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.

Removing inappropriate queries

After sifting the keywords, the unnecessary ones should be removed. Which search queries can be deleted from the list?

  • queries with the names of competitors' companies (you can keep them for contextual advertising);
  • queries for goods or services that you do not sell;
  • queries that mention a district or area where you do not operate.
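With a stop-word list, this removal is also easy to script. A sketch, where the stop words and queries are invented examples:

```python
STOP_WORDS = {
    "competitorbrand",        # competitors' names (keep for context ads only)
    "wholesale",              # a service you do not provide
    "spb", "ekaterinburg",    # regions where you do not operate
}

def is_appropriate(phrase: str) -> bool:
    """Keep a query only if it contains no stop word."""
    return not any(word in STOP_WORDS for word in phrase.lower().split())

queries = ["buy bedding moscow", "buy bedding spb", "competitorbrand bedding"]
print([q for q in queries if is_appropriate(q)])   # ['buy bedding moscow']
```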

Clustering (grouping) queries across site pages

The essence of this stage is to combine similar queries into clusters and then determine which pages they will be promoted on. How do you understand which queries to promote on one page and which on another?

1. By query type.

We already know that all search engine queries are divided into several types, depending on the purpose of the search:

  • commercial (buy, sell, order) - promoted on landing pages, product category pages, product cards, service pages, price lists;
  • informational (where, how, why) - articles, forum topics, answer-to-question pages;
  • navigational (phone, address, brand name) - a contacts page.

If in doubt about the type of a query, enter it in the search bar and analyze the results: for a commercial query there will be more pages with service offers, for an informational one - articles.

There are also geo-dependent and geo-independent queries. Most commercial queries are geo-dependent, as people tend to trust companies in their own city more.

2. By the logic of the query.

  • "Buy iphone x" and "iphone x price" - you need to promote one page, as in the first and second cases, the search is carried out for the same product, and more detailed information about it;
  • "Buy iphone" and "buy iphone x" - you need to promote to different pages, since in the first request we are dealing with a general request (suitable for the product category where the iPhones are located), and in the second, the user is looking for a specific product and this request follows promote goods on the card;
  • “How to choose a good smartphone” - it is more logical to promote this request to a blog article with a corresponding title.

3. By search results.

View the search results for the queries in question. If you check which pages of various sites rank for "building houses from timber" and "building brick houses", in 99% of cases they are different pages.
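The same check can be automated: compare the top-10 URLs for two queries and merge them only when the overlap is large enough. In this sketch, fetch_top_urls is a hypothetical helper; in practice the URLs would come from a SERP-parsing service such as those listed below.

```python
def fetch_top_urls(query: str) -> set[str]:
    """Hypothetical helper: return the set of top-10 result URLs for a query."""
    raise NotImplementedError("plug in a SERP data source here")

def same_page(query_a: str, query_b: str, min_shared: int = 4) -> bool:
    """Two queries belong on one page if their SERPs overlap strongly enough."""
    shared = fetch_top_urls(query_a) & fetch_top_urls(query_b)
    return len(shared) >= min_shared
```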

4. Automatic grouping by software and manual revision.

The 1st and 2nd methods are great for compiling the semantic core of small sites, where at most 2-3 thousand keywords are collected. For a large core (from 10,000 queries upward), the help of machines is needed. There are several programs and services that enable clustering:

  • KeyAssistant - assistant.contentmonster.ru;
  • semparser.ru;
  • just-magic.org;
  • rush-analytics.ru;
  • tools.pixelplus.ru;
  • key-collector.ru.

After automatic clustering is complete, the program's result must be checked manually and any errors corrected.

Example: the program can put the following queries into one cluster: "vacation in sochi 2018 hotel" and "vacation in sochi 2018 hotel breeze". In the first case, the user is comparing various hotel options; in the second, they are looking for a specific hotel. To exclude such inaccuracies, check everything manually and correct any errors you find.

What to do next, after compiling the semantic core?

Based on the assembled semantic core, we then:

  1. compose the ideal structure (hierarchy) of the site from the point of view of search engines,
    or, by agreement with the customer, change the structure of the old site;
  2. write terms of reference for copywriters, taking into account the cluster of queries to be promoted on each page,
    or finalize the old articles and texts on the site.

It looks something like this.

For each generated query cluster, we create a page on the site and define its place in the site structure. The most popular queries are promoted on the topmost pages in the resource hierarchy; less popular ones sit below them.
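The mapping itself can start from a simple rule: the more total demand a cluster has, the higher its page sits in the hierarchy. A toy sketch with invented cluster names and frequencies:

```python
clusters = {
    "car rental": 15000,            # total exact frequency per cluster (invented)
    "suv rental": 3200,
    "rent a car with driver": 900,
}

# The most popular clusters get the topmost pages in the hierarchy
for level, (name, freq) in enumerate(
        sorted(clusters.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"level {level}: {name} ({freq} queries/month)")
```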

And for each of these pages we have already collected the queries we will promote on it. Next, we write the terms of reference for copywriters to produce the texts for these pages.

Terms of reference for a copywriter

As in the case of the site structure, we will describe this stage in general terms. So, the terms of reference for a text include:

  • the number of characters without spaces;
  • the page title;
  • subheadings (if any);
  • a list of words (based on our core) that must appear in the text;
  • the uniqueness requirement (always require 100% uniqueness);
  • the desired text style;
  • other requirements and wishes for the text.

Remember: do not try to promote hundreds of queries on one page. Limit yourself to 5-10 plus the tail, otherwise you will be penalized for over-optimization and drop out of the race for a place in the TOP for a long time.

Conclusion

Compiling the semantic core of a site is painstaking, hard work that deserves special attention, because further website promotion is built on it. Follow the simple instructions from this article and take action.

  1. Choose the directions of promotion.
  2. Collect all possible queries from Yandex and Google (use special programs and services).
  3. Check the frequency of the queries and get rid of the "empty" ones (those with a frequency of 0).
  4. Remove inappropriate queries: services and products that you do not sell, queries mentioning competitors.
  5. Form query clusters and distribute them across the pages.
  6. Create an ideal site structure and draw up a technical assignment for filling the site.

In our article, we talked about what a semantic core is and gave general recommendations on how to compose it.

It's time to break this process down in detail by creating a semantic core for a site step by step. Stock up on pencils, paper and, most importantly, time. And let's begin...

We compose the semantic core for the site

Let's take the site http://promo.economsklad.ru/ as an example.

The company's field of activity: warehouse services in Moscow.

The site was developed by the specialists of our service, and its semantic core was compiled step by step in 6 stages:

Step 1. Putting together a primary list of keywords.

Having surveyed several potential customers, studied three sites close to ours in topic, and done some brainstorming of our own, we compiled a simple list of keywords that, in our opinion, reflect the content of our site: warehouse complex, warehouse rental, storage services, logistics, warehouse lease, warm and cold warehouses.

Assignment 1: Browse competitors' sites, consult with colleagues, brainstorm, and write down all the words that you think describe YOUR site.

Step 2. Expanding the list.

Let's use the service http://wordstat.yandex.ru/. In the search line, enter each of the words of the primary list in turn:


We copy the refined queries from the left column into an Excel table, look through the associative queries in the right column, select those relevant to our site, and enter them into the table as well.

After analyzing the phrase "warehouse rental", we received a list of 474 refined and 2 associative queries.

After conducting a similar analysis of the rest of the words from the primary list, we received a total of 4,698 refined and associative queries that were entered by real users in the past month.

Task 2: Collect a complete list of requests for your site by running each of the words in your primary list through the Yandex.WordStat request statistics.

Step 3. Cleaning up

First, we remove all phrases with an impression frequency below 50: "how much does it cost to rent a warehouse" - 45 impressions, "warehouse rental 200 m" - 35 impressions, etc.

Secondly, we remove phrases not related to our site, for example, "warehouse for rent in St. Petersburg" or "warehouse rental in Yekaterinburg", since our warehouse is located in Moscow.

The phrase "warehouse lease agreement download" deserves a note: a sample agreement may well be present on our website, but there is no point in actively promoting this query, since a person looking for a sample contract is unlikely to become a client. Most likely, they have already found a warehouse or own one themselves.

After removing all the unnecessary queries, the list shrinks significantly. In our case, of the 474 refined queries for "warehouse rental", 46 remained relevant to the site.

And when we cleaned up the full list of refined queries (4,698 phrases), we got the semantic core of the site, consisting of 174 key queries.

Task 3: Clean up the previously compiled list of refined queries by excluding low-frequency phrases with fewer than 50 impressions and phrases not related to your site.

Step 4. Refinement

Since 3-5 different keywords can be used on each page, we will not need all 174 queries.

Considering that the site itself is small (4 pages at most), from the complete list we select the 20 queries that, in our opinion, most accurately describe the company's services.

Here they are: warehouse rental in Moscow, warehouse lease, warehousing and logistics, customs services, safekeeping warehouse, warehouse logistics, logistics services, office and warehouse rental, safe storage of goods, etc.

These key phrases include low-frequency, mid-frequency and high-frequency queries.

Note that this list differs significantly from the original one compiled off the top of our heads, and it is definitely more accurate and effective.

Assignment 4: Reduce the remaining words to 50, leaving only those that, in your experience and opinion, fit your site best. Do not forget that the final list should contain queries of different frequencies.

Conclusion

Your semantic core is ready; now is the time to put it into practice:

  • revise the texts of your site - perhaps they should be rewritten;
  • write several articles on your topic using the selected key phrases, post them on the site and, once the search engines have indexed them, register them in article directories. Read "One Unusual Approach to Article Promotion";
  • pay attention to search advertising. Now that you have a semantic core, the impact of your ads will be significantly higher.