
SEO Myths



About 6 years ago I wrote an article about common SEO myths. Some of you who have been following me for a long time may remember it. Many of those same myths are still pretty prevalent today, and new myths have jumped to the forefront. I figured it was time to dust off that topic and update it.

These are some of the common SEO myths I see people talking about all the time, and most of them drive me nuts.

Keyword Research

Let’s start with the topic of keyword research.

You should only target keywords with X number of searches per month.

Something around 1,000 searches per month seems to most often be stated as the minimum to shoot for. This one I do not get too upset about, only because I understand where it is coming from. You do not want to chase keywords where there is not a strong enough market to make money off of.

What people who commonly repeat this myth fail to look at is the specifics around the project involved. Yes, if you are building a Made-For-AdSense site, chasing a lot of keywords that only get 100-300 searches per month is probably not going to be a very profitable venture.

On the flip side of that, what if you are selling something in which you make $3,000 per sale? In that case, I'm going to target keywords that only get 10 searches a month if they are good buyer keywords. If a site like that just made one sale a month, that is the kind of site most struggling internet marketers would wet their pants about.
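To put some rough numbers behind that, here is a quick back-of-the-envelope sketch. The click-through and conversion rates below are assumptions purely for illustration, not real data:

```python
# Rough value of a tiny-volume buyer keyword (all rates are assumptions).
searches_per_month = 10
ctr_at_top = 0.30          # assumed share of searchers who click your listing
conversion_rate = 0.05     # assumed buyer-keyword conversion rate
revenue_per_sale = 3000    # dollars, from the example above

expected_sales = searches_per_month * ctr_at_top * conversion_rate
print(f"Expected sales per month: {expected_sales:.2f}")
print(f"Expected revenue per month: ${expected_sales * revenue_per_sale:,.0f}")
# ~0.15 sales and ~$450 a month from one 'tiny' keyword; a handful of them adds up.
```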

And what about local SEO projects? Unless you are in a big market city like Los Angeles or Philadelphia, you probably are not going to find many keywords with search volumes even over 300-400 searches a month. In some smaller towns, most of your keywords are going to be in the 50-150 searches per month range.

Search volume is all relative. I do not want to go off-topic too much here, but I have been teaching people for the past few years now to go after any keyword that will bring you relevant search traffic no matter what the search volume is. That is a topic for another day though.

Search for your keyword with quotes, without quotes, use allintitle:, or whatever other goofy search method you can think of, and the number of results in Google's search index is an indicator of the level of competition for that keyword.

This one pisses me off to no end. No matter how many times it gets shot down and proven to be wrong, it just keeps popping up. It will not die.


It does not matter how many results are in the index. Not one bit. It tells you nothing about the level of competition. There could be 500,000 results and it could be really, really competitive, just as easily as it could be really, really easy to rank.

If you still buy into this silly myth, let me explain to you why it does not matter. With some rare exceptions, the lion's share of the traffic for any search term goes to the top 3 results. No matter what your keyword is, your competition is the site ranked #1, the site ranked #2, and the site ranked #3. That's it.

A good analogy is to think of a race in which you want to win a medal. The top 3 racers earn medals. You need to be faster than the 3rd fastest runner in the race. It does not matter if there are 10 people in the race or 10,000 people in the race. If you can beat the 3rd fastest runner, the result is the same.

Do you think the top runners in the Boston Marathon care about how many racers there are in the race? It grows every year. Does that make them think it is going to be more difficult to win? No. They are just looking at the top racers. That is who they are focused on beating.

If you can beat the website ranked #3, the site ranked #423,762 does not matter.

The competition level in the Google Keyword Planner says ‘Low’, so the keyword is easy to rank for.

This is not so much a myth that often gets spread around as it is a misunderstanding of the Google Keyword Planner. The GKP is a tool for AdWords advertisers. It was never really designed for SEO, but is frequently used for SEO. The competition level reported in the tool is the competition among Google AdWords advertisers. It has nothing to do with the difficulty in ranking a keyword.

Link Building

Link building is another great source of SEO myths.

Only build X number of links per day or else it will look unnatural.

This one is touted a lot. Usually people are saying to only build 5-10 links per day. Building more than that could be seen as unnatural and get you slapped by Google.

You know what is unnatural? A website getting 5-10 links per day consistently. That would be unnatural.

Link spikes are natural. What I mean by a link spike is seeing a site get a large number of links in a short period of time, and then drop back down to a smaller number per day for a while.

What do you think happens to a webpage when it goes viral?

Huge spikes. Even really popular sites like The New York Times or IGN.com see link spikes when a big story breaks.

You must keep your anchor text percentages low to rank well.

There is a lot more to this one than just having a certain percentage of your links use the same anchor text. If it was just a certain percentage, that would be too easy to game. Google is smarter than that.

If you had a webpage with 30 links, is it possible, maybe a bit unlikely but possible, that 20-25 of those links could have the same anchor text? Yes. The odds against it are fairly high, but it could certainly happen. There would be nothing inherently unnatural about that.

What if that webpage had 10,000 links? Is it likely that 6,500-8,500 webmasters would have used the same anchor text to link to that webpage? No. Extremely unlikely.

The percentages are roughly the same, but one looks way more unnatural than the other.
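A quick way to see why scale matters is to treat each link's anchor as a probability question. This is a crude sketch assuming Python with SciPy installed, and the 50% chance that any one natural linker happens to pick the "obvious" anchor is an invented figure for illustration:

```python
from scipy.stats import binom

# Assume a 50% chance that any one natural linker happens to use the
# "obvious" anchor text. That figure is an illustrative assumption.
p = 0.5

# 20 or more of 30 links sharing the anchor: unlikely, but it happens.
print(binom.sf(19, 30, p))        # roughly 0.05

# 6,500 or more of 10,000 links sharing the anchor: effectively impossible.
print(binom.sf(6_499, 10_000, p))
```

Roughly the same percentage of matching anchors in both cases, but at 30 links it is a 1-in-20 fluke, while at 10,000 links it essentially never happens naturally.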

Here is another example. What if I created the most helpful, user-friendly mortgage calculator ever made? If I embedded that on a webpage and did 100% natural, whiter than white hat search engine marketing, is it not likely that most of the links I attract are going to contain the anchor text ‘mortgage calculator’? Would it be unnatural to even have something as high as 75-80% of the links coming in to have ‘mortgage calculator’ as the anchor text? Not only would that be natural, it would be highly likely to happen.

You can play with anchor percentages all you want. Bad links are bad links, and just because you keep the anchors under certain percentages does not keep you safe from penalties. Nor does going over certain percentages guarantee a penalty.

That being said, I have always varied my anchor text when building links, even long before Penguin ever came out. I tested it and saw better rankings with more varied anchors. The reason behind that, in my opinion, has a lot to do with LSI and giving search engines more information to describe what your page is about.

If someone shows me a site that acquires a few thousand white hat links with a high percentage of one anchor text that got penalized because of it, I’ll change my view on this. Really though, it is the quality of the links, not the percentages of anchor text used by those links.

.GOV and .EDU links carry magical ranking powers.

This myth has been around for probably about a decade now. Its beginning actually had some logic behind it, and then people just went crazy with it.

Internet marketers and SEOs noticed that sites with links from .GOV and .EDU sites often were ranking very well. Most government and educational websites have a pretty high authority in the eyes of Google.

However, these were usually earned and legitimate links. Things like a published paper on an educational site linking out to another site as a source. Or a government website linking to another website that provided a quoted statistic.

As marketers often do, they jumped to the wrong conclusion. Instead of understanding that these were like a strong link on any other domain, whether it be .com, .net, .org, or anything else, many of them jumped to the conclusion that ALL links from government and educational websites were given special treatment by Google.

In fact, some marketers took it a step further. In an effort to exploit this ridiculous nonsense, they started selling links on these sites, using free blogs that anyone could set up or comment links on some of them. Others were even selling profile links on them.

These links were no better than any other link you set up on a brand new webpage. You could set up a new webpage on Blogger, and it would have every bit of the linking power that a new blog on Harvard.edu would have.

If you see anyone suggesting that .GOV and .EDU links have an inherent advantage over other links, they are either trying to rip you off by selling them to you, or they just honestly do not know any better.

Onsite SEO

Google loves sites that are updated constantly.

You will see many variations of this one, but they all center around the premise that you have to update your website content constantly and that this offers some sort of ranking advantage.

It is just not true. You can comb through the SERPs and find webpages and websites that have not been updated in a decade, but still rank for very competitive search terms.

Let's say I decide to build a website about how to play the board game Pandemic. I would include webpages that detail the rules, the gameplay, common mistakes in interpreting the rules, common variations or house rules used, information about expansions for the game, and tips for winning the game. Outside of that, what else would be needed? Why would I continue pumping out post after post of content that is just going to retread the same material? The site covers the topic in its entirety. The only real reason to update it at all would be if a new expansion is released.

Is Google going to punish my site because it is not updated regularly? No, of course not. It would still be a worthwhile site useful to searchers looking for information about the game.

If you were running a news website or a website about celebrity gossip, obviously you should be updating your website regularly, but that is as much for your visitors as it is for search engines.

Content is King.

I get so sick of hearing people say if you create good content search engines will reward your efforts. What is good content? And if nobody knows about your content or is visiting your site, why would Google feel that it deserves a better ranking?


Is content important? Yes and no. Yes, the content on your webpage is important insofar as it tells search engines what your page is about, helping them determine which related search queries your page should be displayed for. However, there are plenty of instances of low quality content ranking just fine. You can also find plenty of pages with very little content on them ranking well.

Is that the ideal way to create a webpage? In many cases, no.

More content on a webpage certainly has advantages such as allowing a page to target more keywords effectively. More content may keep visitors on a page longer.

Here is what you should ask yourself about your webpage content. Does the content effectively answer the search query? If the answer is yes, then the content is fine, however long or short it is.

Bounce rate is a ranking factor.

People often state that the bounce rate reported in Google Analytics is a ranking factor. I have not seen any definitive evidence of this. In fact, I have seen webpages with bounce rates in excess of 85% ranking just fine.

There are a few reasons that Google is not using this information in rankings.

For one thing, Google does not have this data on enough websites to make it a part of the algorithm. In a 2012 earnings report, they reported that Google Analytics is used on 10 million websites. I have not seen an update of that number since then, but let’s say that the number tripled in the last 2+ years. It’s unlikely it has grown that much, but let’s say it has.

You could also add in other ways that Google might be able to monitor that data, such as tracking visitors who use Google Chrome as their web browser. However, that would be highly inaccurate, as it only tracks Chrome users visiting a website, not all of the website's search traffic.

If 30 million websites are currently using Google Analytics, that is only a drop in the bucket of the total internet. They do not have the bounce rate data on enough websites to make it a ranking factor in the algorithm.
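Even under that generous assumption, the coverage is tiny. The one-billion-websites figure below is just a commonly cited ballpark for the size of the web around that time, used only to show the scale:

```python
# Rough coverage estimate; the total-websites figure is a ballpark assumption.
sites_with_analytics = 30_000_000     # the generous tripled estimate above
total_websites = 1_000_000_000        # rough size of the web, for illustration

print(f"{sites_with_analytics / total_websites:.1%}")  # about 3%
```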

On top of that, just because a user bounces from a webpage does not mean they did not find the page useful or that it did not answer their query. For example, maybe they were looking for a line from the lyrics of a song. They found it on the first result they clicked on. Why would they browse around other pages of that site? And the fact that they do not browse around should not be held against the site. It provided exactly what the user was looking for.

What I do think they track is a different sort of bounce rate, and one that makes a lot more sense to include as part of the algorithm. That is how many visitors click on a listing in the search engine result pages, visit the page, click back in their browser, and then click on another result. This could be a signal to Google that the searcher did not find the information they were looking for on the first result.

That would make much more sense to use as a ranking signal.
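To make that concrete, here is a toy sketch of the kind of "pogo-sticking" signal described above, computed from made-up click data. This is purely a thought experiment; nobody outside Google knows whether or how such a signal is actually used:

```python
from collections import defaultdict

# Made-up click log for one query:
# (result_url, did_the_searcher_return_and_click_another_result)
clicks = [
    ("example.com/page-a", True),
    ("example.com/page-a", True),
    ("example.com/page-b", False),
    ("example.com/page-a", False),
    ("example.com/page-b", False),
]

stats = defaultdict(lambda: [0, 0])   # url -> [pogo_sticks, total_clicks]
for url, pogoed in clicks:
    stats[url][1] += 1
    if pogoed:
        stats[url][0] += 1

for url, (pogos, total) in stats.items():
    print(f"{url}: {pogos / total:.0%} of searchers went back to the results")
```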

Social Media

Social Signals Improve Rankings.

This one just keeps growing and growing, and amazingly with zero evidence to support it. That's right. There is zero credible evidence to support the statement that social signals have an impact on a webpage's rankings. Zero.

Oh, there are plenty of correlation reports. Tons of those. But here is where the bullshit begins.

Correlation is not causation. There is a big difference. It does not mean that social signals caused an improvement in rankings. What these reports show is that a statistically significant percentage of sites ranked near the top of search results have a high number of social signals.

What all of these correlation reports fail to address is which came first, the rankings or the social signals? It’s the chicken or the egg question.

Saying the social signals caused the high rankings would be like saying a site's higher volume of search traffic is causing it to rank better. It is backwards: the site has higher search traffic because it ranks better. Better rankings bring in more traffic, and more visitors are likely to bring more social signals.
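Here is a small simulation of exactly that chain. Shares are generated purely as a consequence of ranking, yet a correlation study of the output would still "find" that highly shared pages rank better. All numbers are invented for illustration:

```python
import random

random.seed(1)

# Shares are a *consequence* of ranking: better rank -> more clicks -> more shares.
# Nothing in this model lets shares influence rank.
pages = []
for rank in range(1, 51):
    clicks = 10_000 / rank * random.uniform(0.8, 1.2)   # traffic falls off with rank
    shares = clicks * 0.02 * random.uniform(0.5, 1.5)   # some visitors share
    pages.append((rank, shares))

top = [s for r, s in pages if r <= 10]
bottom = [s for r, s in pages if r > 40]
print(f"Average shares, positions 1-10:  {sum(top) / len(top):.0f}")
print(f"Average shares, positions 41-50: {sum(bottom) / len(bottom):.0f}")
# Top positions end up with far more shares, even though shares caused nothing here.
```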

Still not buying it? Okay, let’s look at it from a business perspective.

Why would Google allow something to be a major part of the search algorithm which they could be shut out of at any moment? It’s happened before that they have been blocked. What would that do to the algorithm? Not to mention that social signals are even easier to manipulate than links.

Until somebody ranks a webpage for something that is not ridiculously easy based largely on social signals alone, this is nothing but a myth.

Social media is great for generating traffic, building your brand, staying in touch with clients and prospects, and many other things. Use it for that. However, retweets, pins, and Facebook likes are not driving rankings.

Other SEO Myths

Focus on your PageRank, DA, PA, [insert other metric here].

This one has been around for years. People have been telling others to focus on improving their PageRank to improve their rankings. No matter how many times it was explained that PR is a measure of the quality of incoming links and a minor factor in regards to a page’s ranking, this one has hung around and will not go away.

Now that Google has stopped providing public updates of PageRank data, it has shifted to people suggesting you should focus on your DA, PA, Trust Flow, or some other metric. Just like increasing your PageRank does not directly lead to improved rankings, increasing your Domain Authority does not directly lead to better rankings.

In fact, I would argue that DA, PA, TF, CF, and any other metric you can think of provide even less correlation to your rankings than PageRank ever did. The reasoning behind that is pretty simple. At least with PageRank we knew it was based 100% on data that Google had collected about your webpage. Now take Moz’s Domain Authority, for example. It is based on data Moz has collected and feels is valuable. Moz does not have nearly as much of the internet spidered and indexed as Google.

The other thing that none of these metrics take into account, nor did PageRank, is the anchor text used in the links. If PageRank guaranteed you a high ranking, sites like Facebook would rank for just about everything, wouldn’t they?

Focusing on these third party metrics and obsessing over them is just a waste of time. Focus on things that are known to improve your rankings, and ignore things like DA and TF when it comes to your own site.

The Google Dance.

You may be surprised to see this commonly accepted phenomenon on this list, but I really believe that there is no such thing as the Google Dance. There are just ranking changes.


If your site’s rankings are fluctuating, there is a reason for it. You may not see the reason right away, but there is a reason. Every time someone has brought a situation to me where they felt their site was experiencing a Dance, I found something that explained it.

The most common cause of it is having a dynamic home page. I see this all the time. Someone sets up their website as a blog. Every post is published to the home page and pushes other posts off of the home page.

Well, think about what that is doing from an SEO perspective. It is totally changing the internal link structure of your website with every post you make. Whether you have full posts or excerpts on your home page, you are diminishing the link value of the links inside those posts as they move down the page. The home page links pointing to each post also get less powerful as the posts move down the page. (Links closer to the top of your webpages carry more weight than links further down the page.)
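Here is a toy illustration of that churn. The premise that link value decays with position comes from the paragraph above; the exact decay curve in this sketch is invented purely to show the effect:

```python
# Toy model: a blog home page with 10 post slots, where a link's assumed
# value decays with its position on the page (an illustrative curve only).
SLOTS = 10

def link_value(position):           # position 1 = top of the home page
    return 1.0 / position           # assumed decay, not a real Google formula

# Follow one post as newer posts push it down the home page.
for newer_posts in range(SLOTS + 1):
    pos = 1 + newer_posts
    value = link_value(pos) if pos <= SLOTS else 0.0
    print(f"after {newer_posts} newer posts: home page link value {value:.2f}")
```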

Some other common reasons for a site to bounce around in rankings:

Loss of links. Sometimes followed by additional links being found, causing the site to move back up again. Other times a webpage that a really good link is on goes down or has some sort of hosting problem. Google recrawls the page and records it as down. Then a week or two later it recrawls the page, which is now fully functional again.

Hosting problems. I have seen websites with intermittent hosting issues where the site is unresponsive at different times throughout the day. This can cause a site to drop, and then bounce back up when the issue is resolved.

These are 12 of the most common SEO myths I see mentioned over and over again.



This article was extremely helpful and very informative. Anyone who is looking to increase their online performance or get into SEO work should read these 12 myths. Honestly, reading this article saved me time, headaches, and money down the road.

 

Thank you for providing this information for all of us to enjoy.


  • 3 weeks later...

Max,

 

In my opinion, no, they are not using that sort of data. They are not using it because they have no way to access traffic sources on most of the sites out there on the internet, so what would they have to compare it to? For all they know, 1,000 hits a day from Facebook is low in your market space.

 

Now if every webpage on the planet was forced to give their traffic data to Google, maybe they would factor in something like that.


I accept that no one has demonstrated a direct connection between social shares and SERPs. I just find it hard to believe that Google has this information and chooses not to use it at all.

 

Social shares can and do lead to indirect bumps in SERPs. I often run free giveaways in which I place the entry forms on my best content pages. I've earned some links this way. This is not a link building strategy of course; the links are just a nice bonus. :-)


I accept that no one has demonstrated a direct connection between social shares and SERPs. I just find it hard to believe that Google has this information and chooses not to use it at all.

 

Social shares can and do lead to indirect bumps in SERPs. I often run free giveaways in which I place the entry forms on my best content pages. I've earned some links this way. This is not a link building strategy of course; the links are just a nice bonus. :-)

 

Okay, well taking a different approach to it, just because a site gets 1,000 visitors from Facebook, why should that be a positive ranking factor? I see plenty of sites that generate a lot of Facebook traffic and are awful, mostly through Facebook Ads.

 

For example, there is a sports site that makes up total BS sports stories and uses branding that looks kind of like ESPN. I think they are called Empire Sports or something like that. You can tell by the comments that they are getting a ton of traffic.

 

Should that site rank better just because a lot of people from Facebook visit it?


I accept that no one has demonstrated a direct connection between social shares and SERPs. I just find it hard to believe that Google has this information and chooses not to use it at all.

 

Social shares can and do lead to indirect bumps in SERPs. I often run free giveaways in which I place the entry forms on my best content pages. I've earned some links this way. This is not a link building strategy of course; the links are just a nice bonus. :-)

 

Social links are usually nofollow, so unless the followers are posting outside the social loop, the links might as well not exist as far as SEO and ranking pages in the SERPs are concerned.

 

Consider social an alternative traffic source, which is a good thing; diversifying traffic sources is an easy way to build a long term safety net.

 

No doubt giveaways and contests drive traffic/links. I've seen a small hobby niche blog consistently drive a boat load of traffic to their site with monthly contests for free products. If you're aiming for SEO, aim for traffic that owns sites/blogs (followed links).


I accept that no one has demonstrated a direct connection between social shares and SERPs. I just find it hard to believe that Google has this information and chooses not to use it at all.

 

Social shares can and do lead to indirect bumps in SERPs. I often run free giveaways in which I place the entry forms on my best content pages. I've earned some links this way. This is not a link building strategy of course; the links are just a nice bonus. :-)

It's reliability. Google cannot trust social because the platform can block them on a whim. Social can lead to links, of course. But the social shares themselves do not provide an SEO bump.

 

Think of privacy settings on Facebook. Google cannot rely on that. 


Okay, well taking a different approach to it, just because a site gets 1,000 visitors from Facebook, why should that be a positive ranking factor? I see plenty of sites that generate a lot of Facebook traffic and are awful, mostly through Facebook Ads.

 

For example, there is a sports site that makes up total BS sports stories and uses branding that looks kind of like ESPN. I think they are called Empire Sports or something like that. You can tell by the comments that they are getting a ton of traffic.

 

Should that site rank better just because a lot of people from Facebook visit it?

 

If people coming from Facebook on average visit several pages, spend several minutes on the site, click an AdSense ad, etc., then yes. Google doesn't necessarily know the content is crap, but Google can see the visitor's source and their behavior on the site.

Not trying to be contrarian. When I look at Google Analytics > Acquisition > Social > User Flow, I'm inclined to believe that Google uses this info to judge my site.

I have observed two instances where a temporary bump in SERPs occurred in conjunction with running recurring giveaway tweets, but I definitely can't say whether the connection is direct, indirect (traffic => links => SERPs), correlation, or coincidence.

Long story short, I'm not in the camp that categorically rules out social signals (or aliens, or big foot). :-P


Not trying to be contrarian.

 

It's a discussion forum. Not everyone is going to agree on every point. If they did, there probably would not be a whole lot to discuss. ;)

 

Okay, let's say Google is using that information. Wouldn't they just use that information for the traffic in general, no matter the source? Why would they weigh Facebook traffic any heavier than traffic from another traffic source? And if they are weighing traffic's activity based on where it came from, that opens up a whole other load of questions. Is Facebook traffic better than traffic from the Washington Post? What if I get a bunch of traffic from TechCrunch or Mashable? How would they determine which source is better and why would it be better?

 

The other thing to consider, and I'll always go back to this, is that they do not have this sort of traffic data for most of the webpages in existence. Let's say you rank in the top 10, but none of the other sites for your keyword in the top 20 are using Google Analytics (or AdSense or any other Google script). How do they know that the way your traffic is behaving is better than the way traffic is behaving on any of your competitors' sites? How do they weigh it? People might spend an average of 10 minutes on your site, which for a lot of sites is great. But maybe for all the other sites in the top 5 people are spending an average of over 20 minutes. Google just doesn't know that.

 

In that regard, I think it is difficult for them to use this sort of data because they do not have it on each site in order to make comparisons. The only way they could really do it is to set fixed data points. Visitors viewing 2-3 pages is good. Viewing 4-5 pages is better, and over 8 pages gets major brownie points. Same with time on the site. Over 2 minutes could be a slight bonus, over 4 a bigger one, and anything over 10 is major points.

 

But there is no relevancy in metrics like that. Over 10 minutes on a site is not that impressive if the top competitors are all averaging over 20.

 

I just do not see a way for them to implement that data in a fair way across the entire algorithm. Basically, they would be penalizing people who have good metrics but are not using Google Analytics.


What Exactly Are These Twitter Signals?

The universe of Twitter signals breaks up fairly neatly into four categories, where the category is the best indicator of how the signal is likely to be used.

 

 

Trend Strength

Explanation: How popular a trend is at any given time; how many people are talking about it     

Signals: Hashtag usage, Keyword usage (frequencies)

Tweet Strength / Engagement

Explanation: The popular value of any specific tweet, how many people have seen or interacted with it

Signals: Impressions, Favorites, Retweets, Link clicks, Video plays

User Influence

Explanation: The strength of any particular Twitter user, particularly as it relates to their follower network

Signals: Followers, Follower/Following ratio, Lists included in, @mentions

Link/Page Strength

Explanation: The only dimension to move beyond Twitter itself, measures traffic to external sites from links within Twitter     

Signals: # Links to a specific page, Link Clicks

 

http://searchengineland.com/signals-twitter-google-care-219202


Those articles are really just speculation. Nothing backing up social signals playing a role in rankings. They are basically just saying the way they want it to work with no actual proof in those articles of how it does work (as many of the white hat bloggers out there tend to do).

 

Show me websites that are outranking their competitors based on their social footprint and I will buy into it. I have never found any.


No one will ever be able to measure it in a direct way. Most say it supplements link building, but you can't rank solely based on social signals.

 

Google is a sophisticated machine so it would surprise me if they completely ignored it.

 

The way I see it is like this:

 

- Site gains popularity and receives a lot of social shares, so Google gives it an additional bump in anticipation of the site's popularity

 

- The hype is over, everything goes back to normal, and Google degrades the rankings again as the site is not popular anymore.

 

We all know social shares have zero link value/juice, but there are other mechanics in place that can show them (temporarily) higher in the results.


If they can crawl the social site, they'll crawl it and treat it like a normal webpage. But they can't use the social signals as a factor. I think they were blocked from crawling Twitter for a month and a half. After that, they cut the cord. They can't rely on something they can be cut off from at any time. Cutts called social sites an "imperfect web" because of blocking, adding, etc.

 

 

Someone could change the relationship status or someone could block a follower and so it would be a little unfortunate if we try to extract some data from the pages that we crawl and we later on found out that for example a wife had blocked an abusive husband or something like that and just because we happened to crawl at that exact moment when those two profiles were linked, we started to return pages that we had crawled. So because we're sampling an imperfect web, we have to worry a lot about identity when identity is already hard.

Unless we were able to get some way to solve that impasse where we have better information, that's another reason why the engineers would be a little wary or a little bit leery of trying to extract data when that data might change and we wouldn't know it because we were only crawling the web.

At the end of the day, Google can't trust social signals. So they're hesitant to use them.


It's reliability. Google cannot trust social because the platform can block them on a whim. Social can lead to links, of course. But the social shares themselves do not provide an SEO bump.

 

Think of privacy settings on Facebook. Google cannot rely on that. 

 

Not only blocking, but Facebook is notorious for making constant changes to their reporting and APIs. It would require quite a lot of effort to keep up with them, and as someone pointed out, it would also open a new can of worms. Can you really trust that Facebook cares enough about whether or not likes are real?

 

And this sort of leads to why I don't believe in "social signals" as a broader concept. Google would have to painstakingly build connections to every platform they'd want to include. It would be half a dozen top sites, max. In my opinion this would be such a herculean effort that they'd want some credit for it, and would at least let you know that such a thing exists.

 

 

What Exactly Are These Twitter Signals?

The universe of Twitter signals breaks up fairly neatly into four categories, where the category is the best indicator of how the signal is likely to be used.

 

I've been a Twitter user for ages, and my profile page is PR5 for several reasons. Around 2600 followers, which isn't that much in the land of US-based marketing automation and crap like that, but these users are more or less real. It's a smaller country, and even if I'm a bit of a bore I'm guessing that I still have some top journalists and politicians as followers.

I'm pretty sure I'd know if these "twitter signals" affected something outside Twitter. I'd have to say that the article is speculation, although you'd get some of those same effects with Google just using their normal algorithm. Authoritative profile pages should be strong (as in likely to pop up in relevant searches), and popular tweets often get picked up by all sorts of blogs and content curation platforms.

 

Well, actually tweets kind of affect something outside Twitter if you count dedicated social media trackers such as Klout as something. I have no clue how that algorithm is supposed to work, because it just doesn't seem to follow anything that happens.

 

I have the feeling that several of these SEO myths gain power because people are staring at statistics that seem to confirm their existing ideas, hopes, and biases. An unrelated service tracking social media isn't proof that another company uses it for anything.


Great post Mike. Nice to have a definitive list that isn't too vague; I was getting disillusioned by the WarriorForum.

Just wanted to say guys, thanks for spreading your wisdom in general - I've seen Mike, nettiapina, Yukon and some others I forget to mention on the WarriorForum and I kind of followed you here. Great to have a forum where there isn't incessant forum link spamming and "to rank higher on Google, write QUALITY content and get QUALITY backlinks. Also, social media is the best"-posts literally everywhere.


  • 3 months later...

Just got reminded about a topic that is constantly brought up, or at least has seen a resurgence on sites like Moz and the like, namely content freshness. In one of the latest articles from Moz, they assume that Google assumes your site is starting to become unreliable because it isn't updated regularly. Therefore, your site will lose its validity and you will lose rankings, or at least won't jump up in the SERPs because you're not "fresh".

I swear to God that I will eventually be converted into thinking the way everyone on Moz and Search Engine Land and the like wants me to, because there is a perpetual stream of regurgitated SEO myths going around the net, and it's difficult to stay adamant about your own beliefs when reading stuff like that.


HorseMan,

 

Yes, I have seen people talking about updating content a little more than normal lately too. Quite annoying. Just from a logical point of view, forget about the obvious evidence in the SERPs, why would Google make that a part of the algorithm? So if there is a better answer to a query, and it has been the best answer for 10 years, it is going to get dinged for not being updated? That is just silly.


  • 8 months later...

Excellent list. Much of this I have been telling my own people about for years - they'll appreciate this forum.

 

One topic I've been thinking about recently is the "Recently Updated" content issue.

 

On the one hand, it sometimes seems once a page is lodged at the top of the rankings it tends to stick there.  That could be from the steady clicks and users not needing to click other results.

 

On the other hand, in my personal experience (hence thinking about this) it seems some pages start to slowly drift down the rankings but can be jolted back up simply by making any change which generates a new last-modified header date.

 

Google could also try to have a mix of dates for some searches. Checking clinton cigar for me comes back with a mix of 1998 and 2016 articles. That one search is a really interesting data set. It suggests that if you have a top ranking for an older page, don't touch it. It also suggests a 2nd site could be effective with continued new content.

 

Anyone know of a tool that grabs last-modified header dates where one could click through listings and quickly get some data for testing?
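In the meantime, here is a minimal sketch of what I had in mind, assuming Python with the requests library; it just reports the Last-Modified header for a list of URLs when the server bothers to send one (the URLs below are placeholders):

```python
import requests

# Placeholder URLs; swap in the ranking pages you want to compare.
urls = [
    "https://www.example.com/old-page",
    "https://www.example.com/new-page",
]

for url in urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        print(url, "->", resp.headers.get("Last-Modified", "no Last-Modified header"))
    except requests.RequestException as exc:
        print(url, "->", f"request failed: {exc}")
```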
