Mike's Tuesday Tips:
I build a lot of private networks. Back in 2013-2014 I produced the best course out there on how to do this successfully. The single most important skill in building a private network is knowing how to properly evaluate domains to purchase and build on.
Skill in evaluating previously owned domains is not only useful for building private network sites. These domains can be even more useful as money sites. You can use their authority to build solid ranking internal pages just like you use their authority to build solid links when you use them as a network site.
I’m not going to go in depth on how to evaluate sites (maybe that will be a tip for another day). Everyone has their own methods. Whatever method you use, there will be a few things you want to do if you are looking to build the domain into a money site.
First, just like with network sites, you want to try to find domains that have not had spammy tactics used on them in the past, but you need to be even more selective with a possible money site. Unlike with network sites, you are absolutely trying to rank these sites. With a network site, I’ll accept a spammy link profile many times as long as it is not in one of the major spammy niches like payday loans, weight loss, make money online, credit cards, business coaching, etc.
Knowing that we want to look for a clean link profile, the most ideal sites are ones that have a lot of brand or generic anchor text pointing at them. A great example I once saw was a 4-letter domain that previously was a website for a radio station. Think something like WKRP.tld with a ton of links using WKRP and WKRP.tld as anchors. This sort of domain is absolutely perfect to convert into a money site.
Other great examples I have found are former charity sites, organizations that no longer exist, and politics-related sites.
You don’t have to limit yourself in any way to just these types of sites. If you find one of these though, they are usually gold. When they already have a ton of generic and brand related anchor text, you can hammer the site with links that have keyword specific anchor text and not worry about it. It can make ranking much easier.
Does Relevancy Matter?
If you happen to find a niche specific site, be cautiously excited. For example, you want to set up a site about reverse mortgages, and you come across www.reversemortgagesinfo.tld in the auctions and all of its metrics look good. It has a lot of relevant links from decent sources.
Your first reaction is probably going to be to jump all over this domain. Bid $1,000, $1,200, maybe even $1,500. Who cares? A site ranking well for terms relating to reverse mortgages could make that amount of money in a week easily.
You have to ask yourself one question, though. If the site is about reverse mortgages, has a lot of relevant links, and has a great domain name, why did the previous owner let it go? People do not regularly just drop domains like that.
What would I do with a domain like this?
Well, I would not want to ignore the fact that I may have just stumbled onto a goldmine. It’s unlikely, but possible. Two things I would do.
First, I would try to track down the previous owner. Yep. Go to the WhoIs information. Check the site in the Wayback Machine. Anything to find a contact email address or phone number. If I can contact them, I can see if they are willing to divulge any information about the domain. They might say they just let it go because they moved on to something else. They also might let you in on things like a manual penalty it received, or that it tanked around a date that coincides with a specific Google update. If it was likely hit by Penguin, I’m going to see how bad the links are and estimate the cleanup effort to determine if it is worth it. If it was hit by a manual penalty, it depends on what the penalty was for. I have a process I have used for acquiring domains with manual penalties and getting them lifted. It works almost every time.
I’m going to factor these issues into the price I’m willing to spend on it. More risk = lower price.
If I cannot get in touch with the previous owner, what I would do is try to get the domain cheap and take a chance on it. I’m not going to blow a lot of money on a domain like this without more information, but I do not mind taking risks if it is inexpensive.
The last thing you want to do though is spend a ton of time building a great site on this domain only to find that it is never going to rank. I’m going to test it first.
I’m going to put some very simple content on the home page with a link to an internal page using the title of that internal page as the anchor text. I want the title to be something fairly long term and easy to rank for.
For the reverse mortgage example, I would probably go with something like “How do I know if a reverse mortgage is right for my parents?” The internal page is going to be about that topic, and that will be the exact title. I give it about two weeks, and if I see the internal page indexed but not ranking in at least the top 100 for that search term, I can be pretty sure the domain is a dud.
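The two-week check above is easy to script once you have rank data. A minimal sketch, assuming you already have the top-100 result URLs for the test query from whatever rank tracker or SERP API you use (scraping Google directly violates their terms, so the result list here is a hypothetical input):

```python
from urllib.parse import urlparse

def find_rank(serp_urls, target_domain):
    """Return the 1-based position of the first result hosted on
    target_domain (or a subdomain of it), or None if it never appears."""
    for position, url in enumerate(serp_urls, start=1):
        host = urlparse(url).netloc.lower()
        if host == target_domain or host.endswith("." + target_domain):
            return position
    return None

# Hypothetical top results for the test query
results = [
    "https://www.some-competitor.tld/reverse-mortgages/",
    "https://www.reversemortgagesinfo.tld/right-for-my-parents/",
]
print(find_rank(results, "reversemortgagesinfo.tld"))  # -> 2
```

If this returns None after the test window even though the page is indexed, that is the "dud" signal described above.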
I can live with wasting a little bit of money in a situation like that. What I hate to do is waste a whole bunch of time building out a full site and it never moves in the rankings. Testing a site like this saves you from doing that.
If the site does appear to be a dud as a money site, it could still work out as a network site at that point, so it is not a total loss.
Take Advantage Of Its Power
Once you have a site to use, whether it is relevant or not, it’s time to start building the site on it.
Remember that for aged domains like this, most of the authority is in the homepage, unless the site had a lot of internal pages that attracted links.
We want to take advantage of this. There are two different ways to look at this.
For a relevant site: in this case, the homepage already has a lot of relevant links coming into it.
You are perfectly fine in trying to rank the homepage for some of the major keywords you are targeting, as well as building silos to target other related keywords. You are going to approach this as you would basically any other money site.
For a non-relevant site: this will be a little different. There are no relevant links, so there is initially no point in trying to rank the homepage for any of your major search terms. Instead, create landing or silo pages targeting those terms and build silos around them. Have the homepage link directly to those landing pages with keyword-relevant anchor text.
We are harnessing the non-relevant link equity that is flowing to the homepage and turning it into a relevant link. This is a technique I call link laundering.
A few other things I do:
Clean up any obviously spammy links, just like you would for any other money site.
Begin link building like you would on any other project. Put the site in directories, start hitting it with private network links, do some link outreach, etc.
If you are one of those people who worry about anchor variety, you don’t have to here. If the site already has a lot of generic or non-relevant anchor links pointing at it, you can go 100% keyword-rich anchors for your links on this site.
Using these tips you can find some great domains that will give you a leg up when you start your next money site.
Mike’s Tuesday Tips:
This one may seem basic to some of you, but I think these are two tools that are under-utilized in the SEO community and would allow people to answer a lot of their basic questions themselves. They are the Google cache and text-only version of webpages.
I use both pretty frequently when doing audits and trying to diagnose problems with webpages.
How do you access them?
It’s pretty simple. If a page has been indexed and cached by Google, you can type cache: in front of the URL in your browser’s address bar (assuming Google is your default search engine), or use the option in the search results. There used to be a link that simply said “Cached”. Now you have to click the three dots beside the search result title, and you will see a button labeled “Cached”. Click that, and it takes you to the Google cache version of the page.
Once you are looking at the cached version of a page, you can get to the text-only version of it by clicking the link at the top of the page.
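You can also jump straight to either view by building the cache URL yourself. A small sketch; the `webcache.googleusercontent.com` pattern and the `strip=1` parameter for the text-only view are observed conventions rather than a documented API, so treat this as an assumption:

```python
from urllib.parse import quote

CACHE_BASE = "https://webcache.googleusercontent.com/search"

def cache_url(page_url, text_only=False):
    """Build the Google cache URL for a page.
    strip=1 requests the text-only rendering of the cached copy."""
    url = f"{CACHE_BASE}?q=cache:{quote(page_url, safe=':/')}"
    if text_only:
        url += "&strip=1"
    return url

print(cache_url("https://example.com/page"))
print(cache_url("https://example.com/page", text_only=True))
```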
The cached version is good to make sure that everything on your page is visible to Google and the text-only version is good for seeing how things really lay out to search engine spiders before CSS rearranges the page.
What kind of things can you identify and find out about a page with these two options?
Well, how many times have you seen someone ask if Google can see text hidden behind a “read more” button or in accordion tabs? Go look at the cache and/or text-only version of the page or a page using similar code. Right there is your answer.
Want to know if Google is indexing comments on your webpages? Look at the text-only version. If you are using Disqus, you’ll see the comments do not appear.
Remember a tip I gave a while back about how Wikipedia uses CSS to show their menu prominently while it is really buried at the bottom of the page code? Looking at the text-only version of pages is how I first discovered they were doing that.
Want to see if there are hidden links on a page or other hidden elements? They can be hidden in the browser version, but they cannot hide them in the text-only version of the page.
I’m including screenshots of a site built in Elementor (and featured on their site) called The Perfect Loaf to illustrate what I mean. You can see how the menu looks for desktop and mobile devices. Then look at the text-only version. The menu is duplicated. In this case, that is 15 extra links on every page, weakening their internal link structure.
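A quick way to spot duplicated navigation like this without eyeballing the text-only view is to count repeated hrefs in the page HTML. A standard-library sketch; the sample markup below is made up to mimic a desktop menu duplicated for mobile:

```python
from collections import Counter
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def duplicated_links(html):
    """Return {href: count} for every link appearing more than once."""
    parser = LinkCollector()
    parser.feed(html)
    return {href: n for href, n in Counter(parser.hrefs).items() if n > 1}

# Hypothetical page: desktop menu plus a duplicated mobile menu
sample = """
<nav><a href="/recipes">Recipes</a><a href="/guides">Guides</a></nav>
<nav class="mobile"><a href="/recipes">Recipes</a><a href="/guides">Guides</a></nav>
<a href="/about">About</a>
"""
print(duplicated_links(sample))  # -> {'/recipes': 2, '/guides': 2}
```

Run this against the rendered source of a page and every href with a count of 2 or more is a candidate for the kind of duplicated-menu bloat described above.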
Now imagine a site with 40-50 links in their navigation.
You can use the cache and text-only version of pages to answer a lot of questions about how search engines view webpages.
Mike’s Tuesday Tips:
Last week was about identifying CLS and I mentioned how to identify LCP (Largest Contentful Paint) on your pages. This week, we are going to identify common issues that cause a slow LCP.
LCP is probably the simplest of the Core Web Vitals to deal with.
Having addressed this on a few hundred pages over the past year now, I can tell you that the most common cause of a slow LCP loading time is a slow server response.
If you are on a shared hosting environment, especially one that has been oversold (and often overhyped) - I’m looking at you, Siteground - you can tweak things all you want, but there is only so much speed you can squeeze out of the server.
Want to figure out if your server has been oversold? Take a look at how many domains are hosted on your same server using this tool: https://viewdns.info/reverseip/
There are tons of other tools out there like this. If you really want to investigate, you can run all the sites through something like Semrush and take a look at their traffic estimates. You may have one or two sites on the same server getting tons of traffic and hogging up a huge amount of resources.
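If you want to script that lookup, viewdns.info also exposes its tools through an API (an API key is required; the key below is a placeholder). This sketch only builds the request URL rather than making the call, so it stays runnable offline:

```python
from urllib.parse import urlencode

def reverseip_api_url(host, apikey, output="json"):
    """Build a request URL for viewdns.info's reverse-IP API.
    Requires a registered API key; 'output' can be json or xml."""
    query = urlencode({"host": host, "apikey": apikey, "output": output})
    return f"https://api.viewdns.info/reverseip/?{query}"

print(reverseip_api_url("example.com", "YOUR_API_KEY"))
```

Fetching that URL (with a real key) returns the list of domains sharing the server, which you can then feed into a traffic-estimation tool as described above.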
If you really care about pagespeed, one of the best things you can do is get away from shared web hosting.
Before someone comments about how they got good scores with shared hosting on Core Web Vitals, Pagespeed Insights, GTMetrix, or any other speed test you want to mention… sure, I believe you. However, think of how much better your pages would load if you moved to a decent VPS or dedicated hosting solution.
There is an example below from a new client I just started working with. This site is hosted on WPEngine, which is supposed to be one of the better hosts out there. We are getting server response times ranging from 1.5-2.2 seconds. Fixing that alone, without any other tweaks, will bring their LCP score in line with Google’s standards.
Before we go crucifying WPEngine, there is also a possibility that the problem is on the development side of the site design. There might be some processes being called that have to complete server side before anything loads. With a lot of dynamic content, that can happen sometimes.
A database cleanup may also solve some of that response time.
If you do not want to switch hosts, another solution that may work is to use a content delivery network (CDN). Your mileage and experience may vary with these. For example, I have had some sites I have put on Cloudflare and saw drastic improvements. I have used it for other sites and it has actually slowed them down.
For any styles that are critical to your LCP and/or above the fold content, you can inline them, which means you place the style elements directly in the <head> of the page.
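Inlining critical CSS is usually done by a build step. Here is a minimal sketch of the idea: lift the critical rules into a `<style>` block just before `</head>`. The page and CSS strings are hypothetical, and real tools (Critical, Penthouse, etc.) also extract the rules automatically and defer the original stylesheet:

```python
def inline_critical_css(html, critical_css):
    """Insert critical CSS as a <style> block just before </head>.
    Sketch only: assumes a single well-formed </head> in the document."""
    style_block = f"<style>{critical_css.strip()}</style>"
    return html.replace("</head>", style_block + "\n</head>", 1)

page = "<html><head><title>Demo</title></head><body></body></html>"
css = "h1 { font-size: 2rem; } .hero { min-height: 60vh; }"
print(inline_critical_css(page, css))
```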
The next thing I see frequently slowing down LCP times are images and/or videos. Make sure that you optimize and compress your images.
Browsers load full images before adjusting them to the proper size to be viewed. If you are using an image that is 2000 x 1330 pixels, but it is only viewed at 600 x 400 in your page design, the browser is going to load that full 2000 x 1330 sized image. Before you bother with compressing anything, make sure you are using appropriate image sizes. Resize the image and then upload it back to your server.
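Working out the right target size while preserving aspect ratio is simple arithmetic. A small helper, using the 2000 x 1330 example above (the function name is my own; any image editor or Pillow's `thumbnail()` does the same math):

```python
def fit_within(width, height, max_width, max_height):
    """Return (w, h) scaled down to fit inside the max box,
    preserving aspect ratio; never scales an image up."""
    scale = min(max_width / width, max_height / height, 1.0)
    return round(width * scale), round(height * scale)

print(fit_within(2000, 1330, 600, 400))  # -> (600, 399)
print(fit_within(300, 200, 600, 400))   # -> (300, 200), already small enough
```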
You can also lower the image quality. Load it up in Photoshop or something like GIMP and change the resolution by adjusting the pixels/inch. Many times you will find that a lower resolution still looks great on your web page, and it will be a much smaller file.
One little trick I sometimes will use if I notice that changing resolutions on an image causes the quality to take a noticeable dip is I will make it a part of the design. I will toss a dark overlay over it and/or make it slightly blurry. I’ll do this if the image is being used as the background for a section. It helps the text over it to pop out anyhow.
If you are using WordPress, there are a lot of plugins and options out there for compressing images. There are also options for serving next-gen image formats, mainly WebP. WebP images are not supported in all browsers, so make sure you have JPG or another format as a fallback.
If you are using a video or slideshow, stop it. They are not a great user experience on mobile devices.
Lastly, use a tool like GTMetrix to investigate the loading order of elements on your page. I hate GTMetrix scores and the fact that they default to desktop loading. GTMetrix is pretty useless for everything other than its waterfall display. There are other tools that have waterfall displays, but I find GTMetrix the easiest to work with.
Take a look at what is loading before your LCP element. Are those things necessary? Is there anything that can be deferred or just eliminated?
I’ve shaved significant time off of LCP scores just by getting rid of Google Fonts. Google Fonts are great, but they have to load from Google’s servers, and each additional font weight you use is an extra file to load.
Another common one that slows down pages are things like Font Awesome icon libraries. A lot of page builders like Elementor will give you the option to use icons from Font Awesome, Themify, or Ionicons.
The problem is that in order to use just one icon, the entire library is loaded. Use a single image instead. Some builders, like Oxygen and Bricks, will let you use your own SVG files as icons. I think Elementor recently added that option too.
The advantage of using your own is that the browser only has to load what you are using and not an entire library of icons.
I see this happen a lot with local business websites. They often like to use one of those phone icons beside their phone number in the header. Sometimes an email icon beside an email address too or the Google Places pin beside an address.
Because it loads in the header, this usually will slow down the LCP time. Use your own icons instead and speed it up.
How to Identify Cumulative Layout Shifts
With Core Web Vitals upon us, people are scrambling to optimize their sites. Mostly a waste of time, but it is what it is.
Out of the 3 Core Web Vitals, cumulative layout shift (CLS) is the one I have seen people having the most trouble identifying. Seeing your score is easy in Pagespeed Insights, Lighthouse, or web.dev, but how do you identify what is actually causing the shifts on your pages?
It’s pretty simple actually. To do so, you are going to want to use Chrome’s Web Developer Tools.
-Open the page you want to check in Chrome.
-Click the dropdown menu on Chrome.
-Go to More tools >> Developer tools
This should open you up into a screen that shows Lighthouse and a bunch of other options along a menu at the top.
-Click on the Performance tab at the top.
-Then click on what looks like a refresh icon.
-Let it load, and you will end up with something like the attached screenshot.
If you have cumulative layout shifts happening, they will appear as red bars in the Experience row. You can zoom in on them by hovering your mouse over that area and using the scroll wheel. (At least on PCs. I have no idea how to do it on inferior Mac machines.)
You can also click and drag the whole thing around if things start moving off the screen as you zoom in.
If you hover over the red bars, in the website pane on the left it will highlight where that shift is happening.
By the way, while you are here, you can also identify your Largest Contentful Paint (LCP) element. In the Timings row, you will see a black box labeled LCP. Hover over it, and it will highlight your LCP element.
Mike’s Tuesday Tips:
When should you disavow links?
Back in 2012, Google shook up the link building market with two massive actions. First, there was an enormous push to take down popular public networks. Anyone remember Build My Rank or Authority Link Network?
Then in April, they unleashed the Penguin algorithm filter and sent many SEOs running around with their hair on fire.
The Penguin algorithm was harsh. Probably too harsh to be honest. It weaponized link building. It was risky, but you could potentially get competitors penalized by throwing spammy links at them. I say it was risky because Google’s algorithm was far from perfect. You could just as easily strengthen their position as harm it.
While the Penguin algorithm did a great job in many cases of punishing sites using low-quality links, a lot of innocent sites were also caught in the mix, along with sites whose owners had hired an SEO without realizing spammy link building was being used.
As a result, Google released its Disavow Tool in October of that year.
Fun fact: Did you know that Bing actually released a Disavow Tool before Google? Yep. Bing’s came out in September of 2012.
Since its release, people have debated its use. Early on, many of us cautioned against using it. Google generally does not tell you which links they have flagged as bad, except in some cases of manual penalties, where they may give you a few examples.
Overuse can actually hurt your rankings.
(Some of us also suggested caution because we saw it as Google crowdsourcing to fix a problem they couldn’t figure out on their own. Basically they were saying, “Hey, why don’t you tell us which links are bad that you have been building? In exchange, here is a get out of jail free card.”
I think our concerns were valid. A couple of years ago Google announced that they can pretty well identify bad links on their own now and just ignore them. Where do you think the data came from to train their AI and machine learning algorithms to do that?)
Matt Cutts made a great analogy for how to use the tool. I’m paraphrasing, but he said you should use it like a scalpel, not a machete.
There are only two cases where you should use the Disavow Tool.
The first case is when you have received a manual penalty from Google related to link building. If this happens, you should try to actually have the offending links removed by those websites and fall back on the Disavow Tool for the ones you cannot get removed.
The second case where you should use the Disavow Tool is when you see a massive drop in rankings AND you have seen some low quality links starting to pile up or maybe there was a recent influx of low quality links.
If you have a page or pages hit by the Penguin filter because of bad links, you won’t see slight ranking drops. If you see drops of just a few spots, it’s not your links. You won’t drop from #1 to #3. You will see something more like drops of 50 spots or more. Sometimes you will drop out of the top 100 completely.
In these cases, again, the best solution is to try to get links removed, but in cases involving hundreds or thousands of spammy links coming in, that will probably not work.
You can use the Disavow Tool.
How do you know which links to disavow?
Well, Semrush has a great Toxicity filter you can look at, but do not just disavow all links it identifies as ‘toxic’. Use this filter as an indicator for links you should take a look at yourself.
Only disavow links you have manually inspected yourself.
Do not use 3rd party metrics like DA to identify low quality links. DA has nothing to do with the quality of a link (nor does any other 3rd party metric). If anything, those metrics are trying to give you a gauge for the potential strength of a link. Strength and quality are not the same thing.
How do you recognize low quality or spammy links? Well, it’s a lot like United States Supreme Court Justice Potter Stewart famously said about pornography, “I know it when I see it.”
Is the content a jumbled mess? Is the link and content at all relevant to what you do? Does the page even load? In short, if a prospect who had never heard of you came across the page and saw your link, would it hurt your brand image? Would you be embarrassed to be mentioned on that page?
I consider links like blog comments, forum posts, social bookmarks, document sharing sites, and all insignificant wikis to be spam worth disavowing too.
Lastly, if you do decide to disavow links, remember that your disavow file is kind of a living document. When you upload a file, it replaces the old one. The Disavow Tool does not store the old data. If you decide to disavow additional links, you should keep adding on to the same document and upload that file.
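The disavow file itself is plain text: one full URL or `domain:example.com` entry per line, with `#` lines treated as comments. Since each upload replaces the previous file, it helps to merge new entries into the old file rather than starting over. A small sketch (the sample entries are hypothetical):

```python
def merge_disavow(existing_text, new_entries):
    """Merge new disavow entries into an existing file's text,
    preserving order and comments and skipping duplicates."""
    seen, lines = set(), []
    for line in existing_text.splitlines():
        lines.append(line)
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            seen.add(stripped.lower())
    for entry in new_entries:
        entry = entry.strip()
        if entry.lower() not in seen:
            lines.append(entry)
            seen.add(entry.lower())
    return "\n".join(lines) + "\n"

current = "# disavowed 2021-03\ndomain:spammy-directory.tld\n"
updated = merge_disavow(current, ["domain:spammy-directory.tld",
                                  "domain:new-spam.tld"])
print(updated)
```

Upload the merged output so the previously disavowed links stay disavowed along with the new ones.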