SearchLove returned to London on the 15th and 16th October with two days of insight-packed talks from a host of industry leaders in search marketing. Congratulations to Distilled for throwing yet another flawlessly-organised, informative and entertaining conference.
Best of luck tomorrow for the #SearchLove @LynsLittle @willcritchlow but could you stop raising the bar for search conferences, yours sincerely, Me
— kelvin newman (@kelvinnewman) October 14, 2018
For those of you who weren’t lucky enough to attend, here is the first leg of our two-part recap with the key takeaways from some of the day’s talks. The talks covered in this post include:
Julie Scott – No One Wants to Watch Another Commercial
Talk Summary
Julie shared how Onion Labs has combined the best comedy writers in the U.S. with some talented marketers to create a custom content studio that teaches brands to leverage comedy and entertainment to reach the publisher’s coveted target audience.
Key Takeaways
What is The Onion and Why is it Popular?
The Onion has been responsible for some of the best headlines around and has received lots of coverage in mainstream media. Their headlines cover big political events, but also marketing and advertising stories.
Of the 1,500 headlines pitched by The Onion in a week, only 30-40 are selected. This is a 2% success rate, which is lower than the acceptance rates of some top US universities.
The Onion is popular because there is a relatable human truth to their satirical headlines. In some instances, readers can confuse The Onion’s stories with reality. People react to these stories because they can be very close to the truth. Literally Unbelievable is a site that documents people getting the wrong end of the stick for satirical headlines.
The Birth of Onion Labs
While The Onion is known for political satire, brands aren’t off limits. Take this headline they created about Monster selling a new line of defibrillators:
Onion Labs was born to make advertising more entertaining. No one wants to watch another 30 second advert, people want to watch something that’s entertaining and comedic. Julie believes brands and agencies have been stuck in a rat race for too long trying to get higher click-through rates in narrow-minded ways.
As an antidote to this, Onion Labs provide the best comedy writers to work with teams in agencies who help brands create different types of advertising. The brand is the hero without coming across as a typical ad.
With the introduction of Onion Labs, Julie and the team have had to implement creative processes which include: getting strategic input from the client, constructing a creative brief, collaborative connecting and creating a design content strategy.
Onion Labs is part of Fusion Media Group which is the largest millennial publishing network and solves the problem of distribution with a very wide reach including the likes of Gizmodo.
A few examples
Julie finished up her entertaining talk by running through some of the campaigns Onion Labs have been responsible for.
The campaign with American Honey Whiskey imagined what it would be like if social media were a real physical place.
The Kind Snacks campaign was a parody of a public service announcement, looking at the phenomenon of Restless Palate Syndrome. Julie showed the importance of balancing the comedic part of the advert with getting across the brand message: they’re always looking at how far they can push the comedy while still getting the message across.
Onion Labs worked with McDonald’s, using the cult following for their Szechuan sauce to promote their new podcast, The Sauce.
Tom Capper – The two-tiered SERP: Ranking for the most competitive terms
Talk Summary
Tom has noticed over the last few years that the normal rules don’t always apply for the most competitive terms. In this talk, Tom dug into whether and how Google is going beyond our normal understanding of ranking factors, and how we need to react.
Key takeaways
The Dreaded Conversation
Tom kicked off his presentation by talking about the dreaded conversation with your boss: “Why aren’t we ranking first for [insert head term]?” Tom has been frustrated by this question because there’s not much in the way of patterns, and rankings are usually volatile.
Like many SEOs, Tom learnt SEO via DistilledU, but as he has progressed in his career these conversations about head terms have come up more and more. After all of his training and experience in SEO, Tom felt like he was missing something.
Two-tiered SERPs
In a talk a couple of years back, Tom concluded that links mean increasingly little. He has since re-run the research, and domain authority and links correlate with rankings much less than they did even two years ago.
What if head terms are no longer about ranking factors? Maybe ranking factors aren’t even useful for head terms, meaning that there could be two tiers of SERPs.
Back to Basics
It’s a cliché to say Google is changing, but the pace of change is very fast. Last year Google ran 31,584 experiments and there were 2,453 search changes.
If you want to look at why your site is being outranked, then you can check basic technical signals, targeting, links and hygiene factors (mobile, page speed, quality etc.). Google hasn’t reinvented the rule book, they are building on the same model.
Tom compared SEOs and Google’s algorithms to a frog in a saucepan of water that is slowly getting hotter. In the same way that the water heats up and the frog doesn’t notice, Google is changing but SEOs don’t notice because these changes are gradual.
Tom compared his research to Moz’s study on ranking factors. Tom’s research found that links were less of a predictor of rankings compared to Moz’s results. This finding could be due to the fact that Tom looked at the top 10 results rather than the top 50, like Moz did. Maybe it can be concluded that links matter less for the top 10 positions.
Tom’s hypothesis is that links are used as a proxy for popularity by Google to qualify a page as one that should be ranked near the top. You need links to get your site into the mix, but that isn’t necessarily enough when it comes to positions 1 to 5.
Example: Mother’s Day Flowers
Tom examined the SERPs for the term “mother’s day flowers”, which gets a surge in search volume once per year, to see how rankings changed with the increase in search volume. Tom found that there was a lot of fluctuation in the SERPs when this search volume arrived.
Tom found that the offline brands, such as Marks & Spencer, saw an increase in rankings at this time, whereas the specialist and best optimised pages saw a significant drop. Interestingly, the smaller brands that did hang on to their rankings tended to have a better design. This example seems to suggest that head terms are no longer about ranking factors.
In another example, one of Tom’s clients saw a 20% reduction in organic traffic but no resulting loss in revenue, even though the lost keywords had been driving traffic. It looks like Google has reassessed the intent behind keywords and stripped out rankings where the intent doesn’t align.
Aligning KPIs with Google Engineers
Google engineers are being measured on the percentage of people pogo sticking from an image link back to search results. Some Google engineers have KPIs which include minimising this metric. Engineers also look at the time to first interaction in the search results; they want that click to be as fast as possible. Neither of these metrics is impacted by links, so we need to look at the potential factors that could influence them.
The fundamentals of ranking in organic search do still work, but we need to start optimising for Google’s metrics.
So how can we minimise pogo sticking? You could look at running user surveys comparing your site and competitor sites, getting rid of interstitials and improving page speed.
When it comes to improving the time to first interaction, we can optimise metadata to earn a quick click, work on becoming a recognised brand in your field, sponsor YouTubers, produce expansive, thorough content, run Facebook ads, and so on.
Key points:
- The game at the top of the SERPs has changed. The normal rules don’t apply.
- Expand your horizons as an SEO and inherit Google’s KPIs wherever you can.
- The foundations of SEO still work so don’t abandon them.
Wil Reynolds – 10 Conversation Changing Visuals
Talk Summary
Whether you are e-commerce, local, B2B, B2C, PPC or SEO, Wil’s presentation covered data visualizations that have helped his team change conversations with clients, prioritize workloads, get things done faster and win.
Key takeaways
If you haven’t seen or read about Wil’s talk from last year’s SearchLove then start there. Focus on the Bubble charts Wil spoke about. Wil has found that most people haven’t tried out the basics, so if he had his way he’d give the same presentation again. However, it’s not ok to repeat talks, so Wil decided to share his advanced techniques with us. But seriously, go back and create bubble charts!
Start with the bubble charts
Wil started off by revisiting the bubble charts from last year’s talk, where the y axis represents ranking position from STAT, the x axis represents conversions and the bubble size represents search volume. With this visualisation, you can find opportunities that should be prioritised and work out where to devote resource and where it is being wasted. For example, one of Wil’s ecommerce clients was wasting money on their best terms, but they are now creating colour pages based on these insights which will make them much more money.
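The chart Wil describes can be sketched in a few lines. The keyword data below is entirely made up for illustration; in practice the ranking positions would come from STAT and the conversions from your analytics platform.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Hypothetical keyword data: (keyword, ranking position, conversions, monthly volume)
keywords = [
    ("garden chairs", 2, 180, 40_000),
    ("buy garden chairs", 9, 35, 8_000),
    ("garden chairs sale", 14, 4, 25_000),
    ("cheap garden chairs", 4, 90, 12_000),
]

conversions = [k[2] for k in keywords]
ranks = [k[1] for k in keywords]
sizes = [k[3] / 50 for k in keywords]  # scale volume down to sensible bubble sizes

fig, ax = plt.subplots()
ax.scatter(conversions, ranks, s=sizes, alpha=0.5)
ax.invert_yaxis()  # position 1 belongs at the top
ax.set_xlabel("Conversions")
ax.set_ylabel("Ranking position")
for kw, rank, conv, _ in keywords:
    ax.annotate(kw, (conv, rank))
fig.savefig("keyword_bubbles.png")
```

Big bubbles low down the chart (high volume, poor rank) are the opportunities; small bubbles with strong ranks but few conversions may be where resource is being wasted.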
Local results
Wil introduced the idea of using colour-coded maps to see market penetration in specific areas for specific keywords. Wil’s been using this to work out how you can get a site to rank when users are physically further away. You can look into why a certain business is ranking well for a very small geographic area and why other businesses don’t. You can also layer in Google My Business and Ads data to further understand the local reach of different sites.
How do I stack up against competitors?
Where is Amazon ranking in the top 5 while the client only ranks in the top 10? Wil has been using data pulled into Power BI to determine which keywords are worth trying to compete with Amazon for and which aren’t worth the time. You can use this data to find out where Amazon is ranking for your brand and identify where people are falsely using your brand name to sell similar products.
YouTube videos are ranking in the top 5 positions increasingly frequently. You can compare clients where YouTube is showing for queries on mobile and desktop, often there is a different picture across devices.
For example, it looks like Google have been testing YouTube videos in the desktop search results more than mobile, so don’t just assume the same SERP features are consistent across both. Wil recommends not making hasty decisions based on the insights you see; there might be a more nuanced landscape in search, so don’t rush off to create loads of YouTube videos just because that’s what you see ranking for desktop searches.
Power BI and BigQuery
Wil uses Power BI and BigQuery and had some insights to share when it comes to using these platforms:
- Use drill down in Power BI, which allows you to look into individual domains.
- Excel is a tool for small data, you need to use BigQuery for big data.
- Make sure to include supporting tables so you can validate what you see by checking what makes up individual segments.
- Sort tables by conversions where possible to keep conversations with clients about what’s making money.
- Big data has helped Wil prioritise, so that he relies less on best practices and more on data-driven insights.
- Wil isn’t a fan of PowerPoint decks; his client meetings are much more fruitful when he can show clients the insights with live demos, resulting in stronger and more meaningful conversations.
I’m spending on what?!
Look at what a site is ranking for in Google Ads data and you’re bound to see some queries you shouldn’t be bidding on. Simply reining in keyword bidding on these terms can save a lot of money. This involves negating certain keywords to make sure ads are only shown for search queries where it makes sense.
One of Wil’s banking clients was spending money on ads which were clearly homework questions. It was absolutely pointless bidding on these queries as they won’t make the client any money, so Wil and his team added filters to stop bidding on these terms.
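A minimal sketch of that kind of filter: scan a search-query report for informational, homework-style patterns and total up the spend they’re wasting. The queries, costs and patterns below are invented for illustration; a real negative-keyword list would be built from your own query data.

```python
import re

# Hypothetical search-query report rows: (query, spend in £)
query_report = [
    ("compound interest calculator", 120.50),
    ("what is the formula for compound interest", 35.10),  # homework-style
    ("open a savings account", 210.00),
    ("how does a bank make money essay", 12.40),           # homework-style
]

# Patterns suggesting an informational, homework-style query with no
# commercial intent (illustrative only, a real list would be data-driven)
homework_patterns = re.compile(r"\b(what is|how does|formula|essay|definition)\b")

negative_candidates = [
    (query, cost) for query, cost in query_report if homework_patterns.search(query)
]

wasted_spend = sum(cost for _, cost in negative_candidates)
print(f"{len(negative_candidates)} negative-keyword candidates, £{wasted_spend:.2f} spend")
```

Each flagged query becomes a candidate negative keyword, stopping the ads from being shown for it at all.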
Areas of disconnect
Wil stressed the importance of looking for areas of disconnect. In one case, Wil saw a site where 1 in 4 search terms goes to the homepage, which ranks for hundreds of thousands of queries. 43% of those queries mentioned the word durable, but the homepage didn’t feature that word or include any synonyms. It’s only when you look at cases like this at scale that you can clearly see what the issues are and what action needs to be taken.
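That kind of disconnect is easy to surface at scale: count the words in the queries that land on a page and check whether the most common ones actually appear in the page copy. The queries and copy below are made up to illustrate the idea.

```python
from collections import Counter

# Hypothetical queries that land on the homepage, and the homepage copy
homepage_queries = [
    "durable hiking boots",
    "durable work boots",
    "waterproof boots",
    "durable winter boots uk",
]
homepage_text = "quality footwear for every season, built to last"

# Count how often each word appears across the landing queries
words = Counter(w for q in homepage_queries for w in q.split())
total = len(homepage_queries)
for word, count in words.most_common(3):
    share = count / total
    on_page = word in homepage_text
    print(f"{word!r}: in {share:.0%} of queries, on page: {on_page}")
```

Here "durable" appears in three-quarters of the queries but nowhere in the copy, which is exactly the kind of gap Wil describes.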
Els Aerts – User Research Done Right
Talk Summary
User research, understanding your users and finding out what makes them tick, is crucial to driving growth. In this talk, Els shared some of her favourite tools and showcased where user research has made all the difference.
Key takeaways
What’s the need for user research?
User research, usability and information architecture have been part of every project Els has worked on since 2001. Els still does user research because you have to find out what works for your website and your audience. Best practice doesn’t cut it, it’s a gamble.
You need to perform user research just in time and do just enough of it. Clients want to go straight to A/B testing, but user research helps mitigate risk and prioritises the optimisation pipeline.
Qualitative and quantitative research
Els isn’t a fan of the qualitative versus quantitative debate. It’s clear you need both. Quantitative research answers the what and how questions, while qualitative tells you the why. The latter gives very rich data which you don’t need at scale. Quantitative data is structured and fits nicely into a spreadsheet.
Qualitative research provides empathy, and allows researchers to step into users’ shoes. You need to feel your users’ pain because it’s a massive motivator for change in an organisation. User research helps close the gap between customers and companies, giving you the opportunity to understand the motivations and emotions that drive your audience.
Qualitative user research
Targeted surveys
Els recommends using top tasks surveys to understand who’s visiting and why. Els worked with Yoast to understand who was visiting and engaging with their plugin download landing page. The first question asked of visitors to this landing page was “Do you use the Yoast plugin?”, and over half had already downloaded it, so the page was not relevant for those visitors.
As an action, Els and Yoast decided to split the page so that newbies saw an information page explaining the plugin and existing users were shown an upgrade page. As a result, upgrades went up significantly.
Not content with stopping there, an exit survey was displayed to those who had downloaded the free version. Using insights from this qualitative information and some A/B testing, Els and Yoast managed to increase sales by 78% without losing the funnel of free downloads.
Interviews
Interviews are for when you need to get more personal. Els gave the example of a client called Zambro, who provide digital watches for elderly people so people can be alerted if they have an accident. The brochure is mainly downloaded by the potential customer’s children after their parent has had a fall. Through interviews they discovered that the children are motivated by peace of mind, while the parents are put off because it makes them feel old, needy and weak.
As a result, Els and Zambro altered the messaging so it reflected the children’s need for peace of mind when downloading the brochure, and repositioned the product as a modern, elegant digital watch that was more appealing to the parents. Brochure downloads went up by 82%. Els stressed that the message from this example is that you shouldn’t make assumptions; get personal with users to understand their motivations and reservations.
Moderated user testing
In-person moderated user testing is conducted on a one-on-one basis. You have a testing room with a participant and a moderator, and a separate observation room with observers watching on.
This type of user testing is difficult to get right, so Els provided some tips to be a good moderator in user testing:
- You need to recruit the right participants, making sure they are actually potential users and people who actually care about your product or service.
- Don’t listen to what users say, watch what they do.
- Be a kick-ass moderator. A good moderator will get good insights, but a bad one can get worthless or even harmful information that results in bad business decisions.
- As a moderator you simultaneously need to be a flight attendant (to make the participant feel comfortable), a sportscaster (making sure observation room can follow what’s happening), a scientist (to note down observations along with those in observation rooms) and like Switzerland (being neutral and not transferring bias).
Rob Bucci – How Distance and Intent Shape a Local Pack
Talk Summary
Since we can’t always rely on a searcher to state when their intent is local, we should look to keywords where that intent is implied. But is Google any good at interpreting implicit local intent? Rob unpacked the local pack, revealing how Google handles different kinds of intent and what elements go into shaping this local SERP mainstay.
Key takeaways
In August 2017, there was an increase in searches with geo-modifiers to signal intent. Rob wanted to understand the underlying intent and how Google is thinking when it comes to local search.
Queries can express local intent in a variety of ways. Each of the examples below could be asking for something different.
Google have confirmed that every SERP is localised to some degree. Google has updated local search a lot recently and built it into their core algorithm, so Rob and the team at STAT decided to do some research to track this.
Pre-research
Rob started by comparing zip code-level and market-level searches (e.g. the US-en market). Only 33% of the results were similar; if you’re not tracking location in your SERP tracking, you’re missing roughly 70% of what users are seeing. Even when the organic results contain the same pages, they are only in the same ranking positions 5% of the time when comparing zip code and market-level queries.
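A similarity comparison like this can be sketched as a set overlap between the two tracked result lists, plus a position-by-position check. The URLs below are placeholders, not STAT data.

```python
# Hypothetical results for the same keyword tracked two ways
market_level = ["a.com", "b.com", "c.com", "d.com", "e.com"]
zip_code = ["c.com", "f.com", "a.com", "g.com", "h.com"]

# Jaccard similarity: shared results over all distinct results
shared = set(market_level) & set(zip_code)
similarity = len(shared) / len(set(market_level) | set(zip_code))

# Of the shared results, how many sit in exactly the same position?
same_position = sum(1 for i, url in enumerate(market_level)
                    if i < len(zip_code) and zip_code[i] == url)
print(f"overlap: {similarity:.0%}, same-position results: {same_position}")
```

Run over a large keyword set, this gives the kind of "only X% similar" figures Rob quotes.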
SERP features are impacted even more than regular results, with market-level and zip code queries only being similar ~20% of the time. STAT has seen that there are more featured snippets shown for market-level searches. If Google can’t understand location they tend to display regionalised content in the form of featured snippets.
The Research
Before revealing the results of STAT’s main research, Rob defined some local search terminology for us:
- Geo-modification is when the searcher manually includes geographical terms in the search query itself.
- Geo-location is when the searcher’s device automatically provides location data as a part of the search query.
- Local Pack is a part of Google’s search results that shows three local businesses relevant to your search.
Rob then went on to talk about the findings from the main part of STAT’s local research. Here are the main findings.
Local packs
73% of the queries examined returned a local pack, and local packs were returned more often for “near me” and “in [city]” queries than for base queries, e.g. “best chinese restaurants”.
The results for base keywords show that Google may be hedging their bets: they have the location data but they are choosing not to use it as much for implicitly local searches. Google aren’t willing to go all-in on the assumption that the user wants local results every single time.
How Google interprets local intent
STAT then looked at whether Google could distinguish between “near me” and far away local intent, and where they put implied local intent on the map. They measured this by computing the distance between the centre point of the local pack’s map and the centre point of each city’s zip code.
The zip code and map centre points were closest together on the “near me” SERPs. This means that Google knows the searcher is looking for nearby businesses and surfaces results that fit the bill. No two geo-modifiers are treated the same, so Google understands “near me” and far away intent.
Influencing local packs with geo-location and geo-modifiers
Geo-modifiers only changed local packs 20% of the time. Conversely, geo-location has a much bigger impact on local packs, especially for “near me” queries. Rob concluded that local packs care more about the user’s location than their geo-modifier and stressed the importance of tracking from multiple locations for a solid local pack strategy.
How are organic results impacted by geo-modifiers?
Geo-modifiers have a significant influence on organic results; it’s like asking a different question. Geo-location, on the other hand, has relatively little influence on organic results. Rob concluded that organic results care more about the user’s geo-modifier and recommended tracking results for a number of different geo-modifiers.
Other findings:
In addition to these useful local insights, Rob found:
- Distance away from the searcher is usually a good predictor of what shows up first for local packs. In dense areas, like a city, distance is not as big a factor, in less dense areas distance is a bigger factor.
- You need a good rating to rank in local packs. Google are only letting the best quality results rise to the top. Local results consider the distance of the request before the rating though.
- If a business is shown in a local pack, this almost always corresponds to a high ranking in the organic results.
- 16% of local packs have an ad listing.
Dom Woodman – A year of SEO split testing changed how I thought SEO worked
Talk Summary
For many companies, SEO has no testing at all, just endless reams of best practice and hand waving. Last year, Dom changed role and got the chance to treat SEO differently, running over 50 tests across different websites. Dom’s entertaining session gave an insight into what worked, and just as importantly, what didn’t.
Key Takeaways
Dom loves rabbit holes and has been down the mother of all rabbit holes. A big problem SEOs have had is explaining the impact of their work, as it’s so hard to tie results back to the changes made. SEO has traditionally been a bit of a black box.
A year ago Dom was asked to work on Distilled’s ODN and ended up going down a year long rabbit hole trying to demystify SEO with experiments and a framework for testing SEO changes.
How does this work?
If you wanted to test a new video template, in CRO you would run an A/B test, showing different users different versions of the same page. That’s not possible with SEO testing, since you can’t show Google a different version per user. Instead, you can divide pages into groups, so that all users and bots see the old template on some pages and the new template on others.
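One common way to split pages into control and variant groups (an illustration of the general technique, not necessarily how Distilled’s ODN does it) is to hash each URL, so the assignment is deterministic and roughly balanced:

```python
import hashlib

def bucket(url: str) -> str:
    """Deterministically assign a page to control or variant by hashing its URL."""
    digest = hashlib.md5(url.encode()).hexdigest()
    return "variant" if int(digest, 16) % 2 else "control"

# Hypothetical category pages to split
pages = [f"/category/page-{i}" for i in range(10)]
groups = {url: bucket(url) for url in pages}
print(groups)
```

Because the assignment depends only on the URL, a page always lands in the same group, which is essential when the "user" you are testing on is a crawler revisiting pages over weeks.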
Testing category names
On the Argos site, Dom tested changing the pushchairs category to include the word buggies and saw a sharp increase in organic traffic, significant at a 95% confidence level.
A second graph showed a different picture though. This graph showed what the forecasted traffic would have been for that category page if that name change wasn’t made. When Dom compared the forecasted traffic with the actual traffic, he found that traffic to the page was actually worse because of the changed category name.
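The comparison Dom describes boils down to forecast versus observed traffic. Here is a deliberately naive sketch with invented numbers; a real model like ODN’s would account for trend, seasonality and a control group rather than a flat pre-change average.

```python
# Hypothetical daily sessions: a pre-change period used to forecast the
# post-change period, compared against what was actually observed
pre_change = [980, 1010, 995, 1005, 1020, 990]
actual_post = [940, 955, 930, 960]

# Naive forecast: assume post-change traffic would have matched the
# pre-change daily average
forecast = sum(pre_change) / len(pre_change)
observed = sum(actual_post) / len(actual_post)
uplift = (observed - forecast) / forecast
print(f"forecast {forecast:.0f}/day, observed {observed:.0f}/day, uplift {uplift:+.1%}")
```

This is how a change can look positive on a raw traffic graph (traffic rose with seasonality) while actually underperforming what the page would have done without the change.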
As a rule of thumb, Dom recommends at least 1,000 sessions per day across the group of pages you’re testing in order to build a decent model.
The model Dom used here was based on only 60 sessions per day, so it wasn’t accurate: first lesson learned.
Testing page title changes
Dom decided to move on and change a title tag on one of Argos’ product pages by adding one of Argos’ USPs (free shipping).
There was some fluctuation but not a significant increase in organic traffic when compared to the forecast model.
Dom went on to change a title tag to exactly match what people were searching for. This turned out to be a bad idea, as that page lost 11% of its organic traffic.
In another throw of the dice, Dom added alt tags to images on some of Argos’ pages, and it had absolutely no impact even though they are a solid staple of SEO best practice. Dom also learnt that you don’t necessarily need to know why something happened to take action.
At this point, Tom Anthony stepped in and tested consolidated titles on Cars.com, but organic traffic dropped by 27%.
You need a testing framework, but you don’t necessarily need to know the why
From all of these failed tests Dom learnt that a testing framework is needed. You need to iterate and learn from tests and having a shared framework can help with this.
Titles and meta description testing
Dom decided to add safety information to titles on Cars.com and saw a 10% drop in traffic, as the pages lost the review intent that had been driving click-throughs. 56% of all title tag changes were negative, but you shouldn’t stop testing them. Title and meta description changes typically take 2-4 days to show an impact.
JavaScript testing
Google have changed their tune with regards to JavaScript: first they said they couldn’t render it, then they said they really could, and then they said to wait and give them some time.
Dom ran some tests on a site that couldn’t load products via the initial HTML; he changed that and saw a 5% uplift in organic traffic. The lesson: make sure your main content is available on the initial page load.
Dom also implemented changes adding content to the bottom of pages, but this didn’t have the same impact across sites. Dom learnt that best practice isn’t always useful, as one size doesn’t fit all. In fact, he found that you can often get away with having less content on a page.
Structured data tests
Dom wanted to know what happens if you add structured data rather than extra on-page content to ecommerce sites. The effect was null on some sites but worked well on others. The difference might be that, in the cases where it worked, Google needed help deciding which product to show in the results.
Summary
You need to periodically re-challenge your beliefs, as you’ve likely made assumptions that are wrong because they worked on one site.
If you don’t have the intent, the extra bells and whistles will fail (e.g. structured data). You can’t put lipstick on a pig.
You don’t need to argue, when you can test hypotheses very easily. Success isn’t just up and to the right, it’s about testing and finding things out. A negative test rolled back is a bullet dodged, not a failure. Testing forces you out of silos and to collaborate across teams.
More SearchLove recaps are on their way!
Thanks to all of the speakers for delivering such great talks and sharing their insights. If you want to learn even more, then sign yourself up to our mailing list for future event recaps. You can also find the second part of our SearchLove recap here.