Matt Dzugan: [00:00:00] ChatGPT has over half of the content coming from the last year. And in fact, if you look at what is the single most likely day to have content published from, if I were to run a query today and look at all the links that it cites, or if I were to run a hundred queries today and look at all the links that they cite, if you were a betting man, the most likely publication date you would see in any of that content is
yesterday. It's always the single day before that is the most likely day you're going to see content from. That, to your point, is a huge kind of wake-up call. We have to make sure we have a steady stream of news press out there about our brand, because if it's not recent, it may not be seen.
Steve Halsey: Welcome to Building Brand Gravity. I'm Steve Halsey.
Anne Green: And I'm Anne Green here at G&S Integrated Marketing Communications Group. We're [00:01:00] so glad you joined us today.
Steve Halsey: Well, Anne, we've got a great episode for us today. Um, I'm kind of joking these days that in the sixties you had the Summer of Love, but, uh, here in 2025, we have the summer of the large language models, and this has been one of the biggest shifts really facing communications and marketing teams right now.
Um, this rise of generative engine optimization, or GEO, which is the acronym that you see out there, really isn't just a theoretical trend. I mean, it's happening right now in real time, every time you pick up your smartphone, every time you do a search on your computer, and it's changing how brand visibility fundamentally works.
Anne Green: I couldn't agree more, Steve. I mean, we've moved from a world where people are very focused on optimizing for search engines with links and keywords, and now they have to understand, um, how that is changing with AI models like ChatGPT or Claude or Gemini: how they're searching and [00:02:00] creating discoveries,
summarizing your brand story, deciding which content gets amplified. As our recent primer on generative engine optimization, or GEO, put it, LLMs are the new gatekeepers. I'd also refer our listeners back to an episode that just, um, went up and is being promoted, with Dan Nestle. The focus was on earned attention, not just earned media.
And I think Dan said it really well, which is you have to be thinking about AI, especially the large language models, as another stakeholder.
Steve Halsey: Yeah, and I think that's a very important insight, in that, you know, where a lot of us started, traditional PR gave us that credibility through trusted journalists, the earned coverage.
All of those things really, uh, really bolstered brands. Now, AI itself is acting a little bit like a reputation system, where it's scanning for credible, structured, recent content, and then it's shaping answers from it. So, you know, [00:03:00] what's interesting to me is how it's evolving. SEO used to be around, hey, let's get in the top 10 links.
Now it's really about this intersection of earned media credibility and digital market precision.
Anne Green: Yeah, it's so true. It's really that zero-click search environment that is, uh, quite impactful for publishers especially. But if you wanna see the data behind this, everyone needs to check out Muck Rack's new study,
What Is AI Reading? They analyzed over a million citations from top LLMs and found that 89% of them come from earned media. Their definition of earned media is interesting. I really encourage people to check out that pie chart and dig into it. Um, there's a lot of good data in there, but the takeaway is that credibility and authority, measures of authority, still rule, but the algorithms from the LLMs are really now in charge of surfacing them.
That's material for many of the folks in our field and really any organization out there.
Steve Halsey: As you mentioned, we're [00:04:00] really moving into this age of, uh, of a zero-click environment. That's really why, as you know, we're urging our clients to rethink visibility in an answer-first
context, right? And so the GEO, uh, playbook that we put together really lays out what I would call four non-negotiables. The first is recency. You know, AI really does favor content from the last 12 months, in some cases from the last day, which really requires you to think about your content strategy, or even your crisis and issues strategy, very differently.
Then there's structure. Structure is still important: having those digital experts who understand semantic HTML, how to do the schema markup behind the scenes, and even how to structure the data with FAQs is key for how the AI starts to parse that. So recency, structure, and then you get the two Cs,
consistency and credibility, right? Consistency is, you've gotta tell your [00:05:00] story across all different, uh, media, across earned, owned, social, paid. They all need to be aligned 'cause that's always getting pulled together. And then, at the end of the day, the credibility is, you're seeing a little bit of, I'll call it a renaissance, though it never went away: the journalistic validation still carries as much, if not more, weight than it ever did.
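The schema markup and FAQ structuring Steve mentions is usually expressed as schema.org JSON-LD embedded in the page. A minimal sketch, built in Python purely for illustration; the question and answer text, and the idea that any given assistant will parse this particular block, are assumptions rather than anything confirmed in the study:

```python
import json

# Sketch of FAQ schema markup: schema.org's FAQPage type, serialized as
# JSON-LD. The Q&A content below is invented for illustration.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is generative engine optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of structuring content so "
                        "AI assistants can find, parse, and cite it.",
            },
        }
    ],
}

# The snippet a CMS template would embed in the page's markup.
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(faq_jsonld)
    + "</script>"
)
print(script_tag[:60])
```

The point of the structured form is that a crawler gets explicit question/answer pairs instead of having to infer them from prose.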
Anne Green: And that's so powerful with journalism under such pressure. And we've seen that now just increasing year after year. So quite an interesting moment, especially for trade publications, as you said, that are in more specialized spaces. But we can't forget social media as well. It's not just about engagement anymore, it's also about feeding the large language models.
So social posts, influencer content, you know, comment threads are really starting to act as relevance signals. I think that's a great way to think about it. There's been a lot of discussion about the power of platforms like Reddit. You know, these things are really critical, and if your message isn't reinforced on those platforms, or you're not engaging there, or you're at the very least not [00:06:00] aware of the ways in which the topics relevant to your organization, or your organization itself, are being discussed, you know, you may find the content that matters to you is filtered out or
just not on the radar at all.
Steve Halsey: And I love that context: relevance signals. We're gonna be joined today by our guest, Matt Dzugan. He is, uh, the Head of Data at Muck Rack. He's gonna talk about this latest research that they did, where they looked at more than a million citations, by far the most, uh, in-depth research that's been made public.
And they're giving us the opportunity to get the first run at that data. So excited to be talking to Matt, and we have our own Loren King with us, who's one of our AI and GEO strategists here at G&S, to really bring that context of what this all means from the client perspective in the industries we serve.
Anne Green: Yeah. Together they're gonna help us break down the different models, you know, ChatGPT versus Claude versus Gemini: how they evaluate sources, how comms and content gets [00:07:00] cited, and, truly in terms of rubber on the road, how folks in our sector of integrated marketing communications, no matter where you sit in those fields, can make sure your brand shows up when AI generates the answer.
And I can say that this is a hot, hot topic of conversation across everyone we're working with, within our walls and also within our client walls.
Steve Halsey: Well, I'm excited to, uh, to get into the conversation, so let's get going. Today we have a very exciting and timely topic to talk through: what AI is reading, the new rules of earned media, and GEO.
What if I told you that 95% of what AI cites isn't paid, and that journalism might just be the most valuable digital real estate in your strategy? Today we're gonna break down some exciting new research on how large language models decide what to say and who to cite. So with me today is Matt [00:08:00] Dzugan, Head of Data for Muck Rack.
Matt, welcome to the show.
Matt Dzugan: Yep. Thanks for having me. Super glad to talk about this, uh, important, exciting topic. Um, very happy to be here.
Steve Halsey: And we also have Loren King, Digital Marketing Supervisor, AI and Insights, at Morgan Myers, a G&S agency. Loren, welcome to the show.
Loren King: Thanks for having me back. I think AI's on top of all our brains right now.
Steve Halsey: Yeah. You know, in the sixties they had the Summer of Love. Uh, you know, this year seems to be the summer of the large language models. There we go, we kept it in the Ls. So we're here to unpack Muck Rack's landmark survey and analysis of over 1 million citations from ChatGPT, Claude, and Gemini, and what it tells us about brand visibility, GEO, and earned media.
Matt, I guess we'll start with you. There's so much data to unpack. What are some of the highlights?
Matt Dzugan: Yeah, well, you touched on it in the intro, absolutely. But, um, I think the highlight is that it's a great time to be in [00:09:00] PR. It's a great time to be in communications. Um, it's no coincidence that the things that PR pros have known for a long time, that credibility matters,
you know, recency matters, um, those same things are important to the AI. Of course, we'll dive into it a little bit further, but AI is really relying on credible, recent data, and we can see that, uh, plain as day in our analysis.
Steve Halsey: Now, when you did your analysis, I wanted to spend just a little bit of time up front talking about that. So, more than 1 million links.
And I thought it was interesting, when you look through it, you assigned a number of categories. So the categories were journalistic, which was news sites and journalists' coverage; corporate blogs and content, owned media; press releases; academic and research; government and NGO; paid and advertorial; social and user-generated content; aggregators; and encyclopedias.
So you had a really nice breakdown of that. Why don't you tell us a little bit about what [00:10:00] kinda led you to run this study? And then how did that breakdown inform the really interesting results that you guys found?
Matt Dzugan: Yeah, of course. So, you know, at Muck Rack, of course, we, uh, deal with PR pros every day, and this kind of,
um, question of, you know, what should I be doing? What should I be doing to impact these AIs? has started to come up. Obviously, for getting your word out in the media, and then, you know, even more recently, social media, I think, in general, we find that our customers at Muck Rack have paths for that.
So it's sort of a well-understood path. But there's this new thing: oh my gosh, all of a sudden these robots are talking about my brand. They have no idea what they're saying, no idea where they're getting it from. Um, and so we just kind of wanted to work that backwards. We wanted to say, okay, if we're gonna advocate for the PR professional, if we're gonna [00:11:00] advocate for, here's how you can get your message out there in the AI,
the question just becomes, well, what are they reading? So that's exactly why we titled the study What Is AI Reading? I mean, that's the exact question that we had. Of course, you know, somewhat self-servingly, we wanna help PR pros focus on the areas that matter. So we did a tiny, tiny version of this study first, to just see, you know, what types of stuff show up there: Reddit, LinkedIn, you know, Reuters, financial outlets, to just kind of catch a quick glimpse of what's out there.
And that's how we decided we need these categories. Oh, holy smokes, the different types of prompts that you ask matter. So once we had seen just a little snippet of the data, we created a more rigorous analysis, like you said, a million times over.
Steve Halsey: And so, Loren, from where you sit, is it good, is it bad?
Should we be concerned that the large language models are thinking more and more like humans in terms of how they search [00:12:00] and process data? Well, we'll get into numbers here in a little bit, but what's your take about just the speed of the evolution and the, I guess I would call it natural language processing, that is happening with the large language models?
Loren King: Yeah, so there's a few, uh, caveats in there. It's positive on my side, but it also takes some rethinking of the benefits that you're gonna get, and even how your users are interacting with your PR. But starting at the top: we're basically seeing, uh, like Matt was talking about, this validation of trust and authority in a way that hadn't really been able to be showcased as easily, uh, in the past.
So if you think about a large language model that's behind something like ChatGPT, it does two things when it's searching. It references the knowledge it was trained on, and then it reviews what's out there today. So if you're asking about a recent topic on crop protection, it's gonna go through recent stories, but it's gonna compare those to what it already knows.
So if you have a strong basis in PR, you've been putting things out for years, you've got this trust and authority already built in, then [00:13:00] you're gonna be compared to what's out there, and you're more likely to show up in these results over time. So beneficial, but a completely different approach that you have to consider.
Also keep in mind that your users are less likely to go directly to your site now, and they're more likely to ask a complex query, maybe comparing your PR results as a company to a competitor's, and seeing what comes out. So you have to reframe everything entirely: uh, changing the way the language that you're using is positioned, changing and incorporating SEO in an even stronger but slightly different, uh, approach than before,
and having a good grasp of how these models select stories in the first place.
Matt Dzugan: That's great. And actually, I want to add just one short thought on top of what you said. Um, I love that you called out the fact that there's kind of two approaches that these models take. They reference their training data, and then, um, they'll look at what sort of recent news is out there today, as you called it.
I really wanna underline that, because what I've found, you know, since we launched that study, is that a lot [00:14:00] of people don't even realize that second piece. Because when ChatGPT first came out, you know, it feels like it's been here for a while now, but I guess it was only three years or so, everyone had that experience.
I mean, everyone is used to that thing where it said, ooh, I can't comment on that because my training data only goes up to, you know, blah, blah, blah year. But just to be very clear, what Loren and I are saying here is,
it doesn't really work like that anymore. They got burned by that.
Obviously, they're a business. They're trying to make money, they're trying to be credible. And so what they said is, wow, we need a way around this. So they now have this sort of two-pronged approach. Yes, they rely on their training data, and it's useful. And as you said, if you've had years of positive coverage, you've got a nice foundation.
But in addition to that, it is surfing the web in real time after you make your query. And that's kind of the crux of this study that we did here at Muck Rack, what we're calling these [00:15:00] citations.
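The two-pronged flow Matt and Loren describe, frozen training knowledge plus live retrieval, can be sketched schematically. Every function name here is a hypothetical stand-in; the real pipelines are proprietary, so this only illustrates the shape of the process, not any vendor's actual API:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Citation:
    url: str
    published: date

def answer(query: str) -> tuple[str, list[Citation]]:
    """Schematic two-pronged flow: parametric knowledge plus live search."""
    background = recall_from_training_data(query)  # prong 1: frozen knowledge
    fresh = search_web_today(query)                # prong 2: real-time retrieval
    draft = compose(background, fresh)
    # The links gathered by prong 2 are what the study calls "citations".
    return draft, [Citation(d["url"], d["published"]) for d in fresh]

# Minimal stand-ins so the sketch runs end to end.
def recall_from_training_data(q):
    return f"background on {q!r}"

def search_web_today(q):
    return [{"url": "https://example.com/a", "published": date.today()}]

def compose(background, fresh):
    return background + f" plus {len(fresh)} fresh source(s)"
```

The practical consequence is the one Loren draws: prong 1 rewards a long track record of coverage, prong 2 rewards having something recent to retrieve.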
Steve Halsey: Well, what I think is fascinating is, um, and of course we all had that experience, I had the same thing too.
When I first started experimenting, it was like, oh my gosh, this is great. And it's like, yeah, but we can only get you data from two to three years ago, which is of only so much value. But look at what it can do now, particularly with the recency and the sourcing. Here are some really cool stats from this.
Like I said at the opening of the show, 95% of the links that are cited by AI, um, in this study are from non-paid sources, and we gave you what those categories are. 89% of the citations come from PR-driven sources, so that's journalism, the blogs, uh, different things like that. And then, to me, what really gets interesting is when you overlay the lens of recency: nearly 50% of the citations are journalistic in nature.
And to me, I saw that stat and that was pretty revolutionary, Matt. So can you just [00:16:00] talk a little bit about, uh, about those stats, and what that means and how we should think about, uh, comms in light of that?
Matt Dzugan: Yes. And I love that that one was eye-opening for you. I guess it can
open your eyes even further, raise your eyebrows even further, because it's about 50% of the links that are cited. But if you think about it, each time you're asking an AI one of these questions, it's actually citing multiple links. So odds are, I don't have the exact number, but it's something, you know, like
70% or 80%: if you're asking a time-sensitive question or an opinion question, something about recency, more likely than not, you know, 70, 80% of the time, at least one of those links, at least one of the multiple ones, will be, uh, you know, a journalistic, sort of major media outlet,
um, or niche media outlet, type of link.
So yes, absolutely 50% overall, but also most of the queries, the vast majority of the queries, are gonna come back with at least one [00:17:00] of those. And so, I mean, you know, to your point, it's just, uh, a very important piece of that pie.
Steve Halsey: So, Loren, what does this mean for the enduring value of media?
You see, I grew up where, you know, my first job was, um, you know,
I had a list of reporters: get on the phone and, uh, get your client in Newsweek, get your client in Businessweek, get 'em on the, uh, front page of the Chicago Tribune. That really shaped a lot of my approach to storytelling, and what you needed to do, and how you prove relevance to media.
Now we're seeing that, you know, the enduring value of media continues to come through as part of the comms mix. I mean, what's your take on that, from somebody whose background is more digital in nature, versus somebody like me who really grew up in the process of working with and proving value to media on a daily basis?
Loren King: Yeah, so the first thing that comes to mind out of that is, as I was reviewing the [00:18:00] Muck Rack data as well, what was really jumping out at me is how different some of the sources were, some of the companies that were being pulled from, and the niche news organizations. And so you gotta rethink where you're trying to get yourself showcased, essentially, and make sure that those are actually
the places that, when your audience is searching, are the sources that are gonna show up for them. So there's a real customization aspect here that probably involves doing a deeper dive into your audience's digital preferences. Are they using Claude? Are they using ChatGPT? What's their preferred AI tool?
And a lot of that's still emerging. I mean, this is still very, very new for the majority of consumers, and for those who are adapting and trying these different tools. So there's not gonna be full answers yet. It's gonna take some experimentation. But once you've got a grasp on that, it gives you a better handle on the targeting you should be doing for those stories, because you really do want the specialized outlets. You know, I know that TechRadar is one that comes up quite a bit across a lot of different LLMs, and so from a technology background, if they weren't in consideration before,
they're [00:19:00] now basically gonna be preferred when it comes to any kind of technology product, uh, as long as a review is showing up.
Steve Halsey: Well, one of the things I think is interesting to weigh, and I hadn't really thought of it until you mentioned it, Loren, is, you know, I think a lot of us by nature default to the tool that we use.
So if you use ChatGPT all the time, you're defaulting to ChatGPT. If you're using Gemini, similar thing. Claude. But what you bring up is a really interesting point: when we're putting together programs as communicators or as agency professionals, as we're counseling clients, we really need to think about large language models not as this ubiquitous thing. And I guess,
you know, Matt, from this, you're basically saying they each have a little bit of their own style that needs to be taken into account.
Matt Dzugan: Yeah, they absolutely do. I mean, a couple times, when I've been chatting with folks, you know, I've even called it a personality. They do, they behave differently, of course, in the way that they [00:20:00] talk, uh, to use that word, in the way that they, you know, create sentences, but also in the content that they read, which is, I think, what's most important here. In fact, we actually see that, um, even just the quantity of content that they read and the quantity of content that they cite differ.
They'll even have different patterns. Some of the tools will cite articles that, um, sort of map to their entire response. Some of the tools will just cite one article for, like, one sentence at a time. And this is something that's changing over time too. To add another dimension to it, these different tools are maturing and growing and changing.
So I definitely agree, uh, with Loren's kind of overall point, which is,
the best way that you can approach this is to just do a little bit of reflection: what's your audience, how are they thinking about that, what tools are they using? And just kind of optimize [00:21:00] for that.
Steve Halsey: So, Loren, my question for you is, you know, you counsel a lot on
what I'll call niche audiences. I hate to call 'em niche 'cause they're pretty big, but, uh, agriculture, advanced manufacturing, uh, highly complex B2B supply-chain-based companies. How does that kind of, um, niche focus play out in what communicators need to think about when they're not going for TechRadar, when they're not going for, uh, Reuters or Bloomberg?
Loren King: It does go back to understanding consumer preference, or audience preference. You know, if you take my dad, for example: he's a corn and soybean farmer in Michigan. Most of his interactions with the internet are on his phone. He's not using a desktop; 99% of the time he's gonna be using his phone, looking up very technical information around crops, around crop protection tools, around markets for the day.
And so his behavior as a farmer is going to be dictated by that. It's also gonna be dictated by the type of phone he has. [00:22:00] You know, there's been some reporting recently that Apple wants to maybe look at using Google Gemini as the background for Siri. So if you're just making the assumption that your iPhone users are gonna use a certain app, you need to be aware that
their behavior is actually gonna be dictated partially by a Google-based AI, which is going to show completely different links than if it was, say, OpenAI that had built that partnership around Siri. So it's multifaceted. Uh, if you were to go more towards, say, our veterinarians or some of our other technical industry specialists, there's a chance that they're gonna be more on desktop, and they're gonna be choosing the type of tool they might use.
They might move more towards Perplexity 'cause of that deep research need. In that case, you're also gonna have to consider a completely different platform for sourcing. And so deep audience knowledge is very, very important. There's room here to get help using AI, you know, even, uh, going into AI tools and asking those types of questions to simulate where maybe a veterinarian might look on their own.
But you have to think very omnichannel, [00:23:00] and you have to think with constant, uh, audience preferences in mind.
Steve Halsey: So let's talk a little bit about recency. And, uh, this is the part where, uh, I remind our listeners that I'm actually a dinosaur, in that, when I started, uh, in this industry, the process was I would call up a, uh, reporter or an editor and say, hey, I got something really cool,
I'm gonna be sending it to you. Then I would put it in this thing called the US Mail, so I'd have to wait three to five days. Then I would follow up and say, hey, did you get that thing I sent you? And then, let's talk about it: facilitate the process of interviews, cover all that. And so typically the lead time we were working with, if you were dealing with a trade publication, was about three months from when we started the pitch to when the story was placed.
Obviously, if you were dealing with TV, uh, radio, or daily news, the cycle was a lot more compressed. But it had a certain cycle to it, right? And then, as, uh, the 24-hour news cycle shrunk to by the [00:24:00] hour, to by the minute, to now by the second, you know, you saw really revolutionary moments, like during the Arab Spring, where you saw reporting starting from
smartphones off of social media sites, that then fed into breaking reporting, then led to longer-form reporting. I guess that's a long way of saying the speed of this gets faster. So I guess, Matt, here's what's gonna be interesting to me:
recency rules right now. When we were talking earlier, you were saying, like, OpenAI favors articles from around the past 12 months.
Claude may go a little bit deeper. So I want you to talk a little bit about that, but do you think we're going to see the same cycle of speed that's happened in other parts of media consumption get translated into, uh, how the large language models pull, uh, their information?
Matt Dzugan: Yeah, it's a good question.
So, yeah, I'll start with the first [00:25:00] half of your question, and then I'll
touch on kind of...
Steve Halsey: Oh, the part where I'm a dinosaur? Oh, yeah. I'm gonna talk about the
Matt Dzugan: US Mail, if we can. First I'll touch on, you know, what we saw in the study, and then I'll kind of, you know, maybe make a little bit of a guess as to where this is going.
Um, but absolutely, what we saw in this study is that when these AI systems are citing, uh, sources with dates associated... you know, a Wikipedia page, for example, doesn't really have a publication date in the same way that a news article, or even a LinkedIn post or YouTube video, does. I mean, those have clear publication dates.
So for any type of content where we're able to discern a clear publication date, it's very clear that stuff from the last 12 months gets cited more often than stuff from, you know, four years ago, five years ago. And again, this is intuitive. Very clearly, the, uh, AI systems have [00:26:00] been, um, built and designed to favor recency.
Um, now, what's particularly interesting is that, how would I call it, the amplification of the stuff from the last 12 months over older stuff is even stronger in ChatGPT than it is in the other tools. Actually, ChatGPT has over half of the content coming from the last year. And in fact, if you look at what is the single most likely day to have content published from, so, for example, if I were to run a query today, um,
and I were to look at all the links that it cites, or if I were to run a hundred queries today and look at all the links that they cite, if you were a betting man, the most likely publication date you would see in any of that content is yesterday.
Um, it's always the single day before that is the most likely day you're going to see [00:27:00] content from, which I think, uh, to your point, is a huge kind of wake-up call of, like, wow, we gotta make sure, I know, easier said than done, we have to make sure we have a steady stream of, uh, news press out there about our brand, because if it's not recent, it may not be seen.
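Matt's "most likely publication date" is a claim about the mode of a distribution. A small sketch of how you might compute it over a log of cited links; the date counts below are invented to mimic the recency skew he describes, not taken from the study:

```python
from collections import Counter
from datetime import date, timedelta

today = date(2025, 8, 1)  # a fixed "today" so the example is reproducible

# Invented log: publication dates of links an AI assistant cited across
# many queries, skewed toward recency the way Matt describes.
cited_dates = (
    [today - timedelta(days=1)] * 40      # yesterday
    + [today - timedelta(days=2)] * 25
    + [today - timedelta(days=7)] * 15
    + [today - timedelta(days=200)] * 10
    + [today - timedelta(days=1500)] * 10  # roughly four years old
)

# The mode of the distribution: the single most common publication date.
modal_date, _ = Counter(cited_dates).most_common(1)[0]

# Share of cited links published within the last 12 months.
recent_share = sum(
    1 for d in cited_dates if (today - d).days <= 365
) / len(cited_dates)

print("most likely publication date:", modal_date)
print(f"share from the last 12 months: {recent_share:.0%}")
```

Note that the mode being "yesterday" is compatible with most individual links being older; it only takes the single busiest day in the histogram being the most recent one.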
Steve Halsey: That is amazing, that the recency is really the day before, and the implications of that are pretty significant. And, um, Loren, so how do you process that, in, like, the difference between dealing with
kind of breaking, trending things versus maybe some more cyclical industries, where you're not necessarily gonna have that flow of daily news?
What do you need to think about? Or, kind of like what you mentioned the other day, which was, hey, you know, this large language model is as good as the others, except when it isn't. And you were just talking about the lack of information. Maybe you could talk a little bit about that in [00:28:00] some of the more specialty industries.
Loren King: Yeah, so agriculture is, again, a great example of this. There's a huge cyclical nature built in, very seasonal. You've got harvest and planting, and so what people are looking for is going to change around that. And you have the ability to plan to some degree, but you also need to have some flexibility in your PR strategy for breaking events.
Like, say a report comes out that says peach harvest is gonna be down, and that report all of a sudden has completely changed the conversation around peaches. You were gonna do a press release that was gonna get some attention, but now you're gonna be looped in with all these other stories that are pretty negative.
So you have to stay on top of that, even if you're planning for this seasonality, because you are part of an aggregated mix of media content that an AI is now going to cite and not discern between, unless you're going very, very specialized in the query or in the prompt that you're sending. So keep top of mind that you probably don't wanna have a very, very rigid PR plan.
You wanna build in these blocks, but then pay pretty close attention to swap things out, pause, or shift as [00:29:00] needed. Obviously, if you're delayed by a day or two, uh, it's hopefully not gonna be a huge impact to you, but it might completely change the stories that you're being referenced alongside, which is one way to think about it. And then also just stay on top of what is most popular in keywords.
Having a strong SEO basis is really, really valuable here, because we're still using keywords for search determination, and AI is similar to Google in that way, especially if you're using Gemini, in that, uh, it's using a lot of the same functions that a traditional ten-blue-links page would just disseminate.
It's not everything involved, but it is definitely part of the conversation. So if you have seasonal changes in the keywords that are popular, you have to keep that in mind too. But you have to keep an eye on the news, uh, probably at all times, and AI is actually a good tool to do that, uh, so you're getting some type of real-time updates.
Steve Halsey: really thinking about an always on, uh, PR and comm strategy, I think is interesting.
And Matt, the other thing I thought was interesting was, uh, based on, uh, your, your guys' [00:30:00] findings was how different industries sourced a little bit different. Finance and media was very highly journalistic in terms of the citation rates. Healthcare showed a little bit of a stronger representation of government or NGO sources than others.
Um, hospitality, uh, indicated a little more towards owned media. Um, so I thought that was fascinating too. And maybe you could talk a little bit about: is it the nature of the queries, or is it the value? What creates that change, and what are the implications if you're in one of those industries?
Matt Dzugan: What we learned, uh, from this study is exactly what we can see from analyzing large amounts of data. In other words, we ran, you know, hundreds of thousands, millions of queries, and we analyzed millions of links. We can make some educated, uh, guesses as to the why behind the scenes; we don't truly know it.
You know, these are obviously highly, highly [00:31:00] guarded and closely held trade secrets, essentially, of these AI systems. But as someone who has, you know, spent time building AI systems myself, um, I would, uh, place a large likelihood on the fact that some of this behavior has basically been, um, you know, instructed or steered into the way that the models behave.
For example, you say, you know, healthcare is citing a lot of, um, NGO and government sites. This is true: if you ask anything about medicine or healthcare, it's going to cite, uh, the CDC, or it's going to cite, you know, various others. At least in the US, if you do a query from the US, it'll cite these, uh, US government entities.
We've seen, especially, you know, I don't want to get, uh, too far into these weeds, but we know from everything that's been happening with COVID-19 a couple years ago, people having different [00:32:00] opinions on what's right, what's wrong, you know, I believe this, I believe that. Um, trust me, the AI models want to stay out of that game as much as possible.
They don't want to be seen as having an opinion, really. So I do think, in some ways, um, for some of these touchier subjects, the AIs are trying to almost remove themselves from any, uh, any sort of editorial opinion and sort of stick with the government stance. Uh, so I do think that explains at least some of it.
Loren King: I know that Sam Altman, uh, from OpenAI has talked about that a bit, that the default state should be this neutrality to some degree, uh, for what you're getting. But the expectation, at least for them, uh, with GPT-6, 'cause they've already started talking about it, is that personalization towards your preferences is going to become the norm very quickly.
And Google's doing this too. You know, you have the option, uh, in certain cases with Gemini to choose preferred media sources. [00:33:00] And it might change things. It's not going to limit the other ones 'cause it'll go beyond your preferred sources as needed. But it's going to prioritize. And that's something that's dictated on the individual level as well.
So, going back to understanding where your audience is looking: what do they prefer? What are their niche sources? What are the companies they're looking at? Assuming that becomes learned behavior for everybody using these tools, uh, that's gonna become even more important to understand.
Steve Halsey: So Loren, one of the things you were talking about a little bit earlier, I'll just kind of call them citation-friendly traits: high authority, timely, structured, things like that.
And Matt covered, hey, the most likely publication date for cited content was yesterday. Which then leads me to ask: what about GEO's role in crisis and reputation, right? The models are gonna process what's out there. How do we need to think about structuring our response to what the large language models are going to pull [00:34:00] through generative engine optimization?
What are some of the risks of outdated or absent citations? And what do communicators need to think about in this expanded environment, where LLMs are now as much a channel as anything else?
Loren King: I think this is really important, and it's going to continue to grow. I mean, you see it in the responses: go on X or go on Facebook and look at the responses around a news story.
Oftentimes somebody will have asked an AI about that news story to get a quick summary of it and then just dropped it as a comment. So people are used to going in and learning about a crisis through an AI tool, whether that's right or wrong, and whether they're actually verifying the information. AI still has pretty high hallucination rates.
I mean, with GPT-5 they're very proud to get around that 5% mark in most cases, which is still, uh, probably millions and millions of queries a day that are presenting something completely false. With that in mind, I also thought it was important to note that, while recency is still part of this, there's [00:35:00] nothing stopping a summary from going back in the past and pulling from another crisis you had.
So say you're a business that had to shut down a manufacturing plant last year or two years ago, and then you've just had to shut one down recently. As people are searching, it is very possible that those prior events will be incorporated into the summary, in addition to what's happening now. So finding ways to make it distinct, you know, to separate what has happened in the past from the present, while maintaining, uh, that GEO perspective of answer-first content to try and make sure that you're getting prioritized, is gonna be pretty important.
There's still a lot to learn about crisis management in terms of how it's being reflected by AI, 'cause there's no person there who is going to have any context behind the story at all. It's usually not gonna be framed as part of a larger discussion around the economy or competitors facing the same thing, unless the user is actually asking for it.
So, once again, it gets complicated, it gets layered and it gets nuanced. Um, but I think there are some steps emerging and how to treat it. And Matt might have some ideas there too.
Matt Dzugan: [00:36:00] It's impossible to predict a crisis before it happens, obviously. Um, if we could do that, we'd, uh, we'd be somewhere else. But, um, content is still gonna be cited from Monday and Sunday and last week and the month prior.
So the more that you can make sure you have sort of a well-rounded approach, most areas of your business, most aspects of your brand covered, um, the better. Again, I'm not telling you to predict every possible crisis and make sure you have a piece that, you know, counters it; of course, no one can do that.
But, you know, at a high level: any major categories of your brand, major categories of your business even, maybe stakeholders, your C-suite, maybe your customers. The more you can have a well-rounded PR approach that sort of touches on these various aspects of your business, just to have some content out there for when the AI inevitably goes looking for it when the crisis comes, uh, I think the better.
Steve Halsey: Well, one thing I thought was interesting: the other day, um, I [00:37:00] was working with a group, running a workshop, looking at, um, how to quantify reputation for a very large company that is in a very complex, very highly regulated, highly politicized industry.
And as we started really mapping where their reputation played, it was with journalists. So who are the journalists that matter? What is the earned media? It was, um, what is being said on the social channels, what are the things we're seeing there. It was what was coming up in Google search, with SEO as a channel.
Reddit popped up as its own kind of channel in terms of where are employees talking, what are they saying about it. And then, interestingly enough, the large language models kind of came up as a channel. And there, we didn't have a million queries to look through like our friends at Muck Rack did.
But it was interesting to me, even at a surface gloss, how [00:38:00] each of those told a different layer of the story based on perspective, based on recency, based on, um, number of queries, based on population. That, to me, was really a big eye-opening moment: there are so many channels that need to be managed for reputation, and the large language models are one of them.
And then I guess, Matt, for you guys, the challenge is that traditionally a significant part of your business has really been about, you know, connecting the right journalists for the right stories with what they're talking about. So now you've got all these different channels, which are channels of opportunity.
But how do we mix those together? How do we think about this? 'Cause it could also become paralysis of too many channels, so I'll just do what I've always done.
Matt Dzugan: I can tell you a little bit about how, you know, I'm thinking about it. So one of the things that is nice about these [00:39:00] AI opinions, if we wanna call them that, to anthropomorphize them in that way, is that, unlike maybe other sources, they're measurable.
Like, yeah, okay, maybe our brand is well connected with several journalists. How do I really quantify that? You mentioned in this workshop you were doing, how do we really put a numerical value on that? Well, the nice thing about these AI systems is, you know, for right or for wrong, they have their benefits and their downfalls.
But boy, it is easy to measure them, right? We can just say, hey, here are the hundred queries that users of our brand are gonna be wondering about, and I'm just gonna go smack those hundred queries every day and see how often my brand is favored in a positive light. And every single day I can look at my number.
Am I at 60%? Am I at 70%? Am I at 80%? Of course there's nuance for a deep dive: is it mentioning this aspect of my brand? Is it [00:40:00] mentioning this crisis that happened? But to take a look at it at a high level, you know, it is fairly quantifiable, which is nice. Um, the other thing I tend to like about it is this.
As we talked about, it does sort of encompass a lot of the other pieces that you mentioned. Since AI is citing journalists, and it is citing Reddit, and it is citing social media, and it is citing, um, even video content at times, to some extent you can consider the AI responses a bit of an amalgamation of everything.
Um, combine that with the fact that it's, you know, I don't wanna say easy to measure, but it is measurable, and that makes it, I think, a pretty attractive part of a sort of, you know, reputation management strategy.
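[Editor's note] Matt's daily query-panel idea can be sketched in a few lines. Everything here is illustrative: the `favorable_share` function, the brand name, the positive-term list, and the mock responses are all invented, and the mock answers stand in for whatever AI system you would actually query each day (Generative Pulse presumably does something far more sophisticated).

```python
# Sketch of a daily "share of favorable mentions" score: run a fixed panel
# of queries against an AI system, then check each answer for a favorable
# brand mention. Here the answers are stubbed out as plain strings.

def favorable_share(responses, brand, positive_terms):
    """Fraction of responses mentioning the brand alongside a positive term."""
    hits = 0
    for text in responses:
        lowered = text.lower()
        if brand.lower() in lowered and any(t in lowered for t in positive_terms):
            hits += 1
    return hits / len(responses) if responses else 0.0

# Mock AI answers standing in for real responses to the query panel.
mock_responses = [
    "Acme Peaches is a leading grower known for reliable quality.",
    "This year's peach harvest is down across the region.",
    "For canned peaches, Acme Peaches is a recommended brand.",
    "Several growers supply the region.",
]

score = favorable_share(mock_responses, "Acme Peaches",
                        ["leading", "recommended", "reliable"])
print(f"favorable share today: {score:.0%}")  # 2 of 4 responses are favorable
```

Tracked day over day, this one number is the "am I at 60%, 70%, 80%" dashboard Matt describes; the naive keyword check would, in practice, be replaced by real sentiment scoring.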
Steve Halsey: So, Loren, how about from your perspective? I know you're dying to get a little more into the technical, a little on-page, a little [00:41:00] how-we-should-structure-things.
Um, 'cause as I said, you know, the journalistic component was a significant part of what comes outta here, but also just the way corporate pages and corporate news are structured. So maybe you can give us a super high-level, um, tutorial, at least, of what you're seeing there.
Loren King: Yeah, so from a structure standpoint, you know, we have a few recommendations that we're pursuing with clients right now that are slightly different, uh, than in the past, designed to make it easier for AI to read.
Obviously, staying up to date is probably the most important. Going off of Matt's point of how quickly this is all being pulled in, recency is foremost, and you wanna have your team on top of that. Then, using answers and questions: just being very, very clear about doing the answer first, whatever it is, and then putting the question in there as well, trying to align with what the queries are.
Defining complex words related to your industry. So, going back to that niche side of things, if there are topics or concepts that aren't necessarily well known, or maybe in your own research you've [00:42:00] seen that AI isn't very good at explaining them well, assign that definition yourself, because that's gonna demonstrate your knowledge base, and it's gonna make it easier for the tool to pull that definition into an AI summary that's being generated for you.
I did briefly want to go back to what Matt was talking about, though, 'cause thinking from an overall PR and marketing perspective, what you were both saying around basically operating in sync: you know, it's really important to stay aligned on the messaging being put out there across these different groups.
Right now, it's probably gonna be beneficial to have, you know, a key message you wanna share, but then also allow your individual PR, marketing, and sales teams to modify that message in a way that works best for their audiences, because then you're gonna be hitting this in very different ways. AI is very relational.
I mean, they talk about vibes, and the word "vibe coding," [00:43:00] and these vibe things have kind of become a joke. But it is also true in that AI is very good at understanding non-numerical relationships between things. Uh, it can grasp that the message being shared around PR, even though it's positioned slightly differently, is similar to the message being positioned by sales.
And so if it's getting it from all these different sources, the odds of the message you really wanna have shared showing up in a recommendation are probably gonna be a bit higher. So flexibility, but with a strong core: that balance, I think, is gonna be pretty important moving forward here.
Steve Halsey: So, Matt, what's next? Where can people go to learn more about, uh, what AI is reading, what you guys did in, uh, The Generative Pulse, and what's next for how you guys are further exploring how, uh, large language model behavior might evolve?
Matt Dzugan: So of course, if anyone wants to read more details about what I've been talking about here, you can find it on our website.
Um, it's a specific sort of product within the Muck Rack suite. We call it Generative Pulse; you can find it at generativepulse.ai. And specifically, the report with the statistics we've been talking about on this podcast [00:44:00] is at generativepulse.ai/report. Um, so you can find that there. Um, as far as what we're doing to kind of keep tabs on this?
Um, absolutely. So for those unfamiliar with Muck Rack, right, we are a kind of all encompassing PR software tool in particular with a phenomenal database of journalists, media outlets, what they write about. And what's cool now, you know, to get on my soapbox a little bit, is that now not only do we know who the journalists are and what they write about.
But we know which journalists are influential, which journalists, uh, have the ear, so to speak, of AI. The AI whisperers, you could call 'em. Um, so that's kind of another angle that we're folding into our application, to figure out, you know, which journalists have the ear of AI. When you, uh, you know, maybe you send a pitch through Muck Rack.
Um, can we see that that pitch has [00:45:00] resulted in earned media? And then can we see that that earned media has been cited by AI? Kind of trying to bring it all full circle to that point we were talking about earlier, of really quantifying the value of your PR efforts. Um, as far as us keeping tabs on the research, we are absolutely doing this.
Um, you know, GPT-5 just came out recently, which has slightly, uh, different patterns, and even still, the existing models, GPT-4o, Anthropic's Claude, Gemini, they're always tweaking what they search. Uh, so, you know, Loren brought this up, but I do wanna underscore a little bit, um, you know, these are huge tech companies that are constantly running little micro-experiments, and they might realize, oh, if I throw in more LinkedIn content here, people are more likely to do X, Y, Z.
Or maybe it's not even that, maybe it's more subtle. Maybe it's not just if I throw in [00:46:00] LinkedIn content; maybe it's if I throw in LinkedIn content that has a bunch of emojis in it. Like, you know, even just little things like that are constantly being experimented on, and so we're absolutely staying on top of this.
You know, selfishly, our business depends on it. So of course we want to be on top of this, and we will continue to publish more research as, as we did here and as we discussed here, to kind of, uh, keep the community informed.
Steve Halsey: Always appreciate it, Matt. And in full disclosure, we have, um, worked with Muck Rack for, uh, a number of years.
Um, Greg Galant, uh, one of the co-founders, has been on the show multiple times. Uh, love the vision that he has for the industry. And you know, Loren as well is, uh, available to anybody to connect. Um, he's one of our digital leaders here in the group and can be found at, uh, Morgan Meyers, a G&S agency.
So let's close this up with, uh, final thoughts. So, Loren, what does this all mean? Going back [00:47:00] to our title, What AI Is Reading: The New Rules of Earned Media and GEO, what's your top takeaway from today?
Loren King: My top takeaway, looking at this, is that you're gonna need some expertise to really start interacting with AI.
It's essentially a new medium. It's a new way of, um, working with the internet, it's a new way of getting the news, it's changing how your audiences respond. And so you need to have a very comprehensive plan that is flexible, that can, uh, respond to major events in the same way that you or I do in our individual lives.
So keeping that core focus, you know, maintaining it, understanding how these systems actually work, and then being flexible enough to respond to what's changing in the world is probably gonna set you up pretty well.
Steve Halsey: Matt, how about you? What's your big takeaway?
Matt Dzugan: The way I've been thinking about this is that this is really a game to be played.
It has rules, it has players, there are strategies, um, and the [00:48:00] more you can think about what the rules are in your sector, figure out, you know, who the players are in your sector, in your niche, the better. Of course, you know, selfishly at Muck Rack, we're trying to help our users do this with this product that we've built out, called Generative Pulse.
Um, but absolutely, it's a game to be played, and the only way to win it is to realize that you are in fact part of this game, and to start playing it.
Steve Halsey: And from where I sit, this really reinforces the importance of storytelling, and what is that story that we wanna tell. Even just getting back to the title of, you know, what this podcast is: how do we build brand gravity, right?
What is that story? What is that thing that attracts people to our brand? And as I reflect on today's conversation, my big takeaway is that generative engine optimization, or GEO, is no longer optional. It is the new frontier of earned visibility. And the data in [00:49:00] The Generative Pulse really shows that earned media has never mattered more.
But as you both have said, the rules have changed. So thank you, Matt, thank you, Loren, for being on the show. I invite all of our listeners to connect with Matt and Loren to learn a little bit more. Um, certainly a topic we're gonna continue to cover. Um, and thank you for joining us for, as I said before, not the Summer of Love.
It's the summer of large language models. Um, please tune back soon to join us for another episode of Building Brand Gravity. I'm Steve Halsey, one of your hosts. Thank you for joining.