When the Midland Gazette Facebook page went live with a tongue-in-cheek post announcing that Saginaw Bay would henceforth be called “Bay Bay,” the joke landed harder than writer Ben Tierney ever expected.
“I thought that would go nowhere,” Tierney said. “I had... Fox News, calling me. They drove up from Detroit to interview me, and I tried to joke with them... it's got to be a slow news day if you're driving up just for this.”
Within hours, locals were debating whether “Bay Bay” branded towels were real merchandise or just a meme, and a downtown bar even rolled out a “Bay Bay” drink special. Underneath the laughs, though, the viral moment exposed a fault line: many Facebook users could not tell Tierney’s satire from fact.
Why we fall for fake social media posts
Digital-media researcher Troy Hicks, interim associate dean of the College of Education and Human Services at Central Michigan University, says that confusion is baked into the way social platforms work.
“Part of it's our human nature and part of it's the technology,” Hicks said. “Sometimes we just fall right into the thing that we want to believe.”
He said that social media algorithms are designed to prioritize speed and emotional response over accuracy, surfacing content that’s likely to go viral, not necessarily what’s true. Once a post gains traction, it spreads faster than corrections and satire disclaimers.
Even when creators like Tierney clearly label posts as satire, often with bold “SATIRE” headers, Hicks said headlines often outrun their context, and once people share something, they’re more likely to defend it than question its validity.
At the Midland Daily News, editor Dave Clark said satire can still trip up readers when shared without context, especially online.
“There was a lot of people that thought that was a real story,” Clark said of a Midland Gazette post claiming Jelly Roll was mistaken for a homeless man at Burger King. “I did have a couple, you know, reach out to me and say, ‘Hey, why didn’t you cover this?’”
Clark said he appreciates the Midland Gazette’s irreverent take on local issues and noted that Tierney’s satire often makes sharp, relevant points about life in Midland.
He’s more concerned about the influence of partisan websites designed to look like local journalism, pointing to examples like The Midwesterner and Michigan Enjoyer.
“They have the look and the feel of the news website,” Clark said. “But they're really not. They're really paid for by someone to present a specific message or present a specific political ideology.”
While the Midland Daily News continues to answer reader questions on social media, Clark said the volume and velocity of information online can complicate how people interpret news, especially when partisan content is boosted by algorithms.
Enter the influencers
If audiences are drifting from hometown papers, where are they going? Increasingly, to online personalities who deliver information wrapped in personality. Pew Research Center found that 37% of adults under 30 regularly get news from influencers on social media, most of whom aren’t trained journalists.
That doesn’t surprise Perry Parks, an associate professor at Michigan State University’s School of Journalism.
Parks said younger generations who grew up with social media are used to seeing facts, feelings and values intertwined in a dynamic that can make traditional news feel distant or incomplete.
“There’s a relationship between growing up with this sort of influencer social media, where you see this sort of more authentic or authentic-looking content,” he said.
Parks contends the industry’s long-standing ideal of neutrality, born in the 19th century, no longer resonates.
“Instead of asking journalists to be neutral,” he said, “if we ask them to be honest, that seems like a better thing to ask.”
Parks said it may be time to rethink journalism’s commitment to neutrality, especially as public trust continues to erode.
“Trust in journalists can hardly go down… why not try something different?” he said.
AI, copyright and the human question
In March, a federal judge allowed The New York Times to move forward with its copyright lawsuit against OpenAI, maker of ChatGPT, over the company’s use of Times articles to train its language models.
Publishers worried that AI chatbots regurgitating near-verbatim news would siphon web traffic, and ad dollars, away from source sites. OpenAI argued its data use was “fair use” and supports innovation, but critics said the technology could eventually replace human journalism altogether.
Parks said the journalism industry has long asked reporters to suppress emotional context in the name of objectivity.
He said that removing human context from news, and then automating it, only compounds the problem.
“If you take a machine and actually program that to replace a person then you are even further detaching the human elements of news and events from the journalists and then ultimately the public,” he said.
Hicks said AI’s rise could at least spark stronger media literacy, if educators teach “lateral reading”: opening multiple tabs, checking sources and asking skeptical questions before hitting share.
Hicks said media literacy often comes down to something simple: “finding a source you trust, then mostly trusting it.” Parks said that trust grows when audiences know their journalists.
The Bay Bay post sparked questions about what’s real, what’s not and how quickly things spread online. But it also got people talking, and laughing, about their community.
Tierney said that’s the point.
“Hopefully it can bring people together,” he said.