Key Takeaways
- Google enforces 9 named violation categories under its Prohibited and Restricted Content policy — from spam and fake content to conflict of interest.
- 292 million reviews were removed in 2025 — but removal rates vary dramatically by violation type, from very high (profanity) to low-moderate (conflict of interest).
- 2026 brought three new enforcement updates: a ban on staff-name solicitation, a restriction on on-premises review pressure, and active enforcement against review gating.
- Negative reviews that reflect real experience are NOT violations. Google does not remove reviews simply because a business disagrees with them.
- Standard flags succeed 20-30% of the time; professional disputes with evidence packages reach 75-92% by citing the exact policy clause violated.
Google review policy violations are the only legitimate basis for getting a review removed from your Business Profile. Every flag you file, every appeal you submit, and every professional dispute that succeeds does so by connecting a specific review to a specific violation category in Google's Prohibited and Restricted Content policy. If you cannot point to a named violation, the review stays up — regardless of how unfair, inaccurate, or damaging it feels.
This guide covers every violation type Google recognizes, how removal rates differ across categories, what changed in the 2026 policy updates, and the critical distinction between reviews that violate policy and reviews that are simply negative. Google removed 292 million reviews in 2025, but the vast majority were caught by automated systems. When a business owner flags a review manually, the success rate drops to 20-30% — and the primary reason is that most flags fail to cite the correct violation category with supporting evidence.
Google's review policy framework: the 9 violation categories
Google's Prohibited and Restricted Content policy for Google Maps reviews defines nine named categories of content that can be flagged and removed. These categories are not suggestions or guidelines — they are the specific policy clauses that Google's human reviewers and automated classifiers use to evaluate every flag submitted through the platform.
Understanding these categories matters because citing the correct one in your flag is the single highest-impact action you can take to improve removal odds. A flag that says "this review violates the conflict of interest policy because the reviewer is a competing business owner at [business name]" is processed fundamentally differently than one that says "this review is unfair." The first cites a specific, verifiable policy clause. The second gives Google's reviewer nothing to work with.
| Violation category | Removal rate | Typical timeline | Evidence needed |
|---|---|---|---|
| Spam and fake content | High | 1-5 days | Account patterns, duplicate text analysis |
| Off-topic content | Moderate | 5-14 days | Explanation of why content is unrelated |
| Restricted content | Moderate-High | 3-7 days | Identification of regulated content |
| Illegal content | High | 1-5 days | Description of illegal promotion |
| Sexually explicit content | Very high | 1-3 days | Minimal — content speaks for itself |
| Offensive content | Very high | 1-3 days | Minimal — language triggers automated detection |
| Dangerous and derogatory content | High | 1-5 days | Context showing threat or incitement |
| Impersonation | Moderate | 7-14 days | Proof of identity mismatch |
| Conflict of interest | Low-Moderate | 14-28 days | Employment records, competitor links, social proof |
The table reveals a clear pattern: violations where the review text itself contains the evidence (profanity, explicit content, threats) have the highest removal rates and fastest timelines. Violations that require external proof (conflict of interest, impersonation) take longer and succeed less often through standard flagging. This asymmetry is the foundation of every flagging strategy that follows.
Spam and fake content
Spam and fake content is the broadest violation category and the one responsible for the largest share of Google's 292 million removals in 2025. This category covers bot-generated reviews, bulk posting from coordinated networks, incentivized reviews (where the reviewer received compensation for posting), and reviews posted from accounts created solely to manipulate a business's rating.
Google's automated detection for spam has improved significantly. The platform now analyzes account age, posting frequency, geographic consistency (does the reviewer live anywhere near the business?), text similarity across reviews, and burst patterns (multiple reviews appearing on the same listing within hours). When these signals converge, Google's classifiers catch spam before it ever publishes — which is why the majority of the 292 million figure represents pre-publication blocks, not post-publication removals.
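Google does not publish how these signals are weighted, but a rough sketch shows how a business owner might score them when preparing a spam flag. Everything below is an assumption for illustration: the `Review` fields, the 30-day and 500 km thresholds, and the two-signal cutoff are not Google's classifier.

```python
# Illustrative sketch only: Google's real classifiers and weights are not public.
# The Review fields, thresholds, and weights below are assumptions for demonstration.
from dataclasses import dataclass
from datetime import datetime
from difflib import SequenceMatcher

@dataclass
class Review:
    reviewer_account_age_days: int
    reviewer_distance_km: float      # reviewer's usual activity area vs. the business location
    text: str
    posted_at: datetime

def spam_signals(review: Review, other_reviews: list[Review]) -> dict[str, bool]:
    """Return which hypothetical spam signals a single review trips."""
    # Text similarity: near-duplicate wording across reviews is a classic paid-network tell.
    max_similarity = max(
        (SequenceMatcher(None, review.text.lower(), other.text.lower()).ratio()
         for other in other_reviews if other is not review),
        default=0.0,
    )
    return {
        "new_account": review.reviewer_account_age_days < 30,
        "geographically_implausible": review.reviewer_distance_km > 500,
        "near_duplicate_text": max_similarity > 0.85,
    }

def looks_like_spam(review: Review, other_reviews: list[Review]) -> bool:
    # Treat two or more converging signals as worth documenting in a flag.
    return sum(spam_signals(review, other_reviews).values()) >= 2
```

The point of scoring signals yourself is not to replicate Google's detection, but to know which specific evidence (account age, geographic mismatch, duplicated text) to cite in the flag.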
For business owners, the most actionable subcategory here is incentivized reviews. The FTC's 2026 fake review rule now makes it a federal violation to purchase or incentivize reviews, and Google's policy mirrors this. If you have evidence that a competitor is buying reviews — screenshots of Fiverr listings, Facebook groups offering reviews for payment, or promotional materials promising discounts in exchange for reviews — that evidence materially strengthens a spam flag.
The other critical subcategory is coordinated review attacks. When a business receives 5, 10, or 20 negative reviews within a 24-48 hour window — often from accounts with no other review history — the pattern itself is evidence of spam. Flag these as a batch, not individually. Google's systems are built to detect coordinated manipulation, and batch flags trigger pattern-matching algorithms that individual flags do not.
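Before filing a batch flag, it helps to pull the burst out of your review export explicitly. A minimal sketch follows; the 48-hour window and five-review minimum are assumptions chosen to match the pattern described above, not documented Google thresholds.

```python
# Sketch: group review timestamps into 48-hour bursts so a coordinated attack
# can be documented and flagged as a batch. Window size and minimum cluster
# size are assumptions, not documented Google thresholds.
from datetime import datetime, timedelta

def burst_clusters(timestamps: list[datetime],
                   window: timedelta = timedelta(hours=48),
                   min_size: int = 5) -> list[list[datetime]]:
    clusters, current = [], []
    for ts in sorted(timestamps):
        # Close the current cluster once a review falls outside the window.
        if current and ts - current[0] > window:
            if len(current) >= min_size:
                clusters.append(current)
            current = []
        current.append(ts)
    if len(current) >= min_size:
        clusters.append(current)
    return clusters
```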
Off-topic content
Off-topic reviews are content that bears no relationship to the actual business experience. This includes reviews about a different business posted on the wrong listing, political commentary left on a commercial business page, personal grievances unrelated to any transaction or interaction with the business, and social activism campaigns that target businesses for reasons unrelated to their products or services.
This category has a moderate removal rate — lower than spam or profanity — because the boundary between "off-topic" and "loosely related" requires human judgment. A review that says "the parking lot next to this store is terrible" is arguably off-topic (the parking lot is not the business) but also arguably relevant (it affects the customer experience). Google's reviewers must make these calls, and they tend to err on the side of leaving content up when there is any plausible connection to the business.
The strongest off-topic flags involve reviews that are clearly about the wrong business (the reviewer mentions a business name or address that does not match the listing), reviews that are purely political with zero mention of the business experience, or reviews that are personal messages directed at an individual rather than a business assessment. When flagging off-topic content, explicitly state why the content is unrelated — do not assume the reviewer will reach the same conclusion on their own.
Restricted content and illegal content
Restricted content covers reviews that inappropriately mention regulated goods and services — firearms, pharmaceuticals, alcohol, tobacco, gambling, and adult services. A review that describes how easy it was to purchase a controlled substance at a business, or that advertises the availability of regulated products outside of licensed channels, falls into this category. Removal rates are moderate-to-high because the presence of regulated terminology is detectable by automated classifiers, but context matters: mentioning that a restaurant serves alcohol is not a violation; advertising a specific drug available at a pharmacy without a prescription is.
Illegal content is narrower and more severe. This covers reviews that promote or facilitate illegal activities — soliciting illegal services, providing instructions for illegal acts, or encouraging others to break the law. Reviews in this category tend to be removed quickly because the liability risk to Google is high. However, there is an important distinction: a review that promotes illegal activity violates policy, but a review that accuses a business of illegal activity does not automatically violate policy. Google does not verify factual allegations. If a customer writes "this business is running a scam," Google treats that as an opinion or allegation — not as prohibited illegal content. The business's recourse for false factual claims may require legal action outside of Google's platform.
Sexually explicit, offensive, and dangerous content
These three categories share a common trait: the violation is typically self-evident from the review text, which means automated classifiers catch most violations before a human reviewer ever sees them. This makes them the highest-success-rate violation types for manual flags as well — when the automated system misses one, a manual flag with even minimal context will usually succeed.
Sexually explicit content includes graphic sexual descriptions, solicitations, and explicit imagery posted in review photos. Google's classifiers are highly tuned for this category, and removal timelines are typically 1-3 days. The rare cases that survive automated detection usually involve coded language or euphemisms, which is where manual flagging adds value.
Offensive content encompasses hate speech, racial or ethnic slurs, targeted harassment of individuals, and content that demeans people based on protected characteristics. This is the category most people think of when they imagine "obviously removable" reviews. A review containing slurs or hate speech directed at a business owner's race, gender, religion, or ethnicity will almost always be removed — and quickly. The nuance appears at the margins: a review that calls a business "run by idiots" is harsh but not hate speech; a review that uses ethnic slurs to describe the owners crosses the line.
Dangerous and derogatory content covers threats of violence, incitement to harm, and content that endangers individuals or groups. Direct threats ("I'm going to come back and burn this place down") are straightforward violations with high removal rates. Indirect threats and coded language require more context in the flag. If a review contains what you believe is an implicit threat, spell out the interpretation explicitly when flagging — Google's reviewer may not read the subtext the same way you do. Businesses facing threats in reviews should also consider contacting local law enforcement, as review-based extortion and threats may constitute criminal behavior.
Impersonation
Impersonation occurs when a reviewer pretends to be someone they are not — a different customer, a public figure, a government official, or even a representative of the business itself. This violation category also covers accounts created to mimic another person's identity, whether through name, profile photo, or biographical details designed to deceive.
The removal rate for impersonation is moderate, not because Google does not take it seriously, but because proving impersonation requires evidence that goes beyond the review text. You need to demonstrate that the person who posted the review is not who they claim to be. This might involve showing that the named reviewer was not a customer on the date in question, that the profile photo belongs to someone else, or that the account is using the identity of a real person who did not write the review.
One increasingly common form of impersonation involves competitor-posted reviews using fake customer accounts. A rival business creates one or more Google accounts with generic names, leaves negative reviews on your listing, and the accounts have no other review history. While this overlaps with the spam category, the impersonation angle can be stronger when you can demonstrate that the "customer" never existed in your records — no transaction, no appointment, no contact of any kind.
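If your customer records are available as a simple export, checking whether a suspicious reviewer appears anywhere in them takes a few lines. A sketch under assumptions: the CSV layout (a `name` column) and the loose substring match are placeholders for however your own CRM or POS actually stores customer data.

```python
# Sketch: check whether a reviewer's display name appears anywhere in your own
# customer records. The CSV column name and the example file are hypothetical;
# adapt to whatever your CRM or POS system exports.
import csv

def reviewer_in_records(reviewer_name: str, records_csv: str) -> list[dict]:
    """Return any customer rows whose name loosely matches the reviewer."""
    target = reviewer_name.strip().lower()
    matches = []
    with open(records_csv, newline="") as f:
        for row in csv.DictReader(f):
            if target and target in row.get("name", "").strip().lower():
                matches.append(row)
    return matches

# Zero matches is supporting evidence (not proof) that the reviewer was never a customer.
hits = reviewer_in_records("Jane Q. Example", "customers_2025.csv")
print(f"{len(hits)} matching customer record(s) found")
```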
Conflict of interest
Conflict of interest is the most common violation type that business owners want to flag — and the hardest to get removed. This category covers three distinct scenarios: competitors leaving negative reviews on a rival's listing, former employees posting retaliatory reviews after termination, and business owners reviewing themselves or asking friends and family to post positive reviews.
The difficulty is structural. Unlike profanity or spam, a conflict of interest review often reads like a legitimate negative experience. A former employee who writes "the management at this place is terrible and they treat their staff horribly" has posted content that, on its face, looks like a customer complaint. Nothing in the text reveals the conflict. Google's reviewers cannot determine from the review alone that the poster is a disgruntled ex-employee — which is why standard flags in this category fail at such high rates.
To succeed with conflict of interest flags, you must provide external evidence. For former employee reviews, that means employment records, termination documentation, or social media posts where the ex-employee discusses the review. For competitor reviews, it means linking the reviewer's account to a competing business — through the reviewer's own Google profile, social media, or business registration records. For self-reviews, the evidence is typically the match between the reviewer account and the business owner's known accounts.
One important nuance: conflict of interest applies in both directions. A business owner leaving a 5-star review on their own listing violates the same policy as a competitor leaving a 1-star review. Google's enforcement is asymmetric — negative conflict-of-interest reviews get flagged far more often than positive ones — but the policy itself is symmetrical. If you are tempted to have friends, family, or employees leave positive reviews, know that these violate the same clause and carry the same risk of removal and account restriction.
What does NOT violate Google's review policy
This section may be the most important in the entire article. A significant percentage of denied flags — and wasted flagging credibility — comes from businesses flagging reviews that genuinely do not violate any policy category. Understanding the boundary between "damaging" and "violating" saves time, preserves your flagging reputation with Google, and focuses your energy on disputes you can actually win.
Negative opinions reflecting real experience. "Worst restaurant I've ever been to." "The staff was rude and unhelpful." "I waited 45 minutes and the food was cold." These are protected opinions based on customer experience. Google will not remove them, period. The appropriate response is a professional public reply — not a flag. For guidance on crafting effective replies, see our breakdown of how to respond to negative Google reviews without making things worse.
Low star ratings with no text. A 1-star review with no written content is almost impossible to flag successfully. There is no text to evaluate against any policy category. Google does not consider a low rating alone to be a violation. These reviews drag your average down but cannot be removed through any standard or professional channel.
Factual disputes between parties. "They charged me twice." "They never delivered what they promised." "The product broke after one week." Even if the business believes these statements are false, Google will not arbitrate the dispute. The platform does not function as a court — it does not evaluate competing claims of fact. The business's recourse is a public reply presenting its version of events, or in extreme cases, legal action for defamation if the statements are provably false and damaging.
Harsh language that does not cross into hate speech. "The owner is incompetent." "This place is a scam." "I would give zero stars if I could." Harsh, yes. Violating policy? No. Google's offensive content policy targets hate speech, slurs, and targeted harassment based on protected characteristics — not criticism that uses strong language. The distinction between "rude opinion" and "hate speech" is the line that matters.
The opinion versus fact distinction is worth emphasizing. "Terrible service" is opinion — protected. "They committed fraud" is a factual claim — potentially actionable, but not through Google's flagging system. Google does not verify whether factual allegations in reviews are true or false. This gap frustrates business owners who face demonstrably false claims in reviews, but the resolution path for those claims runs through the legal system, not through Google's moderation team.
2026 policy updates: what changed this year
Google's review policies are not static. The 2026 updates introduced three meaningful changes that expand enforcement into areas that were previously gray zones. If your business uses any of the practices described below, these changes apply to you directly.
| Policy change | What it bans | Violation category | Enforcement status |
|---|---|---|---|
| Staff-name solicitation ban | Asking customers to mention specific employee names in reviews | Spam / fake content | Active — reviews flagged for coached content |
| On-premises pressure restriction | Pressuring customers to leave reviews while still at the business location | Spam / fake content | Active — pattern detection for location-timed reviews |
| Review gating enforcement | Screening customers and only directing satisfied ones to leave reviews | Conflict of interest | Active — businesses caught gating face profile restrictions |
Staff-name solicitation ban. Many businesses — particularly in hospitality, automotive, and healthcare — have trained employees to ask customers: "If you leave a review, please mention my name." This practice inflated individual employee mentions in reviews, which some businesses used for internal performance tracking. Google now classifies coached review content as a form of spam. Reviews that appear to follow a scripted pattern (identical phrasing, specific name mentions across multiple reviews within a short window) are subject to removal.
On-premises pressure restriction. Asking a customer to leave a review while they are still at the business — tablet at the checkout counter, verbal request before leaving, QR code at the table — is now classified as pressure-based solicitation when the review is posted within minutes of the interaction. Google's pattern detection looks for reviews posted from the business's geographic coordinates within a narrow time window. This does not ban all review requests — businesses can still follow up by email or text after the visit. The restriction targets the in-the-moment pressure dynamic that produces reviews that may not reflect the customer's genuine, uncoerced opinion.
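The mechanics described here (proximity to the business plus a short posting delay) can be expressed as a simple check. The 200-metre radius and 15-minute window in the sketch below are illustrative assumptions, not published thresholds, and review coordinates are data Google sees internally rather than anything exposed to business owners.

```python
# Sketch of the kind of pattern on-premises detection could look for: a review
# posted very close to the business, very soon after the visit. The radius and
# time window are illustrative assumptions, not Google's published thresholds.
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt

def distance_km(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two points (haversine)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_on_premises(review_lat, review_lon, review_time: datetime,
                      biz_lat, biz_lon, visit_end: datetime) -> bool:
    close_by = distance_km(review_lat, review_lon, biz_lat, biz_lon) < 0.2   # ~200 m
    right_after = timedelta(0) <= review_time - visit_end <= timedelta(minutes=15)
    return close_by and right_after
```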
Review gating enforcement. Review gating — sending a satisfaction survey first and only directing happy customers to Google while routing unhappy customers to an internal feedback form — has been against Google's stated policy for years. The 2026 change is that Google is now actively enforcing it. Businesses caught using gating software or workflows face profile restrictions that can include temporary suspension of new review visibility. This is the most significant of the three changes because many reputation management platforms have built their entire product around gating workflows, and those products are now in direct conflict with active enforcement.
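The compliant alternative is straightforward: every customer receives the same public review invitation, whatever their internal satisfaction score. The sketch below illustrates that principle only; `send_email`, the message text, and the review link are hypothetical placeholders for your own tooling.

```python
# Sketch of a gating-free follow-up: the Google review link goes to every customer,
# regardless of their internal satisfaction score. send_email() and the link are
# hypothetical placeholders for your own delivery mechanism.
REVIEW_LINK = "https://g.page/r/your-business/review"   # placeholder, not a real link

def send_email(address: str, body: str) -> None:
    print(f"To {address}: {body}")   # stand-in for a real email or SMS integration

def follow_up(customer_email: str, internal_score: int | None = None) -> None:
    # Compliant: the public review invitation does not depend on internal_score.
    send_email(customer_email,
               f"Thanks for your visit. If you'd like, leave us a review: {REVIEW_LINK}")
    # Unhappy customers can additionally be invited to give private feedback,
    # as long as the public review link was offered to everyone.
    if internal_score is not None and internal_score <= 3:
        send_email(customer_email, "We'd also welcome direct feedback: reply to this email.")
```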
The common thread across all three updates: Google is closing loopholes that businesses have used to curate their review profiles without technically violating the letter of the old policy. The 2026 updates bring the enforcement in line with the spirit of the policy — that reviews should reflect genuine, uncoerced customer experiences.
How to flag each violation type for maximum success
The gap between standard flagging (20-30% success) and professional dispute filing (75-92%) comes down to preparation and specificity. The following approach, informed by patterns across thousands of disputes, applies the right strategy to each violation type.
For spam and fake content: Identify the specific spam signal — account age, review patterns, geographic inconsistency, text duplication across listings. If you see a coordinated attack (multiple reviews within 24-48 hours), flag them as a batch. Include screenshots of the reviewer profiles showing no other review history or reviews concentrated in a single industry (a hallmark of paid review networks). Reference the specific criteria that define a fake review in your flag.
For off-topic content: Spell out precisely why the review is unrelated to the business experience. "This review discusses a political issue and does not mention any product, service, or interaction with our business" is far stronger than "this is off-topic." If the review is about a different business, include the name and address of the business the reviewer likely intended.
For restricted and illegal content: Identify the specific regulated product or illegal activity referenced. Explain why the mention crosses from legitimate description into prohibited promotion. Context matters — a restaurant review mentioning wine is not a violation; a review advertising counterfeit prescriptions available at a pharmacy is.
For explicit, offensive, and dangerous content: These categories often succeed with standard flags because the text is self-evidently violating. For borderline cases — coded threats, dog-whistle language, implications rather than explicit statements — provide context in your flag explaining the meaning. Do not assume Google's reviewer will interpret ambiguous language the same way you do.
For impersonation: Provide evidence of the identity mismatch. Customer records showing no transaction on the date mentioned, proof that the profile photo belongs to someone else, or documentation that the named reviewer is a real person who did not write the review. The review text alone will rarely prove impersonation.
For conflict of interest: This is where evidence makes or breaks the flag. Employment records, business registration documents, social media connections, or communication records that establish the relationship between the reviewer and the business. Without this external proof, conflict of interest flags succeed at rates below 15%. With it, success climbs to 40-55% through appeals and higher through professional services. For detailed playbooks on these disputes, see our guides on removing competitor reviews and handling former employee reviews.
Across all violation types, three principles hold: cite the exact policy clause by name, provide evidence before the initial flag (not just at the appeal stage), and appeal at day 3 after a denial — not day 7. The businesses that treat flagging as a documented dispute process, rather than a one-click complaint, are the businesses that achieve the highest removal rates.
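Treating flagging as a documented process is easier with even a minimal tracker. The sketch below records each dispute and computes the day-3 appeal date named above; the fields and the reminder rule are illustrative assumptions, not an official workflow.

```python
# Sketch: track each flag as a documented dispute and surface a day-3 appeal
# reminder after a denial. Fields mirror the principles above; this is not an
# official Google workflow, and the example values are hypothetical.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Dispute:
    review_url: str
    policy_clause: str                  # e.g. "Conflict of interest"
    evidence_files: list[str] = field(default_factory=list)
    flagged_on: date | None = None
    denied_on: date | None = None
    appealed_on: date | None = None

    def appeal_due(self) -> date | None:
        """Appeal three days after a denial, not seven."""
        return self.denied_on + timedelta(days=3) if self.denied_on else None

d = Dispute("https://maps.google.com/example-review", "Conflict of interest",
            ["termination_letter.pdf"], flagged_on=date(2026, 1, 10),
            denied_on=date(2026, 1, 17))
print("Appeal by:", d.appeal_due())
```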
Related guides
- How Google defines a fake review — the complete criteria
- Google review removal timeline: every stage explained
- Your removal request was denied — here is what to do next
- Does Google actually remove flagged reviews? The real data
- FTC fake review rule 2026: what businesses need to know
- Removing reviews planted by competitors
- How to handle retaliatory reviews from former employees
- Is it legal to remove Google reviews? The full legal analysis
- When suing over a fake Google review makes sense
- Google review extortion: how to report and stop it
- Responding to negative reviews without making things worse
The bottom line
Google's review policy is not a single rule — it is a framework of nine distinct violation categories, each with different evidence thresholds, removal timelines, and success rates. The businesses that succeed at getting policy-violating reviews removed are the ones that treat the flagging process like a formal dispute: they identify the exact violation category, assemble evidence before filing, cite the specific policy clause in their flag, and appeal strategically when the initial flag is denied. The businesses that fail are the ones who click "Flag as inappropriate" and hope for the best. The policy framework is public. The violation categories are documented. The evidence thresholds are predictable. The difference between a 20% success rate and an 89% success rate is not luck — it is preparation. If you are dealing with a denied flag right now, start by identifying which of the nine categories your review actually violates — and whether you have the evidence to prove it.