
Vote rigging everywhere: on 35AWARDS winners are those with bigger chat groups, while strong authors are deliberately sunk!

Vote rigging on 35AWARDS: how anti-manipulation protection works
Random pairs, filters, and control mechanisms against manipulation.

Photographer review: a claim that voting is unfair

"Your whole voting system is a battle of mutual-promo chat groups. Photographers gather in packs and boost each other, while competitor works are deliberately downvoted to clear the way. I saw my rating drop by 15% in one hour — clearly a coordinated hater group. Until you switch to closed judging from the start, this is not a talent contest but a competition of bot farms and conspiracies. Zero objectivity — only insiders win!"

Editorial Response

35AWARDS mechanics are designed to make targeted vote boosting or “harassment” of a specific work practically impossible. Here is how the system protects participants:

1. Random anonymous pairs: Photos are shown for comparison in random order. Technically, you cannot find a friend’s specific photo in a stream of 452,000 works to vote “for,” or a competitor’s photo to vote “against.” Voting is fully anonymous: no names, countries, or regalia are visible to the voter.

2. System self-regulation: If you believe ratings can be intentionally pushed up or down, this is testable. Try gathering a small group to artificially pull up a mediocre photo. Over time, the pairwise system starts matching works of similar level. If a photo is artificially inflated, it gets compared against stronger works more often, and independent voters bring it back down. Conversely, if someone tries to “sink” a strong work, it will win more often against weaker competitors and recover.

3. Protection algorithms: We continuously analyze voting behavior. If the system detects atypical patterns (for example, a systematic preference for weaker works or suspiciously high activity), those votes are automatically excluded from the tally. We also use control pairs with an obvious quality gap: a user who frequently fails them is flagged as a low-quality voter, and their influence on the final results is neutralized.

4. Scale and lack of incentive: Hundreds of thousands of people cast millions of votes. Meaningfully shifting a dataset of that size would require a massive coordinated group, and such schemes only form around a strong incentive, such as a large cash prize. Since 35AWARDS deliberately offers no cash rewards, mounting such an operation has no practical payoff.

5. Stabilization effect: A rating drop early on is part of normal calibration. In the main award, a photo's evaluation becomes genuinely stable and objective after it reaches 80+ comparisons. Before that, the numbers can fluctuate simply because the sample is small (volatility), which has nothing to do with "hater attacks."
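The self-correction described in points 1 and 2 can be sketched with an Elo-style pairwise model. This is an illustration only: 35AWARDS has not published its actual rating formula, and every number below (K-factor, ratings, iteration count) is hypothetical.

```python
import random

def elo_update(r_winner, r_loser, k=32):
    """Standard Elo-style update: shift both ratings by the surprise of the result."""
    expected_w = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expected_w)
    return r_winner + delta, r_loser - delta

random.seed(0)
boosted = 1600          # a mediocre photo artificially pushed up to 1600
true_strength = 1200    # what independent voters actually see in it

for _ in range(200):
    # The system matches it against works of similar *rating* (~its own),
    # but honest voters judge by actual quality, not by the inflated number.
    opponent = boosted + random.uniform(-50, 50)
    p_win = 1 / (1 + 10 ** ((opponent - true_strength) / 400))
    if random.random() < p_win:
        boosted, _ = elo_update(boosted, opponent)
    else:
        _, boosted = elo_update(opponent, boosted)

print(round(boosted))  # settles back near its true strength of ~1200
```

Because an inflated photo keeps losing to the stronger works it is paired with, the only stable point is its true level; the same mechanism pulls a "sunk" strong work back up.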
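The control-pair filter from point 3 can be illustrated with a toy weighted tally. The function names, the 80% threshold, and the sample data are invented for this sketch; the real filter is internal to 35AWARDS.

```python
def voter_weight(control_results, threshold=0.8):
    """
    Hypothetical quality gate: control pairs have one obviously stronger photo.
    A voter who fails too many of them gets their influence zeroed out.
    control_results: list of booleans (True = picked the obviously better photo).
    """
    if not control_results:
        return 1.0  # no evidence yet: count the vote normally
    accuracy = sum(control_results) / len(control_results)
    return 1.0 if accuracy >= threshold else 0.0

def tally(votes):
    """Weighted tally: each vote is (choice, that voter's control-pair history)."""
    score = {"A": 0.0, "B": 0.0}
    for choice, history in votes:
        score[choice] += voter_weight(history)
    return score

# A coordinated pair votes "A" but fails the obvious control pairs;
# two careful voters pick "B" and pass them.
votes = [
    ("A", [False, True, False, False]),   # fails controls -> weight 0
    ("A", [False, False, True, False]),   # fails controls -> weight 0
    ("B", [True, True, True, True]),
    ("B", [True, True, True]),
]
print(tally(votes))  # {'A': 0.0, 'B': 2.0}
```

The coordinated votes are not deleted from the database; they simply stop counting, which is what "their influence is neutralized" means in practice.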
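The volatility in point 5 is ordinary sampling noise: the fewer comparisons a photo has, the wider the error margin around its observed win rate. A back-of-the-envelope check with the generic normal-approximation margin (the 80-comparison threshold is the article's; the formula is standard statistics, not the contest's own):

```python
import math

def win_rate_margin(p, n):
    """Approximate 95% margin of error for a win rate p observed over n comparisons."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# After 10 comparisons a 50% win rate is really "50% +/- 31 points";
# by 80 comparisons the margin has tightened to about +/- 11 points.
for n in (10, 40, 80, 200):
    print(f"{n:>3} comparisons: +/-{win_rate_margin(0.5, n):.1%}")
```

A 10-15% swing in an hour is well within the expected noise for a photo with only a handful of comparisons, which is why early numbers should not be read as attacks.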

Conclusion: A 35AWARDS result is a reflection of collective perception from hundreds of thousands of independent people worldwide. The system filters personal bias and cronyism, leaving the author face-to-face with the quality of their frame. There is only one way to win: create a photo so strong that the absolute majority of random strangers will choose it, regardless of country or chat group.
