Vote rigging everywhere: on 35AWARDS, the winners are whoever has the bigger chat group, while strong authors are deliberately sunk!

Photographer Review: claim about voting fairness
Editorial Response
35AWARDS mechanics are designed to make targeted vote boosting or “harassment” of a specific work practically impossible. Here is how the system protects participants:
1. Random anonymous pairs: Photos are shown for comparison in random order. Technically, you cannot find a friend’s specific photo in a stream of 452,000 works to vote “for,” or a competitor’s photo to vote “against.” Voting is fully anonymous: the voter sees no names, countries, or credentials.
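To put the “needle in a haystack” point in numbers, here is a back-of-the-envelope sketch. The 452,000 figure comes from the text above; the uniformly random pairing is an assumption for illustration:

```python
# Back-of-the-envelope: how hard is it to even *see* one specific photo
# if each comparison draws a uniformly random pair? (assumed model)
N = 452_000              # total works in the pool, figure from the text
p_pair = 2 / N           # chance a single random pair contains the target
expected_votes = N / 2   # comparisons, on average, before it appears once
print(f"p per pair = {p_pair:.2e}, expected votes to encounter it = {expected_votes:.0f}")
```

On average a voter would have to click through roughly a quarter of a million pairs just to encounter one chosen photo a single time.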
2. System self-regulation: If you believe ratings can be intentionally pushed up or down, this is testable. Try gathering a small group to artificially pull up a mediocre photo. Over time, the pairwise system starts matching works of similar level. If a photo is artificially inflated, it gets compared against stronger works more often, and independent voters bring it back down. Conversely, if someone tries to “sink” a strong work, it will win more often against weaker competitors and recover.
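35AWARDS has not published its matching or scoring algorithm, so the sketch below uses a standard Elo update purely to illustrate the self-correction argument: a photo boosted far above its real level starts losing to the stronger works it is matched against, and a “sunk” strong photo wins its way back. All ratings and constants here are illustrative assumptions:

```python
import random

def elo_update(r_winner, r_loser, k=24):
    """Standard Elo: expected score from the rating gap, then adjust both."""
    expected_w = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    return r_winner + k * (1 - expected_w), r_loser - k * (1 - expected_w)

def simulate(true_skill, start_rating, rounds=500, seed=1):
    """A photo with hidden quality `true_skill` faces opponents near its
    *current* rating; it wins with a probability set by the true-skill gap."""
    rng = random.Random(seed)
    rating = start_rating
    for _ in range(rounds):
        opp = rating + rng.gauss(0, 50)                   # matched by current rating
        p_win = 1 / (1 + 10 ** ((opp - true_skill) / 400))  # decided by true quality
        if rng.random() < p_win:
            rating, _ = elo_update(rating, opp)
        else:
            _, rating = elo_update(opp, rating)
    return rating

# A mediocre photo (true 1400) boosted to 1800 drifts back down;
# a strong photo (true 1800) "sunk" to 1400 recovers.
print(round(simulate(1400, 1800)), round(simulate(1800, 1400)))
```

In this toy model both manipulations are erased within a few dozen comparisons, which is the mechanism the paragraph above describes.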
3. Protection algorithms: We continuously analyze voting behavior. If the system detects atypical patterns (for example, systematic preference for weaker works or suspiciously high activity), such votes are automatically excluded from counting. We also use control pairs with an obvious quality gap: if a user frequently fails them, their voting quality is marked low and their final influence is neutralized.
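A minimal sketch of the control-pair idea described above, assuming a hypothetical data layout (the real 35AWARDS detection logic is not public, and the names `filter_voters`, `max_fail_rate`, etc. are invented for illustration):

```python
def filter_voters(votes, control_answers, max_fail_rate=0.2):
    """Drop a voter's ballots when they fail too many control pairs.

    votes: {voter: [(pair_id, choice), ...]}
    control_answers: {pair_id: correct_choice} for pairs with an
    obvious quality gap (hypothetical layout).
    """
    kept = {}
    for voter, ballots in votes.items():
        controls = [(p, c) for p, c in ballots if p in control_answers]
        fails = sum(1 for p, c in controls if c != control_answers[p])
        if controls and fails / len(controls) > max_fail_rate:
            continue  # neutralize this voter's influence entirely
        kept[voter] = ballots
    return kept

votes = {
    "careful": [("c1", "A"), ("c2", "B"), ("p9", "A")],
    "random_clicker": [("c1", "B"), ("c2", "A"), ("p9", "B")],
}
controls = {"c1": "A", "c2": "B"}
print(sorted(filter_voters(votes, controls)))  # only the careful voter survives
```

The design choice worth noting: a failed voter is not partially down-weighted but excluded outright, which matches the “influence is neutralized” wording above.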
4. Scale and lack of incentive: Hundreds of thousands of people cast millions of votes. In practice, shifting such a dataset requires a massive coordinated group, and schemes like that need a strong incentive, such as a large cash prize. Since 35AWARDS intentionally offers no cash rewards, building such a scheme makes no practical sense.
5. Stabilization effect: Rating drops at the beginning are a normal part of the normalization process. In the main award, evaluation becomes genuinely stable after a photo accumulates 80+ comparisons. Before that, the numbers can fluctuate simply because there is too little data (normal statistical volatility), which has nothing to do with “hater attacks.”
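The “80+ comparisons” threshold is consistent with basic sampling statistics: the standard error of an observed win rate shrinks with the square root of the number of votes. The formula below is standard; the specific threshold is the organizers’ figure, not something the statistics alone dictates:

```python
import math

def win_rate_stderr(p, n):
    """Standard error of an observed win rate after n pairwise votes."""
    return math.sqrt(p * (1 - p) / n)

# For a photo winning about half its pairs, the estimate's noise band:
for n in (5, 20, 80, 320):
    print(n, round(win_rate_stderr(0.5, n), 3))
```

At 5 votes the win rate can swing by over 20 percentage points either way; by 80 votes the noise band is down to about 6 points, and further votes only refine it slowly.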
Conclusion: A 35AWARDS result is a reflection of collective perception from hundreds of thousands of independent people worldwide. The system filters personal bias and cronyism, leaving the author face-to-face with the quality of their frame. There is only one way to win: create a photo so strong that the absolute majority of random strangers will choose it, regardless of country or chat group.


